Stable Diffusion: best prompts (a Reddit roundup)

After spending days on SD, my old roommate and I went out to spread the gospel, but most of our friends have a hard time writing a prompt. Blindly copying positive and negative prompts can screw you up. It really depends on what fits the project, and there are many good choices. I'm a skeptic turned believer.

Great Stable Diffusion prompt presets, prompt included. Ai Dreamer: free daily credits to create art using SD. CivitAI is definitely a good place to browse, with lots of example images and prompts. List part 4: Resources.

Having some specific negative tokens will help, however. If I were attempting to generate a specific character doing a specific thing... If it's a photograph, include information about lens, aperture, lighting, etc. These were all tested with Waifu Diffusion, Euler A, with each prompt at the beginning of the prompt list, so results will vary a lot if you use Stable Diffusion and different settings.

Use this with img2img: if the input is somewhat shaded you get more realistic results than with 2D art, even if you are going for 2D. No additional environment or command line required. Procreate...

In the prompt I use "age XX", where XX is the bottom age in years for my desired range (10, 20, 30, etc.), augmented with the following terms: "infant" for <2 yrs, "child" for <10 yrs, "teen" to reinforce "age 10", "college age" for the upper "age 10" range into the low "age 20" range.

CFG measures how much the AI will listen to your prompt versus doing its own thing. Practically speaking, it is a measure of how confident you feel in your prompt. Tokens interact through a process called self-attention. Training is based on the existence of the prompt elements (tokens) from the input in the output.

Keep the enhanced prompt under 150 words and vary the keywords to avoid repetition. Copy the prompt from here.

Go to the page for something like Protogen-x53-Photoreal or Dreamlike-2-Photoreal that is good at portraiture, scroll down to the gallery, and find an image where you can click the little "i" info icon on the bottom right to see the generation settings in full, including the negative prompt.

Stable Diffusion image 1 using 3D rendering. You can generate pictures of Mickey Mouse with Stable Diffusion (try it) just like you can share links to pictures of Mickey Mouse online.

Seeds: unique image IDs that help generate consistent images with slight variations. Structure: A photorealistic concept art of a dungeons and dragons <insert character race (age, size, and maybe a facial feature if the AI keeps getting it wrong)> with <primary visual feature>, wearing <insert clothing type>...

Apr 3, 2024 · Here in our prompt, I used "3D Rendering" as my medium. The previous prompt-builders I'd used were mostly randomized lists -- random subject from a list, random verb from a list, random artists from lists -- GPT-2 can put something together that makes more sense as a whole.
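As a rough illustration of the two prompt-builder approaches mentioned above (randomized word lists versus a GPT-2 model trained on prompts), here is a minimal Python sketch. It assumes the Hugging Face transformers package; the plain gpt2 checkpoint and the word lists are stand-ins, not the commenters' actual setup.

```python
# Sketch: randomized-list prompt builder vs. GPT-2 continuation.
# Assumptions: `transformers` installed; "gpt2" stands in for a prompt-tuned model.
import random
from transformers import pipeline

SUBJECTS = ["a queen bee character with transparent wings",
            "a knight in shining armor",
            "a robot girl in the sunlight"]
STYLES = ["digital art by artgerm", "oil painting", "3D rendering", "sharp focus"]

def random_list_prompt() -> str:
    """The old approach: random picks glued together with commas."""
    return ", ".join([random.choice(SUBJECTS)] + random.sample(STYLES, k=2))

generator = pipeline("text-generation", model="gpt2")

def gpt2_prompt(seed_text: str, max_new_tokens: int = 40) -> str:
    """Let GPT-2 continue the seed text into a fuller prompt."""
    out = generator(seed_text, max_new_tokens=max_new_tokens,
                    do_sample=True, temperature=0.9, num_return_sequences=1)
    return out[0]["generated_text"].replace("\n", " ").strip()

if __name__ == "__main__":
    print(random_list_prompt())
    print(gpt2_prompt("a queen bee character with transparent wings, "))
```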
My prompt: "beautiful blade runner 2049 Hawaiian robot girl face in the sunlight, thin skin, with a hat, white silver hair, sharp focus, photograph taken..."

Models trained specifically for anime use "booru tags" like "1girl" or "absurdres", so I go to danbooru, look at the tags used there, and try to describe the picture I want with those tags (there's also an extension that gives an autocomplete for these tags if you forget how one is properly written); things like "masterpiece, best quality" or "unity cg wallpaper" are more like conventions. Edit: since people looking for this info are finding this comment, I'll add that you can also drag your PNG image directly into the prompt...

Siliconthaumaturgy7593 creates in-depth videos on using Stable Diffusion.

For A1111: using () in the prompt increases the model's attention to the enclosed words, and [] decreases it, or you can use (tag:weight), like (water:1.2) or (water:0.6); if the weight is less than 1.0 it decreases the attention.

Assuming you are using Automatic1111, you copy the file into the 'embeddings' folder, which is a top-level folder inside your Automatic installation. Usually that folder is \stable-diffusion-webui-master, so you'd put the file in \stable-diffusion-webui-master\embeddings. You can rename the file to something else, keeping the .pt extension.

Prompts in order: background details, character details, cloth details, pose details (usually ignored by the AI), image style details (usually ignored by the AI). It depends on what's most important in the image you're attempting to generate.

Even if your picture has multiple subjects, I'd be surprised if that negative prompt affects the similarity of their hair. Like "same haircut", just to grab a random example.

3) Press "Enter" and wait for the prompt to appear. About that huge long negative prompt list: a comparison.

Hi all, I'm working on a new site for people to get inspired by browsing through our collection of AI images and prompts. Works (nearly) every time, and can handle a lot of small details. There are many great prompt-reading tools out there now, but for people like me who just want a simple tool, I built this one. Either way, it should be quite easy to use, and it has a copy-to-clipboard button. List part 2: Web apps (this post).

Pretend you are an expert on generating prompts for AI text-to-image synthesis. You'll be using your list of negative prompt words in the positive prompt, which... Things like "looking away" and "serious eyes" help get the details correctly.

A token is generally all or part of a word, so you can kind of think of it as trying to make all of the words you type be somehow representative of the output.

Then I included the entire list and ran several random seeds and a few different prompts. Depends on what kind of Stable Diffusion you have installed.

Public Prompts: completely free prompts with high generation probability. Getimg.ai: txt2img, img2img, inpainting (also with text), and outpainting on an infinite canvas.

You can replace the example prompts with some of your best working prompts within the super-prompt, so it will follow them as a template. Use only the most important keywords and avoid using sentences or conjunctions.

For today's tutorial I will be using Stable Diffusion XL (SDXL) with the 0.9 VAE, along with the refiner model.

katy perry, full body portrait, standing against wall, digital art by artgerm. Well, back to the finger-drawing board, I guess.

At the end you get it in a JSON format where you can just copy and paste it into the prompt box within the Deforum plugin inside Auto1111/Vlad. In addition, adding a facial expression description is also helpful for generating different angles. It produces very realistic-looking people.
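The Deforum comment above refers to keyframed animation prompts. The sketch below assembles that kind of frame-number-to-prompt JSON in Python; the exact schema can differ between Deforum versions, so treat the format as an approximation and check your plugin's docs before pasting.

```python
# Sketch: building keyframed prompts for the Deforum extension as JSON.
# The frame-to-prompt mapping mirrors the commonly used format (assumption).
import json

keyframes = {
    0: "katy perry, full body portrait, standing against wall, digital art by artgerm",
    60: "katy perry, full body portrait, sitting, digital art by artgerm",
    120: "katy perry, full body portrait, wearing a dress, digital art by artgerm",
}

# Deforum expects string keys, one prompt per keyframe.
deforum_prompts = {str(frame): prompt for frame, prompt in sorted(keyframes.items())}
print(json.dumps(deforum_prompts, indent=2))
```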
The best outputs were chosen out of 4 images each: a person in a cap and gown receiving their diploma, portrait photo, 50mm lens, natural lighting, proud expression, celebratory moment, graduation photography, the new york times, award winning, high resolution.

Now offers CLIP image searching, masked inpainting, as well as text-to-mask inpainting. You can use your own list of styles, characters, and objects, or use the default ones, which are already kinda huge.

I adjusted their prompt a little and added some negative prompts. Clearly, this is interesting to people. Boilerplate, weird punctuation, and nothing at all -- all fail to make Stable Diffusion get excited. Previously I'd just been placing the most important terms at the front. There are some good ideas in here.

So, Dall-E 3 is more robust to all the possibilities of prompt contents.

For example, with the prompt "a man holding an apple, 8k clean" and Prompt S/R "an apple, a watermelon, a gun", you will get one image for "a man holding an apple", one for "a man holding a watermelon", and one for "a man holding a gun".

Next step is to create an AI that creates AIs that create prompts. I delete the artist, then add "zombie" between "a" and "man", for: "a zombie man with a hat..."

Find that look or subject in a gallery (CivitAI, for example) and read the prompts -- take those and experiment, and see which of those prompts work and which ones are padding.

Separate features with commas and never use periods. Be visual and specific. ChatGPT conversation example.

Here's a CFG value gut check: CFG 2-6: let the AI take the wheel. CFG 7-11: let's collaborate, AI!

You select the Stable Diffusion checkpoint PFG instead of SD 1.5. This is great! A megathread for generating good NSFW art.

I hope there will be some knowledge transferring happening here. I've recently found that structuring your prompts in both Midjourney and Stable Diffusion really helps.

Shown in original 448x640. I'd go 2, 4, 3, 1, 5. Listed as Stable in their Text to Image generators.

katy perry, full body portrait, sitting, digital art by artgerm.

It is inherently more likely to produce something beautiful from a prompt that would produce garbage in SDXL. The prompt is overemphasized and goes over the 75-token limit, meaning you got two prompts working separately there, and since this doesn't seem to have been done on purpose, you didn't weight your second prompt properly.

So I'm trying to find prompt examples, but the Discords all removed their specific categories for images like food/humans/creepy for some reason.

About that huge long negative prompt list: you can remove most of them and not notice a quality difference. So far I did a run alongside a normal set of negative prompts (still waiting on the zero-prompt, embeds-only test); it was basically like this in my eyes for a pretty tough prompt/pose. And then I started removing them one by one. I used the same seed and prompt without the entire negative prompt list and then with it.
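For readers who want to run the "remove the negative tokens one by one" test outside the WebUI, here is a hedged sketch using the Hugging Face diffusers library with a fixed seed. The model ID, prompt, and negative-token list are placeholders for your own; the point is only the re-seeding pattern that keeps every run comparable.

```python
# Sketch: fixed-seed negative-prompt ablation with diffusers (not the WebUI).
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",  # placeholder model ID
    torch_dtype=torch.float16,
).to("cuda")

prompt = "portrait photo of a person in a cap and gown receiving their diploma, 50mm lens"
negative_tokens = ["lowres", "bad anatomy", "blurry", "watermark", "extra fingers"]
SEED = 1234

def render(negative_prompt: str, name: str) -> None:
    # Re-seeding before every call keeps the starting noise identical,
    # so any visible change comes from the prompt text alone.
    generator = torch.Generator(device="cuda").manual_seed(SEED)
    image = pipe(prompt, negative_prompt=negative_prompt,
                 guidance_scale=7.0, num_inference_steps=20,
                 generator=generator).images[0]
    image.save(f"ablation_{name}.png")

render(", ".join(negative_tokens), "full_list")
render("", "no_negatives")
for i, token in enumerate(negative_tokens):
    remaining = negative_tokens[:i] + negative_tokens[i + 1:]
    render(", ".join(remaining), f"without_{token.replace(' ', '_')}")
```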
You can also mix these styles effectively, like "sci-fi painting by Ian McQue:1 sci-fi painting by Simon Stalenhag:0...".

I used two different yet similar prompts and did 4 A/B studies with each prompt. Can somebody share a prompt and negative prompt example that will generate beautiful waifus? Ha! Sure.

8: Look at other people's prompts. Write-Ai-Art-Prompts: AI-assisted prompt builder. Quick edit: this post was up for literally one minute and already got 70 views.

Image 1 prompt: Professional oil painting of establishing shot of canal surrounded by verdant ((blue)) modern curved rustic Greek tiled buildings, professional majestic oil painting by Ed Blinkey, Atey Ghailan, Studio Ghibli, by ((Jeremy Mann)), Greg...

If you want to try Stable Diffusion v2 prompts, you can get a free account here (don't forget to choose the SD 2 engine): https://app.usp.ai/presets

I am using 'AUTOMATIC1111 - stable-diffusion-webui', which gives me the option to choose different models that are in the models directory. I downloaded the pruned v3 model and VAE file, but the generated results are much worse than the images on this subreddit.

Probably the coolest singular term to play with in Stable Diffusion. CivitAI has been doing a pretty good job of having a catalog...

Now, make four variations on that prompt that change something about the way they are portrayed.

Something to consider adding is how adding prompts will restrict the "creativity" of Stable Diffusion as you push it into... Since Stable Diffusion 3 acts like a language model, these descriptive and flowery prompts are designed to generate stunning images.

Using "A knight in shining armor (((holding a sword)))" would do the same thing as "A knight in shining armor (holding a sword:1.5)"; except it's faster to type :1.5.

Stable Diffusion Modifier Studies: lots of styles with correlated prompts. A simple standalone viewer for reading prompts from Stable Diffusion-generated PNGs outside the webui.

Sharing a prompt is like sharing a link to a picture.

But typing a prompt into a word processor under the following headings seems to streamline getting a usable result no end. Seeds are crucial for understanding how Stable Diffusion interprets prompts and allow for controlled experimentation. Prompting a mix of celebrity names (in positive and negative prompts) is still the best way I've seen to influence face shapes, nose shapes, hairstyles, etc.

Basic information required to make a Stable Diffusion prompt -- prompt structure: ...

Mega Prompt Post, first one: Prompt: light azure armor!!! long wild white hair!! covered chest!!! fantasy, d & d, intricate ornate details, digital painting, pretty face!!, symmetry, concept art, sharp focus, illustration, art by artgerm! greg rutkowski magali villeneuve wlop! ilya kuvshinov!!, octane render
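Since several comments above recommend filling a prompt skeleton in under fixed headings, here is a small sketch of that idea in Python. The field names and their order are illustrative choices, not a required Stable Diffusion schema.

```python
# Sketch: assembling a prompt from named fields, mirroring the
# "type it under headings in a word processor" workflow. Field names
# are my own illustration (assumption), not an official format.
FIELD_ORDER = ["subject", "character_details", "clothing", "pose",
               "background", "medium", "style", "lighting", "quality"]

def build_prompt(**fields):
    # Keep keywords comma-separated; skip anything left blank.
    parts = [fields[k] for k in FIELD_ORDER if fields.get(k)]
    return ", ".join(parts)

print(build_prompt(
    subject="portrait photo of a dungeons and dragons half-elf ranger",
    clothing="weathered leather armor",
    background="misty forest",
    medium="photorealistic concept art",
    lighting="soft rim lighting",
    quality="sharp focus, highly detailed",
))
```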
I'm not sure how you could compare models this way. You will follow the instructions of the user on their ideas to generate prompts; here are some examples: prompt1, prompt2, prompt3.

Take an image of a friend from their social media, drop it into img2img and hit "Interrogate"; that will guess a prompt based on the starter image -- in this case it would say something like: "a man with a hat standing next to a blue car, with a blue sky and clouds by an artist".

So we trained a GPT-2 model on thousands of prompts, and we dumped a bit of Python, HTML, CSS and JS to create AIPrompt.io: https://aiprompt.io

I keep older versions of the same models because I can't decide which one is better among them, let alone decide which one is better overall.

The updated Lexica.art has a "god mode" where you can visualize hundreds of images for any prompt.

These sample images were created locally using Automatic1111's web UI, but you can also achieve similar results by entering prompts one at a time into your distribution/website of choice.

List part 1: Miscellaneous systems. GPU renting services. A lot of these are useless, though.

Since it is using multi-prompting and weights, use it for Stable Diffusion 2.1 to create your txt2img.

I used the model "protogenX53Photorealism_10.ckpt", with "anything-v4.vae.pt" as the VAE, and the following settings: Euler A, 32 steps, CFG 5.

Prompt warnings: be careful of copying and pasting prompts from other users' shots and expecting them to work consistently across all your shots. Unlikely that your image will look like their image.

A handpainted artwork by Alfons Mucha and Aaron Miller of the face of a pretty woman in a futuristic body suit armor, she is centered in the picture, intricate, trending on artstation, highly detailed, oil painting.

You can check it out at instantart.io; it's a great way to explore the possibilities of Stable Diffusion and AI.

That picture itself might be under some copyright limitations, or be covered by trademarks, but sharing the link to it is not infringement per se.

Stable Diffusion generates images based on given prompts. I will provide you the basic information required to make a Stable Diffusion prompt; you will never alter the structure in any way and will obey the following guidelines.

Prompt syntax is not specified in Stable Diffusion models; it's up to the UI implementation, so it can vary, and how you increase the weight on a prompt depends on the implementation. However, the basics for the A1111 WebUI are: parentheses around (words) increase their weight by x1.1; brackets around [words] reduce their weight by x0.9; you can also specify prompt term weights with a colon, like word:1.2. For NMKD: use + after a word/phrase to make it more impactful, or...
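To make the weighting arithmetic above concrete: in the A1111 WebUI each pair of parentheses multiplies attention by 1.1 and each pair of brackets divides it by 1.1 (roughly 0.9). The helper below only builds prompt text in that syntax; the actual weighting happens inside the WebUI.

```python
# Sketch: helpers around A1111 attention syntax. "(...)" multiplies a
# phrase's weight by 1.1 per level, "[...]" divides by 1.1, and
# "(phrase:1.5)" sets the weight explicitly.
def nested_weight(levels: int) -> float:
    """Effective weight of a phrase wrapped in `levels` parentheses."""
    return round(1.1 ** levels, 3)

def emphasize(phrase: str, weight: float) -> str:
    """Return the explicit-weight form, e.g. (holding a sword:1.331)."""
    return f"({phrase}:{weight})"

if __name__ == "__main__":
    # "(((holding a sword)))" is roughly "(holding a sword:1.331)",
    # a bit lower than the ":1.5" the comment above estimates.
    print(nested_weight(3))                        # 1.331
    print(emphasize("holding a sword", nested_weight(3)))
    print(emphasize("water", 0.6))                 # below 1.0 de-emphasizes
```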
List part 3: Google Colab notebooks.

4) Type "Now, generate a negative prompt for it." and press Enter.

You should specify that your syntax instructions apply only to Automatic1111 -- despite the dogma of this sub, it's not even close to the only implementation in use, and that syntax only applies to it.

These prompts were put into Stable Diffusion and I received these -- which is fine and totally unsurprising, given that it doesn't take much experience with Stable Diffusion to imagine how unspectacular the results would be if you took any of those three negative prompts and rendered them as positive prompts. Thanks a lot. I just got done investigating this exact negative prompt list with an A1111 local install.

And then the robot apocalypse happens 20 years faster than predicted, but we all die (or at least have our hands and feet mutilated) from being used as materials for 3D modern art.

9: Good luck, and always be testing! Study on understanding Stable Diffusion w/ the Utah Teapot. Almost all the tutorials go over either just the basics or some specific details that aren't necessarily...

It's a free AI image generation platform based on Stable Diffusion; it has a variety of fine-tuned models and offers unlimited generation. Currently, there are about 50,000 images with prompts, but I will be adding more daily. You can also choose between portrait & landscape mode, and it should be fully responsive.

Stable Diffusion: Prompt Examples and Experiments. Stable Diffusion Prompt Reader -- currently only supports AUTOMATIC1111's webUI. Hopefully other people find this as useful a tool for prompt writing as I do!

Parentheses: used to influence the weight of words in the prompt, with higher numbers indicating more importance.

The WebUI saves prompts and parameters into the PNG; you can then drag it to the "PNG Info" tab to read them and push them to txt2img or img2img to carry on where you left off.

Settings for all eight stayed the same: Steps: 20, Sampler: Euler a, CFG scale: 7, Face restoration: CodeFormer, Size: 512x768, Model hash: 7460a6fa. So 4 seeds per prompt, 8 total.

Monochrome or black and white -- massive influence: will make everything black and white. Sepia -- mild influence: will give a sepia colour palette.

I wanted to share a free resource compiling everything I've learned, in hopes that it will help others. Prompt-engineering voodoo can have a strong influence, but the results aren't predictable.

"Civitai" and "prompthero": all models have examples and prompts for each one. Not really a "prompt maker" so much as identifying EVERYTHING that is within the image. Also, repeating an instruction can help too.

Steps: 50, Sampler: Euler a, CFG scale: 15, Seed...

Great way to create more unique faces (very short guide). This is the start of the prompt with 32 samples: change the number in square brackets to instruct SD on which sample step to swap to the alternate prompt word, like this: the # represents the step. A sample step of 2 will focus on the first word of the prompt until that point, and the other word...

Provide the enhanced prompt in a code block with a "copy" button.
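Several of the scattered numbered steps above describe asking ChatGPT to enhance a prompt and then produce a negative prompt for it. Below is a sketch of the same idea via the OpenAI Python client (v1 interface); the model name and instruction wording are illustrative assumptions, and an API key is expected in OPENAI_API_KEY.

```python
# Sketch: a "super-prompt" style enhancer using the OpenAI chat API.
# Assumptions: `openai` package (v1 client), OPENAI_API_KEY set,
# placeholder model name and system instructions.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "Pretend you are an expert on generating prompts for AI text-to-image "
    "synthesis. Use only the most important keywords, separate features with "
    "commas, never use sentences, and keep the enhanced prompt under 150 words."
)

def enhance(idea: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": (
                "Enhance this prompt, then generate a negative prompt for it: "
                + idea
            )},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(enhance("a dungeons and dragons half-elf ranger in a misty forest"))
```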
For an example of a particular look to a picture, note the cameras mentioned in various prompts (e.g. Sony A7, Fuji XT3, etc.) and see what they do to your picture. For example, if I have a good shot of a model, I like to try different camera shots.

Draw Things: locally run Stable Diffusion for free on your iPhone. Artsio.xyz: a one-stop shop to search and discover prompts, and quickly remix/create with Stable Diffusion. Avyn: a search engine with 9.6 million images generated by Stable Diffusion; it also allows you to select an image and generate a new image based on its prompt.

I'm looking to draw character reference sheets using SD. Repeatable prompt format for DnD character generation.

Looks like most if not all of these are just taken from https://publicprompts.art/. Requires a membership.

Prompts (modifiers) to get Midjourney style in Stable Diffusion. I've been testing some prompts such as... but it is not working properly; I was wondering if anyone had better prompts/techniques for that specific issue.

Went in-depth tonight trying to understand the particular strengths and styles of each of these models. I particularly liked the megastructure vibes from McQue, the glowing pink and teal lights from Stalenhag, and the sheer alien-ness of the Giger samples.

This is a great guide. I've personally been using this to quickly find a style I like, grab the prompt, and just change the subject. It's possible to learn a lot.

Google Lens: it'll find images in your images and details in your details of the images in the images, then link you to every store page associated with the things that make it the most money for "affiliate linking".

5) Copy the lines into SD and run the generation. Now, that said, it's not that simple.

Negative prompt is the best way of controlling sizes, amounts and shapes. Negative prompt: legs, waist, body deformities, inhuman hands, writings, signature.

Try his prompt: (((full body))) bnw artistic low light nude photography of a white woman standing illuminated from the right side, looking up to the right, back to the viewer, faint expression, short curly hair, black background, dark room, (((single light source)))

Stable Diffusion Random Prompts Generator. One of my prompts was for a queen bee character with transparent wings... You can also generate a file that saves all parameters, including the seed phrase.

Working on realistic photos of people (prompt/parameters in comments): I built off a prompt I saw by u/northdeer earlier today. katy perry, full body portrait, wearing a dress, digital art by artgerm.

More examples of what you think are good SDXL prompts, in your ChatGPT prompt, will help it produce more focused outputs.

Stable Diffusion image 2 using 3D rendering.

I came up with a mostly automated method (in the Automatic1111 WebUI) to test whether your negative prompts are doing what you think they are: use the X/Y plot script in S/R mode with your list of words to get Stable Diffusion to reveal whether it knows what they mean. S/R stands for search/replace, and that's what it does: you input a list of words or phrases, it takes the first from the list and treats it as the keyword, and replaces all instances of that keyword with the other entries from the list.
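The Prompt S/R trick above boils down to simple text substitution. This sketch previews the prompt variants the X/Y plot script would run; the actual image grid is still generated inside the WebUI.

```python
# Sketch: the substitution behind the X/Y plot script's "Prompt S/R" mode.
# The first entry is the search keyword; each entry (including the first)
# yields one prompt variant.
def prompt_sr(prompt: str, entries: list[str]) -> list[str]:
    keyword = entries[0]
    if keyword not in prompt:
        raise ValueError(f"keyword {keyword!r} not found in prompt")
    return [prompt.replace(keyword, entry) for entry in entries]

if __name__ == "__main__":
    for variant in prompt_sr("a man holding an apple, 8k clean",
                             ["an apple", "a watermelon", "a gun"]):
        print(variant)
```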
Over the last few months, I've spent nearly 200 hours of focused research, testing, and experimenting with Stable Diffusion prompts to figure out how to consistently create realistic, high-quality images. Here's a breakdown of what's included: 25 animal prompts, 35 architecture prompts, 43 anime prompts. Use these prompts as a starting point, for inspiration, or to replicate the exact images from our examples. To save people's time finding the link in the comment section, here's the link: https://openart...

So I decided to try some camera prompts and see if they actually matter. Sorry about that. **I didn't see a real difference.** Prompts: man, muscular, brown hair, green eyes, Nikon Z9, Canon R6, Fuji X-T5, Sony A7.

I think my personal favorite out of these is Counterfeit for the artistic 2D style. Analog Madness. Uber Realistic Porn Merge (URPM) is one of the best Stable Diffusion models out there, even for non-nude renders.

The prompt book shows different examples based on the official guide, with some tweaks and changes. PromptoMania: highly detailed prompt builder.

Prompt: A beautiful ((Ukrainian Girl)) with very long straight hair, full lips, a gentle look, and very light white skin. She wears a medieval dress.

I have used the positive prompt: marie_rose, 3d, bare_shoulders, barefoot, blonde_hair, blue_eyes, colored nails, freckles on the face, braided hair, pigtails. Note: the positive prompt can be anything with a prompt related to hands or feet.

To generate realistic images of people, I found that adding "portrait photo" at the beginning of the prompt is extremely effective. Some fictional characters have face shapes and hairstyles that can influence the output. Keep a file of prompt ideas that you have copied, and try them out.

Friendly reminder that we can use the command line argument "--gradio-img2img-tool color-sketch" to color it directly in the img2img canvas. (The downside is it can't zoom, so it's not suitable for high-resolution/complicated images.)

The Automatic1111 version saves the prompts and parameters to the PNG file. The UI, model, image dimensions, seed and other factors determine whether your image is going to look like their image. When I get something I want, I create a duplicate of the file in my file manager, change the extension to .txt, open it in a text editor, select-all/delete, then paste my prompt and settings into that text file.
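Since the WebUI embeds the prompt and parameters in its PNG output, the sidecar-.txt workflow above can be automated. A minimal sketch with Pillow, assuming the images were saved by A1111 with metadata enabled; images from other tools may store nothing here.

```python
# Sketch: read the "parameters" text chunk A1111 writes into its PNGs
# (the same text the "PNG Info" tab shows) and save it to a sidecar .txt.
import sys
from pathlib import Path
from PIL import Image

def read_parameters(png_path: Path) -> str | None:
    with Image.open(png_path) as img:
        # A1111 stores everything in a single "parameters" text chunk.
        return img.info.get("parameters")

if __name__ == "__main__":
    for arg in sys.argv[1:]:
        path = Path(arg)
        params = read_parameters(path)
        if params is None:
            print(f"{path}: no embedded parameters found")
            continue
        sidecar = path.with_suffix(".txt")
        sidecar.write_text(params, encoding="utf-8")
        print(f"{path} -> {sidecar}")
```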