<h1>Blender + Stable Diffusion = 🪄</h1>

<p><em>Published October 16, 2022 · <a href="https://jnack.com/blog/2022/10/16/blender-stable-diffusion-%f0%9f%aa%84/">jnack.com</a></em></p>

<p>Easy placement/movement of 3D primitives -> realistic/illustrative rendering has long struck me as extremely promising. Using tech like StyleGAN to render from 3D can produce interesting results, but it's been difficult to bring the level of quality &amp; consistency up to what Adobe users demand.</p>

<p>Now with Stable Diffusion (and, one hopes, other diffusion models in the future) attached to Blender (and, one hopes, other object-manipulation tools), the vision is getting closer to reality:</p>

<figure class="wp-block-embed is-type-rich is-provider-twitter wp-block-embed-twitter"><div class="wp-block-embed__wrapper">
<blockquote class="twitter-tweet" data-width="550" data-dnt="true"><p lang="en" dir="ltr">This <a href="https://twitter.com/hashtag/StableDiffusion?src=hash&amp;ref_src=twsrc%5Etfw">#StableDiffusion</a> add-on for Blender looks amazing. <a href="https://twitter.com/ai_render?ref_src=twsrc%5Etfw">@AI_Render</a> renders an AI-generated image based on a text prompt and your scene in Blender. <a href="https://t.co/v3IvXBkcpi">https://t.co/v3IvXBkcpi</a> <a href="https://t.co/YTVqCFvekf">pic.twitter.com/YTVqCFvekf</a></p>&mdash; hardmaru (@hardmaru) <a href="https://twitter.com/hardmaru/status/1581673133148639235?ref_src=twsrc%5Etfw">October 16, 2022</a></blockquote>
</div></figure>