For the full issue on unzip.dev click here.
What?
Stable Diffusion is an open-source model that can generate images from text prompts.
Why is it so cool? (Don't we have DALL·E 2 already?)
- It is open source and free for commercial use.
- You can run it on consumer hardware.
- It generates really impressive results.
Here is the result of a prompt I made for indie hackers: "a group of digital people helping each other build things on the internet":

⚗️ Example
You can run your own text prompts here.
🧞 Some forecasts:
- Videos and 3D models: a video is just a sequence of images, and 3D models can be reconstructed from multiple views, so I can see models that produce video arriving soon. Imagine creating movies from a text idea. Maybe someone could finally create new seasons of Firefly? I can also see this impacting video game creation.
- UI/UX: mocking up UI/UX designs could be done relatively easily now. I can see tools targeting this niche.
- Employment ripples: I’ve already seen some artists raising concerns about these models. I assume lower-end art creators will have a hard time competing with them, except where human creativity or a distinctive personal style is required.
I hope you liked it. For more developer trends like this one, check out unzip.dev (I post every few weeks), and for the full Stable Diffusion article, click here.