r/StableDiffusion Sep 11 '22

A better (?) way of doing img2img by finding the noise which reconstructs the original image

912 Upvotes


155

u/Aqwis Sep 11 '22 edited Sep 11 '22

I’ve made quite a few attempts at editing existing pictures with img2img. However, at low strengths the picture tends to be modified too little, while at high strengths it is modified in undesired ways. /u/bloc97 posted here about a better way of doing img2img that would allow for more precise editing of existing pictures: by finding the noise that will cause SD to reconstruct the original image.

I made a quick attempt at reversing the k_euler sampler, and ended up with the code I posted in a reply to the post by bloc97 linked above. I’ve refined the code a bit and posted it on GitHub here:

link to code
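Roughly, the idea looks like this (a simplified sketch of the approach, not necessarily the exact linked code; the image preprocessing and the final normalization in particular may differ):

```python
import numpy as np
import torch
import k_diffusion as K
from einops import rearrange

def find_noise_for_image(model, image, prompt, steps=50, cond_scale=1.0):
    # Encode the PIL image into SD's latent space
    img = torch.from_numpy(np.array(image)).float() / 255.0
    img = rearrange(img, 'h w c -> 1 c h w').to(model.device) * 2.0 - 1.0
    x = model.get_first_stage_encoding(model.encode_first_stage(img))

    uncond = model.get_learned_conditioning([''])
    cond = model.get_learned_conditioning([prompt])

    dnw = K.external.CompVisDenoiser(model)
    sigmas = dnw.get_sigmas(steps).flip(0)  # low noise -> high noise
    s_in = x.new_ones([x.shape[0]])

    with torch.no_grad():
        for i in range(1, len(sigmas)):
            # Classifier-free guidance: batched uncond + cond pass
            x_in = torch.cat([x] * 2)
            sigma_in = torch.cat([sigmas[i] * s_in] * 2)
            cond_in = torch.cat([uncond, cond])

            c_out, c_in = [K.utils.append_dims(k, x_in.ndim)
                           for k in dnw.get_scalings(sigma_in)]
            t = dnw.sigma_to_t(sigma_in)

            eps = model.apply_model(x_in * c_in, t, cond=cond_in)
            denoised_uncond, denoised_cond = (x_in + eps * c_out).chunk(2)
            denoised = denoised_uncond + (denoised_cond - denoised_uncond) * cond_scale

            # A normal Euler step walks *down* the sigma schedule; traversing
            # the flipped schedule makes each step add noise instead
            d = (x - denoised) / sigmas[i]
            x = x + d * (sigmas[i] - sigmas[i - 1])

    # Normalize so the result behaves like unit-variance starting noise
    return x / x.std()
```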

If image is a PIL image and model is a LatentDiffusion object, then find_noise_for_image can be called like this:

noise_out = find_noise_for_image(model, image, 'Some prompt that accurately describes the image', steps=50, cond_scale=1.0)

The output noise tensor can then be used for image generation as a “fixed code” (to use a term from the original SD scripts): in other words, instead of generating a random noise tensor (and possibly adding that noise tensor to an image for img2img), you use the noise tensor generated by find_noise_for_image.
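For example (a sketch, not the exact scripts; the CFG wrapper and parameter values here are illustrative, assuming the k-diffusion Euler sampler):

```python
import torch
import k_diffusion as K

class CFGDenoiser(torch.nn.Module):
    """Standard classifier-free guidance wrapper for k-diffusion samplers."""
    def __init__(self, inner_model):
        super().__init__()
        self.inner_model = inner_model

    def forward(self, x, sigma, uncond, cond, cond_scale):
        x_in = torch.cat([x] * 2)
        sigma_in = torch.cat([sigma] * 2)
        cond_in = torch.cat([uncond, cond])
        uncond_out, cond_out = self.inner_model(x_in, sigma_in, cond=cond_in).chunk(2)
        return uncond_out + (cond_out - uncond_out) * cond_scale

dnw = K.external.CompVisDenoiser(model)
sigmas = dnw.get_sigmas(50)

# Use the recovered noise as the "fixed code": scale it to the first sigma
# instead of drawing a fresh torch.randn(...) tensor
x = noise_out * sigmas[0]
samples = K.sampling.sample_euler(
    CFGDenoiser(dnw), x, sigmas,
    extra_args={
        'uncond': model.get_learned_conditioning(['']),
        'cond': model.get_learned_conditioning(['Some prompt that accurately describes the image']),
        'cond_scale': 7.5,
    })
images = model.decode_first_stage(samples)  # back to pixel space
```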

This method isn’t perfect – deviate too much from the prompt used when generating the noise tensor, and the generated images are going to start differing from the original image in unexpected ways. Some experimentation with the different parameters and making the prompt precise enough will probably be necessary to get this working. Still, for altering existing images in particular ways I’ve had way more success with this method than with standard img2img. I have yet to combine this with bloc97’s Prompt-to-Prompt Image Editing, but I’m guessing the combination will give even more control.

All suggestions for improvements/fixes are highly appreciated. I still have no idea what the best setting of cond_scale is, for example, and in general this is just a hack that I made without reading any of the theory on this topic.

Edit: By the way, the original image used in the example is from here and is the output of one of those old "this person does not exist" networks, I believe. I've tried it on other photos (including of myself :), so this works for "real" pictures as well. The prompt that I used when generating the noise tensor for this was "Photo of a smiling woman with brown hair".

78

u/GuavaDull8974 Sep 11 '22

This is spectacular! I've already made a feature request for it on the webui repo; do you think you can produce an actually working commit for it?

https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/291

4

u/redboundary Sep 11 '22

Isn't it the same as setting "masked content" to original in the img2img settings?

48

u/animemosquito Sep 11 '22

No, this finds which "seed" would basically lead SD to generate the original image, so that you can modify it in less destructive ways.

22

u/MattRix Sep 11 '22

yep exactly! Though to be somewhat pedantic it’s not the seed, it’s the noise itself.

9

u/animemosquito Sep 11 '22

Yeah, that's a good distinction to make. I'm trying to keep it accessible and less complicated, but it's important to note that the seed is what's used to produce the initial noise, which is then diffused/iterated on to get to the final product.
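In code terms (a toy illustration, not SD's exact pipeline):

```python
import torch

torch.manual_seed(1234)            # the seed only fixes the RNG state...
noise = torch.randn(1, 4, 64, 64)  # ...which deterministically produces this starting noise
# Diffusion then iteratively denoises `noise` into the final latent/image.
```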

5

u/Trainraider Sep 12 '22

It's a really important distinction because there's a lot more potential entropy in the noise than in the seed. There may be a noise pattern that results in the image, but there probably isn't a seed that makes that specific noise pattern.

9

u/wildgurularry Sep 12 '22

It's true... there are only 2^32 possible seeds, but almost 2^6291456 possible noise patterns for a 512x512 image.
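(That exponent is just the bit count of an 8-bit RGB image; rough numbers:)

```python
# Back-of-the-envelope comparison: seed space vs. raw noise space
seed_states = 2 ** 32
noise_bits = 512 * 512 * 3 * 8   # pixels x RGB channels x bits per channel
print(noise_bits)                # 6291456, i.e. ~2**6291456 possible patterns
```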

13

u/ldb477 Sep 14 '22

That’s at least double

1

u/Lirezh Sep 15 '22

There might be countless noise patterns in math, but not in practice. The vast majority of those patterns will almost certainly produce identical result images, which is also true of the 2^32 seed variations: a lot of them are probably going to show the same result.

7

u/almark Sep 12 '22

This means we can keep the subject we like and alter it: move the model, change poses, change different things in the photo.

1

u/[deleted] Sep 12 '22

... make perfecto hands, I'd hazard a guess

3

u/almark Sep 12 '22

hands are floppy things - laughs

I still have nightmares from my first glance at SD hands.