AI image generation fixes your bad photos and kills photography?
We don’t quite understand the appeal of asking an AI for a photo of a gorilla eating a waffle while wearing headphones. However, [Micael Widell] shows something in a recent video that could be the best use we have ever seen of DALL-E 2. Instead of concocting new photos, you can apparently use the same technology to clean up your own subpar images. You can see his video below. The part about DALL-E 2 editing starts at around 4:45.
[Nicholas Sherlock] fed the AI an image of a blurry ladybug and asked it to bring the subject into focus. It did. He also fed it other images and asked it to make subtle variations of them, which it handled well too.
As [Micael] points out, right now the results aren’t perfect, but they really aren’t bad. What do you think systems like this will be able to do in a few years?
We may not completely agree with [Micael’s] thesis, however. He thinks photography will be dead once you can just ask the AI to touch up your smartphone photos, or ask it to create whatever image you want out of whole cloth. Of course, that will change things. But people still ride horses. People still take black-and-white photos on film even though we don’t have to anymore.
Of course, if you made a living making horseshoes or selling black-and-white film, you probably had to adapt to changing times. Some did and – of course – some didn’t. This is probably no different. It may be that “non-augmented” photography becomes more of a niche. On the other hand, certified non-aug photos (remember, you heard that phrase here first) might become more valuable as art, in the same way that a handcrafted vase is worth more than one stamped out on a factory line. Who knows?
What do you think? Is photography dying? Or just changing? If AI takes over for photographers, what will it tackle next?
We think our Hackaday jobs are safe for a bit longer. We do wonder, though, if DALL-E could learn to take a photographic journey through time.