I was inspired by Roger Alsing’s supposed “genetic” image compression. It begs for further experimentation!
Here’s my second batch of image reconstructions using Times New Roman characters. The algorithm is a brute-force affair: new characters are colored & positioned randomly, and any characters that make the canvas look more like the original image are saved. And that’s about it. Oh, and the font sizes start large (5120pt) and end small (10pt), so that fine details have a chance of survival.
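For the curious, the core loop looks roughly like this in Python with Pillow. This is a minimal sketch, not the exact code behind these images: the file names, character set, error metric, and attempts-per-size count are all placeholders.

```python
import random
import string
from PIL import Image, ImageDraw, ImageFont, ImageChops

target = Image.open("mona_lisa.png").convert("RGB")   # placeholder filename
canvas = Image.new("RGB", target.size, "white")

def error(a, b):
    # Sum of absolute per-pixel, per-channel differences; lower means a closer match.
    hist = ImageChops.difference(a, b).histogram()
    return sum(count * (i % 256) for i, count in enumerate(hist))

best = error(canvas, target)

# Font sizes start large and end small (5120pt down to 10pt), as described above.
size = 5120
while size >= 10:
    font = ImageFont.truetype("Times New Roman.ttf", size)  # font path varies by system
    for _ in range(500):  # attempts per size, chosen arbitrarily
        trial = canvas.copy()
        draw = ImageDraw.Draw(trial)
        char = random.choice(string.ascii_letters)
        pos = (random.randrange(target.width), random.randrange(target.height))
        color = tuple(random.randrange(256) for _ in range(3))
        draw.text(pos, char, fill=color, font=font)
        e = error(trial, target)
        if e < best:  # keep the character only if it brings the canvas closer to the original
            canvas, best = trial, e
    size //= 2

canvas.save("reconstruction.png")
```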
Time lapse of the Mona Lisa reconstruction on YouTube, with silly music.
My first batch uses a different algorithm. Each canvas allocates a fixed quantity of letters and progressively mutates them, trying to mimic the original image as closely as possible. This technique is more akin to image compression (a rough sketch follows below). This batch is still in progress; it’s very slow. I’ll post these when they’re ready!
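Here’s roughly what that mutation-based variant looks like, again with Pillow. The letter budget, the “replace one random letter, keep it only if the error drops” mutation rule, and all the numbers are my own guesses rather than the actual code:

```python
import random
import string
from functools import lru_cache
from PIL import Image, ImageDraw, ImageFont, ImageChops

target = Image.open("mona_lisa.png").convert("RGB")   # placeholder filename
W, H = target.size
N_LETTERS = 150          # the fixed budget of letters allocated to the canvas

@lru_cache(maxsize=None)
def load_font(size):
    return ImageFont.truetype("Times New Roman.ttf", size)  # font path varies by system

def random_letter():
    return {
        "char": random.choice(string.ascii_letters),
        "pos": (random.randrange(W), random.randrange(H)),
        "size": random.randrange(10, 200),
        "color": tuple(random.randrange(256) for _ in range(3)),
    }

def render(letters):
    img = Image.new("RGB", (W, H), "white")
    draw = ImageDraw.Draw(img)
    for letter in letters:
        draw.text(letter["pos"], letter["char"], fill=letter["color"],
                  font=load_font(letter["size"]))
    return img

def error(a, b):
    # Same error metric as the brute-force sketch above.
    hist = ImageChops.difference(a, b).histogram()
    return sum(count * (i % 256) for i, count in enumerate(hist))

letters = [random_letter() for _ in range(N_LETTERS)]
best = error(render(letters), target)

for step in range(10_000):
    i = random.randrange(N_LETTERS)
    saved = letters[i]
    letters[i] = random_letter()   # mutate: replace one letter with a fresh random one
    e = error(render(letters), target)
    if e < best:
        best = e                   # the mutation helped, keep it
    else:
        letters[i] = saved         # otherwise revert
    # Re-rendering the whole canvas every step is what makes this approach so slow.

render(letters).save("compressed.png")
```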
BEAUTIFUL BEAUTIFUL! JUST BEAUTIFUL!
Astonishing…these are amazing
Genetic Mutation Algorithms are THE awesome. I’d love to see what heuristics you finally end up with.
Nice, visually stunning work on this sketch! Processing + Python?
What if the text that makes up the image is related somehow… Poems about butterflies, err something. Dialog from films that mention Mona Lisa? Maybe just use the text from the wiki page for Mona Lisa and Butterfly?
Thanks Christian! These were pure Python; I’ll port this to Processing or Cocoa when I pick it up again.
I did experiment with limited letter sets; I have pictures of myself spelled with “Z-A-C-H” here somewhere… This begs for more experimentation: blend modes, different fonts, whole words (with rotation?)…
There may even be a way to apply the effect to a video stream in realtime, using some creative math and a little cheating ;)