Reconstructing Images With Text

I was inspired by Roger Alsing’s supposed “genetic” image compression. It begs for further experimentation!

Here’s my second batch of image reconstruction using Times New Roman characters. The algorithm is a brute-force affair: new characters are colored and positioned randomly, and any character that makes the canvas look more like the original image is kept. And that’s about it. Oh, and the font sizes start large (5120pt) and end small (10pt), so that fine details have a chance of survival.
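The loop above can be sketched in a few lines. This is a minimal sketch, not the original code: it assumes Pillow, the sum-of-squared-differences metric and the size schedule are my stand-ins, and Pillow’s default font substitutes for Times New Roman so the snippet runs anywhere.

```python
# Brute-force reconstruction sketch: throw random characters at a canvas,
# keep only the ones that bring it closer to the target image.
import random
from PIL import Image, ImageDraw, ImageFont

GLYPHS = "abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ"

def difference(a, b):
    """Sum of squared per-pixel differences between two same-size RGB images."""
    return sum((pa - pb) ** 2
               for qa, qb in zip(a.getdata(), b.getdata())
               for pa, pb in zip(qa, qb))

def reconstruct(target, attempts_per_size, sizes=(512, 128, 32, 10)):
    canvas = Image.new("RGB", target.size, "white")
    best = difference(canvas, target)
    for size in sizes:                        # big glyphs first, detail last
        try:
            font = ImageFont.load_default(size)   # Pillow >= 10.1
        except TypeError:
            font = ImageFont.load_default()       # older Pillow: fixed size
        for _ in range(attempts_per_size):
            trial = canvas.copy()
            draw = ImageDraw.Draw(trial)
            pos = (random.randrange(target.width), random.randrange(target.height))
            color = tuple(random.randrange(256) for _ in range(3))
            draw.text(pos, random.choice(GLYPHS), fill=color, font=font)
            score = difference(trial, target)
            if score < best:                  # keep only improvements
                canvas, best = trial, score
    return canvas
```

Since a trial is kept only when it lowers the error, the result can never look less like the target than the blank canvas did; everything else is just patience.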

[Images: monarch reconstruction, stages 1–7, plus the original]

[Images: Mona Lisa original and reconstruction]
Time lapse of the Mona Lisa reconstruction on YouTube, with silly music.

[Images: moth reconstruction, stages 0–7, plus the original]

My first batch uses a different algorithm. Each canvas is allocated a certain quantity of letters, which are progressively mutated to mimic the original image as closely as possible. This technique is more akin to image compression. This batch is still in progress; it’s very slow. I’ll post these when they’re ready!
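That mutate-a-fixed-pool approach can be sketched as below. Again this is a guess at the shape of the thing, assuming Pillow; the pool size, the mutation scheme (re-randomizing one letter at a time and reverting harmful changes), and the error metric are my assumptions, and font sizing is omitted. Re-rendering the whole canvas for every single mutation also makes the slowness easy to believe.

```python
# Fixed-pool mutation sketch: a set number of letters, each repeatedly
# mutated; mutations that raise the error are reverted.
import random
from PIL import Image, ImageDraw

def render(letters, size):
    """Draw the whole letter pool onto a fresh white canvas."""
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    for char, pos, color in letters:
        draw.text(pos, char, fill=color)
    return img

def error(a, b):
    """Sum of squared per-pixel differences between two same-size RGB images."""
    return sum((pa - pb) ** 2
               for qa, qb in zip(a.getdata(), b.getdata())
               for pa, pb in zip(qa, qb))

def compress(target, n_letters=50, steps=200):
    w, h = target.size

    def random_letter():
        return (random.choice("abcdefghijklmnopqrstuvwxyz"),
                (random.randrange(w), random.randrange(h)),
                tuple(random.randrange(256) for _ in range(3)))

    pool = [random_letter() for _ in range(n_letters)]
    best = error(render(pool, target.size), target)
    for _ in range(steps):
        i = random.randrange(n_letters)
        old = pool[i]
        pool[i] = random_letter()             # mutate one letter wholesale
        score = error(render(pool, target.size), target)
        if score < best:
            best = score                      # keep the improvement
        else:
            pool[i] = old                     # revert a harmful mutation
    return pool, best
```

The payoff of this variant is that the finished pool, not the bitmap, is the artifact: a short list of (character, position, color) tuples, which is what makes it feel like compression.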

6 thoughts on “Reconstructing Images With Text”

  1. What if the text that makes up the image is related somehow… Poems about butterflies, err something. Dialog from films that mention Mona Lisa? Maybe just use the text from the wiki page for Mona Lisa and Butterfly?

  2. Thanks Christian! These were pure Python, I’ll port this to Processing or Cocoa when I pick it up again.

    I did experiment with limited letter sets, I have pictures of myself spelled with “Z-A-C-H” here somewhere… This begs for more experimentation — blend modes, different fonts, whole words (with rotation?)…

    There may even be a way to apply the effect to a video stream in realtime, using some creative math and a little cheating ;)
