Every time I watch this, I notice something new. Thanks to the Northwest Animation Festival for screening this on the big screen this year, twice! What a treat.
I feel like animation is the most immersive medium available today. It has beauty, style, motion (which is—by definition—unreal), a guiding story, and a depth of subject matter that catches you off guard. (You don’t see it coming in Fallin’ Floyd.)
Try it yourself: watch animation on the big screen. Does it pull you inside?
Here are more gems from the Festival that I want to inhabit. I want to climb inside the screen, and experience these worlds:
The LED strip is the same model used in the HypnoLamp (LPD8806). The difference, of course, is that these animations are as bright and attention-grabbing as possible!
Some people loved it. In dark environments it was probably too bright, and painful to look at. Next time, I’d like to diffuse the light somehow.
Attaching objects to your bicycle helmet is unwise, because it may get snagged in the event of an accident. (I saw plenty of accidents that night. WNBR is a dangerous ride, because many of the participants are inebriated, or inexperienced cyclists.) As a responsible adult, I recommend that you do NOT affix LEDs to your bicycle helmet. … Instead, launch a Kickstarter campaign for translucent helmets with embedded LEDs, and send me the link so I can sponsor it!
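For the curious, the LPD8806 wire format is simple enough to sketch: each LED takes three 7-bit color bytes (in G, R, B order) with the high bit set, followed by zero "latch" bytes to push the data down the chain. Here's an illustrative Python encoder. The function name is mine, and a real setup would shove these bytes out over SPI from a microcontroller:

```python
def lpd8806_frame(pixels):
    """Encode a list of (r, g, b) tuples into an LPD8806 data frame.

    The LPD8806 expects 7-bit color values with the high bit set,
    sent in G, R, B order, followed by latch bytes (roughly one zero
    byte per 32 LEDs) so the data propagates through the chain.
    """
    data = bytearray()
    for r, g, b in pixels:
        for channel in (g, r, b):               # note the G-R-B order
            data.append(0x80 | (channel >> 1))  # scale 0-255 down to 7 bits
    latch = (len(pixels) + 31) // 32            # ceil(n / 32) zero bytes
    data.extend(b"\x00" * latch)
    return bytes(data)
```

Halving each 8-bit channel is a quick way to fit the 7-bit range; you lose one bit of depth, which the eye won't miss on a bike at night.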
Portland Dorkbot had a booth at the Bay Area Maker Faire this year. Here’s my first proper invention, the HypnoLamp, alongside the Editor’s Choice award that we received:
Many factors helped birth the HypnoLamp: At Toorcamp 2012, I learned to program microcontrollers. Jeff of OlyMEGA blessed me with addressable LED strips at the aforementioned event. Jeff was also at the Portland Mini Maker Faire, showcasing (among other things) glass Ikea lamps with LEDs inside. I decided to build my own version!
I was lucky enough to receive a Leap Motion Controller. It acts like a short-range Kinect for your hands, tracking the position of each individual finger. The Leap’s sensors are fast, and spookily accurate. I love it.
It was a joke, back in September. A goofy idea, amidst a brainstorming session of merely silly ideas. It’s a heavenly harp! And when you turn it upside-down, it becomes a Devil Harp! Ha, ha.
The YouTube trailer would probably look something like this:
I hacked Angel Harp together in my spare time. Four long months! The plan was to finish by Halloween of 2011, but it took considerably longer than expected. The synthesis was completed in one week, and the sound effects in another. Standing on the shoulders of Twang, Angel Harp produces somewhat-realistic tones (like an actual harp! Complex filtering!) And it has 3+ dozen strings, for serious plucking power!
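Plucked-string tones like these are classically produced with the Karplus-Strong algorithm: a burst of noise circulating through a delay line, with a gentle lowpass in the feedback loop. Here's a toy Python version to show the idea — this is my own illustration, not Angel Harp's (ActionScript) code, and the constants are arbitrary:

```python
import random

def pluck(frequency, duration, sample_rate=44100, decay=0.996):
    """Karplus-Strong plucked string: a noise burst fed through a
    delay line, with a two-sample average acting as a lowpass."""
    period = int(sample_rate / frequency)   # delay length sets the pitch
    buf = [random.uniform(-1.0, 1.0) for _ in range(period)]
    out = []
    for _ in range(int(duration * sample_rate)):
        sample = buf[0]
        # averaging adjacent samples damps high frequencies each pass,
        # which is what makes the pluck "ring down" naturally
        buf.append(decay * 0.5 * (buf[0] + buf[1]))
        buf.pop(0)
        out.append(sample)
    return out
```

The charm of the algorithm is that the initial noise burst gives every pluck a slightly different attack, for free.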
And, the graphics… Let’s talk about that.
Once the Halloween deadline became improbable, I decided to hack each feature until it was “good enough.” If any feature became an eyesore, then I’d revisit it — either for version 1.0, or a future release. The clouds were redone a couple of times. I had grand plans for the harp itself, using an (awful, buggy) harp modeling tool; in a future version, I think you’ll be able to draw your own harps, and skin them with fancy materials.
Please note: This code comes with no warranty, nor support, whatsoever. None. Zip. Nada. If your talking robots become self-aware and enslave humanity, then I will not be held responsible. But if you’re in the mood for tinkering, here’s how it’s strung together:
First, some Python code: The analyze_lpc.py script analyzes phonemes.dat (which is just a headerless version of phonemes.aif). Individual phonemes are separated by moments of silence, so the script splits the sound file on those. Each phoneme is converted to LPC data, using code that I ported from the rt_lpc project. I felt like I understood the mathematics 3 years ago, but I doubt I could explain it today.
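The script itself isn't reproduced here, but the silence-splitting step might look something like this. The threshold and gap length below are illustrative guesses, not the values analyze_lpc.py actually uses:

```python
def split_on_silence(samples, threshold=0.01, min_gap=2000):
    """Split a mono sample list into chunks, treating any run of at
    least min_gap near-silent values (|s| < threshold) as a separator."""
    chunks, current, quiet = [], [], 0
    for s in samples:
        if abs(s) < threshold:
            quiet += 1
            current.append(s)
        else:
            if quiet >= min_gap and len(current) > quiet:
                # the silent run separates two phonemes: cut before it
                chunks.append(current[:-quiet])
                current = [s]
            else:
                current.append(s)
            quiet = 0
    # drop trailing silence from the final chunk, if any
    tail = current[:len(current) - quiet] if quiet else current
    if tail:
        chunks.append(tail)
    return chunks
```

Short quiet stretches inside a phoneme (shorter than min_gap) are kept, so stop consonants don't get chopped in half.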
Now, in Flash: Launch the DictCompressor application, and watch the trace messages. Click the screen to open the browser window, then select your cmudict___.txt pronouncing dictionary. (You can obtain the latest CMUdict here.) Flash will convert this to a (smaller) cmudict.dat file, which is what LPCsynth.swf loads.
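The CMUdict text format is simple: one word per line, followed by its phonemes, with comment lines prefixed by ";;;". The real DictCompressor is ActionScript and writes a compact binary .dat, but a toy Python reader shows what the parse involves (skipping alternate pronunciations, which CMUdict marks like "READ(2)", is my simplification):

```python
def parse_cmudict(lines):
    """Parse CMUdict text lines into a {word: [phonemes]} mapping.

    Comment lines start with ';;;'. Alternate pronunciations look
    like 'WORD(2)' and are skipped here for simplicity.
    """
    entries = {}
    for line in lines:
        line = line.strip()
        if not line or line.startswith(";;;"):
            continue
        word, *phones = line.split()
        if "(" in word:        # alternate pronunciation, e.g. READ(2)
            continue
        entries[word] = phones
    return entries
```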
LPCsynth is the application that talks. The LPCSynthHarness.sayItNow() method creates an array of LPCFrames, which are “spoken” in the sampleData() method. This was never intended for public distribution, so the code is not exactly stellar (the talking bit should be extracted into its own class).
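The heart of LPC playback is an all-pole filter: each output sample is the excitation minus a weighted sum of previous outputs. Here's a minimal Python illustration of that core loop — the function name and the coefficient sign convention are mine, and the real code lives in ActionScript's sampleData():

```python
def lpc_synthesize(coeffs, excitation, gain=1.0):
    """Run an excitation signal through an all-pole LPC filter:
    y[n] = gain * x[n] - sum(a[k] * y[n - k - 1]) over the coefficients."""
    history = [0.0] * len(coeffs)   # most recent output first
    out = []
    for x in excitation:
        y = gain * x - sum(a * h for a, h in zip(coeffs, history))
        history = [y] + history[:-1]   # shift the newest output in front
        out.append(y)
    return out
```

Feed it an impulse train for voiced sounds, or white noise for unvoiced ones, and the filter shapes that excitation into vowel- and consonant-like spectra.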
Is this interesting? Did your Flash Player become self-aware after hearing its own voice? Let me know!
It’s funny, originally this was intended for the controlzinc.com website. The robot voice would sing as you clicked, crooning about your mousing habits. I still can’t decide if that idea was brilliant, or terrible.
This book, written by producer Mike Senior, is fantastic:
If only Mixing Secrets For The Small Studio had existed 10 years ago, my music would have been impeccable! (Well, I like to think so.) This book is a magical tome for anyone who records or produces music on a budget. It’s packed with big reveals, and explains the science behind each mixing technique. Forget the accumulated hit-or-miss wisdom of the internet; after reading this book, I found that I could produce substantially better mixes immediately. That’s amazing. (My mixes still aren’t great, but I’m working on it!)
Here are my favorite takeaways from the 20 chapters. I’m writing this to lock these concepts in my head. I’m skimming lots of material, because there’s so much valuable information packed into this book that I can’t possibly recap all of it.
Chris Kann, the owner of wayfar.org, sells a device called the Midines. It’s a cartridge that lets you play the Nintendo Entertainment System like a musical instrument, I kid you not. You insert the Midines cartridge into your NES, plug MIDI cables into the Midines, and off you go, into a world of bloops and blips.
I paid $99 for a Midines in the year 2008, and… I have still not received it. I have sent Chris Kann at least a dozen emails, and never received a single reply. In 2008, I did track him down on IRC — he mentioned that he was going through some hard times, but now it is 3 years later, and he has been completely silent.
The attractor coefficients are still chosen randomly. But now, attractors that explode/collapse are rejected. Also, attractors that create “boring” shapes (by drawing the same pixels repeatedly) are discarded. It’s a little slow, but I’m sure the speed could be improved using Pixel Bender.
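The reject-and-retry loop can be sketched in Python. This is my own illustration of the technique using a Sprott-style quadratic map — the coefficient ranges, grid size, and thresholds below are arbitrary, not the values my Flash code uses:

```python
import random

def attractor_cells(a, iterations=5000, grid=64, x=0.1, y=0.1):
    """Iterate a quadratic map with coefficients a[0..11]; return the
    set of grid cells the orbit visits, or None if the orbit explodes."""
    cells = set()
    for _ in range(iterations):
        x, y = (a[0] + a[1]*x + a[2]*x*x + a[3]*x*y + a[4]*y + a[5]*y*y,
                a[6] + a[7]*x + a[8]*x*x + a[9]*x*y + a[10]*y + a[11]*y*y)
        if abs(x) > 1e6 or abs(y) > 1e6:
            return None                     # exploded: reject this one
        cells.add((int(x * grid) % grid, int(y * grid) % grid))
    return cells

def random_interesting_attractor(min_cells=200):
    """Draw random coefficients until the orbit neither explodes nor
    settles onto a 'boring' handful of repeatedly-drawn pixels."""
    while True:
        a = [random.uniform(-1.2, 1.2) for _ in range(12)]
        cells = attractor_cells(a)
        if cells is not None and len(cells) >= min_cells:
            return a
```

Counting distinct grid cells is a cheap proxy for "visual interest": fixed points and tight limit cycles revisit only a few cells, while chaotic orbits smear across hundreds.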
You can push your formant sequence to the Yamaha FS1R, using software such as K_Take’s FS1R Editor. Click the “Save .syx” button, and follow the instructions in K_Take’s documentation. This is a lot of fun, and breathes new life into the FS1R.
This project became much deeper than anticipated! The code includes FFT analysis (thanks Gerry Beauregard), pitch detection, a formant detection algorithm, and an AIFF parser to read AIFF files. The interface was a challenge to design and implement, and there are still many unfinished features.
My energy is shifting to other work, so I’ll enhance fseq-flash when time permits.
Drag and resize the blue blocks to change the filter frequency and width.
This sequencer doesn’t use expensive bandpass filters. The oscillators are sine waves, which are frequency modulated with white noise. It may not sound inherently musical, but you can produce great hi-hats, bass thuds, and airy pitched noises.
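The trick is easy to demonstrate: jitter a sine oscillator's frequency with white noise, and the pure tone smears into a band of noise centered on the carrier — a cheap stand-in for a real bandpass filter. A Python sketch (names and ranges are mine, not the Flash code's):

```python
import math
import random

def noisy_fm_tone(freq, noise_amount, duration, sample_rate=44100):
    """A sine oscillator whose frequency is jittered by white noise.

    noise_amount=0 gives a pure tone; larger values widen the tone
    into a band of noise centered on freq.
    """
    phase = 0.0
    out = []
    for _ in range(int(duration * sample_rate)):
        jitter = random.uniform(-1.0, 1.0) * noise_amount
        # advance the phase by the jittered instantaneous frequency
        phase += 2 * math.pi * (freq + jitter) / sample_rate
        out.append(math.sin(phase))
    return out
```

A high center frequency with heavy jitter gets you hi-hat territory; a low center with light jitter gives those bass thuds.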
Here’s the source code. (Requires Flash CS5 to compile.) Have fun!