Recently I worked on a project where I had to receive strings from an Arduino in openFrameworks. It's not as straightforward as in Processing, so I thought of sharing the code..
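As a sketch of the core idea: the trick is to accumulate incoming bytes until you hit a newline, then treat the buffer as one complete message. Here is that logic on its own, outside openFrameworks, so it is easy to test (the struct name is my own; in the actual app the bytes come from ofSerial):

```cpp
#include <string>
#include <vector>

// Accumulate incoming serial bytes and emit complete lines.
// In openFrameworks you would feed this one byte at a time from
// ofSerial::readByte(); here the byte source is left abstract so the
// parsing logic stands on its own.
struct SerialLineReader {
    std::string buffer;              // bytes received so far, no newline yet
    std::vector<std::string> lines;  // completed messages

    void feed(unsigned char byte) {
        if (byte == '\n') {          // Arduino's Serial.println() ends with \r\n
            if (!buffer.empty() && buffer.back() == '\r')
                buffer.pop_back();   // strip the carriage return
            lines.push_back(buffer);
            buffer.clear();
        } else {
            buffer += static_cast<char>(byte);
        }
    }
};
```

In `update()` you would then drain the port with something like `while (serial.available() > 0) reader.feed((unsigned char) serial.readByte());` and act on any newly completed lines.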
Recently I was fascinated by the aesthetics of Turing patterns and wanted to tinker a little with the algorithm and its values to see the outcomes. I started exploring and reading code by Kyle McDonald, Jonathan McCabe and Martin Schneider. Here are some of my explorations.
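For anyone curious about the underlying math, a minimal Gray-Scott reaction-diffusion step is one classic way to grow Turing-style patterns (the multi-scale images in the work above build on the same idea, but this is just the basic two-chemical system; the parameter values are commonly quoted ones for spot patterns, not taken from their code):

```cpp
#include <vector>
#include <cmath>

// Minimal Gray-Scott reaction-diffusion on a wrapping grid.
// u and v are the two chemical concentrations; patterns emerge from the
// interplay of diffusion and the u*v*v reaction term.
struct GrayScott {
    int w, h;
    std::vector<float> u, v;
    float du = 0.16f, dv = 0.08f;  // diffusion rates
    float f = 0.035f, k = 0.065f;  // feed / kill rates (spot-like regime)

    GrayScott(int w_, int h_) : w(w_), h(h_), u(w_ * h_, 1.0f), v(w_ * h_, 0.0f) {
        // seed a small square of chemical v in the middle
        for (int y = h / 2 - 2; y < h / 2 + 2; ++y)
            for (int x = w / 2 - 2; x < w / 2 + 2; ++x)
                v[y * w + x] = 1.0f;
    }

    float lap(const std::vector<float>& a, int x, int y) const {
        auto at = [&](int i, int j) {  // wrap-around edges
            return a[((j + h) % h) * w + ((i + w) % w)];
        };
        return at(x - 1, y) + at(x + 1, y) + at(x, y - 1) + at(x, y + 1) - 4 * at(x, y);
    }

    void step() {
        std::vector<float> nu = u, nv = v;
        for (int y = 0; y < h; ++y)
            for (int x = 0; x < w; ++x) {
                int i = y * w + x;
                float uvv = u[i] * v[i] * v[i];
                nu[i] = u[i] + du * lap(u, x, y) - uvv + f * (1 - u[i]);
                nv[i] = v[i] + dv * lap(v, x, y) + uvv - (f + k) * v[i];
            }
        u.swap(nu);
        v.swap(nv);
    }
};
```

Draw `v` (or `u - v`) as brightness each frame and the patterns appear after a few hundred steps; changing `f` and `k` slightly gives spots, stripes or worms.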
From the age of Face Detection, we have come to an age where Face Tracking is feasible without a professional setup. All you need is a decent machine with a Kinect. If you have tried or tinkered with ofxFaceTracker, faceAPI or FaceDetect, I am sure you will love this. FaceShift can now be downloaded from their website, and Kyle has developed an ofxAddon, ofxFaceShift.
My observations: the tracking is very accurate, and I was impressed with the way it tracks blinks and eyeball movements. The frame rate is good and the tracking has no jitter.
All that said, it comes with a lot of limitations, most of which come from the Kinect: the minimum distance at which the user has to stand or sit, lighting conditions, etc. You also have to train the software with a few poses to get good tracking results. So it is still far from usable in real-time setups, but for now it can be used for animation, expression tracking and the like.
I started using computer vision libraries in Processing and then shifted to openFrameworks, but I had not done anything on the web for quite some time, so I thought of trying out some cool stuff in Flash. I had a low opinion of Flash when it comes to computer vision, but it turns out that even though it is slow, it is very easy to code in AS3.
I built my first gestural bubble game. I don't have to explain the game to those who are familiar with the OpenCV example in Processing. For everyone else: this game is all about hitting the bubbles using your hands / fingers / head / anything in your hand :-)
You can play it yourself by clicking here.
Note: Obviously you need a webcam to play this game.
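The hit test behind a game like this is plain frame differencing: a bubble pops when enough pixels inside its circle changed between two consecutive webcam frames. A sketch of that idea (in C++ rather than AS3, with the thresholds as illustrative guesses):

```cpp
#include <vector>
#include <cstdlib>
#include <cmath>

// A bubble is a circle over the webcam image.
struct Bubble { float cx, cy, r; };

// Frame-differencing hit test: compare two grayscale frames (w*h bytes)
// and count how many pixels inside the bubble changed noticeably.
bool bubbleHit(const std::vector<unsigned char>& prev,
               const std::vector<unsigned char>& cur,
               int w, int h, const Bubble& b,
               int diffThreshold = 30,    // per-pixel brightness change
               float hitRatio = 0.1f) {   // fraction of bubble pixels that must move
    int inside = 0, moved = 0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float dx = x - b.cx, dy = y - b.cy;
            if (dx * dx + dy * dy > b.r * b.r) continue;  // outside the bubble
            ++inside;
            int i = y * w + x;
            if (std::abs(int(cur[i]) - int(prev[i])) > diffThreshold)
                ++moved;
        }
    return inside > 0 && moved >= inside * hitRatio;
}
```

Each frame you run this over every live bubble; any bubble that returns true pops and respawns somewhere else.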
I was a great fan of the Apesnake Manwolf project at Fabrica. Thanks to Che-Wei Wang and David Peñuela for inspiring me. The moment I returned home, I couldn't resist trying FaceOSC. It's so cool to play with your face gestures. I spent nearly an hour trying out all possible weird gestures ;-) Then I ended up building an app which detects when you YAWN. I was imagining a scenario where this app runs in stealth mode at a corporate office and keeps count of how many times a particular employee yawns. That could actually generate a good yawn report for the managers :-) Anyway, here I am detecting when the user yawns, playing an audio clip which says “Stop yawning and get back to work!” and also displaying the text “Stop Yawning!”.
Note: I genuinely yawned once myself while writing this post, and this girl's voice saying “Stop yawning and get back to work!” was soooooo irritating… then I smiled and said to myself, “This app works.”
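The detection itself can be as simple as a threshold with hysteresis on the mouth-height value that FaceOSC streams over OSC — the hysteresis (separate open and close thresholds) makes one long yawn count once instead of flickering. A sketch of that detector on its own (the threshold values are arbitrary; in practice you tune them to your own face):

```cpp
// Yawn counter with hysteresis, driven each frame by the mouth-height
// value read from FaceOSC's OSC stream in the openFrameworks app.
struct YawnDetector {
    float openThresh, closeThresh;  // openThresh > closeThresh gives hysteresis
    bool mouthOpen = false;
    int yawnCount = 0;

    YawnDetector(float open = 5.0f, float close = 3.0f)
        : openThresh(open), closeThresh(close) {}

    // Returns true on the frame a new yawn is registered.
    bool update(float mouthHeight) {
        if (!mouthOpen && mouthHeight > openThresh) {
            mouthOpen = true;
            ++yawnCount;            // trigger the audio + "Stop Yawning!" text here
            return true;
        }
        if (mouthOpen && mouthHeight < closeThresh) mouthOpen = false;
        return false;
    }
};
```

Whenever `update()` returns true, the app plays the audio clip and shows the text.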
I tried playing around with OpenCV blobs and created a palette. This is my first project in openFrameworks. I had this idea for the Benetton window project while I was at Fabrica but did not implement it then. As the logic is simple, I thought of trying it in openFrameworks.
Basically it detects the consumers (users) walking by the window and generates a palette with predefined colors. The idea is to have the color palette of the collection in the showroom. I still want to make the appearance of each bar a little smoother.
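On smoothing the bars: a per-frame lerp toward the target size is probably the simplest fix, so each bar eases in instead of snapping to the value the blob detection gives. A sketch of that idea (the ease factor is just a guess at a pleasant speed):

```cpp
// One palette bar: instead of drawing the raw height from the blob
// tracker, ease the drawn height toward it a little every frame.
struct PaletteBar {
    float height = 0.0f;  // what gets drawn
    float target = 0.0f;  // what the blob tracker wants

    void update(float ease = 0.1f) {
        height += (target - height) * ease;  // exponential ease-in
    }
};
```

Calling `update()` once per frame moves the bar 10% of the remaining distance each time, so it rises quickly at first and settles gently.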
Exploring Theodore Watson and Zachary Lieberman's LaserTag. It was nice to learn from their open-source code.