Some say creativity is like a muscle: you can train it to improve it. With this idea in mind, Yonder started a program this year to spice up the internship sessions, and the R&D inside the company, by introducing innovative pieces of hardware: Leap Motion, Google Glasses, Tod beacons, …
Last week we received the Leap Motion device. For those who don’t know what Leap Motion is: it is an advanced motion-sensing device, similar to the Kinect, but unlike the Kinect it is meant to be placed on a physical desk in front of the user.
In reality it is a small USB peripheral that uses two cameras and three infrared LEDs to observe a space shaped like an inverted pyramid, extending from about 2 cm up to around 60 cm above the device, in which it can track up to 20 fingers / 4 hands and associated tools (pens, …).
Tools like the Leap Motion are important for the future: with the introduction of the Kinect and, more importantly, of touch-based smartphones and tablets, the traditional keyboard-and-mouse interaction with a computer tends to fade away, replaced by touch and gestures.
After playing with it for a bit I can say that it is an interesting tool, and like anything in life it has its sweet spots and rough edges.
Good parts:
- Support for the most important operating systems: OS X, Windows, Linux
- Stellar development support, with development kits for C++, C# and Unity, Java, JavaScript (a WebSocket-based server is provided), and Python
- Great API, very simple yet powerful: practically a callback for every event, and the callback supplies detailed position information for the Hands (all hands, with fingers, palm position, palm velocity, palm normal vector, direction, palm sphere center and radius, rotation axis and angle, scale factor, and translation vectors), Fingers and Tools (length, width, direction, tip position and velocity), Pointables (all the fingers and tools found), and Gestures (Circle, Swipe, Key Tap, Screen Tap)
- Under optimal conditions, very sensitive and accurate
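To give a feel for the callback model described above, here is a minimal, self-contained sketch. The names (`Listener`, `on_frame`, `palm_position`, …) mirror the real Leap SDK, but the `Frame` and `Hand` classes here are simple stand-ins of my own, so the example runs without the device or SDK installed.

```python
# Stand-in data classes; the real SDK supplies richer objects
# (velocity, normal vector, fingers, gestures, etc.).
class Hand:
    def __init__(self, palm_position):
        self.palm_position = palm_position  # (x, y, z) in millimeters

class Frame:
    def __init__(self, hands):
        self.hands = hands

# The SDK's pattern: subclass a Listener and override the callbacks.
class Listener:
    def on_frame(self, frame):
        raise NotImplementedError

class PalmTracker(Listener):
    def __init__(self):
        self.positions = []

    def on_frame(self, frame):
        # Called once per tracking frame; collect every palm position.
        for hand in frame.hands:
            self.positions.append(hand.palm_position)

# Simulate the controller pushing one frame to the listener.
tracker = PalmTracker()
tracker.on_frame(Frame([Hand((10.0, 200.0, -15.0))]))
```

With the real SDK you would instead register the listener on a `Leap.Controller`, which then invokes `on_frame` continuously while the device is tracking.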
Not so good parts:
- Very sensitive to infrared light and lighting conditions; even in decent light it has problems with accuracy
- Requires new models for handling interaction information. We are no longer talking about clearly declared intentions, as a mouse click is: a circle gesture will probably also produce a small swipe gesture, so filtering the exact intention out of the data will be a bit tricky
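One way to approach that filtering problem is to treat short, overlapping gesture events as noise and keep only the dominant one. The sketch below is a hypothetical helper of my own (the function name and the duration threshold are assumptions, not part of the SDK); it takes a batch of reported gestures with their durations and picks the most likely intention.

```python
def dominant_gesture(events, min_duration_us=100_000):
    """Pick the user's likely intention from overlapping gesture events.

    events: list of (gesture_type, duration_in_microseconds) tuples,
    as a frame-processing loop might accumulate them.
    Gestures shorter than min_duration_us are treated as incidental
    noise; among the survivors, the type with the largest total
    duration wins. Returns None if nothing survives the filter.
    """
    totals = {}
    for gtype, duration in events:
        if duration >= min_duration_us:
            totals[gtype] = totals.get(gtype, 0) + duration
    if not totals:
        return None
    return max(totals, key=totals.get)

# A circle traced in the air, plus a short spurious swipe picked up
# mid-circle: the swipe falls under the threshold and is discarded.
intention = dominant_gesture([("circle", 450_000), ("swipe", 60_000)])
```

Real applications would likely need something more elaborate (per-gesture thresholds, hysteresis, or tracking gesture state across frames), but the core idea of scoring competing interpretations rather than trusting every raw event stays the same.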