How does it work?

We use a scoring approach that combines analytical methods with machine learning. Clay AIR adapts in real time to the ever-changing environment: at every frame, a score is applied to the competing layers and filters, and the filters that prove most relevant for the current environment are selected. This approach keeps power consumption low, while proprietary analyzers and settings minimize latency and maximize accuracy. The result is high-performing hand tracking and gesture recognition that doesn’t drain your battery.
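As an illustration only, per-frame selection of the most relevant filter can be sketched as follows. The class names and the brightness heuristic here are hypothetical stand-ins, not Clay AIR's actual analyzers:

```python
def select_filter(filters, frame_stats):
    """Score every competing filter against the current frame's
    statistics and return the most relevant one (highest score wins)."""
    return max(filters, key=lambda f: f.score(frame_stats))

class BrightSceneFilter:
    # Hypothetical filter that performs best in well-lit frames.
    name = "rgb"
    def score(self, stats):
        return stats["brightness"]

class LowLightFilter:
    # Hypothetical filter that performs best in dark frames.
    name = "nir"
    def score(self, stats):
        return 1.0 - stats["brightness"]

# Each frame, the highest-scoring filter is selected for that frame.
best = select_filter([BrightSceneFilter(), LowLightFilter()],
                     {"brightness": 0.2})
print(best.name)  # → nir
```

In a real system the score would blend analytical heuristics with a learned model, but the selection step itself is this simple: re-score every candidate each frame and keep the winner.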

What makes Clay AIR superior to other tracking/gesture solutions?

Clay AIR is the result of 10+ years of research in AI and UX. We’ve developed a solution with uniquely impressive performance that feels more natural and consumes less power than competing solutions. Additionally, we customize the technology to meet your hardware needs – no one else takes an approach this tailored. Our flexibility sets us apart as a provider of reliable solutions for all products and use cases.

How is hand tracking and gesture control useful?

Traditional AR/VR headsets (HMDs) require hand controllers or wands to manipulate objects or toggle controls within an experience. Similarly, mobile phones and other smart devices require physical contact with a touch screen, or rely on voice recognition, for interaction. Clay AIR provides a hands-free solution that requires no additional hardware or physical contact, is more reliable than voice prompts, and can be used in any noise environment.

What cameras are you compatible with?

Clay AIR is hardware agnostic, making it compatible with a wide array of camera modules across different devices. Our software works equally well with traditional sensor types such as ToF, RGB, NIR, monochrome, and others.

What type of chipset will it work on?

We have experience working with a wide variety of chipsets. We have the most experience with Qualcomm chipsets, and our software will work best with the capabilities of a Snapdragon 425 or above.

What platforms will it work on?

Clay AIR is compatible with Windows, macOS, iOS, Android, Unity, ARCore, ARKit, Unix/Linux, and more.

How many gestures do you have?

We have a library of 40 gestures that are readily available, and we can also customize and develop any gesture your use case requires.

What about distance?

Our software can detect hands and/or gestures at distances of up to 10-15 ft, depending on the device’s hardware.

How much power does Clay AIR consume?

The performance of our software depends on the hardware and operating system configuration, so the numbers vary. In most conditions, Clay AIR will consume 6-10% of the CPU.

How accurate is it?

On average we have measured 96% accuracy or above.

What is the latency?

Latency depends on the device’s frame rate (frames per second, FPS). Our software refreshes at every frame, which keeps latency low; we see the highest performance at 30 FPS or above.
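Because the software refreshes once per frame, the added latency is bounded by one frame interval. A quick back-of-the-envelope calculation (illustrative only, not a measured Clay AIR figure):

```python
def frame_latency_ms(fps: float) -> float:
    """Worst-case added latency of a once-per-frame refresh:
    one frame interval, in milliseconds."""
    return 1000.0 / fps

print(round(frame_latency_ms(30), 1))  # → 33.3
print(round(frame_latency_ms(60), 1))  # → 16.7
```

This is why 30 FPS or above matters: at 30 FPS the per-frame interval is about 33 ms, and it shrinks further as the camera's frame rate rises.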

Will your solution work for different hand sizes and colors?

All skin tones and hand sizes can be accurately recognized by our software. Clay AIR treats all hands equally, although with some cameras we do need adequate light in the environment.

Will it work in all environments?

Clay AIR works well in almost all environmental conditions. If your use cases involve dark rooms or being outdoors at nighttime, an IR or TOF camera would be preferable.

Does your software support the recognition of dual hands simultaneously?

Yes, Clay AIR is able to recognize both hands. The quality of this recognition will vary based on hardware composition.

What types of devices has the software been embedded in?

The uses for our software are rapidly expanding, and we already have experience successfully embedding it into many different types of devices across many industries, including smartphones, tablets, AR/VR headsets, laptops and desktop computers, and more.

Can you provide a skeletal model?

Yes – we’ve developed a skeletal model on multiple camera types using our proprietary machine learning methods.

How many points of interest can you track?

We can track a range of different points depending on how much computation you want to dedicate to our software and what output you need. In some cases we have tracked well over 30 points of interest; in others, just one point (as a cursor).

How would you describe the user experience?

We’ve spent a lot of time making Clay AIR intuitive and natural by driving low latency. The software also tracks subtle properties of the user’s hand and becomes smarter as the user’s session continues. The solution is lightweight, which minimizes discomfort caused by heat and the need for device charging.

Does the user need to calibrate the hand?

One of the benefits of using Clay AIR for hand tracking or gesture recognition is that the user does not need to perform any calibration.