TinyML and the Future of Design

Interesting post on how ‘magical experiences’ fueled by AI and machine learning will change how products are designed and used.

There is growing momentum, demonstrated by technical progress and ecosystem development. One of the leading startups helping engineers take advantage of TinyML by automating data collection, training, testing, and deployment is Edge Impulse. Starting with embedded and IoT devices, Edge Impulse offers developers the tools and guidance to collect data straight from edge devices and build a model that can detect “behavior” and discern signal from noise, so they can actually make sense of what happens in the real world, across billions of devices, everywhere. By deploying the Edge Impulse model as part of every device’s firmware, you create the biggest neural network on earth. Effectively, Edge Impulse gives brains to your previously passive devices so you can build a better product with a neural personality.
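To make that concrete, here is a minimal sketch of what running a deployed Edge Impulse model inside firmware typically looks like, following the pattern of the Edge Impulse C++ SDK; the buffer contents and labels are placeholders, and exact symbol names can vary by SDK version.

```cpp
// Sketch: running an Edge Impulse model from firmware (Edge Impulse C++ SDK pattern).
// EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE, signal_t, ei_impulse_result_t, and
// run_classifier() come from the SDK generated when you export an impulse.
#include <cstring>
#include "edge-impulse-sdk/classifier/ei_run_classifier.h"

// Raw sensor readings (e.g. accelerometer samples) filled in by your driver.
static float features[EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE];

// Callback the SDK uses to pull feature data in chunks.
static int get_feature_data(size_t offset, size_t length, float *out_ptr) {
    memcpy(out_ptr, features + offset, length * sizeof(float));
    return 0;
}

void classify_once() {
    signal_t signal;
    signal.total_length = EI_CLASSIFIER_DSP_INPUT_FRAME_SIZE;
    signal.get_data = &get_feature_data;

    ei_impulse_result_t result = {0};
    if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) {
        return;  // inference failed; real firmware would handle the error
    }

    // Each trained label (e.g. "idle", "wave") gets a confidence score.
    for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
        ei_printf("%s: %.2f\n", result.classification[i].label,
                  result.classification[i].value);
    }
}
```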

Another interesting company is Syntiant, which is building a new processor for deep learning that is dramatically different from traditional computing approaches. By focusing on memory access and parallel processing, its Neural Decision Processors operate at efficiency levels that are orders of magnitude higher than other technologies. The company claims its processors can make devices approximately 200x more efficient by providing 20x the throughput of current low-power MCU solutions, in turn enabling larger networks at significantly lower power. The result? Voice interfaces that allow a far richer and more reliable user experience, otherwise known as “Wow” and “How did it do that?”

TinyML = Big Opportunity

This post explores why TinyML may be the next big thing.

A coalescence of several trends has made the microcontroller not just a conduit for implementing IoT applications but also a powerful, independent processing mechanism in its own right. In recent years, hardware advancements have made it possible for microcontrollers to perform calculations much faster. Improved hardware, coupled with more efficient development standards, has made it easier for developers to build programs on these devices. Perhaps the most important trend, though, has been the rise of tiny machine learning, or TinyML. It’s a technology we’ve been following since investing in a startup in this space.

TinyML broadly encapsulates the field of machine learning technologies capable of performing on-device analytics of sensor data at extremely low power. Between hardware advancements and the TinyML community’s recent innovations in machine learning, it is now possible to run increasingly complex deep learning models (the foundation of most modern artificial intelligence applications) directly on microcontrollers. A quick glance under the hood shows this is fundamentally possible because deep learning models are compute-bound, meaning their efficiency is limited by the time it takes to complete a large number of arithmetic operations. Advancements in TinyML have made it possible to run these models on existing microcontroller hardware.
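To see why this is plausible, consider that the inner loop of a neural network layer is little more than multiply-accumulate arithmetic, which a microcontroller can execute directly, especially once weights and activations are quantized to 8-bit integers. Below is a minimal, self-contained sketch of a quantized fully connected layer; the layer sizes and simplified quantization are illustrative assumptions, not any particular framework’s implementation.

```cpp
// Sketch: an int8-quantized fully connected layer, the workhorse of TinyML
// inference. The bulk of the work is plain multiply-accumulate arithmetic,
// which is why these models are compute-bound rather than, say, I/O-bound.
#include <cstdint>

// Illustrative sizes: 64 inputs -> 16 outputs.
constexpr int kInputs = 64;
constexpr int kOutputs = 16;

// Computes out = weights * in + bias, with 8-bit inputs/weights and a
// 32-bit accumulator so the sums don't overflow.
void fully_connected_int8(const int8_t in[kInputs],
                          const int8_t weights[kOutputs][kInputs],
                          const int32_t bias[kOutputs],
                          int32_t out[kOutputs]) {
    for (int o = 0; o < kOutputs; o++) {
        int32_t acc = bias[o];
        for (int i = 0; i < kInputs; i++) {
            acc += static_cast<int32_t>(weights[o][i]) *
                   static_cast<int32_t>(in[i]);  // one multiply-accumulate
        }
        out[o] = acc;  // a real framework would rescale back to int8 here
    }
}
```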

In other words, those 250 billion microcontrollers in our printers, TVs, cars, and pacemakers can now perform tasks that previously only our computers and smartphones could handle. All of our devices and appliances are getting smarter thanks to microcontrollers.

TinyML represents a collaborative effort between the embedded ultra-low-power systems and machine learning communities, which traditionally have operated largely independently. This union has opened the floodgates for new and exciting applications of on-device machine learning. However, the knowledge that deep learning and microcontrollers are a perfect match has been pretty exclusive, hidden behind the walls of tech giants like Google and Apple. This becomes more obvious when you learn that this paradigm of running modified deep learning models on microcontrollers is responsible for the “Okay Google” and “Hey Siri” functionality that has been around for years.
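Neither company publishes its exact pipeline, but the general shape of always-on keyword spotting on a microcontroller is well understood: buffer a short window of audio, extract features, run a small model, and only wake the main system on a confident match. A simplified sketch follows, with every function name a hypothetical placeholder rather than a real API.

```cpp
// Sketch: the general shape of an always-on wake-word loop on an MCU.
// All functions below are hypothetical stand-ins for the real microphone
// driver, feature extraction (often MFCCs), and model runtime.
#include <cstdint>

constexpr int kWindowSamples = 16000;   // ~1 s of audio at 16 kHz
constexpr float kWakeThreshold = 0.9f;  // confidence needed to wake

// Hypothetical platform hooks (assumptions, not a real API):
void read_microphone(int16_t *buf, int n);
void compute_mfcc(const int16_t *audio, int n, float *features_out);
float run_tiny_model(const float *features);  // small keyword-spotting model
void wake_main_processor();

int16_t audio[kWindowSamples];
float features[49 * 40];  // e.g. 49 frames x 40 MFCC coefficients

void wake_word_loop() {
    for (;;) {
        read_microphone(audio, kWindowSamples);        // fill the audio window
        compute_mfcc(audio, kWindowSamples, features); // raw audio -> features
        float score = run_tiny_model(features);        // on-device inference
        if (score > kWakeThreshold) {
            wake_main_processor();  // hand off to the full voice assistant
        }
    }
}
```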

But why is it important that we be able to run these models on microcontrollers? Much of the sensor data generated today is discarded because of cost, bandwidth, or power constraints – or sometimes a combination of all three. For example, take an imaging micro-satellite. Such satellites are equipped with cameras capable of capturing high-resolution images but are limited by the size and number of photos they can store and how often they can transmit those photos to Earth. As a result, such satellites have to store images at low resolution and at a low frame rate. What if we could use image detection models to save high-resolution photos only when an object of interest (like a ship or weather pattern) was present in the image? While the computing resources on these micro-satellites have historically been too limited to support image detection deep learning models, TinyML now makes this possible.
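As a sketch of what that on-device filtering might look like (the detector call and storage hooks below are hypothetical stand-ins, not any satellite’s actual software):

```cpp
// Sketch: save a high-resolution frame only when an on-device detector
// finds an object of interest. Function names are hypothetical stand-ins
// for the satellite's camera, model runtime, and storage drivers.
#include <cstdint>

constexpr float kDetectionThreshold = 0.8f;

// Hypothetical platform hooks (assumptions, not a real API):
struct Frame { const uint8_t *pixels; int width, height; };
Frame capture_high_res_frame();
float detect_object_of_interest(const Frame &f);  // TinyML detector score
void store_frame(const Frame &f);                 // write to limited storage
void discard_frame(const Frame &f);

void on_camera_tick() {
    Frame frame = capture_high_res_frame();
    // Run the detector on-device; reserve scarce storage and downlink
    // bandwidth for frames that actually contain something interesting.
    if (detect_object_of_interest(frame) >= kDetectionThreshold) {
        store_frame(frame);
    } else {
        discard_frame(frame);
    }
}
```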