Nov 23, 2022

Bijan Mohammadi of Bondzai was invited to the tinyML EMEA 2022 conference

This three-day Innovation Forum gathered key industry leaders, technical experts, and researchers from Europe, the Middle East, and Africa who are innovating with machine learning and artificial intelligence on ultra-low-power devices.

Bondzai was invited to present DavinSy, its “on device learning solution to stop model drift”.


The tinyML community has recognized the necessity of “on device learning”, given the limitations of current deep AI solutions in adapting to change, avoiding model drift, and exploiting the tremendous amount of available ground-truth data. The community has created a working group dedicated to “on device learning”, chaired by Danilo Pau (STM).

In his introductory talk, Dr. Pau presented the features that “on device learning” strategies should have:

  • No more learning in the cloud on big servers,
  • No more backpropagation for learning,
  • No more big data: learning should use small datasets compatible with the memory available on the device,
  • No more data/model transfers to/from the device: learning should take place on the device using ground-truth data,
  • No more stochasticity: deterministic algorithms are necessary in industry,
  • A tiny code footprint.
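To make these constraints concrete, here is a minimal sketch of an on-device learner that satisfies them: deterministic, single-pass, gradient-free, and with a memory footprint of one vector per class. This is a generic nearest-class-mean classifier chosen purely for illustration; it is not DavinSy’s actual algorithm.

```python
# Illustrative on-device learner: a nearest-class-mean classifier.
# Deterministic, no backpropagation, O(num_classes * dim) memory.
import numpy as np

class TinyPrototypeLearner:
    def __init__(self, dim):
        self.dim = dim
        self.prototypes = {}  # class label -> running-mean feature vector
        self.counts = {}      # class label -> number of samples seen

    def learn(self, x, label):
        """Fold one ground-truth sample into its class mean, in place."""
        x = np.asarray(x, dtype=np.float32)
        if label not in self.prototypes:
            self.prototypes[label] = x.copy()
            self.counts[label] = 1
        else:
            self.counts[label] += 1
            # Running-mean update: deterministic, single pass, no gradients.
            self.prototypes[label] += (x - self.prototypes[label]) / self.counts[label]

    def predict(self, x):
        """Classify by nearest prototype (Euclidean distance)."""
        x = np.asarray(x, dtype=np.float32)
        return min(self.prototypes, key=lambda c: np.linalg.norm(x - self.prototypes[c]))

# Usage: adapt to new ground-truth data on the fly, one sample at a time.
learner = TinyPrototypeLearner(dim=2)
learner.learn([0.0, 0.0], "quiet")
learner.learn([1.0, 1.0], "alarm")
learner.learn([0.1, -0.1], "quiet")   # a drifted sample shifts the class mean
print(learner.predict([0.05, 0.0]))   # -> quiet
```

Note how each update touches only a few bytes of state and involves no stochastic choices, so two devices seeing the same data stream end up with identical models.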

These requirements form the backbone of Bondzai’s DavinSy, and Bijan’s presentation showed how the solution answers all of them. DavinSy uses abstraction (through a single functional spiking neural layer, an adaptive architecture, and no backpropagation) and decomposition (using coupled virtual models) to break down Deep Learning complexity.

It is noteworthy that in his keynote presentation Prof. Alberto L. Sangiovanni-Vincentelli of U.C. Berkeley put forward abstraction and decomposition as the necessary characteristics for on-device Deep Learning to succeed. Indeed, the terabytes of data generated by sensors cannot be transferred or stored, and should therefore be digested and learnt on the fly at the device level. He also pointed out that backpropagation is a real issue with spiking networks.

By dispensing with gradient calculation and backpropagation altogether, DavinSy sidesteps this fundamental difficulty.

Participants showed great interest in DavinSy, asking numerous questions about the core algorithm, the on-device learning workflow, and performance. Special attention was given to how Bondzai’s DavinSy facilitates the deployment and maintenance of Deep Learning models through “on device learning”.