
MicroAI and Silicon Labs to deploy edge-native AI

MicroAI, a provider of edge-native artificial intelligence (AI) and machine learning (ML) products, has announced that it has joined Silicon Labs’ Technology Partner Program and begun collaborating to deliver the benefits of edge-native AI to Silicon Labs’ customers.

Silicon Labs is a provider of secure, intelligent wireless technology for a more connected world. MicroAI provides edge-native AI technology that personalises AI on connected endpoints by enabling training and inferencing on each unique edge-connected device.

This collaboration will give Silicon Labs customers an accelerated path to designing, developing, and deploying next-generation connected devices that personalise AI for each end user through mass customisation and contextualisation of each device’s unique environment.

MicroAI’s AtomML is a next-generation AI solution that enables AI algorithms to run on simple, wirelessly connected devices with limited memory and CPU capacity, an approach that gives adopters greater design flexibility, lower cost, faster time to market, and quicker ROI.

AtomML provides breakthrough edge-native AI capabilities that personalise AI on next-generation edge devices, with training and inferencing unique to each edge endpoint. With a small compute footprint and these self-learning capabilities, the solution delivers personalised, context-aware intelligence for high-value use cases including condition monitoring, security, anomaly detection, and predictive maintenance.
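To make the idea of on-device training and inferencing concrete, the sketch below shows a generic streaming anomaly detector in C. It is not MicroAI’s AtomML API (which is not shown in this article); it simply illustrates, under stated assumptions, how a constrained device can learn its own signal statistics in place (Welford’s online mean/variance) and flag outliers with a z-score check. The read_sensor() function is a hypothetical stand-in for whatever ADC or sensor hook the firmware provides.

```c
/*
 * Minimal illustration of edge-local "training" (online statistics) plus
 * inference (z-score anomaly check) on a memory-constrained device.
 * Not MicroAI's API; read_sensor() is a hypothetical firmware hook.
 */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

typedef struct {
    unsigned long n;   /* samples seen so far on this particular device */
    double mean;       /* running mean of the signal */
    double m2;         /* running sum of squared deviations (Welford) */
} edge_model_t;

/* "Training": update the per-device model with one new reading. */
static void model_update(edge_model_t *m, double x)
{
    double delta = x - m->mean;
    m->n++;
    m->mean += delta / (double)m->n;
    m->m2 += delta * (x - m->mean);
}

/* Inference: flag the reading if it sits far from the distribution
 * this specific device has learned about itself. */
static bool model_is_anomaly(const edge_model_t *m, double x, double z_threshold)
{
    if (m->n < 30)                      /* not enough local data yet */
        return false;
    double sigma = sqrt(m->m2 / (double)(m->n - 1));
    if (sigma == 0.0)
        return false;
    return fabs(x - m->mean) / sigma > z_threshold;
}

/* Hypothetical sensor hook; a real device would read an ADC, accelerometer, etc. */
static double read_sensor(void)
{
    return 42.0; /* placeholder value for the sketch */
}

int main(void)
{
    edge_model_t model = {0, 0.0, 0.0};

    for (int i = 0; i < 1000; i++) {
        double reading = read_sensor();
        if (model_is_anomaly(&model, reading, 4.0))
            printf("anomaly detected: %f\n", reading);
        model_update(&model, reading);  /* keep learning on-device */
    }
    return 0;
}
```

Because the model is just a handful of doubles updated in constant time, each endpoint builds its own baseline without cloud round trips, which is the general property the edge-native approach described above relies on.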

Silicon Labs ‘Works With’ virtual conference

Yasser Khan, CEO of MicroAI, will present at the upcoming Silicon Labs Works With Conference on September 15 at 10:00am (CDT). The presentation, entitled “Benefits of Enabling Artificial Intelligence & Machine Learning on the Edge,” will give Silicon Labs customers an inside look at how edge-native AI solves the business and technical challenges of legacy cloud and hybrid-cloud edge AI approaches, and how OEMs can avoid cumbersome modeling-tool approaches that add unnecessary cost, complexity, and time to bringing next-generation devices to market.

