AI Accelerator - Case Study

Synopsis

Xailient Detectum™ was created to meet the need for fast deep learning model inference at the Edge. Paired with the Movidius™ Neural Compute Stick on a Raspberry Pi 3B+, Xailient delivers 70 fps by combining better software with an AI accelerator.

Key Outcomes

1. Xailient outperforms the state-of-the-art MobileNet v2 model both with and without hardware accelerators.

2. Xailient Detectum was created to meet these real-time demands by making deep learning algorithms more efficient. The result is a 4x higher frame rate than the state-of-the-art MobileNet v2 accelerated with the Movidius™ Neural Compute Stick, even without a hardware accelerator.

Problem Statement

MobileNet v2 is a state-of-the-art architecture for mobile and embedded computer vision applications. Even so, traditional AI models like MobileNet v1 and MobileNet v2 are computationally intensive, demanding resources beyond the capabilities of low-power, low-cost devices.

Activity

01

We trained MobileNet v2 and Xailient Detectum using an open-source training dataset to create face detectors. These face detectors perform both localization and classification of faces in an image (i.e., generate bounding box data).
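
For illustration, a face detector of this kind returns, for each face, a confidence score and a bounding box. The sketch below is a minimal example using OpenCV's DNN module; the model file names and the SSD-style output layout are assumptions for the example, not details of the Xailient or MobileNet v2 models trained in this study.

    # Minimal sketch: load a face detector and read out bounding boxes.
    # "face_detector.xml"/"face_detector.bin" are placeholder file names.
    import cv2

    net = cv2.dnn.readNet("face_detector.xml", "face_detector.bin")

    image = cv2.imread("people.jpg")
    h, w = image.shape[:2]

    # Preprocess the frame into the network's expected input blob.
    blob = cv2.dnn.blobFromImage(image, size=(300, 300))
    net.setInput(blob)
    detections = net.forward()

    # SSD-style output: each row is
    # [image_id, class_id, confidence, x_min, y_min, x_max, y_max],
    # with box coordinates normalized to [0, 1].
    for det in detections[0, 0]:
        confidence = float(det[2])
        if confidence < 0.5:
            continue
        x1 = int(det[3] * w); y1 = int(det[4] * h)
        x2 = int(det[5] * w); y2 = int(det[6] * h)
        cv2.rectangle(image, (x1, y1), (x2, y2), (0, 255, 0), 2)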

02

These models provided the baseline of state-of-the-art Edge-optimized AI, and their performance was measured in terms of inference speed in frames per second on a Raspberry Pi 3B+.
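
A frames-per-second figure of this kind can be obtained by timing a loop over a fixed set of frames. The helper below is a generic sketch, assuming a run_inference(frame) callable as a stand-in for whichever detector is being measured; it is not the exact harness used in the study.

    import time

    def measure_fps(run_inference, frames, warmup=5):
        # A few warm-up runs let caches and device initialization settle
        # before timing starts.
        for frame in frames[:warmup]:
            run_inference(frame)

        start = time.perf_counter()
        for frame in frames:
            run_inference(frame)
        elapsed = time.perf_counter() - start
        return len(frames) / elapsed  # frames per second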

03

Four different experiments were run: MobileNet v2 Face Detector without Movidius, MobileNet v2 Face Detector with Movidius, Xailient Face Detector without Movidius, and Xailient Face Detector with Movidius.
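
One common way to set up the with/without-Movidius configurations is to keep the pipeline identical and switch only the inference target. The sketch below uses OpenCV's DNN backend and target flags; the model file names are placeholders, and this is an illustrative setup rather than the exact one used in these experiments.

    import cv2

    def load_detector(model_xml, model_bin, use_movidius):
        net = cv2.dnn.readNet(model_xml, model_bin)
        if use_movidius:
            # Offload inference to the Movidius Neural Compute Stick
            # through the OpenVINO (Inference Engine) backend.
            net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
            net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
        else:
            # Run on the Raspberry Pi's CPU.
            net.setPreferableBackend(cv2.dnn.DNN_BACKEND_OPENCV)
            net.setPreferableTarget(cv2.dnn.DNN_TARGET_CPU)
        return net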

Results

The baseline MobileNet v2 without the Movidius NCS ran at 1 frame per second, and MobileNet v2 with the Movidius NCS ran at 5 frames per second. By comparison, the Xailient model processed 20 frames per second without the Movidius NCS and 70 frames per second with it.

Next Steps

AI hardware accelerators such as the Movidius™ Neural Compute Stick significantly improve inference times, helping meet the demand to run such deep learning models with real-time, on-device inference.

Discussion

The results were unprecedented when the Xailient Detectum™ algorithm was combined with the Movidius™ Neural Compute Stick hardware accelerator. Xailient achieved 70x faster inference than MobileNet v2 alone and 14x faster inference than MobileNet v2 combined with the Movidius™ Neural Compute Stick.
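
Both speedup factors follow directly from the frame rates reported in the Results section:

    # Speedups implied by the measured frame rates above.
    xailient_ncs = 70    # fps, Xailient Detectum + Movidius NCS
    mobilenet_cpu = 1    # fps, MobileNet v2 on the Pi's CPU
    mobilenet_ncs = 5    # fps, MobileNet v2 + Movidius NCS

    print(xailient_ncs / mobilenet_cpu)  # 70.0 -> 70x faster
    print(xailient_ncs / mobilenet_ncs)  # 14.0 -> 14x faster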
