A Comprehensive Glossary of Edge AI Terms for Business

Shandra Earney / November 24, 2022

Edge AI terminology businesses need to know

Edge AI refers to AI algorithms that run on local devices and machines.

It gets its name because computations occur at the network’s edge (near users and close to where data is located) rather than in a cloud computing facility.

This localized aspect of Edge AI comes with many benefits, including reduced latency, improved privacy and security, and improved reliability, to list just a few.

As Edge AI becomes increasingly popular, it’s vital for business decision-makers to understand the technology and how it can benefit their organization.

Here’s a comprehensive glossary of Edge AI terms companies should be aware of.

5G

5G is the fifth generation of mobile networks and wireless technology.

Following 4G, 5G provides faster network speeds for the ever-growing smartphone market, enhanced capabilities for calls and texts, and broader connectivity across the globe.

5G has lower latency, higher speeds, and greater data capacity than any previous generation. It's a reliable upgrade from 4G that stands to improve many industries, such as agriculture, healthcare, and logistics.

AI

Artificial intelligence (AI) refers to machines or systems that mimic human intelligence to perform tasks and iteratively improve based on collected information.

AI systems work by taking in large amounts of labeled training data, analyzing the data for patterns and correlations, and using those findings to make predictions about future states. 

AI significantly enhances human capabilities and contributions, making it a valuable business asset.
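
As a concrete illustration of that train-then-predict loop, here's a minimal sketch using scikit-learn's bundled digits dataset; the model and dataset are illustrative choices, not part of any particular AI system:

```python
# Minimal "learn from labeled data, then predict" sketch with scikit-learn.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)          # labeled training data
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=5000)    # analyze the data for patterns
model.fit(X_train, y_train)

# use those findings to make predictions about unseen inputs
print("accuracy:", model.score(X_test, y_test))
```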

AI accelerators

An AI accelerator is a high-performance parallel processor designed specifically for the efficient processing of AI workloads.

These processors make platforms significantly faster across a variety of models, applications, and use cases.

AI accelerators can be grouped into three main categories: AI-accelerated GPU, AI-accelerated CPU, and dedicated hardware AI accelerators.

AI chips

Artificial intelligence (AI) chips include field-programmable gate arrays (FPGAs), graphics processing units (GPUs), and application-specific integrated circuits (ASICs), specialized for AI. 

General-purpose chips like CPUs can be used for simple AI tasks, but they’re becoming less and less useful as AI becomes more sophisticated. 

AI chips are fast and efficient because they complete many computations per unit of energy consumed. They achieve this by incorporating huge numbers of tiny transistors, which switch faster and consume less energy than larger transistors.

Bandwidth

Bandwidth refers to the volume of information that can be sent over a connection in a measurable amount of time. 

Usually measured in bits, kilobits, or megabits per second, bandwidth is a measure of throughput (amount per second) rather than speed (distance covered per second).

Gaming, streaming, running AI, and other high-capacity activities demand a certain amount of bandwidth to deliver the best user experience without any lag.
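
As a back-of-the-envelope illustration (the file size and link speed below are made-up figures), transfer time is simply the payload divided by the bandwidth:

```python
# Rough throughput math: time to move a payload over a link.
payload_bits = 500 * 8 * 10**6      # a 500 MB file, expressed in bits
bandwidth_bps = 100 * 10**6         # a 100 Mbps connection

transfer_seconds = payload_bits / bandwidth_bps
print(f"~{transfer_seconds:.0f} s to transfer")  # ~40 s
```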

Bandwidth efficiency

Bandwidth efficiency is a term for the information rate transmitted over a given bandwidth in a communication system. 

A high bandwidth network delivers more information than a low bandwidth network in the same amount of time. 

As this makes the network feel faster, high bandwidth networks and connections are often referred to as “high-speed.”

Cloud

The term cloud refers to servers that are accessed over the internet. It also refers to the databases, software, and AI that run on those servers. 

Cloud servers are located in data centers all over the world. When a device uses cloud computing, data must travel to these centralized data centers for processing before returning to the device with a decision or action.

By implementing cloud computing, users don’t have to manage physical servers themselves or run software applications and AI on their own machines.

Cloud AI

Cloud AI combines artificial intelligence (AI) with cloud computing.

Cloud AI consists of a shared infrastructure for AI use cases, supporting numerous AI workloads and projects simultaneously. 

Cloud AI brings together AI, software, and hardware (including open source) to deliver AI software-as-a-service.

It gives enterprises ready access to AI capabilities without having to build their own infrastructure.

Cloud computing

Cloud computing refers to the delivery of different services through the internet. 

These resources include applications and tools like data storage, databases, servers, and software.

Rather than storing files, running applications, and generating insights on a local Edge device or Edge server, cloud-based solutions handle these tasks in remote data centers.

As long as an electronic device has access to the web, it has access to the data and the software programs offered by the cloud.

CNN

A convolutional neural network (CNN) is a type of artificial neural network used in image processing and recognition that’s specially designed to process pixel data.

A CNN is a powerful image-processing network that uses deep learning to perform both descriptive and generative tasks, typically as part of machine vision systems that include video and image recognition technologies.
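
To make the idea concrete, here's a minimal sketch of a CNN in PyTorch; the layer sizes and the 32x32 RGB input are arbitrary choices for illustration:

```python
# A tiny convolutional network: convolution learns local pixel patterns,
# pooling downsamples, and a linear layer maps features to class scores.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # scan pixel neighborhoods
            nn.ReLU(),
            nn.MaxPool2d(2),                             # downsample 32x32 -> 16x16
        )
        self.classifier = nn.Linear(16 * 16 * 16, num_classes)

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

logits = TinyCNN()(torch.randn(1, 3, 32, 32))  # one 32x32 RGB image
print(logits.shape)  # torch.Size([1, 10])
```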

Computer vision

Computer vision is a field of artificial intelligence (AI) that enables systems and computers to obtain meaningful information from videos, digital images, and other visual inputs — and make recommendations or take actions based on that information.
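
As a small example of pulling information out of an image, the sketch below runs Canny edge detection with OpenCV; photo.jpg is a placeholder path, and edge detection stands in for the many analyses a real system might run:

```python
# Extract simple structure (object outlines) from an image with OpenCV.
import cv2

image = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
edges = cv2.Canny(image, threshold1=100, threshold2=200)  # find outlines
print("edge pixels:", int((edges > 0).sum()))
```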

CPU

A Central Processing Unit (CPU) refers to the primary component of a computer that processes instructions. 

The CPU runs operating systems and applications, constantly receiving input from active software programs or the user. 

It processes data and produces outputs, which may be displayed on-screen or stored by the application. CPUs contain at least one processor core, the part of the chip that actually executes calculations.

CVOps

A camera watching a front door does a different job from a camera looking down on a backyard from the roofline. Both watch for people to keep users safe, but in the world of AI these are very different tasks.

CVOps is a category that describes the enterprise software process for delivering the right Computer Vision to the right camera at the right time.

A reliable, accurate CV system needs to collect data over time (to deal with Data Drift and Model Drift) and to deliver software updates that adapt to the changing physical world.

Edge

The term ‘Edge’ in Edge computing refers to where computing and storage resources are located. 

The ‘Edge’ refers to the end points of a network, such as user devices. Edge computing keeps compute and storage at (or as close as possible to) the point where data is initially generated.

Edge AI

Edge artificial intelligence (Edge AI) combines Artificial Intelligence and Edge computing. 

With Edge AI, algorithms are processed locally, either directly on user devices or servers near the device. 

The AI algorithms use data generated by these devices, allowing them to make independent decisions in milliseconds without connecting to the cloud or the internet.

Edge AI algorithms

Edge AI algorithms process data generated by hardware devices at the local level, either directly on the device or on a nearby server.

Edge AI algorithms utilize data that’s generated at the local level to make decisions in real time.
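
A minimal sketch of what on-device inference can look like, assuming the tflite_runtime package is installed and a placeholder model.tflite file ships with the device:

```python
# Run a TensorFlow Lite model locally: no cloud round trip involved.
import numpy as np
from tflite_runtime.interpreter import Interpreter

interpreter = Interpreter(model_path="model.tflite")  # placeholder model
interpreter.allocate_tensors()

inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]

# data generated locally (e.g. a sensor reading or camera frame)
local_data = np.zeros(inp["shape"], dtype=inp["dtype"])

interpreter.set_tensor(inp["index"], local_data)
interpreter.invoke()                       # the decision is made on the device
print(interpreter.get_tensor(out["index"]))
```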

Edge AI hardware

Edge AI hardware refers to the many devices used to power and process artificial intelligence at the Edge.

These devices sit at the Edge because they can process artificial intelligence on the hardware itself, rather than relying on sending data to the cloud.

Edge computing

Edge computing is a distributed computing method that keeps data processing and analysis closer to data sources such as IoT devices or local Edge servers. 

Keeping data close to its source can deliver substantial benefits, including lower costs, improved response times, and better bandwidth availability.

Edge devices

An Edge device is a piece of hardware that controls data flow at the boundary between two networks.

Edge devices fulfill a variety of roles, but they all serve as network exit or entry points. 

Some common Edge device functions include the routing, transmission, monitoring, processing, storage, filtering, and translation of data passing between networks. 

Edge devices can include an employee’s notebook computer, smartphones, various IoT sensors, security cameras, or even the internet-connected microwave in an office lunchroom.

Edge network

An Edge network refers to the area where a local network or device interfaces with the internet.

This area is geographically close to the devices it’s communicating with and can be thought of as an entry point to the network.

Embedded Edge AI

Embedded Edge AI refers to Edge AI technology that’s built-in or embedded within a device. This is done by incorporating specialized Edge AI chips into products.

GPU

A GPU, or a graphics processing unit, is a specialized processor. 

These processors can manipulate and alter memory to accelerate graphics rendering. 

GPUs can process many pieces of data simultaneously, making them useful for video editing, gaming, and machine learning use cases. They’re typically used in embedded systems, personal computers, smartphones, and gaming consoles.

HomeCams

HomeCams are a type of home security camera that can function indoors or outdoors, picking up faces and identifying who people are inside and around a property.

When HomeCams are installed, they act as a defender of your home, sending instant alerts when people are detected.

IoT

The term internet of things (IoT) refers to the collective network of connected devices and the technology that allows communication between devices and the cloud, and between the devices themselves. 

Thanks to the rise of high bandwidth telecommunications and cost-effective computer chips, billions of devices are connected to the internet. 

In fact, everyday devices like vacuums, toothbrushes, cars, and other machines can use sensors to collect data and respond intelligently to users, making them part of the IoT.

IoT applications

IoT applications are collections of software and services that integrate data received from IoT devices.

They use AI or machine learning to analyze data and make informed decisions.

IoT devices

IoT devices are hardware devices, such as appliances, gadgets, sensors, and other machines that collect and exchange data over the Internet. 

Different IoT devices have different functions, but they all work similarly. 

IoT devices are physical objects that sense things going on in the physical world. They contain integrated network adapters, firmware, and CPUs, and are usually connected to a Dynamic Host Configuration Protocol (DHCP) server.

Latency

Latency is another word for delay. 

When it comes to IT, low latency is associated with a positive user experience while high latency is associated with a poor one.

In computer networking, latency is an expression of how much time it takes for data to travel from one designated point to another. Ideally, latency should be as close to zero as possible. 

Depending on the application, even a relatively small increase in latency can ruin user experiences and render applications unusable.
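
One simple way to put a number on network latency is to time a TCP connection round trip; example.com is a placeholder endpoint here:

```python
# Measure how long one network round trip takes, in milliseconds.
import socket
import time

start = time.perf_counter()
sock = socket.create_connection(("example.com", 80), timeout=5)  # placeholder host
latency_ms = (time.perf_counter() - start) * 1000
sock.close()

print(f"TCP connect latency: {latency_ms:.1f} ms")
```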

Machine learning

Machine learning (ML) is a type of artificial intelligence (AI). 

Machine learning allows software applications to become more accurate at predicting outcomes without being explicitly programmed to perform such tasks.

Machine learning algorithms predict new output values using historical data as input.

Machine learning algorithms

A machine learning algorithm is a method AI systems can implement to conduct their tasks. These algorithms generally predict output values from given input data. 

Machine learning algorithms perform two key tasks: classification and regression.
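
A brief sketch of both task types using scikit-learn; the datasets and models are illustrative choices:

```python
# Classification predicts a discrete label; regression predicts a number.
from sklearn.datasets import load_iris, make_regression
from sklearn.linear_model import LinearRegression, LogisticRegression

# Classification: which species of iris is this flower?
Xc, yc = load_iris(return_X_y=True)
clf = LogisticRegression(max_iter=1000).fit(Xc, yc)
print("predicted class:", clf.predict(Xc[:1]))

# Regression: predict a continuous value from synthetic historical data
Xr, yr = make_regression(n_samples=100, n_features=4, noise=0.1, random_state=0)
reg = LinearRegression().fit(Xr, yr)
print("predicted value:", reg.predict(Xr[:1]))
```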

Microcontrollers

A microcontroller, sometimes referred to as an embedded controller or a microcontroller unit (MCU), is a compact integrated circuit.

Microcontrollers are designed to govern specific operations in embedded systems. The usual microcontroller includes a processor, memory, and input/output peripherals, all contained on a single chip.

Microcontrollers can be found in many devices and are essentially simple, miniature personal computers designed to control small features of a large component without a complex front-end operating system.
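
For a flavor of what "controlling a small feature" looks like, here's the classic LED-blink sketch in MicroPython (which runs on the microcontroller itself, not a desktop); pin 2 is a placeholder that depends on the board:

```python
# Blink an LED from a microcontroller's own I/O peripherals (MicroPython).
from machine import Pin
import time

led = Pin(2, Pin.OUT)   # configure an I/O pin as output; pin number varies by board

for _ in range(10):
    led.value(not led.value())  # toggle the LED on/off
    time.sleep(0.5)
```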

Microprocessors

Microprocessors are computer processors whose data processing logic and control are contained on a single integrated circuit, or a small number of integrated circuits.

Microprocessors perform the functions of a computer's central processing unit.

Orchestrait

Orchestrait is an AI management platform that automates and ensures Face Recognition data privacy compliance across all jurisdictions. 

Orchestrait allows companies to monitor the performance of AI algorithms on all IoT devices, allowing them to continuously improve their Edge AI models no matter how large the fleet.

TinyML

Tiny machine learning (TinyML) is a fast-growing field involving applications and technologies like algorithms, hardware, and software capable of performing on-device sensor data analytics at low power.

TinyML enables numerous always-on use cases and is well-suited to devices that use batteries.
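
One common TinyML workflow step is shrinking a trained model with post-training quantization so it fits low-power hardware. The sketch below uses TensorFlow Lite's converter on a stand-in Keras model; the tiny architecture is purely illustrative:

```python
# Convert and quantize a trained Keras model for low-power deployment.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]  # enable quantization
tflite_model = converter.convert()

with open("model.tflite", "wb") as f:
    f.write(tflite_model)
print(f"model size: {len(tflite_model)} bytes")
```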

TPU

Tensor Processing Units (TPUs) are hardware accelerators for machine learning workloads.

They were designed by Google and came into use in 2015.

In 2018, they became available for third-party use as part of Google’s cloud infrastructure. Google also offers a version of the chip for sale.

Video analytics

Video analytics is a type of AI technology that automatically analyzes video content to carry out tasks such as identifying moving objects, reading vehicle license plates, and performing Face Recognition.

While popular in security use cases, video analytics can also be used for quality control and traffic monitoring, among many other important use cases.
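
As a bare-bones illustration of a video analytic, the sketch below flags moving objects by differencing consecutive frames with OpenCV; clip.mp4 and the 1% change threshold are placeholder choices:

```python
# Detect motion by comparing each video frame to the previous one.
import cv2

cap = cv2.VideoCapture("clip.mp4")  # placeholder video path
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    diff = cv2.absdiff(gray, prev)                    # pixels that changed
    _, mask = cv2.threshold(diff, 25, 255, cv2.THRESH_BINARY)
    if (mask > 0).mean() > 0.01:                      # >1% of pixels moved
        print("motion detected")
    prev = gray

cap.release()
```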

Wearables

The term wearables refers to electronic technology or devices worn on the body. Wearable devices can track information in real time. 

Two common types of wearables include smartglasses and smartwatches.

About the Author

    Shandra is a writer and content marketer working in the B2B space. She enjoys learning about new concepts and ideas surrounding cutting-edge technologies and brings a passion for researching and writing about how the digital world influences society.
