Artificial Intelligence (AI) Enters the Mainstream

By Mark Patrick, Mouser Electronics


Until recently, Artificial Intelligence (AI) existed more in the realms of science fiction than in our everyday lives, but AI may now be behind more than we realise. Look inside many common household appliances, such as robotic vacuum cleaners, refrigerators, or washing machines, and we may well find AI. Our smartphones and tablets can increasingly identify us by face alone, using AI to support facial unlock and expression tracking.

Fig. 1: More and more consumer products have AI functionality

As dedicated hardware is combined with application-specific apps, AI will become more prevalent, finding its way into mundane everyday devices such as cameras, small appliances, security systems and more.

Why is AI taking centre stage, and why now? It seems that several trends are combining to create this time of rapid and significant change:

Leveraging developments in the gaming arena

The Internet now provides an unprecedented amount of real-world data, which is universally available. By applying deep learning techniques to this vast pool of data, neural networks have gone from relatively small-scale experiments to truly valuable tools that can outperform highly qualified humans at specific tasks.

However, the learning process is very compute-intensive and is based upon computers learning through exposure. So-called ‘deep learning’ involves neural networks being fed thousands or millions of pieces of data – for example, pictures of similar objects – so that AI systems can accurately distinguish one object from another. The learning systems are highly parallel to minimise learning times, yet it remains a processor- and power-hungry process.
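To make this concrete, the sketch below shows the shape of such a training loop, using PyTorch purely as an illustration (the article does not prescribe a particular framework): a small network is repeatedly shown batches of labelled examples and its weights are nudged after each one. The tensors here are random stand-ins; in practice they would be thousands or millions of real labelled pictures.

```python
import torch
from torch import nn

# A minimal sketch of 'learning through exposure'. The model, sizes and data
# are illustrative only.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(28 * 28, 128), nn.ReLU(),
                      nn.Linear(128, 10))
loss_fn = nn.CrossEntropyLoss()
optimiser = torch.optim.SGD(model.parameters(), lr=0.01)

for step in range(1000):                    # repeated exposure to examples
    images = torch.randn(64, 1, 28, 28)     # a batch of 64 'pictures'
    labels = torch.randint(0, 10, (64,))    # which object each picture shows
    loss = loss_fn(model(images), labels)   # how wrong the current guesses are
    optimiser.zero_grad()
    loss.backward()                         # work out how to adjust every weight
    optimiser.step()                        # nudge the weights and repeat
```

Each pass through this loop is dominated by the same few linear-algebra operations, which is precisely why the workload parallelises so well.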

Traditional CPUs are not ideal for this task as, by the very nature of their architecture, they are engineered to be flexible enough to accommodate a wide variety of processing tasks. AI learning, by contrast, simply requires that a relatively simple operation is performed over and over again.

One device with an architecture that supports this type of processing is the Graphics Processing Unit (GPU). High-end GPUs are found in the gaming world, where they enhance the realism of simulations by repeatedly performing complex mathematical operations – exactly what is required for AI deep learning.

GPUs are designed to operate in parallel, which brings the ability to process huge amounts of data quickly. Because their architecture is simpler and the core die area is smaller, GPUs have significantly more cores per device. As an example, an Intel Xeon Platinum 8180 processor has 28 cores, while an NVIDIA Tesla K80 has 4,992 cores. As a result, AI workloads can run 5-10 times faster on a GPU than on a CPU.
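As a rough, hedged illustration of that parallelism, the sketch below times the same large matrix multiplication on the CPU and, if one is available, on a GPU. The actual speed-up depends entirely on the hardware in use and will not necessarily match any particular figure quoted above.

```python
import time
import torch

# Illustrative timing only: matrix size and results vary with hardware.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

t0 = time.time()
_ = a @ b                                  # the CPU path
cpu_time = time.time() - t0

if torch.cuda.is_available():
    a_gpu, b_gpu = a.cuda(), b.cuda()
    torch.cuda.synchronize()
    t0 = time.time()
    _ = a_gpu @ b_gpu                      # the same work spread across thousands of GPU cores
    torch.cuda.synchronize()               # GPU work is asynchronous; wait for it to finish
    gpu_time = time.time() - t0
    print(f"CPU: {cpu_time:.3f}s  GPU: {gpu_time:.3f}s")
```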

While early developments were run on modified GPUs, given the importance of AI as a technology area, many chipmakers are now releasing GPU-style devices that are specifically architected for AI. NVIDIA’s Tesla V100, for example, combines conventional GPU cores with tensor cores designed specifically for deep learning. Google has announced a Tensor Processing Unit (TPU) that will form the backbone of its main services.
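On devices of this kind, the dedicated deep-learning hardware is typically exercised through mixed-precision arithmetic. The hedged sketch below shows one common way of doing this in PyTorch; it assumes a CUDA-capable GPU is present, and whether the tensor cores are actually engaged depends on the specific GPU and library versions in use.

```python
import torch
from torch import nn

# Illustrative mixed-precision training step; assumes a CUDA-capable GPU.
model = nn.Linear(1024, 1024).cuda()
optimiser = torch.optim.Adam(model.parameters())
scaler = torch.cuda.amp.GradScaler()        # keeps float16 gradients numerically stable

x = torch.randn(512, 1024, device="cuda")
target = torch.randn(512, 1024, device="cuda")

optimiser.zero_grad()
with torch.cuda.amp.autocast():             # run matrix multiplies in float16 where safe
    loss = nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()               # scale the loss to avoid float16 underflow
scaler.step(optimiser)
scaler.update()
```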

Given the huge performance increase in the main machine-learning processor cores, servers are being re-engineered with significantly improved bandwidth so that data can be moved to and from the processing cores as fast as they can process it. All of the major software frameworks are being modified and databases are being honed to suit this new, high-speed world of AI learning.
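A hedged sketch of what “keeping the cores fed” can look like at the software level is shown below: several worker processes prepare batches in parallel, and pinned memory speeds up host-to-device copies, so the accelerator spends less time waiting for data. The dataset is synthetic and the settings are illustrative only; a GPU is assumed to be present.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# A synthetic dataset stands in for real training data.
images = torch.randn(1_000, 3, 64, 64)
labels = torch.randint(0, 10, (1_000,))
dataset = TensorDataset(images, labels)

loader = DataLoader(dataset,
                    batch_size=64,
                    shuffle=True,
                    num_workers=4,       # prepare batches in parallel processes
                    pin_memory=True)     # page-locked memory speeds host-to-GPU copies

for batch_images, batch_labels in loader:
    # non_blocking=True lets the copy overlap with work already queued on the GPU
    batch_images = batch_images.to("cuda", non_blocking=True)
    batch_labels = batch_labels.to("cuda", non_blocking=True)
    # ... forward and backward passes would run here ...
```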

Where to place AI functionality?

One of the significant discussions is whether AI functionality should be placed in individual devices or centrally in the cloud. As with many such issues, there are a number of factors in play and no clear conclusion. High-bandwidth, low-latency Internet connectivity and the fact that remote processing reduces device costs are pushing AI towards the cloud. On the other hand, privacy concerns over sensitive data (such as medical records) mean that consumers often prefer to have AI processing performed locally. Local processing also means that devices do not necessarily need to be connected and can continue to operate during Internet outages.

While the computing needs for machine learning are significant, AI often requires algorithms to be executed close to the user – at the ‘edge’ of the cloud. Less powerful GPU structures such as NVIDIA’s Drive PX accelerator cards are being released specifically for this purpose. Containing up to four GPUs, these devices are configured for local sensor and video inputs, making them ideal for executing AI algorithms locally.
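A minimal sketch of such local execution is shown below: a previously trained model is loaded once and run on each incoming frame with no round trip to the cloud. The file name edge_model.pt is hypothetical, and the random tensor stands in for a frame captured from a local camera.

```python
import torch

# Hedged edge-inference sketch; the model file and frame are placeholders.
model = torch.jit.load("edge_model.pt")   # a TorchScript model exported beforehand
model.eval()

frame = torch.randn(1, 3, 224, 224)       # one video frame, preprocessed locally

with torch.no_grad():                     # inference only – no learning on the device
    scores = model(frame)
    prediction = scores.argmax(dim=1)     # the most likely class for this frame

print(int(prediction))                    # the result is acted on locally
```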

Personal assistants such as Apple’s Siri or Amazon’s Alexa (and others) leverage the power of centralised AI hosted in remote data centres to provide the benefits of AI with minimal local hardware content. Satellite navigation (GPS) devices often offload the processor-intensive task of route planning to cloud-based processors and then provide the directions using local computing power.
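As a purely hypothetical sketch of that split, the code below offloads the heavy route-planning request to a cloud endpoint and then turns the result into directions locally. The URL, the JSON fields and the response format are invented for illustration and do not correspond to any real service.

```python
import requests

def plan_route(origin, destination):
    """Offload the heavy route-planning step to a (hypothetical) cloud service."""
    response = requests.post("https://example.com/api/route",   # invented endpoint
                             json={"from": origin, "to": destination},
                             timeout=5)
    response.raise_for_status()
    return response.json()["waypoints"]    # assumed to be a list of (lat, lon) pairs

def directions(waypoints):
    """Turn the returned waypoints into instructions using local computing power."""
    return [f"Head towards {lat:.4f}, {lon:.4f}" for lat, lon in waypoints]

route = plan_route((51.5074, -0.1278), (48.8566, 2.3522))
for step in directions(route):
    print(step)
```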