Artificial Intelligence (AI) Enters the Mainstream

By Mark Patrick, Mouser Electronics


AI is also giving rise to new business models, such as an approach that could be termed ‘AI as a service’. Companies such as Amazon Web Services (AWS) are offering virtual GPU services: the customer provides the software to run on the AWS platform and, when AI processing would be too power-intensive to run locally, the data is sent to the cloud and the results are returned in under a second.
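As a rough illustration of this offload pattern, the sketch below posts data to a hypothetical cloud inference endpoint and falls back to local processing if the round trip fails or takes too long. The endpoint URL, payload and response fields are invented for illustration and do not represent an actual AWS API.

```python
import requests

# Hypothetical cloud inference endpoint -- illustrative only, not a real AWS API.
INFERENCE_URL = "https://example-cloud-provider.com/v1/infer"

def classify_remotely(sensor_data: dict, timeout_s: float = 1.0) -> dict:
    """Send data to a cloud-hosted model and return its prediction.

    The heavy GPU work runs in the cloud; the local device only
    serialises the request and waits briefly for the answer.
    """
    response = requests.post(INFERENCE_URL, json=sensor_data, timeout=timeout_s)
    response.raise_for_status()
    return response.json()  # e.g. {"label": "pedestrian", "confidence": 0.97}

if __name__ == "__main__":
    try:
        result = classify_remotely({"image_id": "frame_0042", "features": [0.1, 0.4, 0.7]})
        print("Cloud result:", result)
    except requests.RequestException:
        # Network unavailable or too slow: fall back to a simpler local model.
        print("Falling back to local processing")
```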

Finding the skillset to implement AI

Many people, developers included, view AI as a ‘black box’. Data is fed into the front end and decisions are delivered at the back end, but what goes on inside or how the decisions are reached is not well understood by the vast majority.

Fig. 2: AI – the ‘black box’

This presents a challenge for companies that wish to use AI to enhance their products or services: hardware engineers and software developers with the requisite skill sets and AI experience are few and far between.

Even with the right team on board, developing AI, and especially debugging it, presents a new set of challenges. As the behaviour is often unpredictable, at least to humans, the debugging process has to be fundamentally different from that used for a ‘normal’ piece of software code.

Manufacturers struggle to guarantee that AI products will perform as intended 100% of the time, and when they do go awry the consequences can be significant, as seen recently in the well-documented case of personal assistant devices that began to laugh almost uncontrollably.

Opening the closed AI box

Recently, the well-respected technology publication IEEE Spectrum interviewed Sameep Tandon, the cofounder and CEO of drive.ai – a company that develops AI-based software for self-driving vehicles. Tandon revealed that he regarded the closed box nature (or perception) of AI as a ‘serious problem’ and offered some insight into how his company is addressing the issue.

By using a modular approach, in which the driving system is built from modules that each have a simple, well-defined function, debugging is broken into a series of simpler challenges. As each module is evaluated individually, issues can be isolated far more easily than is possible with a single, large and complex AI system.
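A minimal sketch of that modular idea is shown below. The module boundaries, names and behaviours are invented for illustration and are not drive.ai’s actual architecture; the point is that each stage has a narrow contract that can be unit-tested on its own, so a fault can be traced to one module rather than to a monolithic system.

```python
from dataclasses import dataclass
from typing import List

# Invented module boundaries for illustration only.

@dataclass
class Detection:
    label: str
    distance_m: float

def perception(frame: List[float]) -> List[Detection]:
    """Detect objects in a camera frame (stub)."""
    return [Detection("pedestrian", 12.0)] if max(frame) > 0.5 else []

def planning(detections: List[Detection]) -> str:
    """Decide a manoeuvre from the detected objects (stub)."""
    return "brake" if any(d.distance_m < 15.0 for d in detections) else "cruise"

def control(manoeuvre: str) -> dict:
    """Translate the manoeuvre into actuator commands (stub)."""
    return {"throttle": 0.0, "brake": 0.8} if manoeuvre == "brake" else {"throttle": 0.3, "brake": 0.0}

# Because each module has a simple contract, it can be tested in isolation:
def test_planning_brakes_for_close_objects():
    assert planning([Detection("pedestrian", 5.0)]) == "brake"

if __name__ == "__main__":
    frame = [0.1, 0.7, 0.3]
    print(control(planning(perception(frame))))
```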

Relatively predictable results can be achieved by constraining the data set used for testing. This approach is quite similar to traditional debugging and might, for example, mean that image recognition modules are tested with only a small part of the image available.
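One way to picture that constrained-data approach is sketched below; the placeholder recogniser and crop coordinates are assumptions made for the example. Feeding a module only a small, controlled slice of its input makes the expected output unambiguous, much like a traditional unit test.

```python
import numpy as np

def recognise(patch: np.ndarray) -> str:
    """Placeholder recogniser: classifies a patch by its mean brightness."""
    return "bright" if patch.mean() > 0.5 else "dark"

# Constrain the test to a small, known region of the image so the
# expected answer is easy to predict and verify.
image = np.zeros((480, 640))
image[100:150, 200:260] = 1.0           # a deliberately bright patch

crop = image[100:150, 200:260]          # test with only part of the image
assert recognise(crop) == "bright"
assert recognise(image[0:50, 0:50]) == "dark"
print("Constrained tests passed")
```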

While, in everyday life, unanticipated AI behaviour may raise an eyebrow or even a laugh, in applications such as self-driving vehicles the outcome can be far more serious. As AI applications are ‘trained’ or ‘grown’ rather than explicitly programmed, unanticipated outcomes must be expected – at least some of the time. To address this, many safety-critical applications have redundant or diverse systems that monitor each other, with the ability to shut down safely if needed.
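The sketch below illustrates that redundancy pattern in simplified form; the two ‘channels’, their data sources and the disagreement threshold are all invented for the example. Two independent estimators cross-check each other, and a supervisor demands a safe stop if their outputs diverge too far.

```python
def primary_speed_estimate(sensor: dict) -> float:
    """Primary channel: e.g. a wheel-speed-based estimate (stub)."""
    return sensor["wheel_speed"]

def secondary_speed_estimate(sensor: dict) -> float:
    """Diverse channel: an independent estimate, e.g. from GPS (stub)."""
    return sensor["gps_speed"]

MAX_DISAGREEMENT = 2.0  # m/s; invented threshold for illustration

def supervise(sensor: dict) -> str:
    """Cross-check the two channels and demand a safe stop on disagreement."""
    a = primary_speed_estimate(sensor)
    b = secondary_speed_estimate(sensor)
    if abs(a - b) > MAX_DISAGREEMENT:
        return "SAFE_SHUTDOWN"   # channels disagree: trust neither
    return "NORMAL"

print(supervise({"wheel_speed": 14.8, "gps_speed": 15.1}))  # NORMAL
print(supervise({"wheel_speed": 14.8, "gps_speed": 25.0}))  # SAFE_SHUTDOWN
```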

Summary
AI promises a very different and much-improved future compared with what would have been possible with ‘dumb’ devices. But to truly unlock that potential, everyone involved needs to be prepared to think and act differently. Developers need to change their approach, especially to debugging, as their role becomes more one of ‘coach’ than ‘designer’.

End users, too, need to move on from how they used simpler ‘dumb’ devices and embrace the capabilities of AI-based devices if they are to fully realise the benefits on offer.

Changing perceptions in this way will take time, along with well-defined plans for design, marketing and educating consumers on how to benefit from AI.