Artificial Intelligence is the capability of a machine to imitate intelligent human behaviour.
Curated by Vinay Prabhakar Minj
Artificial Intelligence (AI) is not new; it has been around for decades. Here’s a brief glimpse of AI history:
1956 – The possibility of creating an electronic brain gave birth to AI.
1964 – Eliza, a natural language processing program and one of the first chatbots, was introduced.
1980s – The first driverless, autonomous cars were introduced.
1997 – The Deep Blue supercomputer defeated world chess champion Garry Kasparov in a chess match.
2010 – Siri, a virtual assistant based on a natural language interface, was revealed.
2012 – Google researchers trained giant neural networks for image recognition without providing any identifying information (labels). The system learned to detect pictures on its own using deep learning algorithms.
2015 – The AlphaGo computer program became the first to defeat a professional human Go player.
2016 to 2017 – Several AI hardware platforms such as the Google Tensor Processing Unit, Intel Nervana NNP, NVIDIA Tesla V100, neuromorphic chips, Apple A11 Bionic and IBM POWER9 were introduced.
Today, AI is a key trend for developers in embedded systems. Some key driving factors for the growth of AI have been:
- Increasing amounts of data
- Greater data storage capacity
- Better computational power
- Improvements in security
- Automation
AI: A large digital brain
When we talk about AI, most of the time we refer to it as a big brain that can perform fast computations. The way it handles large data is through:
- Objects: These could be your phones, sensors or any smart devices. An object has key elements such as processing, connectivity, security, sensing and actuation.
- Gateways: Data sent by the objects is received by the gateways for further processing and secure, connected transfer onwards.
- Cloud: From the gateways, data further goes to the Cloud. This is also known as the AI brain. Here, the final processing takes place.
Based on the above computations, results are sent back to the real world. For this, huge bandwidth is required to distribute the data.
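The object → gateway → cloud flow described above can be sketched in a few lines. This is a purely illustrative toy (the function names, checksum and threshold are made up, not from the article):

```python
# Toy sketch of the object -> gateway -> cloud data flow.

def sense(object_id):
    """An 'object' (phone, sensor node) produces a raw reading."""
    return {"source": object_id, "reading": 23.7}

def gateway(packet):
    """The gateway aggregates, secures and forwards the data."""
    packet["checksum"] = sum(bytes(str(packet["reading"]), "ascii")) & 0xFF
    return packet

def cloud(packet):
    """The 'AI brain': final processing, then a decision goes back out."""
    return "cool" if packet["reading"] > 22.0 else "idle"

packet = gateway(sense("thermostat-01"))
print(cloud(packet))  # -> cool
```

The decision ("cool") travelling back to an actuator is the return path that demands the bandwidth mentioned above.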
AI opportunity in embedded systems
The benefits of distributed AI (i.e. distributing the AI brain over the sensors and gateway stages) include:
- Reduced central processing needs
- Reduced data communication requirements
- Faster response time
- Locally improved intelligence
- Improved privacy and data security
- Overall reduced energy needs
Therefore, devices with integrated neural networks will be ideal for providing distributed artificial intelligence. It is also to be noted that this new wave of AI will create demand for many more sensor and actuator nodes.
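The reduced-communication benefit is easy to see with back-of-the-envelope arithmetic. The numbers below (16 kHz audio, 100 detections per day) are assumed for illustration, not taken from the article:

```python
# Compare streaming raw audio to the cloud vs. sending only the
# locally inferred labels from an edge AI node.

SAMPLE_RATE_HZ = 16_000        # assumed 16 kHz, 16-bit mono audio
BYTES_PER_SAMPLE = 2
SECONDS_PER_DAY = 24 * 60 * 60

raw_bytes_per_day = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE * SECONDS_PER_DAY

EVENTS_PER_DAY = 100           # assumed: node reports 100 detections/day
BYTES_PER_EVENT = 32           # a short label plus a timestamp

edge_bytes_per_day = EVENTS_PER_DAY * BYTES_PER_EVENT

print(f"raw stream : {raw_bytes_per_day / 1e9:.2f} GB/day")
print(f"edge labels: {edge_bytes_per_day} bytes/day")
```

Under these assumptions the raw stream is roughly 2.76 GB per day against a few kilobytes of labels, which is why pushing inference to the node cuts both communication and central processing needs.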
AI applications
To perform scene classification of your present location, the AI node will require some audio, some environmental sensors and some video. This way, the node will become intelligent enough to detect data on its own and will not have to continuously rely on the cloud for processing it.
This will also create a lot of user interaction and continuous learning. The node will keep on understanding and updating things.
AI vs Machine Learning vs Deep Learning
AI generally refers to any technique which enables a computer to mimic human intelligence.
Machine Learning refers to the software research area that enables a wide variety of algorithms and methodologies to improve over time through self-learning from data.
Deep Learning utilises learning algorithms that derive meaning out of data, by using a hierarchy of multiple layers that mimic the neural networks of the human brain.
So, all three of the above are essential for distributed AI.
Why is deep learning important?
Convolutional Neural Networks (CNNs) are a technique for building and training deep neural networks. Below is a summary of how CNNs have outperformed previous methods on a number of tasks:
| Problem | Dataset | Accuracy without CNN | Accuracy with CNN | Difference |
|---|---|---|---|---|
| Object classification | ILSVRC | 73.8% | 95.1% | +21.3% |
| Scene classification | SUN | 37.5% | 56.0% | +18.5% |
| Object detection | VOC 2007 | 34.3% | 60.9% | +26.6% |
| Fine-grained classification | 200 Birds | 61.8% | 75.7% | +13.9% |
| Attribute detection | H3D | 69.1% | 74.6% | +5.5% |
| Face recognition | LFW | 96.3% | 99.77% | +3.47% |
| Instance retrieval | UKB | 89.3% | 96.3% | +7.0% |
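At the heart of every CNN layer is a convolution: a small filter slid across the image, producing a large response where the pattern matches. A minimal sketch (plain Python, valid padding, no optimisation; the image and kernel values are made up):

```python
# Minimal 2-D convolution, the core operation a CNN layer applies.

def conv2d(image, kernel):
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(ih - kh + 1):          # slide the filter vertically
        row = []
        for c in range(iw - kw + 1):      # ...and horizontally
            acc = 0.0
            for i in range(kh):
                for j in range(kw):
                    acc += image[r + i][c + j] * kernel[i][j]
            row.append(acc)
        out.append(row)
    return out

# A vertical-edge detector on a 4x4 image whose right half is bright:
# the filter responds strongly exactly at the edge.
image = [[0, 0, 9, 9]] * 4
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # -> [[0.0, 18.0, 0.0], [0.0, 18.0, 0.0], [0.0, 18.0, 0.0]]
```

In a real CNN, many such filters are learned from data rather than hand-written, and their outputs are stacked into the layer hierarchy described above.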
Neural networks
AI neural networks work very much like the biological neural networks: taking a lot of inputs, giving weights to each input and getting the output after computing algorithms on the weighted inputs. This output is then given to different AI neurons (just like in the brain).
A neural network is defined by an architecture consisting of multiple hidden layers of neurons (a deep neural network), with the layers interconnected internally.
Finding the optimal architecture and weight values is called the training (or learning) phase. It is typically done once, on big GPU servers or in the Cloud.
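The weighted-input computation described above can be sketched as follows. All the weights and inputs here are made-up illustrative values, not a trained model:

```python
import math

# One artificial neuron: multiply each input by a weight, add a bias,
# then squash the sum through an activation function.

def neuron(inputs, weights, bias):
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))   # sigmoid activation

# Two stacked layers form a tiny "deep" network: the outputs of the
# first (hidden) layer become the inputs of the next, just as the
# output of one brain neuron feeds other neurons.
hidden = [neuron([0.5, 0.2], [0.4, -0.6], 0.1),
          neuron([0.5, 0.2], [0.9, 0.3], -0.2)]
output = neuron(hidden, [1.2, -0.7], 0.0)
print(round(output, 3))
```

Training (the phase mentioned above) is the search for the weight and bias values that make such outputs match labelled data.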
AI application processing requirements
- Require low power: Sensor analysis, activity recognition (motion sensors), stress/attention analysis.
- Require medium power: Audio & sound, speech recognition, object detection.
- Require high power: Computer vision, multiple object detection, tracking/classification, speech synthesis.
Key steps behind neural networks
- Capture data (through sensors).
- Label the captured data and define the neural network topology.
- Train the neural network model.
- Convert the trained network into optimised code for the MCU.
- Process and analyse new data using the trained neural network.
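The steps above can be sketched end to end with a deliberately tiny model. Everything here is hypothetical: a one-weight perceptron stands in for the neural network, and scaling weights to 8-bit integers stands in for generating optimised MCU code (real flows use tools such as STM32Cube.AI or TensorFlow Lite for Microcontrollers):

```python
# 1-2. Captured, labelled sensor data: (reading, is_motion)
data = [(0.1, 0), (0.2, 0), (0.8, 1), (0.9, 1)]

# 3. Train a one-weight perceptron with the classic update rule.
w, b = 0.0, 0.0
for _ in range(100):
    for x, y in data:
        pred = 1 if w * x + b > 0 else 0
        w += 0.1 * (y - pred) * x
        b += 0.1 * (y - pred)

# 4. "Convert" for an MCU: scale the weights to 8-bit integers so
#    inference needs only integer arithmetic.
w_q, b_q = int(w * 127), int(b * 127)

# 5. Run inference on new data with the quantised model.
def predict_q(x_q):            # x_q is the input scaled to 0..127
    return 1 if w_q * x_q + b_q * 127 > 0 else 0

print(predict_q(int(0.85 * 127)))  # classify a new sensor reading
```

The integer-only `predict_q` step is the part that would actually run on the microcontroller; capture, labelling and training happen offline.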
Sensors are so optimised now that they can be embedded anywhere and make even very small objects intelligent.
Future evolution of artificial intelligence
- Artificial Intelligence
- Artificial Narrow Intelligence
- Artificial General Intelligence
- Artificial Super Intelligence
Today we are at the second level (Narrow Intelligence), where machines are trained for a particular activity or function. The next level of evolution (General Intelligence), which might arrive in a few years, may enable machines to follow your commands and communicate with other machines to improve productivity. The final level, which may come in the next few decades, will see these machines surpassing humans.
These machines could make multiple use cases possible, but at the same time this evolution could have a negative impact on humans.
So, we have to ensure that the AI technology that we come up with positively impacts our social life.
About the author
The above article is an extract from a speech presented by Vinay Thapliyal, MCU Marketing Manager, STMicroelectronics–India, at IOTSHOW.IN 2019. Vinay has more than 24 years’ experience in the semiconductor industry. He has been with STMicroelectronics Pvt Ltd for the past 15 years and has worked in various functions such as embedded designs, applications and technical marketing.