Intel explains: 6 artificial intelligence terms

Artificial intelligence encompasses a broad set of computer science techniques for perception, logic, and learning. One approach to artificial intelligence is machine learning - programs whose performance improves over time and with more data. Deep learning is one of the most promising approaches to machine learning. It uses neural network-based algorithms - a way to connect inputs to outputs based on a model of how we think the brain works - that find the best way to solve problems on their own, rather than following rules written by a programmer or scientist. Training is how deep learning applications are "programmed" - feeding them more data and fine-tuning them. Inference is how they are run, for analysis or decision making.

ARTIFICIAL INTELLIGENCE

There are many ways to define artificial intelligence (AI) - not least because "intelligence" itself is difficult to pin down, and because people apply the label to everything from the awe-inspiring to the merely practical.

Intel researcher Pradeep Dubey calls artificial intelligence "a simple vision in which computers become indistinguishable from humans." It has also been defined simply as "making sense of data," a definition that largely reflects how companies use it today.

In general, AI is a generic term for a variety of computer algorithms and approaches that allow machines to perceive, reason, act, and adapt, as humans do - or in ways beyond our abilities.

Human-like capabilities include apps that recognize faces in photos, robots that can roam hotels and factory floors, and devices that can hold (somewhat) natural conversations with a person.

Capabilities that go beyond the human could include identifying potentially dangerous storms before they form, predicting equipment failures before they occur, or detecting malware - tasks that are difficult or impossible for people to do.

A group at Intel, the Artificial Intelligence Products Group, works to deliver the hardware, software, data science, and research needed to bring these new capabilities to life.

"We want to create a new class of AI that understands data, in all areas" - Amir Khosrowshahi, Chief Technology Officer, Intel Artificial Intelligence Product Group.

People "think we are recreating a brain," said Amir Khosrowshahi, chief technology officer for the Artificial Intelligence Products Group in an interview. However, “we want to go further; We want to create a new kind of artificial intelligence that can understand the statistics of the data used in commerce, medicine, in all fields, and that the nature of the data is very different from the real world. ”

Work on artificial intelligence dates back to at least the 1950s and has gone through several boom-and-bust cycles of research and investment since. Hopes for new approaches and applications rose (with Arthur Samuel's checkers-playing program in the 1950s and Stanford's Shakey robot in the 1960s, for example), then crashed when those methods failed to deliver - bringing on the "AI winters," when investment and public interest cooled.

There are four big reasons we are in a new AI spring today: more compute (the cloud puts high-capacity computers within everyone's reach), more data (especially with the proliferation of cameras and sensors), better algorithms (approaches have gone from academic curiosities to beating human performance on tasks such as reading comprehension), and extensive investment.

MACHINE LEARNING

Artificial intelligence encompasses a whole set of different computing methods, and an important subset of these is called "machine learning."

As Intel's Dubey explains, machine learning "is a program where performance improves over time" - and one that also improves with more data. In other words, the more the machine "studies," the smarter it gets.

A more formal definition of machine learning used at Intel is: "the construction and study of algorithms that can learn from data to make predictions or decisions."
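
To make that definition concrete, here is a minimal sketch - in Python, with invented numbers, and not code used at Intel - of a program that learns from data to make a prediction:

```python
import numpy as np

# Toy "learning from data": fit a line to observed (hours studied, test score)
# pairs, then predict a score we have not seen. The data is made up for illustration.
hours = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.array([52.0, 61.0, 68.0, 79.0, 88.0])

# np.polyfit finds the slope and intercept that minimize squared error -
# these are the parameters the algorithm "learns" from the data.
slope, intercept = np.polyfit(hours, scores, deg=1)

# A prediction about an input the program never saw.
print(f"Predicted score after 6 hours: {slope * 6.0 + intercept:.1f}")
```

Give it more (hours, score) pairs and the fitted line - the "learned" part - generally gets better, which is the sense in which such a program improves with more data.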

Wired magazine declared "the end of code" in describing how machine learning is changing programming: "In traditional programming, an engineer writes explicit, step-by-step instructions for the computer to follow. With machine learning, programmers don't code computers with instructions. They train them."

Using machine learning, a leading Chinese ophthalmology hospital was able to raise its detection rate for potential causes of blindness to 93%, up from the 70 to 80% clinicians traditionally achieved.

For example: an artificial intelligence-powered ophthalmoscope (the digital version of the device a doctor uses to look inside the eye), built by the Aier Eye Hospital Group and MedImaging Integrated Solutions, learned to identify diabetic retinopathy and age-related macular degeneration (both of which can cause blindness) by "seeing" thousands of labeled images of healthy and unhealthy eyes.

An early analysis based on data from 5,000 Aier patients showed that detection accuracy, which had averaged 70 to 80% for human examiners, rose to 93% with the AI solution. With more time and more data, its accuracy could continue to improve.

NEURAL NETWORKS AND DEEP LEARNING

Neural networks and deep learning are closely related, and the terms are often used interchangeably - but there is a difference. Put simply, deep learning is a specific method of machine learning, one based primarily on neural networks.

"In traditional supervised machine learning, systems need an expert to use their knowledge to specify the information (called characteristics) in the input data that will best result in a well-trained system," wrote a team of data scientists and engineers. of artificial intelligence from Intel in a recent blog. In the blindness prevention example, it would mean specifying the colors, shapes, and patterns that separate a healthy eye from a troubled eye.

Deep learning is different. "Instead of specifying the features of our data that we think will lead to the best classification accuracy," they continued, "we let the machine find this information on its own. Often it can see the problem in a way even an expert could not have imagined." The sketch below makes the contrast concrete.
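
Here is a hedged sketch of that difference, with an invented image and invented feature names - illustration only, not the Intel team's code:

```python
import numpy as np

# A fake grayscale retina image, values in [0, 1], for illustration only.
img = np.random.rand(64, 64)

# Traditional supervised ML: an expert hand-codes the features they believe matter.
def hand_crafted_features(image):
    brightness = image.mean()             # overall lightness
    contrast = image.std()                # spread of pixel values
    bright_ratio = (image > 0.8).mean()   # fraction of very bright pixels
    return np.array([brightness, contrast, bright_ratio])

expert_input = hand_crafted_features(img)  # 3 numbers chosen by a human expert

# Deep learning: the raw pixels go in unchanged, and the network's layers
# are trained to discover useful features on their own.
deep_input = img.flatten()                 # 4,096 numbers, no expert choices
```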

In other words, Aier's eye-health examiner may not "see" conditions the way a human doctor does - and yet it is more accurate. That is what makes deep learning so powerful: given enough reliable data, it can be used to solve problems with unprecedented skill and precision.

The neural network - technically an "artificial neural network," since it is based on how we think the brain works - provides the math that makes this work. Google offers a tool that lets you play with a neural network right in your browser, along with a simplified definition: "First, a collection of software 'neurons' is created and connected, allowing them to send messages to each other. Next, the network is asked to solve a problem, which it attempts to do over and over, each time strengthening the connections that lead to success and diminishing those that lead to failure."

A basic representation of a neural network would show circles for the neurons and arrows for the connections between them.

The important part is this: the neural network lets the problem be broken into smaller and smaller - and therefore simpler and simpler - pieces. The "deep" in deep learning refers to the use of a neural network with multiple layers. With more layers, the program can make finer distinctions among categories and make them more accurately - it just demands more data and more computing power.

"Deep learning is not magic - it is math." - Pradeep Dubey, Intel researcher and director of the Intel Labs Parallel Computing Laboratory.

The concepts sound complex, but the actual code being executed is very simple. "It's not magic - it's math," Dubey pointed out. Matrix multiplication, to be exact. "It couldn't be simpler," added the Intel research fellow.
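
As a rough illustration of that point - a toy sketch with made-up layer sizes, not production code - the forward pass of a small two-layer network really is little more than matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network. Stacking more layers like these is what makes it "deep".
x = rng.random(4)        # input: 4 numbers (say, pixel values)
W1 = rng.random((8, 4))  # connection weights from the input to 8 hidden neurons
W2 = rng.random((3, 8))  # connection weights from the hidden layer to 3 outputs

hidden = np.maximum(0.0, W1 @ x)  # matrix multiply, then a simple nonlinearity (ReLU)
output = W2 @ hidden              # another matrix multiply
print(output)                     # scores for 3 hypothetical categories
```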

TRAINING AND INFERENCE

All right - two more quick concepts to consider: training and inference. Training is the part of machine learning where you build your algorithm, shaping it with data until it does what you want it to do. This is the hard part.

"Training is the process by which our system finds patterns in the data," wrote the AI team at Intel. “During the training, we pass data through the neural network, errors are corrected after each sample and it is repeated until the best network parameterization is achieved. After the network is trained, the resulting architecture can be used for inference. "

In the case of the Aier eye examiner, for example, training involved feeding it images of eyes labeled as healthy or unhealthy.

Then comes inference, which fits its dictionary definition to the letter: "the act or process of deriving logical conclusions from premises known or assumed to be true." In the software analogy, training is writing the program, while inference is running it.

"Inference is the process of using the trained model to make predictions about data that we haven't seen before," wrote the expert kids at Intel. This is where the function a consumer might see really takes place - Aier's camera assessing the health of their eyes, Bing answering their questions, or a drone circling an obstacle automatically.