What is artificial intelligence (AI)?

Artificial intelligence (AI) is difficult to define because experts continue to argue about what the term encompasses. We’ll get into those arguments later, but for now, think of AI as the technology through which computers execute tasks that would normally require human intellect. Humans and animals have natural intelligence; computers and other intelligent agents have artificial intelligence, designed by engineers and scientists.

AI differs from machine learning and deep learning, though the topics are related. Machine learning is a subcategory of AI in which a machine learns to perform functions it wasn’t explicitly programmed to do, by finding patterns in data. Deep learning is a subcategory of machine learning that uses multi-layer neural networks to analyze data. In a way, it mimics human thought.
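To make the distinction concrete, here is a minimal sketch of the machine-learning idea in Python. All names and numbers are illustrative, not any real library’s API: the program is never told the rule y = 2x, yet it “learns” the slope from a handful of examples.

```python
def learn_slope(examples, steps=1000, lr=0.01):
    """Fit y = w * x to (x, y) pairs by simple gradient descent."""
    w = 0.0  # start with no knowledge of the hidden rule
    for _ in range(steps):
        for x, y in examples:
            error = w * x - y      # how wrong the current guess is
            w -= lr * error * x    # nudge w to reduce the error
    return w

data = [(1, 2), (2, 4), (3, 6)]    # samples drawn from the hidden rule y = 2x
print(round(learn_slope(data), 2)) # converges close to 2.0
```

The point of the sketch is that nowhere in the code is the rule “multiply by 2” written down; the behavior emerges from the data, which is what separates machine learning from conventional programming.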

The great debate

Alan Turing was a pioneer in the computing space. Many believe his 1950 theories about the creation of a “thinking computer” became the foundation of the field of AI as we know it today. Turing wondered whether computers could think, but he also argued that empirical evidence would be necessary to prove that milestone had been achieved.

To tease out that evidence, he created the “Turing Test,” based on a Victorian parlor game called the imitation game. If a machine could perform as well as a human at the game, its performance would be considered equivalent to thought. It sounds simple, but to date, scientists have not been able to build a machine that comes close to passing the Turing Test.

Others, such as fellow founding father of AI John McCarthy (widely credited with coining the term “artificial intelligence”), argued for a simpler definition. McCarthy stated that a machine could be thought of as having AI if it performed tasks “which, when done by people, are said to involve intelligence.” And this broad definition, as you can imagine, still left room for plenty of debate.

Broadening the lens: General vs. narrow AI

AI technology has advanced significantly. While computers today cannot be said to think by Turing’s standard, our machines do all sorts of incredible things. IBM’s supercomputer Watson won a game of Jeopardy! We are nearing the market release of autonomous vehicles. And many consider the algorithms that recommend radio stations and products based on your browsing history to be a true sign of artificial intelligence.

To acknowledge these advancements, the field of AI currently distinguishes two types of technology: General (or Strong) AI and Narrow (or Weak) AI. General AI can be thought of as a machine capable of the kind of complex, multi-faceted thinking humans exhibit – in line with Turing’s definition. Narrow AI, on the other hand, encompasses the kind of AI we see today: smart machines that can carry out intelligent tasks that would otherwise require human intervention or ingenuity. And that kind of AI exists all around us – not just in supercomputers like Watson, but also in the predictive algorithms behind Google Maps, in facial recognition programs, in systems that predict industrial wear and tear, and so much more.

Where AI is headed

Experts continue to debate how long it will be until General AI-powered devices co-exist with us. Meanwhile, important conversations are taking place about the ethics of how developers build this future.

AI can be used for good, but it can also be weaponized. Ethics and quality control must become inherent to the development of AI programs. Some argue that if computers are not built with the capacity for things like empathy, a future akin to that of the film The Terminator awaits us. While that future isn’t necessarily probable, it is possible. AI programmers hold incredible power – and, to quote another great film, with great power comes great responsibility.
