During the pandemic, researchers in Liverpool built a robot chemist that can work 21.5 hours a day. Fully autonomous, it pauses only to recharge. It ran 688 experiments in eight days, discovering a brand new chemical catalyst all on its own. But most impressively, its “brain” can navigate a 10-dimensional space of 98 million possible experiments. It can even intelligently choose the next one to run based on previous results. Has the artificial intelligence takeover finally arrived?

In this no-BS explainer, we’ll demystify exactly what AI is and why you should care. There will be no needless jargon, hype, or buzzwords. Our goal is to help you truly understand, prepare for, and reap the benefits of artificial intelligence.

(Image Credit: NicoElNino/Adobe)

What exactly is artificial intelligence?

Humanity has always tried to go beyond our natural limits. We tamed oxen to plow our fields faster. We bred horses to pull our carriages farther. We built machines to staff our factories and lift heavier objects. So it should be no surprise that we’re now doing the same with our intelligence.

The goal of AI is to outsource the brain.

Put simply, artificial intelligence is about teaching machines to imitate human intelligence. In the general sense, any number of things could fit the broadest definition of AI. One could even argue that old arcade games like Pac-Man used “AI” in the maze ghosts. After all, they’re meant to imitate four opposing players chasing Pac-Man.

Of course—those ghosts are not true “AI” in the modern sense of the word. When we talk about AI today, we don’t mean deterministic programs. If Pinky always targets the spot four tiles ahead of Pac-Man, you can easily exploit that behavior. It becomes predictable, formulaic, and no longer “intelligent.”

Early versions of artificial intelligence. (Note: The “intelligence” part is debatable.)

Instead, AI specifically implies the ability to adapt and learn. As the program collects more data, it must get smarter. For example, if the ghosts changed their patterns based on your personal play style—that would be AI. For this reason, the term machine learning has become synonymous with artificial intelligence. The machine must be able to learn and change its own behavior, without human intervention.

What are neural networks and deep learning?

A neural network is a statistical model loosely based on how the human brain works. It even borrows terms like “neurons” and “synapses” to describe its structure. But for all intents and purposes, you can think of it as a huge lattice of numbers. Those numbers, or parameters, represent the connection strengths between neurons.

Here’s where the “learning” part comes in. You can use data to update those parameters toward a specific goal. You can strengthen the connection here… loosen the connection there… etc. This is how the human brain works too, at an oversimplified level.
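To make that less abstract, here’s a minimal sketch in Python of a single artificial “neuron.” It isn’t taken from any particular AI library; the variable names and numbers are invented for illustration. The weights are the connection strengths, and each training step strengthens or loosens them to pull the output toward a goal.

```python
import numpy as np

# A single "neuron": the weights are the connection strengths (the parameters).
weights = np.array([0.1, -0.2, 0.4])
bias = 0.0

def predict(x):
    # Weighted sum of the inputs, squashed into the 0..1 range (a sigmoid).
    return 1 / (1 + np.exp(-(np.dot(weights, x) + bias)))

# One training example: an input and the answer we want (the "goal").
x, target = np.array([1.0, 0.5, -1.0]), 1.0

# Learning: repeatedly nudge each weight to shrink the prediction error.
learning_rate = 0.1
for _ in range(100):
    error = predict(x) - target
    weights -= learning_rate * error * x   # strengthen or loosen each connection
    bias -= learning_rate * error
```

A real neural network just repeats this same idea across millions or billions of connections.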

Neural networks are very flexible—as long as you have the data. You can “train” one neural network to recognize cats in photographs. You can train another one to translate French to Japanese. And you can train yet another one to pilot an autonomous vehicle.

But as the task gets more complex, you’ll need a bigger neural network with more learning capacity. Think about it this way: If you want your AI to spot cats in photos, there are only about fifty breeds to worry about. But if you want your AI to drive a car, it better be ready for the trillions of road, weather, and traffic permutations.

The good news is that you don’t need neural networks a trillion times larger. As a neural network gets bigger, its learning capacity grows much faster than its size. We refer to these larger neural networks as “deep” because they have more layers of neurons—hence, the term deep learning.
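If you’re wondering what “more layers” looks like in practice, here’s a hedged sketch using PyTorch (one popular deep learning library; the layer sizes here are arbitrary). The deep network uses the same ingredients as the shallow one, just stacked higher.

```python
import torch.nn as nn

# A "shallow" network: one hidden layer of neurons between input and output.
shallow = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))

# A "deep" network: the same ingredients, just more layers stacked up.
deep = nn.Sequential(
    nn.Linear(64, 128), nn.ReLU(),
    nn.Linear(128, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
```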

Deep learning relies on GPUs (graphics processing units).

The theory for deep learning was published all the way back in 1967. Yet it wasn’t until almost 50 years later—around the mid-2010s—that commercial deep learning really took off. Why? Because large neural networks require mountains of data to train. And crunching through all that data requires a mountain of processing power. So for decades, we just didn’t have enough computing power to train deep neural networks within a feasible amount of time.

But that changed in 2009, in what people now consider to be the “big bang of deep learning.” That year, AI researchers discovered that GPUs could speed up deep learning by 100X. Originally designed as graphics chips in gaming computers, GPUs have parallel circuit structures. As it turns out, that makes GPUs perfect for processing large blocks of data in parallel.
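For the curious, here’s roughly what “running on a GPU” looks like from the programmer’s side, sketched with PyTorch and assuming a CUDA-capable graphics card is available. The math doesn’t change; the work just gets spread across thousands of small cores.

```python
import torch

# A large block of data: two 4096 x 4096 matrices of random numbers.
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# On the CPU, this multiplication runs on a handful of cores.
cpu_result = a @ b

# On a GPU, the same work is split across thousands of tiny cores in parallel.
if torch.cuda.is_available():
    a_gpu, b_gpu = a.to("cuda"), b.to("cuda")
    gpu_result = a_gpu @ b_gpu
```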

This one discovery reduced deep learning runtimes from weeks to days… and eventually down to hours and minutes. It opened the floodgates for commercial deep learning. That’s why traditional GPU manufacturers like Nvidia and AMD are now also considered AI companies. They’re essentially selling the picks and shovels in the AI gold rush.

What are some real uses of AI today?

Artificial intelligence is not a silver bullet. It’s not a magic genie that can conjure anything you ask of it. And it’s not a cure for all the world’s ills. There are plenty of AI pretenders out there, so this section will be your “Defense Against AI Marketing BS.”

What AI can or cannot do is defined by its learning paradigm. There are three main paradigms: supervised learning, unsupervised learning, and reinforcement learning.

Supervised Learning

In supervised learning, you tell the neural network exactly what it should learn. In other words, you provide “labels” (desired output) for the raw data (input). The network then infers the logic to get from the inputs to the desired outputs.

For example, supervised learning can be used to identify cats or dogs in photos. You would label photos with “cat” or “dog,” and then feed them into a neural network. The neural network then learns the countless subtle patterns that distinguish each animal.

Labels and input photos.

In practice, supervised learning can be used for any problem that has a “correct” answer. You could label…

  • Satellite images of crops with “healthy” or “at-risk.”
  • Emails with “spam” or “not spam.”
  • Tweets with their sentiment, such as “angry,” “excited,” or “cautious.”
  • Stocks’ earnings reports with how their prices changed the next day.
  • Weather balloon data with how much it rained the next day.
  • Customers with how much they’ve spent with you.
  • Etc.

And as long as you have enough data points, your neural network can infer the underlying patterns. It can learn the common phrases of spam emails. It can learn the signals in an earnings report that lead to positive market reactions. Or it can learn which atmospheric conditions bring more rainfall.

Finally, the neural network can use what it has learned to make predictions, even in real time. Should that new email go straight to the spam box? Will the stock price go up after today’s announcement? Will it rain tomorrow? As you can tell, these questions all have a right or wrong answer. Thus, supervised learning is appropriate.
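To make the spam example concrete, here’s a minimal supervised learning sketch using scikit-learn. It uses a simple linear model instead of a neural network, and the five “emails” and their labels are made up for illustration, but the workflow is the same: labeled inputs go in, a prediction rule comes out.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Raw inputs (emails) and their labels (the "correct" answers).
emails = [
    "WIN a FREE prize, click now",
    "Meeting moved to 3pm tomorrow",
    "Cheap meds, limited time offer",
    "Can you review my draft report?",
    "You have been selected for a reward",
]
labels = ["spam", "not spam", "spam", "not spam", "spam"]

# Turn each email into numbers (word counts) the model can learn from.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)

# The model infers which word patterns separate spam from not-spam.
model = LogisticRegression()
model.fit(X, labels)

# Use what it learned to classify a brand-new email.
new_email = vectorizer.transform(["Claim your free reward now"])
print(model.predict(new_email))   # likely ['spam']
```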

Unsupervised Learning

Now, supervised learning won’t always be feasible. Some types of data will be too messy, unstructured, or bizarre. And labeling large datasets can often be time-consuming, expensive, or even impossible. This is especially true for the vast amount of data being generated on the Internet every day.

In these situations, we need unsupervised learning, which works on unlabeled data. In this paradigm, the neural network searches for patterns or structures in the data. But it does so in an open-ended way, without knowing what’s right or wrong.

For example, let’s say you had billions of animal photos, but none are labeled. If you tried to hand-label them all, the Sun might burn out before you’re finished. Instead, unsupervised learning can be used to group the photos based on similarity. The neural network wouldn’t really know what each group is. But it would be able to extract the underlying patterns that separate the groups.
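Here’s a toy version of that grouping step, sketched with scikit-learn’s k-means clustering (a classic unsupervised algorithm standing in for a neural network). The “photos” are just made-up points with two numeric features each; notice that no labels appear anywhere.

```python
import numpy as np
from sklearn.cluster import KMeans

# Pretend each row is a photo, already boiled down to two numeric features.
rng = np.random.default_rng(0)
photos = np.vstack([
    rng.normal(loc=[0, 0], scale=0.5, size=(100, 2)),   # one hidden group
    rng.normal(loc=[5, 5], scale=0.5, size=(100, 2)),   # another hidden group
])

# No labels anywhere: k-means simply looks for natural groupings in the data.
groups = KMeans(n_clusters=2, n_init=10).fit_predict(photos)
print(groups[:5], groups[-5:])   # the two halves end up with different group IDs
```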

A great example of unsupervised learning in the real world is anomaly detection. An AI can digest a massive amount of data to build a set of “standard profiles.” It then compares new events against those profiles to detect outliers. Anomaly detection is especially useful in settings that evolve constantly, such as financial fraud detection, cybersecurity, or meteorology.
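And here’s a small sketch of the anomaly detection idea using scikit-learn’s IsolationForest, one common off-the-shelf tool for this. The “transactions” are invented, and real fraud systems are far more elaborate, but the pattern is the same: learn what normal looks like, then flag whatever doesn’t fit.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# A "standard profile": typical transactions, as [amount, hour of day].
rng = np.random.default_rng(1)
normal_transactions = np.column_stack([
    rng.normal(50, 15, 1000),    # everyday purchase amounts
    rng.normal(14, 3, 1000),     # mostly daytime activity
])

detector = IsolationForest(random_state=0).fit(normal_transactions)

# New events are compared against the learned profile: -1 flags an outlier.
new_events = np.array([[45.0, 13.0], [5000.0, 3.0]])
print(detector.predict(new_events))   # likely [ 1 -1 ]
```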

Reinforcement Learning

Sometimes, we want the AI to interact directly with the real world. Maybe we want the AI to pilot a drone, drive a car, or move pieces on a chess board. That’s where reinforcement learning comes in. You give a neural network control of an output mechanism. It then learns through trial and error. As it interacts with its environment, the network gets constant feedback on whether it’s moving closer or farther from its goal.
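Here’s a deliberately tiny sketch of that trial-and-error loop: tabular Q-learning, a classic reinforcement learning method, on a made-up five-square corridor where the goal is to reach the rightmost square. Every number in it is arbitrary; the point is the feedback loop of act, observe the reward, adjust.

```python
import random

N_STATES, GOAL = 5, 4                # a corridor of 5 squares; square 4 is the goal
ACTIONS = [-1, +1]                   # step left or step right
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Mostly pick the action that looks best so far, but sometimes explore at random.
        if random.random() < 0.3:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), N_STATES - 1)
        reward = 1.0 if next_state == GOAL else 0.0       # feedback from the environment
        # Nudge the estimate for (state, action) toward the reward plus future value.
        best_next = max(q[(next_state, a)] for a in ACTIONS)
        q[(state, action)] += 0.1 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# After enough trial and error, the learned choice in every square is "step right" (+1).
print([max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)])
```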

For example, the game of Go is famous for being one of the deepest strategic games in existence. Played on a 19×19 grid, Go has about 10^170 legal positions. For comparison, Chess only has about 10^46 legal positions, and the Earth contains about 10^50 atoms (120 fewer zeroes than Go).

So it comes as no surprise that it took longer for computers to “solve” Go. A computer had already beaten the world’s best Chess player by 1997, but it took until 2016 for the same to happen in Go. Still, three things are certain in life: death, taxes, and AI coming to kick our butts. That year, in a very high-profile match, AlphaGo stomped Lee Sedol, one of the world’s top players… by a score of 4-1.

Lee Sedol looks on in despair. (Image Credit: Google DeepMind)

AlphaGo was bootstrapped using a database of 160,000 expert games. So early versions of the AI actually mimicked human players. But soon enough, the student had surpassed the masters. To progress further, AlphaGo played a massive number of games against other instances of itself. After each game, it used reinforcement learning to improve its strategies and moves. And in short order, AlphaGo was coming up with moves that were inhuman. As one of its creators put it:

“Although we have programmed this machine to play, we have no idea what moves it will come up with. Its moves are an emergent phenomenon from the training. We just create the data sets and the training algorithms. But the moves it then comes up with are out of our hands—and much better than we, as Go players, could come up with.”

Another famous example of reinforcement learning is in piloting autonomous vehicles. The AI is given control of a car’s steering wheel, gas pedal, and brakes. It’s then tasked with driving to a destination safely. It receives constant feedback along the way, gradually adjusting its decision making. Aside from real roads, the AI also drives through billions of simulated miles. That way, it gets exposed to more scenarios, creating a more “experienced” driver.

Why should you care about AI?

The long-term impact of artificial intelligence cannot be overstated. Just like the Industrial Revolution changed our approach to physical labor, AI is doing the same for mental work.

AI is getting creative. One day, it will be more prolific than all the world’s writers combined.

In late 2022, ChatGPT, a chatbot developed by OpenAI, took the world by storm. There had been chatbots before, but none this realistic and versatile. As a “generative AI,” ChatGPT creates original content based on any prompt.

ChatGPT was trained on 300 billion words from the Internet. Wikipedia, books, articles, websites… all included in the 570GB training database. And what has impressed users the most is its sheer versatility.

In its short time open to the public, ChatGPT has already…

  • Composed music, plays, and novels.
  • Written and debugged code for computer programs.
  • Answered test questions (at a level above the average human, depending on the test).
  • Ghostwritten song lyrics in the style of famous artists.
  • Rap battled users (and won handily).
  • Simulated games, chatrooms, and other scenarios.
  • Brainstormed smoothie recipes.
  • And even planned travel itineraries.

Under the hood, ChatGPT is a language model. It tries to predict which word is most likely to appear next, based on the text so far. For example, let’s say the prior text is “water under the”. In some contexts, ChatGPT might pick “bridge” as the most likely next word: “Don’t worry, it’s water under the bridge.” But if you were discussing home repairs, it might pick “sink” instead: “Be sure to mop up the water under the sink.”
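Real language models use gigantic neural networks and weigh the entire preceding context, but the core question (“which word is most likely to come next?”) can be sketched with a toy word-pair counter in plain Python. The three-sentence “corpus” below is invented for illustration.

```python
from collections import Counter, defaultdict

# A tiny "training corpus" (real models train on hundreds of billions of words).
corpus = (
    "don't worry it's water under the bridge . "
    "be sure to mop up the water under the sink . "
    "she walked under the bridge ."
).split()

# Count which word tends to follow each word (a "bigram" model).
next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # Return the word with the highest probability of appearing next.
    return next_word_counts[word].most_common(1)[0][0]

print(predict_next("the"))   # "bridge" follows "the" most often in this tiny corpus
```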

What is AI? Explained by Ernest Hemingway and Douglas Adams. (ChatGPT)

Not to be outdone, Google announced their own chatbot AI—Bard—in early 2023. And while ChatGPT and Bard both have accuracy issues for now, they will get better. It’s only a matter of time. Microsoft has already incorporated ChatGPT into Bing Search, and Google will follow suit with Bard.

But more broadly speaking, generative AI will change how content is created forever. Right now, ChatGPT’s underlying model is GPT-4. It’s only the fourth version of its series, and it’s already writing short novels and basic programs. Imagine what it will be capable of once we reach GPT-100.

AI will (hopefully) prevent robots from crushing us.

Robots have already taken over factories. Over the past twenty years, annual industrial robot shipments have grown by 500%. Even so, most industrial robots have strictly defined roles and limits. They are hand-programmed for specific tasks in tightly controlled silos. They also stay behind barriers to avoid accidentally crushing their human colleagues.

Yet, the goal for robotics has always been bigger: to build robots that can operate safely alongside humans. These collaborative robots, or “cobots,” would aid us in heavy-duty jobs like mining or construction.

“Hey, pass the wrench.” (Image Credit: Boston Dynamics)

For this to become a reality, robots will need to be able to adapt to conditions in real-time. The International Federation of Robotics has outlined four levels of collaboration:

  • Level 1: Co-existence where humans and robots work on the same problem, but don’t share a workspace.
  • Level 2: Sequential collaboration where we share a workspace, but don’t work on the task at the same time.
  • Level 3: Co-operation where we share a workspace and work on the same task at the same time.
  • Level 4: Responsive collaboration where the robot responds in real-time to the worker, in the same workspace.

Today, most commercial cobots operate in the first two levels. Amazon’s warehouse robots are a great example of Level 2 collaboration. They zip back and forth across warehouse floors, transporting parcels to human workers. But human workers don’t share the same space with them, to avoid collisions or route mix-ups.

But as we move into Level 3 and Level 4 collaboration, we’ll see a broader range of uses. For example, Japan has a large aging population and is projected to face a shortage of one million caregivers. Robots can play an instrumental role in elder care. In fact, the Japanese government has allocated a third of its budget toward developing “carebots” to look after the elderly.

AI will be the “smarts” behind smart cities and cars.

In 2017, Saudi Arabia announced a $500 billion project to build an entire “smart city” from scratch. It will supposedly accommodate the wonders of the future, from flying taxis to robot maids. But first up, it will feature The Line, a “linear smart city” that will house 9 million people.

Only 200 meters wide but 170 kilometers long, The Line will feature no roads, cars, or emissions. All residents will be within a five-minute walk of everything they need. Artificial intelligence will power many automated services throughout the city. According to one executive, “We will leverage 90% of the data we produce and utilize it in the city. It’s never been done before. We want to build a citywide operating system that is aware, predictive, and can take action.”

The Line: Utopian dream or dystopian “Snowpiercer in the desert”? You decide. (Image Credit: NEOM)

To be clear, The Line is an extreme example. There’s a lot of controversy around the project, and many people (rightfully) doubt if it will ever be completed. Most smart city projects are much less ambitious and more realistic. But what they all share in common is a reliance on artificial intelligence.

Even outside smart cities, we’ll still interact with AI in our day-to-day lives. For many of us, driverless vehicles will be the first autonomous robots we see on a daily basis. Companies like Waymo, Tesla, and General Motors are already accruing millions of driverless miles a year.

It might sound scary to trust an AI with a car, but safety statistics tell a different story. Driverless cars will eventually be far safer than letting humans stay at the wheel. The key thing about self-driving cars is that they can “live many lives.” One car might be dispatched to a crowded city. Another to a barren desert. And another to a mountain during a snowstorm.

Each of those cars generates over 4TB of data per day. The AI then crunches through all that data, improving its driving skills nonstop. This cumulative shared experience can make AI far more competent than human drivers. As a result, driverless cars could reduce accidents by up to 90 percent, according to McKinsey.

AI will protect us from cyber-threats.

Today, a thief is more likely to target online banking passwords than burglarize a home. Corporations are more worried about secrets on their servers than their office buildings. Even nations expect more future conflicts to be fought by hackers than by soldiers. In this digital era, cybersecurity is security.

Not only are the stakes rising, but vulnerabilities are as well. The amount of data flowing through the Internet is rising exponentially. The pandemic accelerated trends toward remote work and digital lifestyles. The metaverse is growing, with younger generations spending record amounts of time online. And as the Internet of Things expands, potential attack surfaces will also increase. In short, security breaches will become costlier and harder to prevent.

In this world of mass interconnectivity, we need a tool that can monitor endless streams of data. We need something that can track the evolution of computer viruses at scale, in real-time. And we need a tool that can adapt, spotting unfamiliar threats as they arise. Thus, AI might be the only viable solution to cybersecurity in the future.

What’s next for artificial intelligence?

The ultimate goal of many AI researchers is to invent artificial general intelligence (AGI): the ability to learn any intellectual task that a human can. AGI is also known as “strong AI,” whereas all the AI available today is “weak AI.” Weak AI is narrow in scope and used for well-defined problems. Predicting the weather… playing a game… or even driving a car… These are all “narrow” when compared to general cognitive abilities.

And while it’s debatable if we’ll ever reach true AGI, all trends point toward AI getting “stronger.” ChatGPT is already performing feats that were unfathomable a few years ago. You can give it a vague prompt like “write me a short sci-fi story in the style of Ernest Hemingway”… and it will do it!

As AI gets closer to imitating general intelligence, we will need to be careful about how we use it. To start, as a society, we’ll need to learn how to cooperate with AI. It will make some jobs obsolete, but it will also create many opportunities.

But more importantly, we will start to come face-to-face with hard ethical questions. Who should control AI? What is the fairest way to distribute the value created from it? And how should lawmakers regulate this powerful tool? These are issues we must think through if humans and AI are to have a prosperous future together.