What is AI? Should I act to adopt AI? Will AI disrupt my business?

Answering these questions matters to every CEO and business leader today, because understanding the basics of AI is the first step toward adopting it and beating the competition.


Artificial Intelligence (AI) has been around since the 1950s. Only recently, however, has it moved front and center in the news, heralding a technology revolution unlike anything seen before.

But what’s changed for AI over the last few years?

In fact:

Artificial intelligence has a long history of being “the next big thing,” and every time, the wave of hype was followed by an AI winter.

AI adoption and research history

In recent years, however, the explosive growth in AI research and adoption has been driven by three major factors that weren’t around when the earlier hype waves collapsed. These factors are known as the three pillars of AI, and they are the key to understanding how artificial intelligence works today.

Three Pillars of Artificial Intelligence (AI):

  1. Computing power
  2. Data availability
  3. Advanced AI techniques

In this article, I’m going to examine these pillars to explain the basics of AI to CEOs, business leaders, and decision makers. Specifically, I’ll dig into what AI is and why recent leaps in computing power and data availability, along with advances in AI techniques such as machine learning and deep learning, enable businesses to deploy AI at scale and sharpen their competitive edge in the 21st century.

Let’s kick off this artificial intelligence 101, then.

What Is AI?

Artificial intelligence, according to Wikipedia, is intelligence demonstrated by machines, in contrast to the natural intelligence displayed by humans.

AI is fundamentally different from natural intelligence (i.e. machines don’t “think” like humans do) and has its own pros and cons, as demonstrated below.

Artificial intelligence vs Natural intelligence

When we think about AI, then, we should focus on its technological aspect rather than on abstract notions such as intelligence, reasoning, and thinking.

First and foremost, artificial intelligence is a set of computer science techniques that allow machines to perform tasks requiring cognitive abilities, such as learning from experience, adapting to new information, and generating insights from data.

Before we dig deeper, let’s debunk two common myths about AI:

  1. AI is about creating sci-fi robots. Nobody’s going to exterminate humanity or oust humans from the workplace. AI is about intelligent machines, but intelligence is subjective and can mean different things when applied to AI and humans. In short, we have different skill sets.
  2. AI is primarily focused on replicating human intelligence. Human intelligence consists of multiple branches of human performance, such as perception, motion, and creative thinking. As most AI research focuses on mimicking a single branch at a time, the creation of an intelligent machine that even remotely resembles a human remains purely theoretical.

The media also fuels overinflated expectations about artificial intelligence, focusing on hype instead of drawing a distinct line between two types of AI: Artificial General Intelligence (AGI) and Artificial Narrow Intelligence (ANI). Understanding both is critical to grasping what AI is and what benefits it can realistically generate for business.

Artificial General Intelligence (or strong AI) is a purely theoretical branch of AI that aims to unite multiple ANI applications through knowledge engineering to create a “thinking” machine capable of full human intelligence. Such a machine could collect and analyze knowledge in any form, process it using software programs, and then make independent decisions.

How AI works

The moment when humanity develops strong AI is known as the technological singularity. At this point, as futurist Ray Kurzweil claims, humans will achieve immortality by “uploading” their minds to machines.

If that sounds a bit unrealistic, don’t worry: it still is. Yet, as computing power increases and neural networks become more advanced, we’re slowly but surely closing the gap to AGI and superintelligence.

Artificial Narrow Intelligence (or weak AI) is the task-specific form of AI that has seen great progress over the last 70 years. With ANI, specific repetitive tasks like speech recognition, demand forecasting, and fraud detection are completed in limited contexts by feeding huge volumes of data to ML algorithms and neural networks.

Artificial General Intelligence vs Artificial Narrow Intelligence

Let’s illustrate how ANI works with an example.

When you shop on Amazon, the AI-powered product recommendation system, an example of weak AI, has no problem figuring out which similar products to display to you, based on your search history and on-site behavior, to increase sales. However, no matter how sophisticated this system may be, it won’t play music or order pizza for you. In contrast to Artificial General Intelligence, weak AI doesn’t think or act independently in a broader context.

Nonetheless, it’s weak AI that is already revolutionizing every industry, from finance and healthcare to manufacturing and agriculture.

AI use cases

Now that you know the basics of what AI is, let’s continue this artificial intelligence 101 and find out how computing power, data, and advanced AI techniques drive the AI revolution.

Three Pillars of Artificial Intelligence

#1 Computing Power

Computing power is the workhorse of artificial intelligence.

Basically, it acts as AI’s brain, processing the collected data: the more powerful the brain, the better and faster the results.

But what’s powering the AI brain?

The exponential growth in computing power was unexpectedly triggered by graphics processing units (GPUs). Designed mainly for 3D game rendering and graphics design, GPUs turned out to be a perfect fit for AI workloads.

Here’s how Insight64 principal analyst Nathan Brookwood describes the unique capabilities of GPUs:

GPUs are optimized for taking huge batches of data and performing the same operation over and over very quickly, unlike PC microprocessors, which tend to skip all over the place.

In contrast to central processing units (CPUs), which were originally used for AI computations, GPUs act as arrays of linked processors that operate in parallel, which significantly accelerates computational workloads. For these tasks, GPUs can be up to 50 times more efficient than CPUs, and considerably cheaper.
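To make the parallelism point concrete, here’s a rough sketch of a CPU-vs-GPU comparison. It uses PyTorch purely as an illustration (the article doesn’t prescribe a framework), and the actual speedup depends heavily on your hardware.

```python
# Rough CPU-vs-GPU comparison: one large matrix multiplication on each device.
# Hypothetical benchmark; real speedups vary with hardware and workload.
import time

import torch

def time_matmul(device: str, size: int = 4096) -> float:
    """Multiply two random size x size matrices and return elapsed seconds."""
    a = torch.rand(size, size, device=device)
    b = torch.rand(size, size, device=device)
    if device == "cuda":
        torch.cuda.synchronize()  # ensure setup is done before timing starts
    start = time.perf_counter()
    _ = a @ b  # thousands of multiply-adds run in parallel on a GPU
    if device == "cuda":
        torch.cuda.synchronize()  # wait for the GPU to actually finish
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f} s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f} s")  # typically far faster
```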

Price of computing power for 30 petaflops since 2013

Organizations no longer have to own supercomputers to run complex AI computations. This makes AI a feasible technology not only for governments and global technology giants but also for small and medium-sized enterprises.

Note: The availability of “cheap” computing power is driven by cloud computing, which lets businesses get computer system resources on demand. Simply put, with cloud technology you can order computing power and storage over the internet at a reasonable cost, without having to run your own data center.

#2 Data Availability

If computing power is the workhorse of artificial intelligence, data is the fodder that feeds it.

Artificial intelligence learns to complete a specific task by analyzing tens of thousands of examples of how that task can or cannot be done, so data, which holds all of these examples, is the foundation of every AI project.

In fact, if it weren’t for the exponential data growth since 2010 (collectively known as Big Data), we’d most likely have another AI winter on our hands, and there would be no AI revolution.

Progressive growth in the amount of data generated since 2000

Let’s illustrate the importance of data with an example. Say you’re looking to use AI for visual inspection of mugs on the assembly line.

Here are a few basic steps you’ll have to follow:

  • Collect a dataset of images featuring both high-quality mugs and mugs with defects
  • Label a certain number of images, telling the AI which mugs meet quality standards and which are defective
  • Train the machine learning model by feeding the labeled data to it (you’ll have to retrain it many times, tweaking the model or feeding it more labeled data, to reach acceptable accuracy)
  • Deploy the machine learning model to test how it works on the assembly line (you’ll have to keep collecting data to update and maintain your ML model)
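Here’s what those four steps might look like in code: a minimal sketch using Keras (an assumed framework choice) that presumes your labeled images are organized into data/mugs/ok and data/mugs/defective folders.

```python
# Minimal sketch of the mug-inspection steps above (Keras; assumed framework).
# Assumes images are stored as data/mugs/ok/*.jpg and data/mugs/defective/*.jpg,
# so the folder names double as the labels from the labeling step.
import tensorflow as tf

# Steps 1-2: load the collected, labeled images
train_ds = tf.keras.utils.image_dataset_from_directory(
    "data/mugs", image_size=(128, 128), batch_size=32
)

# Step 3: define and train a small convolutional model
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # defective vs. ok
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, epochs=10)  # in practice: retrain as new labeled data arrives

# Step 4: save the model so the assembly-line service can load and run it
model.save("mug_inspector.keras")
```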

As this example shows, you need lots of data to launch and maintain any meaningful AI project.

The key point here is:

There’s no working AI without data. It’s the fuel that powers AI’s engine, and the more data you have at your disposal, the more impactful the business results you’re likely to obtain.

At this point, it’s worth looking into AI learning techniques and how they depend on data. Specifically, let’s have a look at the three major types: supervised, unsupervised, and reinforcement learning.

Supervised vs. Unsupervised vs. Reinforcement Learning

Supervised learning is a type of AI learning in which both the input and the desired output are fed to machine learning algorithms or deep neural networks. Basically, in supervised learning, algorithms take advantage of both data and information about the data: labeled training data, in which a value (e.g. broken or intact mug) is assigned to each data point.

Unsupervised learning is the training of AI on data that is neither classified nor labeled, which leaves algorithms to “teach themselves” and act on their own. In contrast to supervised learning, which is used in the overwhelming majority of AI/ML business use cases, unsupervised learning is geared toward finding patterns in data. For instance, it won’t tell broken mugs from intact ones, but it will be able to group mugs by color.

Reinforcement learning is a training method in which AI algorithms are rewarded for desired behaviors and punished for undesired ones. Because the learning signal comes from rewards and penalties earned through trial and error rather than from labels, reinforcement learning doesn’t rely on labeled training data.
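A toy example may help contrast the first two approaches. The sketch below uses scikit-learn and a hypothetical pair of mug features (weight and wall thickness); reinforcement learning is noted only in a comment, since it needs an interactive environment rather than a fixed dataset.

```python
# Toy contrast of supervised vs. unsupervised learning (scikit-learn).
# Features are hypothetical: [weight in grams, wall thickness in mm] per mug.
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

X = [[310, 3.1], [305, 3.0], [250, 2.2], [245, 2.1]]
y = [0, 0, 1, 1]  # labels exist only in the supervised case: 0 = ok, 1 = defective

# Supervised: learn the mapping from features to the labels we provided
clf = LogisticRegression().fit(X, y)
print(clf.predict([[300, 2.9]]))  # -> [0], i.e. classified as "ok"

# Unsupervised: no labels; the algorithm simply groups similar mugs together
clusters = KMeans(n_clusters=2, n_init=10).fit_predict(X)
print(clusters)  # e.g. [1 1 0 0]; cluster ids carry no "ok"/"defective" meaning

# Reinforcement learning works differently: there is no fixed dataset at all.
# An agent interacts with an environment and learns from rewards and penalties.
```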

supervised vs unsupervised vs reinforcement learning

Now that you know how AI, ML, and DL algorithms learn from data, let’s find out which data problems can limit the potential of your AI project.

Data Problems and Solutions

In today’s interconnected world, we produce about 2.5 quintillion bytes of data every day, and 90% of all the data ever produced by humanity was generated over the last few years. These mind-boggling stats aren’t hard to explain, considering the number of sensors in smartphones, cameras, cars, drones, and even our homes.

Unfortunately, not all data is created equal.

The quality of data that you feed to machine learning models and deep neural networks pretty much determines the results. As the old saying goes: garbage in, garbage out.

Here are three basic data problems that can put the brakes on your AI project:

  1. There’s not enough data. As AI learns by example, it needs tens of thousands of examples to grasp general concepts and deliver accurate results. Using the mug example, the manufacturer won’t be able to implement visual inspection if they train their ML model on ten images of broken and intact mugs.
  2. Data is too uniform and consistent. Data diversity is as important to training a “smart” model as the amount of data. Let’s say you train an image recognition model for self-driving cars using images from highways. The model can achieve high accuracy spotting regular SUVs and sedans, yet it will fail when asked to identify, say, a golf cart or a tractor, because examples of those were never fed to the system in the first place.
  3. Data is biased. Bias in AI systems is a widespread problem. For example, data collection is biased when you train a voice recognition model on samples with American accents but deploy the model in Australia. Also, bear in mind that AI finds patterns in data and bases its decisions on them. An AI system can, for instance, analyze data on past hires, find that most of them are white men, and follow that pattern to predominantly hire white men.
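Before training anything, it’s worth running quick sanity checks against these three problems. Here’s a sketch using pandas; the file name and columns (label, vehicle_type, region) are hypothetical stand-ins for whatever metadata you actually track.

```python
# Quick sanity checks against the three data problems above (pandas).
# The CSV name and columns are hypothetical stand-ins for your own metadata.
import pandas as pd

df = pd.read_csv("training_images.csv")  # assumed columns: label, vehicle_type, region

# 1. Not enough data: AI learns by example, so check the raw volume first
print(f"Total examples: {len(df)}")

# 2. Too uniform: did we only collect SUVs and sedans from highways?
print(df["vehicle_type"].value_counts())

# 3. Biased: does one group dominate the dataset?
print(df["region"].value_counts(normalize=True))  # heavily skewed shares = red flag
```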

How do you make sure that you have plenty of high-quality data, i.e. properly structured, diverse, and unbiased data?

These tips will help you solve data problems:

  • Preemptively collect and organize data. Every business generates tons of data. Unfortunately, in most cases it’s either never stored or kept in disparate databases across departments in an unorganized fashion. Data collection can take years, so make sure you collect, store, organize, and label data in advance, in a single data lake.
  • Use third-party data sources. Not having enough high-quality data is a common problem across enterprises: they either fail to collect it in the first place or don’t generate enough of it to drive AI transformation. In that case, it makes sense to tap publicly available data sources.

Well-structured data is a foundational element of any AI project. However, if all you have is unstructured data from multiple sources, don’t shy away from AI: advanced techniques such as deep learning and neural networks can make sense of unstructured data, too.

#3 Advanced AI Techniques

Last but not least comes the third pillar of artificial intelligence: deep learning, also known as deep neural networks.

At this point, it makes sense to briefly explain how artificial intelligence, machine learning, and deep learning relate to one another.

artificial intelligence vs machine learning vs deep learning

In short, AI is a broad area of computer science that encompasses both machine learning and deep learning. ML is a subset of AI, while deep learning is a subset of machine learning.

So, what is deep learning and how does it work?

Deep Learning (DL) is a machine learning technique that uses multi-layered artificial neural networks to help machines learn to execute tasks by example.

DL’s neural networks can process any type of data using supervised, unsupervised, and reinforcement learning techniques. This versatility, coupled with the availability of cheap computing power, is what makes deep learning so powerful.

Here’s what a simple and a more complex deep neural network look like:

How artificial neural network works

Here you get an input layer, an output layer, and only four hidden (“deep”) layers in between. The more hidden layers a deep learning neural network has, the more powerful and “smart” it becomes.
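To make the layer structure concrete, here’s a minimal sketch (in Keras, as an illustration) of a network matching the description above: an input layer, four hidden layers, and an output layer.

```python
# A deep neural network matching the description above: an input layer,
# four hidden ("deep") layers, and an output layer (Keras, as an illustration).
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64,)),              # input layer: 64 features
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 1
    tf.keras.layers.Dense(128, activation="relu"),   # hidden layer 2
    tf.keras.layers.Dense(64, activation="relu"),    # hidden layer 3
    tf.keras.layers.Dense(32, activation="relu"),    # hidden layer 4
    tf.keras.layers.Dense(1, activation="sigmoid"),  # output layer
])
model.summary()  # prints the layer stack and parameter counts
```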

Bear in mind that neural networks have been around for decades; only recently have leaps in computing power made it possible to build complex deep neural networks with hundreds of layers and billions of parameters, powerful enough to make sense of data on their own.

How Deep Neural Networks Work

Neural networks can be very abstract, so let’s illustrate how all of these layers actually work with an example.

Say you want to train a DL model capable of recognizing human faces in any given image.

To begin with, you’ll have to label data, putting “face” and “no-face” tags on a certain number of images. Then you’ll send these labeled images through the network, retraining and adjusting it until it reaches high accuracy. Finally, you’ll feed an unlabeled image to the network, which should be able to figure out whether the image features a human face.

Here’s what happens inside the deep neural network:

  • An unlabeled image is fed to the input layer of the deep neural network
  • The input layer converts the image into pixel values
  • Hidden layers analyze those values to find local patterns, such as specific areas of the forehead, nostrils, and eyebrows
  • Deeper hidden layers combine local patterns into facial features, such as eyes, lips, and nose
  • The final hidden layers determine whether the picture contains a face and pass the result to the output layer

These steps are summarized in the image below:

Facial recognition with deep neural networks
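For illustration, here’s roughly what the final, inference step could look like in code, assuming a binary face/no-face classifier was trained and saved earlier (the model file and image path below are hypothetical).

```python
# Sketch of the inference step: feed an unlabeled image to a trained network
# and read the face / no-face verdict off the output layer. The model file and
# image path are hypothetical; assumes the model rescales pixels internally.
import numpy as np
import tensorflow as tf

model = tf.keras.models.load_model("face_detector.keras")

img = tf.keras.utils.load_img("unlabeled_photo.jpg", target_size=(128, 128))
pixels = tf.keras.utils.img_to_array(img)[np.newaxis, ...]  # input layer gets raw pixels

score = float(model.predict(pixels)[0][0])  # hidden layers compose pixels into features
print("face" if score > 0.5 else "no face")  # output layer delivers the result
```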

Looking at artificial neural networks, you might think the possibilities of artificial intelligence are limitless. However, AI is still in its infancy, and, unlike humans, even the most powerful neural networks aren’t capable of abstract thinking. All they can do is act on a very specific objective, strictly defined by the data you’ve fed to them.

Conclusion

Organizations are at a key inflection point of AI adoption.

It’s no longer magic or something shown in science fiction movies. AI is here, and it’s already revolutionizing most, if not all, industries.

Dominated by global tech brands for decades, AI is now a feasible technology for businesses of all shapes and sizes — from Fortune 500 brands to your local car wash.

Companies that have already adopted AI or are now transforming to become AI-first are poised to capture a considerable first-mover advantage in the market. This is hardly surprising, since AI allows businesses to unlock actionable insights from data, make more effective decisions, automate manual and repetitive tasks, and ensure efficient, data-driven monitoring of internal and external operations. And this is just a fraction of the potential benefits AI can generate for your business.

I hope you’ve enjoyed reading this artificial intelligence 101. If you have any questions about AI or any of the three pillars of AI, don’t hesitate to reach out and share feedback in the comment section.

Looking to make inroads into AI, but don’t know where or how to start? Contact Squadex AI & ML professionals! They have completed numerous AI, ML, and DL projects and will be happy to help you embark on a successful AI journey.