
Chances are, you're exposed to artificial intelligence every day. Whether you're browsing your Facebook (FB) feed or talking to Apple's (AAPL) Siri, you're interacting with artificial intelligence. 

And artificial intelligence has driven many of the technological breakthroughs of the past several years - from robots to Tesla (TSLA). But while the technology certainly has its naysayers, AI seems set to become the future of predictive tech. 

But what actually is artificial intelligence, and how does it work? And how is AI being used in 2019? 

What Is Artificial Intelligence? 

Coined in 1955 by John McCarthy as "the science and engineering of making intelligent machines," artificial intelligence (or AI) is software that is able to use and analyze data, algorithms and programming to perform actions, anticipate problems and learn to adapt to a variety of circumstances with and without supervision. AI is generally broken down into specialized or general and strong or weak AI, depending on its applications. 

Generally, there are three main divisions of AI - neural networks, machine learning and deep learning. Neural networks (often called artificial neural networks, or ANN) essentially mimic biological neural networks by "modeling and processing nonlinear relationships between inputs and outputs in parallel." Machine learning generally uses statistics and data to help improve machine functions, while deep learning computes multi-layer neural networks for more advanced learning. 
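To make the "nonlinear relationships between inputs and outputs" idea concrete, here is a minimal sketch of a tiny neural network in Python with NumPy. It learns XOR, a relationship no linear model can capture; all sizes, hyperparameters, and names are illustrative choices, not any production system:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the output is 1 only when exactly one input is 1 - not linearly separable
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(X):
    h = np.tanh(X @ W1 + b1)          # nonlinear hidden layer
    return sigmoid(h @ W2 + b2), h

losses = []
lr = 0.5
for _ in range(2000):
    out, h = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation: error signals for each layer (constants folded into lr)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= lr * (h.T @ d_out)
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * (X.T @ d_h)
    b1 -= lr * d_h.sum(axis=0)

predictions = (forward(X)[0] > 0.5).astype(int)
```

The hidden tanh layer is what lets the network bend the input space; without it, no setting of the weights could separate XOR's outputs.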

The original seven aspects of AI, named by McCarthy and others at the Dartmouth Conference in 1955, include automatic computers, programming AI to use language, hypothetical neuron nets to be used to form concepts, measuring problem complexity, self-improvement, abstractions, and randomness and creativity.

How Does Artificial Intelligence Work? 

So, how does AI actually work? 

Well, for starters, while AI is generally a blanket term, it covers several kinds of software that operate differently and are programmed for different purposes - including weak and strong AI, and specialized and general AI.

Strong vs. Weak AI

On a basic level, the difference between strong and weak AI is supervision. 

Weak AI is designed to be supervised programming that simulates human thought and interaction - ultimately a set of programmed responses or supervised interactions that are merely human-like. Siri and Alexa are good examples of weak AI: while they seemingly interact and think like humans when asked questions or told to perform tasks, their responses are programmed, and they are ultimately assessing which response is appropriate from their bank of responses. For this reason, weak AI like Siri or Alexa doesn't necessarily understand the true meaning of a command; it merely picks out key words and lets its algorithms match them to an action.
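A toy illustration of that keyword-matching idea might look like the Python below. The keywords and canned replies are hypothetical - real assistants like Siri and Alexa are far more sophisticated - but the principle of matching words to a bank of responses is the same:

```python
# A bank of canned responses keyed on words the "assistant" spots in a
# command. There is no understanding here, only lookup.
RESPONSES = {
    "weather": "It looks sunny today.",
    "timer": "Okay, starting a timer.",
    "music": "Playing your favorite playlist.",
}

def weak_ai_reply(command: str) -> str:
    words = command.lower().split()
    for keyword, canned_reply in RESPONSES.items():
        if keyword in words:
            return canned_reply
    return "Sorry, I didn't understand that."

print(weak_ai_reply("What's the weather like?"))  # prints "It looks sunny today."
```

Note that the function has no idea what weather is; swap in nonsense like "weather weather timer" and it still happily answers.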

On the other hand, strong AI is largely unsupervised and uses more clustered or associative data processing. Instead of having programmed solutions or responses to problems, strong AI is unsupervised in its problem-solving process. Strong AI is typically known for being able to "teach" itself things - for example, to learn games and anticipate moves. As far back as 2013, AI taught itself to play Atari (PONGF) games, beating records and even surpassing humans in several of them. 

But apart from games, strong AI is commonly associated with the "scary" robots and machines that most often plague the public's nightmares of how dangerous AI could be. However, on a basic level, unsupervised learning goes into problems without any pre-programmed answers, and is able to use a mixture of logic and trial and error to learn the answers or categorize things. This is often demonstrated in exercises where strong AI is shown images with colors and shapes and is supposed to categorize and organize them. 
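That kind of unsupervised categorization can be sketched with k-means clustering, a classic unsupervised algorithm. This toy Python version (standard library only, with made-up 2-D points standing in for the shapes and colors) groups the data with no pre-programmed answers:

```python
import random

def kmeans(points, k, iterations=20, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)   # start from k random points
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Step 1: assign each point to its nearest center
        clusters = [[] for _ in range(k)]
        for x, y in points:
            nearest = min(range(k),
                          key=lambda i: (x - centers[i][0]) ** 2 + (y - centers[i][1]) ** 2)
            clusters[nearest].append((x, y))
        # Step 2: move each center to the mean of its cluster
        for i, members in enumerate(clusters):
            if members:
                centers[i] = (sum(p[0] for p in members) / len(members),
                              sum(p[1] for p in members) / len(members))
    return centers, clusters

# Two obvious groups: points near (0, 0) and points near (10, 10)
data = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, clusters = kmeans(data, k=2)
```

Nobody tells the algorithm what the two groups are; it discovers them by repeatedly assigning points and re-averaging - the trial-and-error flavor the paragraph above describes.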

Specialized vs. General AI

But apart from supervision, there are different functions of AI.

Specialized AI is programmed to perform one specific task - not several. From self-driving cars to predictive news feeds, specialized AI has been the dominant form of AI since the field's inception (although this is rapidly changing). 

On the other hand, general AI isn't limited to one specific task - it is able to learn and complete numerous different tasks and functions. Many of the cutting-edge, boundary-pushing AI developments of recent years have involved general AI, which focuses on learning and using unsupervised programming to solve problems across a variety of tasks and circumstances. 


As far as its uses go, AI is potentially boundless.


To date, AI has been leveraged across a variety of industries and purposes.

In business, AI has had considerable success in customer service and other business operations. AI has been used in business for various purposes including process automation (by transferring email and call data into record systems, helping resolve billing issues and updating records), cognitive insight (for predicting a buyer's preferences on sites, personalizing advertising and protecting against fraud) and cognitive engagement (used primarily in a customer service capacity to provide 24/7 service and even answers to employee questions regarding internal operations). 

In fact, 2017 studies show that 45% of people "prefer chatbots as the primary mode of communication for customer service activities." For 2016, the global chatbot market was reportedly worth $190.8 million - and chatbots could comprise about 25% of customer service interactions by 2020, according to Gartner (IT).

In addition to its involvement in customer service and business, AI has also been used in recent years for writing news stories. Especially for formulaic articles like earnings reports or sports statistics, AI has increasingly been used to automatically write stories and fill in different data. This technology has been employed by the likes of The Wall Street Journal.

"You get these news releases about things that are happening in sports, for example, or in business. But people are not creating these pieces anymore. It's actually A.I. that's releasing this information," Stephen Ibaraki, founder and chairman of the UN ITU AI For Good Global Summit with XPRIZE Foundation, told Neil Sahota for Forbes this year. "Lots of us are spending hours on our mobile phones reading updates about events and news flashes never realizing it's A.I. that's generating this stuff now."

And it seems as though AI is even integrating into education.

Yi Wang, chairman and CEO of LAIX (LAIX), a China-based company that teaches languages using AI, told TheStreet back in September that "we want everyone to become global citizens." 

Artificial Intelligence Examples

It may come as a surprise that artificial intelligence is all around us - and has even permeated our daily routines. 

Whether on our phones or at the cutting edge of technological development, artificial intelligence is all around.


Siri and Alexa

Whether or not you've thought of that voice in your phone as a product of AI, Apple's Siri and Amazon's (AMZN) Alexa both use AI to help you complete tasks or answer questions on your mobile devices.

As examples of weak AI, Siri and Alexa are programmed with responses and actions based on commands or questions posed to them by the phone owner. 

Facebook Feed

Believe it or not, your Facebook feed is actually using AI to predict what content you want to see and push it higher. 

Algorithms built into the feeds filter content that is most likely to be of interest to the particular Facebook user and predict what they will want to see. 
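As a rough sketch of how such a feed-ranking algorithm might work, the Python below scores each post on engagement, the viewer's affinity for its author, and recency, then sorts. The fields, weights, and scoring formula are hypothetical - Facebook's actual model is far more complex:

```python
from datetime import datetime, timedelta

def rank_feed(posts, now, affinity):
    """Order posts by a toy relevance score (highest first)."""
    def score(post):
        hours_old = (now - post["posted_at"]).total_seconds() / 3600
        recency = 1.0 / (1.0 + hours_old)                 # newer scores higher
        engagement = post["likes"] + 2 * post["comments"]  # comments weighted more
        return affinity.get(post["author"], 0.1) * engagement * recency
    return sorted(posts, key=score, reverse=True)

now = datetime(2019, 1, 1, 12, 0)
posts = [
    {"author": "alice", "likes": 10, "comments": 2, "posted_at": now - timedelta(hours=1)},
    {"author": "bob", "likes": 100, "comments": 5, "posted_at": now - timedelta(hours=48)},
    {"author": "carol", "likes": 3, "comments": 0, "posted_at": now - timedelta(hours=2)},
]
affinity = {"alice": 0.9, "bob": 0.5, "carol": 0.2}  # how much the viewer cares about each author
ranked = rank_feed(posts, now, affinity)
```

Even this toy version shows the key trade-off: bob's hugely popular post still loses to alice's fresh one, because recency and affinity multiply into the score.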


Tesla

Despite its CEO (the ever-eccentric Elon Musk) being vocally suspicious of advanced AI technology, Tesla's electric cars use a variety of AI - including self-driving capabilities. Tesla also uses crowd-sourced data from its vehicles to improve its systems.


Netflix

Yes, while you are chilling on your couch with some Netflix (NFLX), you are reaping the benefits of AI technology.

The streaming service uses advanced predictive technology to suggest shows based on your viewing history and ratings. And while the data currently seems to favor bigger, more popular titles over smaller ones, the system is becoming increasingly sophisticated. 
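The underlying idea - recommending what similar viewers enjoyed - can be sketched as a simple nearest-neighbor collaborative filter. Everything in the Python below (titles, ratings, cosine-similarity scoring) is illustrative, not Netflix's actual system:

```python
import math

# Toy ratings matrix: each user's scores (1-5) for shows they have watched
ratings = {
    "you": {"Stranger Things": 5, "The Crown": 2, "Narcos": 4},
    "ana": {"Stranger Things": 5, "The Crown": 1, "Narcos": 5, "Dark": 5},
    "ben": {"Stranger Things": 1, "The Crown": 5, "Narcos": 2, "Bridgerton": 5},
}

def similarity(a, b):
    """Cosine similarity over the shows both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[t] * b[t] for t in shared)
    norm = (math.sqrt(sum(a[t] ** 2 for t in shared))
            * math.sqrt(sum(b[t] ** 2 for t in shared)))
    return dot / norm

def recommend(user):
    """Suggest the best unseen show from the most similar other user."""
    others = [u for u in ratings if u != user]
    neighbor = max(others, key=lambda u: similarity(ratings[user], ratings[u]))
    unseen = {t: r for t, r in ratings[neighbor].items() if t not in ratings[user]}
    return max(unseen, key=unseen.get)

print(recommend("you"))  # prints "Dark"
```

Because "you" and "ana" rated the shared shows almost identically, ana is the nearest neighbor, and her top unseen pick is what gets suggested.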

Still, how did AI get from a futuristic technology to part of our daily lives?  

History of Artificial Intelligence

Although it is widely known for being cutting edge, the processes and concepts behind AI have been around for a while. In fact, according to Bernard Marr, a best-selling author and Forbes contributor, "the theory and the fundamental computer science which makes it possible has been around for decades."

So, how did AI get started, and how has it progressed?

As far back as the mid-1600s, French scientist and philosopher Rene Descartes hypothesized two kinds of machines - those that could one day think and learn a specific task, and those that could adapt to perform a variety of different tasks as humans do. These two veins were later called specialized and general AI.

The so-called Turing Test, proposed in 1950 by English mathematician Alan M. Turing, offered a way to determine whether a computer could actually think. Turing predicted that, by 2000, computers would be nearly indistinguishable from humans under interrogation, although the test has never been definitively "passed." 

By 1955, the Dartmouth Conference (or the Dartmouth Summer Research Project on Artificial Intelligence) developed some of the first concepts of AI. Spurred on by Dartmouth College professor John McCarthy, AI was given a comprehensive definition, and other concepts like machine learning, language processing and neural networks were hypothesized and introduced. 

In 1966, ELIZA, the first chatbot, was developed at MIT by Joseph Weizenbaum; it is the predecessor of the likes of Siri and Alexa. Although ELIZA didn't actually speak (it communicated via text instead), it was the first technology developed to relay messages in natural language, as opposed to computer code and programming. 

AI moved from a largely cutting-edge technological development to useful business applications around 1980, when Digital Equipment Corporation deployed its expert system XCON - which, by 1986, was reportedly saving the company around $40 million a year. 

And after the development of the internet in the early 1990s, the sharing of data across the web only increased AI's capabilities. By 1997, Deep Blue - IBM's (IBM) supercomputer - beat world chess champion Garry Kasparov, signaling that AI might one day best humans.  

But apart from games, AI was developing to operate automobiles as well. The 2005 DARPA Grand Challenge put the prowess of automated vehicles on display as they raced a course in the Mojave desert. And just six years later, IBM's cognitive computing engine Watson beat Jeopardy's champion, winning the $1 million prize money - further indicating AI's capabilities in successfully navigating language-based problems. 

Still, it wasn't until 2012 that the full capabilities of AI began to be understood. That year, researchers at Google (GOOG) and Stanford published the paper "Building High-Level Features Using Large Scale Unsupervised Learning," which described advances in unsupervised learning through deep neural networks that allowed AI to learn to recognize pictures of cats without labeled examples. The breakthrough advanced further in 2015, when researchers declared that computers had become better than humans at recognizing images, following the ImageNet challenge, which had computers identify 1,000 categories of objects.

In 2016, Google subsidiary DeepMind's AlphaGo software beat Go world champion Lee Sedol over the course of five matches. The victory was more impressive than AI's previous superiority in chess because Go has over 100,000 potential opening moves, forcing AlphaGo to use neural networks to learn the game and defeat its opponent - an astonishing breakthrough that demonstrated AI's ability to learn on its own. 

One of AI's most recent and promising applications, however, has been self-driving cars. In 2018, Google spin-off Waymo launched a self-driving taxi service in Phoenix called Waymo One - which is reportedly used by around 400 riders. 

Artificial Intelligence News in 2019 

So, what's happening with AI in 2019? 

AI and Elon Musk

Surprisingly, Musk has been one of the most vocal critics of advanced AI technology - despite having co-founded OpenAI, a company dedicated to AI research. 

The company Musk co-founded researches ways to progress AI technology responsibly and safely - which Musk sees as imperative. The Tesla CEO claimed in the documentary "Do You Trust This Computer?" that supercomputers could become "an immortal dictator from which we would never escape."

However, Zia Chishti, chairman and CEO of AI firm Afiniti, told TheStreet earlier this year that the hype over the dangers of AI may be overblown.

"AI is just a way to identify patterns in complex fields, it's not going to nuke the world -- there is no chance of that. I think the visions of the impending apocalypse as a result of robot intelligence is fanciful -- so I wouldn't be overly concerned about Elon Musk's perspective on it," Chishti said. 

Korean Killer Robots

Still, Musk's warnings may be somewhat warranted.

Recent reports claim that South Korean university Korea Advanced Institute of Science and Technology (KAIST) has partnered with a defense company to create "killer robots." Some 50 academics reportedly signed a letter calling for a boycott of KAIST and Hanwha Systems amid concerns of an arms race. 

"There are plenty of great things you can do with AI that save lives, including in a military context, but to openly declare the goal is to develop autonomous weapons and have a partner like this sparks huge concern," Toby Walsh, professor at University of New South Wales and organizer of the boycott, told The Guardian earlier this year. 

Concerns over the development of "killer robots" and AI weapons persist going into 2019. 

The Sex Robot Disruption

However, on the opposite side of the spectrum, disruption is seemingly occurring in a rather unusual market - sex dolls.

Matt McMullen, CEO of Realbotix and RealDoll and creator of the sex robot Harmony, set out to incorporate AI technology into sex dolls to create an altogether eerily human-like robot. 

"I like building this robot and seeing it move and talk and interact with people, what it does to them. It just opens up Pandora's box of psychology and science, " McMullen told Forbes earlier this year. "It's been evident that when you are using a very lifelike robot as a conduit for the AI and for the conversation, people tend to talk to that in a different way than they would, say, something on a computer screen. Putting that device in someone's home where they're relaxed and they're in their own environment creates another level of comfort."

Still, concerns over the implications of these lifelike AI dolls - especially in relation to intimacy issues and mental health - are rampant. 

Among many others, one such opponent of AI-partnered sex robots is Kathleen Richardson, professor of Ethics and Culture of Robots and AI at De Montfort University in Leicester, England.

"Basically, to say a relationship doesn't need to involve another human being, ... We're living in a culture where we have a surplus of human beings, we don't have any problems with the amount of human beings that we have in the world, but we're creating this culture and this climate where we're trying to encourage people to form relationships with commercial goods, basically," Richardson told Forbes in September.

And while many of the recent AI developments have been blanketed in controversy, innovators continue to push the boundaries.

"The individual in question who finds this happiness with an AI-driven robot, and finds intimacy that they never had with another person-whose job does it become to intervene and say, 'no, you can't do that. That's not right,'" McMullen questioned.

AI and Counseling? 

Still, in a slightly less controversial vein, AI has made recent strides in the field of mental health and counseling. 

Woebot, an intelligent software application, is becoming increasingly sophisticated, and even charities are considering using the technology to help meet demand for counseling, according to The Guardian.

Aidan Jones, the chief executive of Relate, a relationship counseling charity, told The Times that "we have to start to look at what can be done with a non-human interaction."

Woebot uses what's called cognitive behavioral therapy (or CBT) to help counsel users. The company released an iOS app in January, offering a texting-based service.

"The Woebot experience doesn't map onto what we know to be a human-to-computer relationship, and it doesn't map onto what we know to be a human-to-human relationship either," Alison Darcy, a clinical psychologist at Stanford University and creator of Woebot told Business Insider in January. "It seems to be something in the middle."

AI Exports

Most recently, Silicon Valley is reportedly concerned over the possibility of regulators curbing exports of AI technology from the U.S., according to The New York Times.

The Commerce Department reportedly named artificial intelligence as an item that would be re-evaluated under new export rules designed to increase national security. The principal concern seems to revolve around how exports of AI may boost the industries of other countries like China, potentially to the detriment of the U.S.

"The number of cases where exports can be sufficiently controlled are very, very, very small, and the chance of making an error is quite large," Jack Clark, head of policy at OpenAI, told The New York Times. "If this goes wrong, it could do real damage to the A.I. community."

Still, it appears that, with every innovation in AI, new concerns over ethics, economics and safety seem to progress with them.