Not all Machine Learning is Artificial Intelligence

One of the downsides of the recent revival in popularity of Artificial Intelligence (AI) is that many vendors, professional services firms, and end users are jumping on the AI bandwagon, labeling their technologies, products, service offerings, and projects as AI without that necessarily being the case. Complicating matters, there isn't a well-accepted delineation between what is definitely AI and what is definitely not.

Simply automating things doesn't make them intelligent, as we've written and spoken about many times. It may be complicated to train a computer to tell the difference between an image of a cat and an image of a horse, or even between different species of trees, but that doesn't mean the system understands what it is looking at, learns from its own experiences, or makes decisions based on that understanding. MIT Professor Luis Perez-Breva argues that while these complicated, data-intensive training and learning systems are most definitely Machine Learning (ML) capabilities, that does not make them AI capabilities. In fact, he argues, most of what is currently being branded as AI in the market and media is not AI at all, but rather just different versions of ML in which systems are trained to do a specific, narrow task using one of several approaches to ML, of which Deep Learning is currently the most popular. He argues that if you're trying to get a computer to recognize an image, you feed it enough data, and with the magic of math, statistics, and neural nets that weight different connections more or less heavily over time, you'll get the results you expect. But what you're really doing is using a human's understanding of what the image is to create a large data set that can then be mathematically matched against inputs to verify what the human already understands.
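
To make the argument concrete, here is a minimal sketch of the kind of narrow ML task he describes, using scikit-learn's bundled handwritten-digits dataset (our own illustration, not an example Perez-Breva cites): the "learning" amounts to an optimizer adjusting connection weights until the model's outputs match labels a human already supplied.

```python
# A neural network "learns" digit images by fitting human-supplied labels;
# it has no notion of what a digit actually is.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Human-labeled data: 8x8 grayscale images of handwritten digits (0-9).
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# "Training" is an optimizer nudging connection weights until the model's
# outputs statistically match the labels humans already assigned.
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# High accuracy here reflects mathematical pattern matching, not understanding.
print(f"Test accuracy: {model.score(X_test, y_test):.2f}")
```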

How Does Machine Learning Relate to AI?

The view espoused by Professor Perez-Breva is not isolated or outlandish. In fact, when you dig deeper into these arguments, it's hard to dispute that the narrower the ML task, the less AI it in fact is. However, does that mean ML doesn't play a role in AI at all? And at what point can you say that an ML effort is an AI effort in the sense discussed above? The Wikipedia entry on AI states that, as of 2017, the industry generally accepts that "successfully understanding human speech, competing at the highest level in strategic game systems, autonomous cars, intelligent routing in content delivery networks and military simulations" can be classified as AI systems.

However, the line between intelligence and mere math or automation is a tricky one. If you decompose any intelligent system, even the much-vaunted goal of Artificial General Intelligence (AGI), it will look like just bits and bytes, decision trees, databases, and mathematical algorithms. Similarly, if you decompose the human brain, it's just a bunch of neurons firing along electrochemical pathways. Are humans intelligent? Are zebras intelligent? Are bacteria intelligent? Where do we draw the line for intelligence among living organisms? Perhaps intelligence is not a well-defined thing in itself, but rather an observation we make about systems that exhibit certain behaviors. One of those behaviors is perceiving and understanding one's surroundings; another is learning from experiences and making decisions based on them. In this light, Machine Learning definitely forms part of what is necessary to make AI work.

Some say that machine learning is a form of pattern recognition: understanding when a particular pattern occurs in nature, in experience, or through the senses, and then acting on that recognition. From that perspective, it becomes clear that the learning part must be paired with an action part. Decision-making and reasoning are not just applying the same response to the same patterns over and over again. If that were the case, then all we're doing with ML is automating better. Given the same inputs and feedback, the robot will perform the same action. But do humans really work that way? We experiment with different approaches. We weigh alternatives. We respond differently when we're stressed than when we're relaxed. We prioritize. We think ahead about the potential outcomes of a decision. We play politics, and we don't always say what we want to say. And the big one: we have emotions. We have self-consciousness. We have "awareness." All of these things move us beyond the task of learning into the world of perceiving, acting, and behaving. These are the frontiers of AI.
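
To illustrate the "automating better" point with a toy example of our own (not one from the article): a trained classifier is ultimately a fixed mapping from patterns to responses, and querying it repeatedly with the same pattern yields the same answer every time.

```python
# A toy sketch: a trained ML model is a fixed pattern-to-response mapping.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Hypothetical "sensor" patterns and the responses a human labeled for them.
patterns = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
responses = np.array(["wait", "turn", "turn", "go"])

model = DecisionTreeClassifier(random_state=0).fit(patterns, responses)

# The same input always produces the same output: the model never
# experiments, weighs alternatives, gets stressed, or reprioritizes.
for _ in range(3):
    print(model.predict([[0, 1]]))  # -> ['turn'] on every call
```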


Ronald Schmelzer

Managing Partner at Cognilytica
Ron is principal analyst, managing partner, and founder of the Artificial Intelligence-focused analyst and advisory firm Cognilytica, and is also the host of the AI Today podcast, SXSW Innovation Awards Judge, founder and operator of TechBreakfast demo format events, and an expert in AI, Machine Learning, Enterprise Architecture, venture capital, startup and entrepreneurial ecosystems, and more. Prior to founding Cognilytica, Ron founded and ran ZapThink, an industry analyst firm focused on Service-Oriented Architecture (SOA), Cloud Computing, Web Services, XML, & Enterprise Architecture, which was acquired by Dovel Technologies in August 2011.
