The artificial intelligence of today has almost nothing in common with the AI of science fiction. In “Star Wars,” “Star Trek” and “Battlestar Galactica,” we’re introduced to robots that behave like we do — they are aware of their surroundings, understand the context of those surroundings and can move around and interact with people just as we do with one another. These characters and scenarios are postulated by writers and filmmakers as entertainment, and while one day humanity will inevitably develop an AI like this, it won’t happen in the lifetime of anyone reading this article.
Because we can rapidly feed machines vast amounts of data, they appear to be learning and mimicking us, but in fact they are still at the mercy of the algorithms we provide. The best way to think about modern artificial intelligence is to understand two concepts:
- Computers can ingest millions of data points per second and make instant calculations and predictions based on this data set.
- Very specific rules can be written to help a computer system understand what to do once a calculation is made. (Or, if training a neural network, very specific inputs and outputs must be provided for the data that’s being ingested.)
To illustrate this in grossly simplified terms, imagine a computer system in an autonomous car. Data comes from cameras placed around the vehicle, from road signs, from pictures that can be identified as hazards and so on. Rules are then written for the computer system to learn about all the data points and make calculations based on the rules of the road. The successful result is the vehicle driving from point A to B without making mistakes (hopefully).
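The "data points plus rules" idea above can be sketched in a few lines of code. This is a toy illustration only — the sensor fields, thresholds and actions are invented for this example and bear no resemblance to a real driving system:

```python
# Toy sketch of "ingest data points, then apply hand-written rules."
# Each sensor reading is reduced to a few features, and explicit rules
# of the road decide the action. All names here are illustrative.

def decide_action(reading):
    """Apply hard-coded rules to one simplified sensor reading."""
    if reading["hazard_ahead"]:
        return "brake"
    if reading["speed_limit"] is not None and reading["speed"] > reading["speed_limit"]:
        return "slow_down"
    return "maintain_speed"

readings = [
    {"hazard_ahead": False, "speed": 55, "speed_limit": 65},
    {"hazard_ahead": False, "speed": 70, "speed_limit": 65},
    {"hazard_ahead": True,  "speed": 40, "speed_limit": 65},
]

actions = [decide_action(r) for r in readings]
print(actions)  # ['maintain_speed', 'slow_down', 'brake']
```

The point of the sketch is that nothing here "understands" driving: the system only does what the rules anticipate, which is exactly why a new, unanticipated situation can defeat it.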
The important thing to understand is that these systems don’t think like you and me. People are ridiculously good at pattern recognition — so good that we sometimes see patterns where none exist. We use this skill to ingest less information and still make quick decisions about what to do.
Computers have no such luxury; they have to ingest everything, and if you’ll forgive the pun, they can’t “think outside the box.” If a modern AI were programmed to understand a room (or any other volume), it would have to measure all of it.
Think of the little Roomba robot that can automatically vacuum your house. It runs randomly around until it hits every part of your room. An AI would do this (very fast) and then would be able to know how big the room is. A person could just open the door, glance at the room and say (based on prior experience), “Oh, it’s about 20 ft. long and 12 ft. wide.” They’d be wrong, but it would be close enough.
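The contrast between exhaustive measurement and a quick human estimate can be made concrete. In this sketch the room is a grid of one-foot cells (dimensions chosen arbitrarily for illustration): the machine must visit every cell to know the area, while the person just guesses from experience:

```python
# Exhaustive measurement vs. a human's quick estimate (toy example).
# The room is a 19 ft x 13 ft grid, one cell per square foot.

room = [[1] * 13 for _ in range(19)]

def measure_exhaustively(grid):
    """Visit every cell, the way a vacuum robot covers a floor."""
    visited = 0
    for row in grid:
        for cell in row:
            visited += cell
    return visited

machine_area = measure_exhaustively(room)  # exact: 247 sq ft, after 247 "visits"
human_guess = 20 * 12                      # "about 20 ft long and 12 ft wide"
print(machine_area, human_guess)           # 247 240
```

The machine’s answer is exact but required touching every square foot; the human’s answer is wrong — yet close enough, and nearly instant.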
Understanding the concepts of AI
Over the past two decades, we’ve delved into data science and developed vast analytical capabilities. Data is put into systems; people look at it, manipulate it, identify trends and make decisions based on it.
Broadly speaking, any job like this can be automated. Computer systems are programmed with machine learning algorithms and continuously learn, looking at more data, more quickly, than any human ever could. Any rule or pattern that a person looks for, a computer can be programmed to recognize — and it will execute more effectively than a person.
We see examples of this while running digital advertising campaigns. Before, a person would log into a system, choose which data provider to use, choose which segments to run (auto intenders, fashionistas, moms and so on), run the campaign, and then check in on it periodically to optimize.
Now, all the data is available to an AI — the computer system decides how to run the campaign based on given goals (CTR, CPA, site visits and so on) and tells you during and after the campaign about the decisions it made and why. Put this AI up against the best human opponent, and the computer should win — unless a new and hitherto unknown variable is introduced or required data is unavailable.
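The goal-driven optimization described above can be reduced to a minimal sketch: given a target metric (here CTR), shift budget toward the placements that score best on it. The placement names and numbers below are invented for illustration:

```python
# Minimal sketch of goal-driven campaign optimization.
# The goal metric is CTR; placements and stats are invented.

placements = {
    "news_site":   {"impressions": 10000, "clicks": 120},
    "video_app":   {"impressions":  8000, "clicks":  40},
    "social_feed": {"impressions": 12000, "clicks": 300},
}

def ctr(stats):
    """Click-through rate: clicks divided by impressions."""
    return stats["clicks"] / stats["impressions"]

# Rank placements by the goal metric; a real system would also
# report the decisions it made and why.
ranked = sorted(placements, key=lambda name: ctr(placements[name]), reverse=True)
print(ranked[0])  # social_feed — the top CTR gets the most budget
```

Note what this sketch cannot do: if the meaning of a placement changes (say, the surrounding news turns hostile to the brand), the numbers alone won’t reveal it — which is exactly the failure mode described next.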
There are still lots of things computers cannot do for us. For example, look at the United Airlines fiasco last April, when a man was dragged out of his seat after the flight was overbooked. United’s tagline is “Fly the friendly skies.” The incident was anything but friendly, and any ad campaign touting that message at the time would have been met with derision.
To a human, the negative sentiment is obvious: the ad campaign would be pulled and a different strategy attempted — in this case, a major PR push. A computer, however, would merely notice that the ads weren’t performing as they once had and would keep looking for ways to optimize the campaign. It might even notice lots of interactions when “Fly the Friendly Skies” ads are placed next to images of a person being brutally pulled off the plane — and place more ads there!
How will artificial intelligence affect you?
The way that artificial intelligence will affect us as consumers is subtler than we might expect. We’re unlikely to have a relationship with Siri or Alexa (see the movie “Her”), and although self-driving cars will become real in our lifetime, traffic is unlikely to improve dramatically: not everyone will use them, and ride-sharing and service-oriented vehicles will still fill our roads, contributing to congestion.
Some opinions expressed in this article may be those of a guest author and not necessarily Marketing Land.