February 2020 | Point of View

Your organization has convinced you that it’s leveraging AI, but maybe it isn’t

It’s going to take time to trust AI. But in a world where it’s the new shiny object, we must challenge ourselves as leaders to first be able to recognize it, separate it from spin, and extract real business value from it.

AI is the new digital. Three years ago, everything was digital-this and digital-that—who do you hire, how do you infuse it into your culture and business model, and by the way, what is digital? Our answer at the time was to start somewhere without turning digital transformation into a big, scary monster that causes analysis paralysis. 

Now enter AI. And we’re back at the top of the hype cycle.

Everyone is spinning everything into artificial intelligence (AI) and machine learning (ML) to show that they’re “doing it.” It’s the current gold rush. That makes it worth the time to step back and separate reality from spin, because there’s plenty of that going around.

I think we can all agree that AI is when a machine learns faster than we can, often based on complex patterns in data that humans can’t initially understand. AI is about new, innovative insights that you would not be able to discover on your own in a short amount of time. Insights that are more than correlated: they are predictive, instructional, and advanced enough that you can act on them for economic value or risk mitigation. Insights where you cannot, as a human, quickly reverse-engineer how the algorithm arrived at them.

For example, AI might predict, based on the weather forecast and current traffic patterns in the area, whether a corner coffee shop is going to have a busy afternoon.

By definition:

  • AI is any technique that enables computers to mimic human behavior and thinking

  • AI is the science of making machines smart

  • AI helps solve problems

That’s what AI is. Let’s discuss what AI is not.

AI is not about coming up with an algorithm that proves you were right about something, or one that gives you what I call “gee-whiz insights”: findings that are interesting but don’t change the direction of your business. It is also not simple reporting, basic business intelligence, or an analysis of historical data.

The problem is, we are finding that many organizations are rebranding business intelligence and even traditional reporting as AI, in an effort to prove to customers or their own leadership that they are doing innovative work. For instance, serving up a monthly revenue prediction based on the prior year’s results, something most businesses have been doing for decades, is not AI.

I’ve been on a crusade at West Monroe to challenge our teams on whether we’re actually using AI, or if we’re falling into the same trap. Here’s what I learned:

  1. We built a machine learning model for a large pharmaceutical company to help predict which patients were at risk for readmission. The team did so by combining healthcare provider data with insurance claims data, allowing the machine to identify factors relevant to a potential misdiagnosis. The company increased its ability to predict readmissions by 20% and subsequently launched an education initiative for healthcare providers around the model, to boost sales of relevant treatments.
  2. We joined a cross-functional team of consultants for a large bank to make its massive data set more useful. The metadata in its various data sets did not match the bank’s business terms. By using AI, we helped the bank increase the match rate of data to its business terms from 44% to 85%.
  3. Internally, I’ve asked for a system that alerts our client satisfaction team when a client project is at risk. It would take into account dozens of factors that affect client satisfaction and success and alert us when it suspects an issue, in time for our teams to proactively manage the risk.

Those were the best illustrations of true AI. But they got me thinking. First, I considered how much we had to challenge each other on whether something was truly AI, or something else being repackaged and rebranded as AI. Second, I thought about how much we as business leaders are willing to trust AI-driven insights. Is anyone willing to let a machine make a million- or billion-dollar decision? Would our own client satisfaction team take action when AI flags a project as at risk, even if there is no current indicator that something is wrong? Would a healthcare provider trust AI when it flags a patient as at risk for readmission, even though the discharge was routine and safe? Or would they resort to physician intuition and gut instinct?

This is the point: Right now, many of us are ready for AI to help us do our jobs better, for instance by reviewing legal contracts or screening resumes when recruiting talent. But I don’t think most of us are ready to trust AI to its fullest potential, such as when it comes up with a counterintuitive, billion-dollar decision. Solving this is more about data literacy than about the technology itself, a much larger conundrum than most organizations are capable of tackling today.

It’s going to take time to trust AI. But in a world where it is the new shiny object, we must challenge ourselves as leaders to first and foremost recognize it, then separate it from spin, and then extract real business value from it.

Because at the end of the day, that is why we are all chasing after it.
