Little ai, Big AI—Good AI, Bad AI

The real risk with AI isn’t malice but competence. A superintelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours, we’re in trouble.
—Stephen Hawking
 

Artificial intelligence (AI) is enjoying a renaissance. Growth in computing power, advances in machine learning algorithms, dramatic increases in data volumes, and new data structures to manage that volume are converging to accelerate AI’s application across industries. The technology has already powered improvements in operational efficiency, asset utilization, medical diagnoses, and personalized marketing campaigns, to name just a few areas, and its impact is only beginning to be realized.

Not all AI is created equal, however. Artificial intelligence can be divided into two categories; I call them “little ai” and “big AI.” Little ai saves money by improving part of an ongoing operation, perhaps by automating a function currently requiring human judgment—performing a diagnostic task or optimizing inventory management. Big AI, on the other hand, is an enabler for a complete rethinking of a business strategy or critical process. Creating successful big AI requires a shift in thinking, from a focus on automation or optimization to a focus on reframing strategy based on intelligent capabilities. The benefits of big AI done well can be much larger than those of little ai; big AI can—and probably will—transform entire industries. Advertising, transportation, retail, and medical practice are already being reinvented, with great benefit to the disruptors.

Both little ai and big AI can bring significant value to the organization that employs them, but both can also be implemented badly (Table 1). Little ai is usually good when it can be integrated into a work process without fragmenting the work. If, on the other hand, it is implemented in a way that optimizes a part of the process while sub-optimizing the whole, it can result in revenge effects—second-order consequences that can erode or even erase the original benefits (Tenner 1996).

A system that automates parts of a customer service operation, for example, can appear to be more efficient by a narrow set of metrics, but it can also be jarring to customers and disempowering to employees. The direct cost to the company may be very low, but the long-term, indirect cost, in terms of customers who look to other providers out of frustration, could be very high. A better AI system might be designed as an assistant to a human being. That system would help the human service provider decide what services or remedies to offer, without disorienting the customer or overriding the human decision maker’s power.

One of the best examples of big AI I know of is StitchFix. The company has used AI to completely redefine the experience of shopping for clothes. Instead of selecting a specific item, paying for it, and having it shipped to you, you pay StitchFix to predict what you will want based on your responses to a questionnaire and your previous history with the company. The company ships its selections to you, and you return any you do not want. Of course, this system works only if the prediction algorithms are accurate enough to keep the cost of returns from making the model unprofitable; judging by StitchFix’s performance, they are. In this case, good big AI is both profitable for the company and good for the customer.
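To see why prediction accuracy is the linchpin of such a model, consider a minimal back-of-the-envelope sketch of the economics. All figures and names here are hypothetical illustrations, not StitchFix’s actual numbers:

    # A back-of-the-envelope sketch (all figures hypothetical) of the
    # breakeven condition a predict-and-ship model depends on: the keep
    # rate must be high enough that margins on kept items cover the
    # cost of handling returns.

    def expected_profit_per_item(p_keep, margin, return_cost):
        """Expected profit per shipped item."""
        return p_keep * margin - (1 - p_keep) * return_cost

    def breakeven_keep_rate(margin, return_cost):
        """Keep rate at which a shipped item breaks even."""
        return return_cost / (margin + return_cost)

    margin, return_cost = 20.0, 8.0  # hypothetical dollars per item
    print(f"breakeven keep rate: {breakeven_keep_rate(margin, return_cost):.0%}")
    for p_keep in (0.2, 0.3, 0.5):
        profit = expected_profit_per_item(p_keep, margin, return_cost)
        print(f"keep rate {p_keep:.0%}: expected profit ${profit:+.2f}")

At these illustrative numbers, the model is underwater until the algorithm pushes the keep rate past roughly 29 percent; every point of prediction accuracy beyond that goes straight to margin.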

Big AI is bad when its intelligence is deployed in a way that is detrimental to the customer. A medical insurance application, for example, that seeks to predict when medical tests are economically justified might save money in direct expenses but produce unnecessary deaths or larger downstream medical costs if it denies tests that would have detected a condition early. Similarly, an AI system with a massive capacity to process personal data is a bad system if it uses that data to manipulate customers rather than serve them.
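A hypothetical expected-cost comparison, in the same back-of-the-envelope spirit, shows how denying a cheap screening test can look economical on direct expenses while raising total expected costs. All numbers below are illustrative assumptions, not actuarial data:

    # Hypothetical per-patient costs: a policy that denies a cheap test
    # minimizes the direct line item while raising total expected cost
    # once late-stage treatment is priced in. All numbers illustrative.

    def expected_cost(test_everyone, test_cost, prevalence, early_cost, late_cost):
        """Expected per-patient cost under a given testing policy."""
        if test_everyone:
            # Pay for every test; conditions caught early are treated cheaply.
            return test_cost + prevalence * early_cost
        # Skip the test; conditions surface later and cost far more to treat.
        return prevalence * late_cost

    test_cost, prevalence = 200.0, 0.02        # $200 screen, 2% prevalence
    early_cost, late_cost = 5_000.0, 60_000.0  # early vs. late treatment
    print(f"test everyone: ${expected_cost(True, test_cost, prevalence, early_cost, late_cost):,.0f}")
    print(f"deny testing:  ${expected_cost(False, test_cost, prevalence, early_cost, late_cost):,.0f}")

A narrowly scoped model sees only the $200 it saves per denial; the downstream treatment cost, to say nothing of the human cost, never enters its objective function.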

The challenge of making AI work, whether it’s big or little, is to create value without losing sight of people. Given the pace of technology change, achieving this balance will be an ongoing struggle. Last year, RTM’s parent organization, the Innovation Research Interchange, devoted its inaugural SPRING futures program to AI and intelligent systems. Participants and interested parties gathered at a conference in Research Triangle Park, North Carolina, in October to survey the results and consider a range of perspectives on the latest AI renaissance and its implications. This special issue is drawn from that meeting; the talks adapted here examine the challenges of implementing good AI (or ai) while exploring the potential of AI of all sorts to transform business and society alike.

Tom Culver, Lee Green, and Jim Redden look forward, considering both utopian and dystopian futures that might result from the evolution of AI. In “Peering into the Future of Intelligent Systems,” they discuss a process for creating useful scenarios around an emerging technology and present three scenarios that can stretch our thinking about intelligent systems and AI. “Rising Tide,” the optimistic scenario, describes a world in which AI manages to build trust and drives prolonged economic growth. “Stolen Promise” is dystopian, describing a future in which cyber-attacks and a loss of trust in intelligent systems lead to information warfare and global recession. “Moving Target” includes both increased innovation and increased insecurity. The choice is essentially one among good AI, bad AI, and a world that muddles forward.

Brian Bergstein, in “From Intelligent Systems to Intelligent Organizations,” directly addresses the choices we will make with AI. He discusses positive and negative uses of AI technology and proposes that we think not in terms of intelligent systems but in terms of intelligent organizations, of which these systems are a part. Citing some leading examples of companies striving to become intelligent organizations, he notes, “These companies aren’t just asking what the technology can do for them; they’re asking what it will mean for the technology to be a coworker.” Even little ai needs to be designed together with changes in the organization and in collaboration with the people who will work with and alongside the technology.

In “Five Lessons for Applying Machine Learning,” Robbie Allen discusses the results of a survey his company undertook with more than 150 decision makers in data analytics, seeking to assess the state of machine learning. He reviews five lessons derived from the data, including his perspective on the hype surrounding the topic (it is not overblown); the challenges of applying the technology (especially those related to data and human resources); and its implications for our economy (it won’t cause a jobs problem). He notes that much of the technology is still maturing, but he urges companies to move quickly to take advantage of near-term opportunities.

In his article, “Surfing the Hype Cycle, to Infinity and Beyond,” Sam Adams discusses the history and future of several technologies that are key to the renaissance of AI. Many of these technologies, he notes, are ready to be deployed today, and exciting new capabilities are on the horizon. Adams suggests that readers survey the landscape frequently, explore a wide range of emerging technologies, and seek out connections and synergies among them.

Jaron Lanier, virtual reality (VR) pioneer and tech guru, is the subject of this issue’s Conversations interview, “What Has Gone Wrong with the Internet, and How We Can Fix It.” One of Lanier’s biggest concerns is the use of customer data, together with very effective AI, to hyper-target advertising. He believes we have moved from merely targeting consumers to shaping their behavior, engaging in a process based on the science of behavior modification, one that looks very much like cultivating addiction. This is a case of big AI gone bad.

AI is developing very rapidly. As leaders, we are making critical decisions, for our companies and for our society, about the proper uses of this technology, often implicitly. Our decisions will affect how we work, how our companies are organized, and how we interact in the world. Some paths lead to a future where technology supports us, providing improved products and services and better workplaces. Other paths lead to fragmented work, impoverished social experiences, and a loss of privacy. We hope this issue provides innovation leaders with a perspective on both the state of the art and the stakes of the decisions we must make, for this technology will affect all of us.
