While generative AI can be useful to businesses, the technology has some notable shortcomings, including a propensity to get simple things wrong and occasional difficulty with basic logic. Given this, how should organizations think about finding the right use cases to effectively leverage generative AI for sustainable business advantage?
During a webinar hosted by MIT Sloan Management Review, MIT Sloan Professor of Practice Rama Ramakrishnan presented a three-step approach to help businesses identify the best use cases for generative AI and automate parts or all of a business process. He also offered practical advice on best practices to help organizations effectively leverage the benefits of generative AI while avoiding common pitfalls.
“There are a multitude of issues to worry about when using [a large language model]…and there are no ironclad solutions yet,” Ramakrishnan said, adding that research organizations and the vendor community are making significant progress toward solving them. “Given all the issues, the big question is: How should we consider using LLMs for business productivity?”
3 steps to identify business use cases for LLMs
Ramakrishnan suggests taking the following steps to determine which knowledge work business processes would be best served by generative AI automation:
Break jobs and workflows down into discrete tasks. A job is a set of discrete tasks, and those tasks vary in how readily they can be automated with generative AI. For example, a database of occupations from the United States Bureau of Labor Statistics associates 25 tasks with the job of a university professor, and only some of them can be easily automated. Preparing course materials and assignments, grading student work, and preparing for lectures are tasks that can be partially automated, but moderating class discussions or lecturing does not translate well to an LLM use case. “That’s why you have to take the trouble to break jobs down into individual, discrete tasks,” Ramakrishnan said. “Some things are easy with an LLM while others are really difficult.”
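The decomposition described above can be sketched as a simple triage exercise. This is a minimal, illustrative sketch: the task list and the fit labels are hypothetical examples, not entries from the BLS occupation database.

```python
# Illustrative sketch: break a job into discrete tasks and flag which ones
# are plausible candidates for (partial) LLM automation.
# Task names and "fit" labels are hypothetical, not from the BLS database.

professor_tasks = {
    "prepare course materials and assignments": "partial",
    "grade student work": "partial",
    "prepare for lectures": "partial",
    "moderate class discussions": "poor fit",
    "deliver lectures": "poor fit",
}

# Keep only the tasks where an LLM can plausibly help.
candidates = [task for task, fit in professor_tasks.items() if fit == "partial"]
print(candidates)
```

The point of the exercise is that automation decisions happen at the task level, not the job level: the same job contains both good and poor LLM candidates.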
Evaluate tasks using the generative AI cost equation. It is important to consider all potential costs associated with automation. There are obvious costs to using an LLM, such as paying licensing or API fees. But there are also less obvious costs that could be even greater, including the time, effort and money required to tailor a generative AI tool to the degree of accuracy required for the task at hand and to create mechanisms to detect and correct errors.
Task costs may differ depending on the accuracy an LLM must achieve and whether the use case leaves any margin for error. Some tasks, such as writing advertising copy, product descriptions, or the plot of a movie, have more room for error. Use cases that require logical reasoning or factual knowledge, involve cause-and-effect relationships, or carry high stakes, such as medical care, require more precision. These cases need a robust mechanism to monitor and correct LLM results – often a human in the loop – which adds significant effort and potential expense, Ramakrishnan said. The possibility of an error slipping past human monitors and causing brand or reputational damage adds another potential cost factor to the mix.
Once these costs are identified, organizations should compare the generative AI cost equation with the cost of doing business as usual (without generative AI) and determine which is smaller. And, given the pace of market change, something that doesn’t make sense to automate now might be easier to automate in the future.
“If you apply the equation to a particular task and it is not successful because the costs are too high, you should probably revisit it periodically, because as LLM capabilities improve, the cost of adoption goes down significantly,” Ramakrishnan said.
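The cost comparison described above can be written down as a short sketch. All of the cost categories and dollar figures below are illustrative assumptions; the point is only the shape of the comparison, not the numbers.

```python
# Minimal sketch of the "generative AI cost equation" described above.
# Cost categories and figures are illustrative assumptions.

def llm_automation_cost(api_fees, tailoring, error_checking, expected_error_cost):
    """Total cost of automating a task with an LLM, including the
    less obvious costs: tailoring the tool and catching its errors."""
    return api_fees + tailoring + error_checking + expected_error_cost

def should_automate(llm_cost, business_as_usual_cost):
    """Automate only if the LLM route is cheaper than the status quo."""
    return llm_cost < business_as_usual_cost

# Hypothetical numbers for one knowledge-work task.
llm_cost = llm_automation_cost(
    api_fees=2_000,             # licensing / API fees
    tailoring=5_000,            # prompt engineering, fine-tuning to required accuracy
    error_checking=8_000,       # human-in-the-loop review
    expected_error_cost=1_000,  # brand/reputational risk of errors slipping through
)
print(should_automate(llm_cost, business_as_usual_cost=20_000))
```

Because LLM capabilities improve and costs fall over time, the same comparison can flip from "don't automate" to "automate" on a later pass, which is why Ramakrishnan suggests revisiting rejected tasks periodically.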
Create, launch, and evaluate pilots. If the first two conditions are met, the final step is to turn experimentation into action. Companies can take different approaches to pilot projects, for example using application providers, adapting a commercial model such as GPT-4, or adapting an open-source LLM like Llama 3.
Software companies are also racing to incorporate generative AI into existing products, as evidenced by the rise of AI co-pilots for knowledge work, a trend that is helping to accelerate the deployment of generative AI.
Companies should establish a rigorous evaluation process when building LLM-based applications, as it can be more difficult and riskier than building a predictive AI application based on machine learning, Ramakrishnan said.
Best practices for using LLMs
Once companies have completed these three steps, there are some best practices they can follow to ensure a successful implementation of generative AI, Ramakrishnan said:
- Make sure you have a rigorous review process in place when creating or evaluating LLM-based applications.
- Don’t rush into production without a robust mechanism to check and correct errors. Having a human in the loop can be costly, but catching issues before a tool is deployed or made available to customers is worth it.
- Consider narrow use cases, especially if you run a small business. More focused tasks require smaller LLMs, which generally means less cost and easier training and maintenance.
- Find and train talent outside the traditional data science organization. It is important to identify and train people who are interested in generative AI and continually develop their skills, Ramakrishnan said. “There is…talent hiding in the business,” he said, and using LLMs with prompts does not require strong technical knowledge.
- Set ROI expectations by prioritizing obvious use cases that deliver rapid returns and serve as a valuable learning exercise. Ramakrishnan noted that most organizations focus on business productivity for their first wave of LLM adoption.
“The way to move past this dichotomous, paralyzing state is to say we’re going to do easy, low-stakes things first and see what happens, but we’re going to do a lot very quickly,” Ramakrishnan said.
