Generative AI: A Roadmap for PE Portfolio Companies

June 27, 2023 | 4 minute read
Author(s)
Lorie Buckingham
Goldman Sachs Asset Management Value Accelerator Operating Advisor
Prerak Vohra
Goldman Sachs Asset Management Value Accelerator Operating Advisor
Carmen O'Shea
Goldman Sachs Asset Management Value Accelerator Operating Advisor

Introduction

Many business leaders have struggled to distinguish reality from speculative hype in the recent torrent of information on generative artificial intelligence (“AI”). Today the benefits and risks are largely hypothetical, with predictions ranging from massive benefits for humanity to catastrophic destruction. Many companies are unsure where to start or how to engage with the technology in ways that demonstrate value while posing minimal risk.

Operating Advisors in the GS Value Accelerator have worked closely with our private equity portfolio companies on best practices around generative AI. We suggest that companies take a measured, bounded approach given the risks and complexities inherent to this nascent technology.

Defining and Understanding Generative AI

Generative AI creates novel content in the form of text, code, images, audio, video, and more, based on the data it has been trained on and/or the inputs it receives. Examples include large language models such as GPT-3.5 and GPT-4, which power ChatGPT, and image generators such as DALL-E. Common to each is an underlying interface designed for natural, human-like interaction.

The technology has some known issues. Generative AI models present content with the same degree of confidence whether it is correct or not; confidently stated but incorrect output is commonly referred to as a hallucination. In addition, the technology is unable to self-reflect or to mimic human ethical judgment. Since generative AI is not yet subject to standard policies or regulations, there are many gray areas to consider. For instance, who ultimately owns the content generated, and who is liable for actions taken based on that content?

Potential Use Cases and the Importance of the "Safe Case"

Many businesses are exploring generative AI for report-writing, social media, interviewing, and other use cases. At the same time, the concept of a “safe” use case for piloting generative AI in a business context is critical. Companies should place boundaries around testing and using generative AI by limiting the underlying datasets, managing variables, and implementing quality control checks, all of which can help manage risk.

Several companies in our private equity portfolio are considering how best to apply the technology in their space. Below we have highlighted two examples that show the potential of AI technology and how we think about building a “safe” use case.

Case Study

Report Writing at a Global Assurance Company

A large, global risk management service provider conducts in-person inspections of business systems and processes to produce a written audit findings report. An employee typically spends 1.5 days onsite, taking notes against a pre-defined checklist or template, and then half a day synthesizing findings and writing up a report. Our hypothesis is that using generative AI to create reports in real time immediately following a site visit would enable faster, more accurate, and higher-quality report writing. It would also increase productivity by freeing employees to focus on more site visits.

This example demonstrates a measured use case for a business services company in which the inputs are limited and defined and the outputs are gated. The universe of report outputs is constrained by the nature and finite number of inputs. These can be general inputs, such as the previous reports and templates from which the AI learned, as well as client-specific situational notes the employee uploads upon completing the inspection. Crucially, the risk of incorrect, irrelevant, or misleading information is further curtailed by a quality control step in which the employee must read, edit, and quality-check each report before it is finalized. This ensures the conclusions reached are appropriate.
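To make the shape of such a gated workflow more concrete, the minimal Python sketch below shows one way it could be wired together: the model only ever sees the bounded inputs (template plus inspection notes), and nothing is issued until a human reviewer approves the draft. The function names and data here are illustrative assumptions, not a description of the company's actual system; generate_draft is a placeholder for whichever model API is ultimately used.

```python
from dataclasses import dataclass


@dataclass
class DraftReport:
    text: str
    approved: bool = False  # stays False until a human reviewer signs off


def build_prompt(template: str, checklist_notes: list[str]) -> str:
    """Combine the pre-defined report template with the inspector's on-site notes.
    The model only ever sees these bounded inputs."""
    notes = "\n".join(f"- {note}" for note in checklist_notes)
    return f"{template}\n\nInspection notes:\n{notes}"


def generate_draft(prompt: str) -> DraftReport:
    """Hypothetical placeholder for a call to whichever generative model the
    company adopts; swap in the real API client here."""
    return DraftReport(text=f"[model-generated draft from a {len(prompt)}-character prompt]")


def finalize(draft: DraftReport, reviewer_text: str) -> DraftReport:
    """Quality-control gate: an employee reads, edits, and approves the draft
    before it can be issued. Nothing is released without this step."""
    return DraftReport(text=reviewer_text or draft.text, approved=True)


if __name__ == "__main__":
    template = "Audit findings report: scope, observations, conclusions."
    notes = ["Fire exits unobstructed", "Two calibration certificates expired"]
    draft = generate_draft(build_prompt(template, notes))
    final = finalize(draft, reviewer_text=draft.text + "\nReviewed and corrected by the inspector.")
    assert final.approved  # only the human-approved report leaves the building
```

The important design choice is that approval is a separate, explicit step: the model can only propose a draft, while the employee remains accountable for what is released.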

Case Study

Product Reviews at an eCommerce Company

A leading eCommerce platform has hundreds of chat rooms containing thousands of lines of reviews written for specific products over specific time frames. The numerous threads reflect community engagement around an individual product. The company’s current technology synthesizes the collective views, analyzes the sentiment, and shares a majority-positive or majority-negative rating, but it is unable to describe why the rating is what it is. We believe applying generative AI to analyze the library of threads and explain the resulting ratings would enable rapid search and deliver comprehensive, richer reviews when users conduct product searches. This could lead to increased engagement on the site and higher user satisfaction. It might also lead to increased conversion rates, since members can more rapidly access higher-quality product information.

This example also illustrates a thoughtful use case for an eCommerce company. For one, the technology is limited to threads already existing in the system. In addition, there is already a level of content quality assurance provided by employees who monitor the chat rooms. Finally, generative AI will not be used to engage directly with community members in the forums, thereby avoiding risks around content liability, false advertising, reputational damage, and user data privacy violations.
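As a rough illustration of how these guardrails might look in practice, the Python sketch below keeps the generative step offline and read-only: it consumes only threads already in the system, attaches an explanation to the existing rating, and surfaces the result in search only after an employee moderator signs off. The names and the summarize_reviews call are hypothetical placeholders, not the platform's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class ProductReviewSummary:
    product_id: str
    rating: str                      # the platform's existing majority rating
    explanation: str                 # generated "why" behind that rating
    moderator_approved: bool = False


def summarize_reviews(product_id: str, rating: str, threads: list[str]) -> ProductReviewSummary:
    """Hypothetical placeholder for a generative-model call that reads only
    threads already in the system and explains the existing rating."""
    explanation = f"[model-generated summary of {len(threads)} review threads]"
    return ProductReviewSummary(product_id, rating, explanation)


def publish_to_search_index(summary: ProductReviewSummary, index: dict) -> None:
    """Summaries surface only in product search results, and only after an
    employee moderator signs off; the model never posts into the forums."""
    if summary.moderator_approved:
        index[summary.product_id] = summary


if __name__ == "__main__":
    threads = ["Battery life is excellent ...", "Charger died after a week ..."]
    summary = summarize_reviews("SKU-123", rating="majority positive", threads=threads)
    summary.moderator_approved = True  # the employee who monitors the chat rooms signs off
    search_index: dict = {}
    publish_to_search_index(summary, search_index)
    print(search_index["SKU-123"].explanation)
```

Because the model never writes into the forums themselves, its output appears only on a moderated, read-only surface, which is what keeps the community-facing risks out of scope.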

Basic Policies and Protections All Companies Need

- Protect Intellectual Property (IP) and Personally Identifiable Information (PII)
- Enact Application Programming Interface (API) standards for all data sources
- Block access by default; require permission to use (a simple version of this is sketched below)
- Restrict access only to authorized users
- Create a lab environment for exploring within a secure area (digitally and physically if necessary)
- Add a discussion of generative AI to security awareness programs for employees
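For teams that want a starting point, the short sketch below shows one way the "block by default" and "authorized users only" items could be expressed in code. The user names, tool names, and function are assumptions made for illustration; a real deployment would plug into the company's existing identity and access management system.

```python
# Illustrative deny-by-default access check for generative AI tools.
# The names below are assumptions for this sketch, not a real identity system.

APPROVED_AI_USERS = {"analyst.jane", "dev.sam"}   # access restricted to authorized users
APPROVED_AI_TOOLS = {"internal-lab-sandbox"}      # exploration happens only in the secure lab


def may_use_generative_ai(user: str, tool: str) -> bool:
    """Block by default: a request is allowed only if both the user and the
    tool have been explicitly approved; everything else requires permission."""
    return user in APPROVED_AI_USERS and tool in APPROVED_AI_TOOLS


if __name__ == "__main__":
    print(may_use_generative_ai("analyst.jane", "internal-lab-sandbox"))  # True
    print(may_use_generative_ai("guest.user", "public-chatbot"))          # False: blocked by default
```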

Prioritization, Resourcing, and Planning Ahead

Portfolio company leaders who are seriously exploring generative AI for their business must treat it as integral to their strategy, not as a side concern that can be outsourced or delegated. Viewing the technology and its repercussions as a core strategic lever means taking the time and doing the work to understand what it means for their company, industry, and ecosystem.

Several key questions will drive the initial and subsequent steps in the planning process. What are competitors doing? What specific areas of their business model, processes, or operations lend themselves to safer pilots of this technology where certain human elements remain intact? Once pilots are defined, what would the attendant risks and risk mitigation plan be? How might they create a decision framework around the ethical use of generative AI that aligns with their company values? What kind of light-touch governance structure within the company would best lend itself to experimentation, collaboration, and thoughtful outcomes?

Conclusion

Generative AI has the potential to become a valuable tool for many companies. To effectively utilize the technology, businesses must consider the precautions required to implement a “safe case.” By actively engaging with the technology from an early stage, executives can ensure their companies are best positioned for the future.
