Generative AI is a term that describes a new class of intelligent models that can understand context , sentiment, and nuances in language. They are different in that the context they have imbibed is the entire human race knowledge available on the internet and online literature.
The internet in the early/late nineties, followed by mobile phones in the 2000s, followed by social media platforms in the 2010s, Generative AI is the fourth revolution which marks the advent of intelligent platforms that can have conversations about any topic, generate content in different modalities – text, video, images, etc.
This tectonic shift will change how humans collaborate with intelligent agents to complete complex tasks more productively. For an enterprise looking to create value for its key stakeholders, Generative AI holds a lot of promise that can drive productivity improvements, enhance their interactions with their customers, enhance or create new product portfolios, and drive hyper-personalisation. How should enterprises think about this new technology?
Generative AI(GenAI) should be looked at primarily as a “tool” that can augment a human in the workforce. The primary lens here is productivity and when done right, along with the necessary guardrails that ensure accuracy, privacy, unbiased perspective, and context sensitivity, there is value capture that is possible. There are four key areas to focus when looking at GenAI for the enterprise – Context, Data, Risk Management, and Integration. We will explore each of these topics at a high-level leaving room for deeper exploration in the future.
Context
holds utmost importance in the realm of GenAI. Consider a large language model like OpenAI’s GPT-3 that’s been trained on approximately 450GB of datasets, encompassing books, web pages, and more. This model boasts roughly 175 billion parameters, processes around 2000 tokens per interaction, and uses approximately 860GB of storage. However, the future of GenAI, and the promise of value creation, lies in smaller, more specialized language models that are industry focused. A good example of these is MedPalM2 for Healthcare, and BloombergGPT for financial advisory services. These “small” language models can be tailored to an industry-specific or enterprise-specific use case. When fine-tuned with a combination of unstructured and structured enterprise data, these models become more relevant for an enterprise.
Data
is the foundation for any AI model to be effective. The most important aspect of ensuring the model learns the right weights/parameters when training, is to ensure the data is clean, not overfitted, and most importantly doesn’t have excessive bias which can skew the weights that the model is trained on.
The other aspect of the data used for training the models, is the transparency of the sources of data. There are hundreds of LLMs available today which are “foundation” models with a learned set of parameters, however the data sources used to train the models are still opaque. An enterprise looking to combine its own structured and unstructured data, should consider these aspects before embarking on deploying the models for critical decisions. Imagine a bank leveraging an LLM, to take decisions on mortgage applications. They need the ability to audit these decisions, provide the sources of data, and explain the decision. This is a non-trivial task and will pose roadblocks in the way of adoption.
Risk Management
is a key area of focus especially when enterprises are looking at LLMs for specific use cases. The key aspects should be around security, privacy, and copyright infringement. It is possible for the LLMs to be used in a malicious way by injecting prompts that could cause unintended security lapses. A well scripted prompt can bypass guardrails that the model builder has incorporated and produce harmful content. The model can also be tricked to produce snippets of private information that must be in the trained dataset. This can lead to PII information getting exposed. The most important risk is around copyright infringement, especially around the notion of “fair use”. Most of the arguments state that any new creation from existing content like images, text, videos etc. could be considered “fair use” and doesn’t need the permission of the original copyright holder.
This is not entirely true and there are subtle factors that influence the definition of what constitutes “fair use”. All this boils down to the need for governance at an enterprise level to establish clear guardrails and measure those on a periodic basis when the models are used within the enterprise application landscape.
Integration
Integration has always been the glue that allows the use of AI models in conjunction with enterprise systems to drive outcomes. Since ChatGPT made LLMs very popular, the plethora of LLMs including commercial and open source that are now available has exploded. There are multimodal LLMs that can generate text, images, videos all based upon a text prompt. To activate these models in a meaningful way, integration into the enterprise is critical. Existing cloud IaaS providers have launched GPU focused cloud environments, providing platforms that allow selection and hosting of models with varying parameter counts, token limits etc. DevOps tools to deploy models are also seeing various degrees of maturity. This whole area is called ModelOps and it is about automating the process of model training, fine tuning, date pipelines, embedding creation, testing, and deployment. The best way to think about this is CI/CD for LLMs. This presents a tremendous opportunity for enterprises to capitalize on the innovation and promise of the LLMs.
Generative AI is riding the crest of the AI Wave, but does it mean we are getting closer to an Artificial General Intelligence. It would be hard to make that argument. GenAI models are at best learning through “rote”. They have memorized a huge database of tokens/words and through a sophisticated mathematical model that has weights( Parameters), that provide the probability of a closest match to the “Prompt”, they respond with the text. Enterprises should look at specific use cases which target productivity, and use this tool to drive efficiencies, always keeping a human in the loop. Will it have an impact to the future of work? Absolutely yes, especially to knowledge workers. Will it eliminate jobs? Probably in some vocations, but it will most definitely create new jobs like a “Prompt Engineer” for example.
There’s a lot to unpack. For now, remember that Generative AI is not here to eliminate jobs, but instead serves as another tool in our toolbox.”