This Axios article states what we already know: The responses coming from many generative AI (genAI) systems are misleading, not what users asked for, or just plain wrong. The news hook is that Microsoft software engineering lead Shane Jones sent letters to FTC chair Lina Khan and Microsoft’s board of directors on March 6 saying that Microsoft’s AI image generator created violent and sexual images and used copyrighted images when given specific prompts.
Of course, the huge, publicly accessible large language models (LLMs) get the most negative attention. But what about business applications that leverage generative AI? Surely a narrower focus will drive better-quality responses. Nope.
Where generative AI goes wrong
Many people tell me they thought generative AI was supposed to provide the best chance of an informative and helpful response. The technology is not living up to that expectation. What the hell is going on?
Generative AI has the same limitation as all AI systems: It depends on the data used to train the model. Crappy data creates crappy AI models, and crappy models return erroneous responses or responses that may get you into legal trouble. It’s important to acknowledge the limitations inherent in these systems and understand that, at times, they can exhibit what may reasonably be called “stupidity.” That stupidity can put you out of business or get you sued into the Stone Age.
Generative AI models, including models like GPT, operate on patterns and associations learned from vast data sets. Although these models can generate coherent and contextually relevant responses, they lack true understanding and consciousness, which leads to outputs that may seem perplexing or nonsensical.
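To see the core problem, consider a deliberately tiny sketch. This toy bigram model is nothing like a production LLM in scale or architecture, but it makes the same point: the model emits whatever is statistically plausible given its training text, with no notion of whether the result is true.

```python
# A toy illustration, not how real LLMs work internally: a bigram model
# picks the next word purely from co-occurrence statistics in its
# training text. There is no understanding, only pattern-following.
import random
from collections import defaultdict

corpus = "napoleon fought at waterloo . napoleon fought in europe .".split()

# Count which word follows which in the training text.
following = defaultdict(list)
for current, nxt in zip(corpus, corpus[1:]):
    following[current].append(nxt)

# Generate: always emit a statistically plausible next word, true or not.
word, output = "napoleon", ["napoleon"]
for _ in range(5):
    word = random.choice(following[word])
    output.append(word)
print(" ".join(output))
```

Scale that mechanism up by billions of parameters and you get fluency, but the fundamental behavior stays the same: plausibility, not truth.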
You may ask a public large language model to write a history paper and get one explaining that Napoleon fought in the United States Civil War. That error is easily spotted, but mistakes made by a new genAI-enabled supply chain optimization system may not be, and those errors may result in millions of dollars in lost revenue.
I’m finding that users take these systems’ responses as gospel, more so than with other systems. Mistakes often are not caught until much damage is done, sometimes months later.
It’s the data, stupid
Most business issues with generative AI are caused by bad data. Companies spend all their time picking AI tools, including public cloud services, but don’t spend enough time getting their data into shape to serve as solid training data for these AI models. The systems ingest “dirty data,” and the newly built LLMs or small language models (SLMs) inherit all of its problems.
Businesses understand this issue, but they seem content to move forward with generative AI systems without fixing the data being ingested. They often assume that the AI tools will find flawed and erroneous data and eliminate it from consideration.
AI tools can do some of this, but only if a verification process screens the data before the model ever ingests it; a model trained on unvetted data is not fit to be relied upon. A verification process can find and eliminate data that’s way off, but not all bad data looks bad. Once erroneous data is ingested as training data, your generative AI system becomes dumber and dumber.
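What might such a verification pass look like? Here is a minimal sketch, assuming the training data arrives as tabular records in a CSV file; the file and column names ("supplier", "lead_time_days") are hypothetical placeholders.

```python
# A minimal sketch of a pre-ingestion verification pass for tabular
# training data. File and column names are hypothetical.
import pandas as pd

def verify_training_data(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)

    # Reject exact duplicates, which over-weight patterns during training.
    df = df.drop_duplicates()

    # Reject records missing required fields rather than imputing them.
    df = df.dropna(subset=["supplier", "lead_time_days"])

    # Flag values that are "way off": here, anything more than 3 standard
    # deviations from the column mean. Note this catches gross errors only;
    # plausible-but-wrong records still pass, which is the core problem.
    col = df["lead_time_days"]
    mask = (col - col.mean()).abs() <= 3 * col.std()
    return df[mask]

if __name__ == "__main__":
    clean = verify_training_data("supplier_records.csv")
    print(f"{len(clean)} records passed verification")
```

Notice the limit baked into the last check: statistical screens catch outliers, not falsehoods. A wrong value that sits comfortably inside the normal range sails through.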
Most of the issues enterprises are having with generative AI trace back to poor-quality data or data that should not have been used in the first place. You would think that fixing data issues is easy, but for most enterprises, it means millions of dollars and months or years of work to get the data into a pristine state. Instead, the money is being spent on AI, not on the data. How could the result be any different?
Additionally, generative AI systems are susceptible to bias. If their training data contains biases or inaccuracies, the model may inadvertently perpetuate or amplify them, whether in generated content or in automated interactions with other applications and humans. It takes real work to remove bias once it has been built into a model: the bias can poison different parts of the model and is challenging to isolate and remove.
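The cheapest place to look for bias is in the training data, before it reaches the model. Here is a minimal sketch of that kind of check, assuming labeled tabular data with a hypothetical "region" attribute and a binary "approved" label; it only surfaces representation and outcome skew, and does nothing to remove bias already baked into a trained model.

```python
# A minimal sketch of a pre-training bias check on tabular data.
# File, column, and label names are hypothetical.
import pandas as pd

df = pd.read_csv("loan_training_data.csv")

# Representation skew: is any group badly under-sampled?
print(df["region"].value_counts(normalize=True))

# Outcome skew: does the positive-label rate differ sharply by group?
print(df.groupby("region")["approved"].mean())
```

Skew found here is cheap to fix with resampling or sourcing better data. Skew found after training means repeating the entire, expensive training cycle.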
Other issues with generative AI
A lack of common sense is one primary factor contributing to generative AI’s perceived “stupidity.” Unlike humans, these systems don’t possess innate knowledge about the world; they rely on statistical patterns learned during training. The result can be responses that lack any depth of real-world understanding.
Another aspect to consider is the sensitivity of generative AI to input phrasing. The system generates responses based on the input it receives, whether from a human’s prompt or from an application calling an API, and slight changes in wording can lead to drastically different outcomes. Because of this sensitivity, users may find that the AI occasionally produces unexpected or irrelevant responses. Much of the value of AI can be unlocked only by asking just the right questions in just the right way.
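You can test this sensitivity directly by sending paraphrases of the same question and comparing the answers. Here is a minimal sketch using the OpenAI Python client (v1.x); the model name is an assumption, so substitute whatever your account offers, and the paraphrased prompts are hypothetical examples.

```python
# A minimal sketch of a prompt-sensitivity check: ask the same question
# three ways and compare the answers. Model name is an assumption.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PARAPHRASES = [
    "What was our Q3 supplier lead time, on average?",
    "On average, how long did suppliers take to deliver in Q3?",
    "Give me the mean Q3 delivery lead time across suppliers.",
]

for prompt in PARAPHRASES:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumption: any chat-capable model works here
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # remove sampling noise so wording is the only variable
    )
    print(prompt, "->", response.choices[0].message.content)
```

If three phrasings of one question yield three materially different answers, your business process is only as reliable as the exact wording of its prompts.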
Further, these systems cannot distinguish your business data from data that may be subject to copyright or IP ownership claims. For instance, an open letter from the Authors Guild signed by more than 8,500 authors urges the tech companies behind generative AI applications, such as OpenAI (ChatGPT) and Google (Gemini, formerly known as Bard), to stop using their works without authorization or compensation. I’ve asked large public LLMs questions and had bits and pieces of my own work parroted back at me a few times. I’m sure my books and thousands of articles (perhaps from this site) were used as training data for these LLMs.
Businesses that use these LLMs for portions of their business processing could be opening themselves up to lawsuits if someone else’s intellectual property ends up serving a valuable business purpose. For instance, an LLM may unknowingly apply supply chain management processes, including published algorithms, that are described in a copyrighted text to optimize your supply chain. This is why most companies forbid the use of public generative AI systems for business purposes. It’s a significant risk.
As we continue the journey toward generative AI nirvana, I’m convinced that we’ll need to learn how to address these and other issues first. Sorry to be a buzzkill.