Is Generative AI Ready for Enterprise-Wide Deployment?
With the global explosion of ChatGPT usage, everyone is now wondering where AI will take us from here. If AI has been around for a few decades already, why the huge hype around ChatGPT now? The reason is generative AI, which enables anyone to interact with an AI engine to automatically produce intelligent, continuously evolving information in varied output formats, such as text, images, audio, video, and synthetic data (information that is artificially manufactured rather than generated by real-world events). ChatGPT is the first generative-AI chatbot to put all of this in everyone's hands.
One of the challenges of generative AI is to deeply understand how the new content can be used, quality-checked, and evolved in a productive and ethical way – without simply producing material that supports neither business improvement nor a better quality of life in general. Let’s look in more detail at the pros and cons of generative AI.
We’ll start with a formal definition: generative artificial intelligence is AI capable of producing text, images, or other media using generative models. These models learn the patterns and structure of their input training data and then generate new data with similar characteristics.
The key transformational implication is that any human being can interact with generative AI through a very simple user interface, without any technology background. So, essentially, anyone can now generate new AI-enabled content. We have never had an opportunity like this before.
Generative AI models use deep learning to analyze patterns in large data sets and then use this information to create new, similarly convincing outputs. They can make existing content easier to interpret and understand, and they can also create new content automatically. The new content could be full essays, musical compositions, graphic designs, paintings, photos, chip designs, software code, and more. One interesting characteristic is that generative models produce genuinely new output, not just A-or-B decisions.
Today’s most visible generative AI models power a new generation of chatbots with simple text interfaces – such as ChatGPT. While the first AI chatbot was created in the 1960s, ChatGPT was released by OpenAI just last year. ChatGPT stands for Chat Generative Pre-trained Transformer, and it has taken the world by storm by transforming conversational chatbots into “knowledgeable” and “intelligent” interactive entities.
One key question that is usually asked is whether the outputs from an AI-enabled generative chatbot are always good, truthful, and of high quality. The fact is that, depending on the inputs provided, you could get outputs that are biased, weird, incorrect, or even what some call hallucinations.
In practice, the output sometimes does not identify the source of its content, it can favor certain sources, it can sound realistic without being accurate, it can be hard to apply to a specific context, or it can be heavily biased on some topics.
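One simple mitigation for the sourcing problem is a guardrail that checks whether generated text cites approved sources before it is released. The sketch below is purely illustrative: the `[source: ...]` tagging convention, the approved-source list, and the review policy are hypothetical assumptions, not a description of any particular product.

```python
import re

# Hypothetical allow-list of internal sources a chatbot may cite.
APPROVED_SOURCES = {"internal-wiki", "product-docs", "hr-handbook"}

def audit_citations(generated_text: str) -> dict:
    """Extract cited sources and decide whether a human should review.

    Assumes (hypothetically) that the model tags sources inline as
    "[source: name]". Uncited or unapproved claims are flagged.
    """
    cited = re.findall(r"\[source:\s*([\w-]+)\]", generated_text)
    unapproved = [s for s in cited if s not in APPROVED_SOURCES]
    return {
        "cited": cited,
        "unapproved": unapproved,
        # No citations at all is itself a red flag for factual claims.
        "needs_human_review": not cited or bool(unapproved),
    }
```

A response such as “Our PTO policy is 20 days [source: hr-handbook]” would pass, while an unsourced factual claim would be routed to a human reviewer – the kind of quality check the enterprise cases below depend on.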
So the question now is: is generative AI ready for enterprise-wide applications? This is an evolving matter, and the answer will depend on the complexity of the task and on how the technology is implemented. Clearly, the answer won’t be the same for automating a basic manual process, such as writing internal company content, as it is for critical business decision-making.
I recently attended a CIO conference sponsored by Gartner where the CEO of Moveworks outlined specific “tiers” for implementing generative AI at the enterprise level. As part of his recommendations, he introduced the concept of an “Enterprise co-pilot”: a fluid conversational interface that connects your employees with every business system, built on hundreds of machine learning models, and fine-tuned to your enterprise data. It is expected that eventually, all apps will have AI co-pilots.
Here are the tiers of complexity for enterprise-wide deployments, with some examples (a large language model, or LLM, is a model trained on vast amounts of text that can both consume and produce text):
Tier 1 – Basic LLM integration: Copywriting for website, sales call sentiment analysis.
Tier 2 – Customized-LLM implementation: Legal department document summarization, financial department FAQ, ticket transition for IT agents.
Tier 3 – Advanced LLM pipelines: Automated content moderation, medical literature analysis, multilingual IT and HR support, account team follow-up.
Tier 4 – Enterprise-wide LLM adoption: Compliance and security monitoring, decision-making, enterprise-wide support.
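The tier taxonomy above can be thought of as a lookup from task to complexity tier, with the level of human oversight rising alongside the tier. The sketch below encodes it that way; the task names come from the examples above, but the review policies and the conservative default for unknown tasks are my own illustrative assumptions, not Moveworks’ actual framework.

```python
# Tier definitions follow the four tiers above; review policies are hypothetical.
TIER_POLICIES = {
    1: {"name": "Basic LLM integration", "human_review": "spot-check"},
    2: {"name": "Customized-LLM implementation", "human_review": "required"},
    3: {"name": "Advanced LLM pipelines", "human_review": "required"},
    4: {"name": "Enterprise-wide LLM adoption", "human_review": "mandatory sign-off"},
}

# Example tasks drawn from the tier descriptions above.
TASK_TIERS = {
    "website copywriting": 1,
    "sales call sentiment analysis": 1,
    "legal document summarization": 2,
    "financial FAQ": 2,
    "automated content moderation": 3,
    "multilingual IT support": 3,
    "compliance monitoring": 4,
    "decision-making": 4,
}

def deployment_policy(task: str) -> dict:
    """Return the tier for a task and the human review it warrants."""
    # Unknown tasks default to the most conservative tier (an assumption).
    tier = TASK_TIERS.get(task, 4)
    return {"task": task, "tier": tier, **TIER_POLICIES[tier]}
```

For example, `deployment_policy("website copywriting")` lands in Tier 1 with only a spot-check, while `deployment_policy("decision-making")` requires mandatory sign-off – which is exactly the scope-and-criticality question raised next.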
Therefore, the scope of the problem, the complexity and the criticality are all aspects to be considered. Would you send a sensitive legal memo to a third party without human intervention for quality checking? Same question regarding important customer or supplier communications. Also, would you execute relevant financial transactions without a capable analyst involved?
So basic uses (Tier 1) will drive benefits such as automating the manual process of writing content, reducing the effort of responding to emails, and improving the response to specific technical queries; but as complexity increases, higher tiers will enable summarizing complex information into a coherent narrative, and simplifying the process of creating content in a particular style, among other uses.
While there is a debate on whether generative AI, and AI in general, will ultimately benefit humankind, I am one of those who think the net effect will be positive. I think generative AI can produce hyper-productive (better and faster) professionals, such as software developers, financial analysts, professional writers, doctors, and industrial designers, and I think this would be fantastic for everyone.
Here are the different views regarding the potential implications:
The good:
Going forward, this technology could help write code, design new drugs, develop products, improve fraud detection, redesign business processes, design and interpret contracts, personalize marketing campaigns, and transform supply chains, among many other applications. It is all about increased efficiency: getting things done faster, and refocusing resources and energy on the most strategic activities.
The bad:
There is a risk that this technology could cause unintended or intended consequences, such as providing inaccurate and misleading information, producing harmful content, increasing intellectual-property plagiarism, spreading fake news, diminishing trust in information, and enabling more complex cyberattacks, among many other potential implications. Not to mention the possibility of generating content with inappropriate ethical or cultural biases. Some suggest that the appropriate use of generative AI is a matter of national security. And, of course, some see generative AI as having the potential to cause huge job losses.
The ultimate “risk” is to let any “regular” human being produce content without a full understanding of the expected consequences.
As mentioned before, I remain optimistic. We already survived an Industrial Revolution, and I think the potential ethical and cultural implications will not worsen from where we are today. Finally, I don’t think human intervention will ever go away, but it will certainly evolve.
Enterprise-wide adoption of generative AI is just a matter of time — and the clock is ticking.
By Alexis Langagne | Senior Vice President and Board Advisor
Tue, 09/05/2023 - 09:00