
The Perks And Limitations of ChatGPT For Corporate Use

By Daniel Zenteno - GUS
Chief Technology Officer

Thu, 03/16/2023 - 09:00

Over the last few weeks, there has been worldwide hype about the release of ChatGPT, which has triggered a controversy between those who believe it represents the future of artificial intelligence and those who consider it a terrifying weapon against society.

Having tested it personally, I can certainly understand why it generates so much interest. It is a very powerful technology that can save people time and energy on simple tasks, such as drafting a recommendation letter or looking something up: instead of searching on Google, we can now simply ask ChatGPT.

As amazing and groundbreaking as it sounds, as a machine learning expert, I cannot help but ask: Is there a dark side to its use?

Before we dive into details, let's start with the basic definition of what ChatGPT is. Plain and simple, it is a conversational artificial intelligence (AI) tool created by OpenAI that we can ask practically anything. It devises a well-thought-out response to help you solve your problem, following the conversation so naturally that, at a certain point, you can't even tell the difference from a real human being.

Its creators claim that it can follow up on questions, keep a record of previously asked questions and answers, decline inappropriate requests, and even admit its mistakes! Until now, admitting mistakes was not a feature AI tools were known for, as it describes a distinctly human trait.

But how can it do that? 

I’ll break it down for you. Basically, ChatGPT uses what researchers call “generative AI models.” That means it is trained on a large amount of data, such as images or text, from which it learns statistical patterns. With this training data, the model matches the words within a question and generates a combination of words as an answer.
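To make the idea of generating words from learned patterns concrete, here is a deliberately tiny sketch in Python. It builds a bigram table (which word tends to follow which) from a toy corpus and predicts the most likely next word. This is a drastically simplified illustration of the principle, not how ChatGPT is actually implemented; real models are trained on billions of documents and use neural networks rather than simple counts.

```python
from collections import defaultdict, Counter

# Toy training corpus; a real model is trained on billions of documents,
# but the core idea of learning word patterns is the same.
corpus = (
    "the model learns patterns from text . "
    "the model generates text from patterns ."
).split()

# Count which word tends to follow each word (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def next_word(word):
    """Return the continuation most often seen in training."""
    return follows[word].most_common(1)[0][0]

print(next_word("the"))  # "model" is the most frequent continuation
```

The key point the sketch illustrates: the model only reproduces patterns present in its training data, which is exactly why its answers can sound fluent while still being wrong.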

ChatGPT also learns context. For example, if you ask a question that includes the word “bank,” it can correlate the other words used to understand whether you’re referring to a bank (the financial entity) or the bank of a river. But, and I emphasize, it only parses the input text to produce an output text, meaning it does not use external sources to validate information, only what has been fed directly to it (during the data training process and throughout the conversation). Therefore, the answers may be completely wrong.

For a simple question, such as “how to properly grow an orchid,” it’s not that big of a deal, but imagine putting the wrong answer into a wider context, one where presenting false information could be harmful to society. 

Google had a taste of this recently, when it presented Bard, its equivalent to ChatGPT. During the live demo, one of Bard's answers contained an error, which caused the value of Google shares to drop considerably during the day.

In fact, experts recommend using generative models, such as Bard and ChatGPT, only where errors are not detrimental and where a human is the last filter qualifying the information provided by the AI tool as valid. For instance, you can ask ChatGPT to draft the base of a letter for you, then review it and make the final edits before you submit the document.

Some people have wondered how ChatGPT can be implemented within their companies. To answer that, first you need to understand that this AI tool was trained on public information from sources that exist on the internet, such as tech forums that use the question-and-answer format. It is not trained to answer specific questions about your company.

For instance, if you aim to automate customer service using ChatGPT, it won’t be able to answer specific questions about your company, such as the opening hours of your store, or specifics in your return policies.

That being said, it is possible to license this AI tool and implement it for your enterprise's exclusive use. However, it will require a huge investment on your part to gather the amount of training data required for this purpose, and the services of a trained data scientist or machine learning engineer to do so. And even if you managed to properly train it, that would not eliminate the possibility of the chatbot displaying wrong information, because the goal of a generative model is not to provide accurate information, but to engage in a fluent conversation using natural language.

If your goal is to provide accurate information, then ChatGPT is not for you.

For companies, perhaps the best solutions nowadays are conversational bots, capable of answering company-specific questions with a minimum margin of error, created by AI and machine learning engineers and neurolinguistic experts. 

At Gus, we use an AI system to train bots with an average of 15 examples of possible answers per question (a task led by our team of NLP experts), which reduces the margin of error by using a clustering system instead of a classification system. Rather than giving a wrong answer, the bot simply lets you know when it doesn't understand the question, giving you the opportunity to rephrase it or offering a button with a specific call to action.
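The fallback behavior described above can be sketched in a few lines. Everything here is a hypothetical illustration, not Gus's actual system: the intents, example phrasings, answers, and word-overlap similarity are invented for the sketch (a production bot would use NLP embeddings and far more examples per intent). The essential design choice is the confidence threshold: when no match clears it, the bot declines to answer instead of guessing.

```python
# Hypothetical mini intent matcher: each intent holds example phrasings,
# and the bot answers only when the best match clears a threshold.
INTENTS = {
    "opening_hours": {
        "examples": ["what time do you open", "when are you open",
                     "store opening hours"],
        "answer": "We are open 9am-6pm, Monday to Saturday.",
    },
    "returns": {
        "examples": ["how do i return an item", "what is your return policy"],
        "answer": "You can return any item within 30 days with a receipt.",
    },
}

def similarity(a, b):
    """Word-overlap (Jaccard) similarity; real systems use embeddings."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb)

def reply(question, threshold=0.5):
    best_score, best_intent = 0.0, None
    for name, intent in INTENTS.items():
        for example in intent["examples"]:
            score = similarity(question, example)
            if score > best_score:
                best_score, best_intent = score, name
    if best_intent is None or best_score < threshold:
        # Admit failure rather than risk giving a wrong answer.
        return "Sorry, I didn't understand. Could you rephrase the question?"
    return INTENTS[best_intent]["answer"]

print(reply("when are you open"))  # matched with high confidence
print(reply("tell me a joke"))     # falls back: no intent clears the bar
```

This is the opposite trade-off from a generative model: the bot can answer far fewer questions, but within its domain the margin of error is minimal and its failures are explicit.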

Bottom line: Define what you need the technology to do and find the solution that fits your needs, understanding its limitations in order to know how to best take advantage of it for the benefit of your company.

Photo by: Daniel Zenteno
