The growing popularity of ChatGPT as a productivity tool has raised concerns among experts, as it could lead to massive leaks of proprietary information when prompted with the right queries, posing a threat to workplaces.
“ChatGPT is one of the most revolutionary bots in the world because of its ability to generate human-like responses to natural language inputs. Unlike traditional chatbots, which rely on predefined responses to specific inputs, ChatGPT is a generative model that can produce responses to a wide range of inputs, making it much more flexible and adaptable in the way in which it processes natural language inputs,” says Brenda Zetina, Territory Director Mexico, Datadog.
According to a UBS study reported by Reuters, the OpenAI chatbot reached 100 million monthly active users in January, just two months after its launch, making ChatGPT the fastest-growing consumer application on record.
However, as the software continues to gain popularity, concerns regarding data leaks have emerged. According to Dark Reading, a recent study found that over 4% of employees have inadvertently shared sensitive information through the chatbot. Furthermore, because the model is trained on vast amounts of online data, employees may unknowingly access and use information that is trademarked, copyrighted, or the intellectual property of others, potentially exposing employers to legal liabilities.
“As the adoption of ChatGPT and similar AI-based services as productivity tools increases among employees, so does the potential risk of data breaches,” says Howard Ting, CEO, Cyberhaven.
The software's lack of robust data security measures has raised concerns that previously shared information could be retrieved with the right prompts. This is especially worrisome because leaked data often includes sensitive client information, proprietary business strategies, and even valuable source code. A notable example is Samsung, whose employees inadvertently leaked sensitive data on three separate occasions within a span of just 20 days.