Businesses have embraced AI-powered tools with optimism, aiming to boost employee productivity and enhance overall operational efficiency. Nonetheless, enterprise executives are growing increasingly concerned about the potential risks associated with the integration process. In particular, corporate leaders are concerned about the potential risks that generative AI could have on critical business elements including intellectual property, data privacy and cybersecurity, according to Gartner's Quarterly Emerging Risks Report.
Based on a survey of 249 enterprise risk executives, the report found that Generative AI emerged as the second most frequently cited risk during the second quarter of 2023. "This observation reflects both the rapid surge in public awareness and usage of generative AI tools, as well as the extensive range of potential use cases and, consequently, the potential risks that these tools bring," says Ran Xu, Director of Research, Gartner's Risk & Audit Practice.
Gartner’s report revealed that 66% of the surveyed executives stated that the widespread availability of AI technology, such as OpenAI's ChatGPT and Google's Bard, stand out as a paramount concern and emerging business risk. Conversely, 67% of these executives voiced more substantial apprehensions regarding unintended third-party disclosures. Followingly, 63% of them singled out financial planning as their most pressing concern, the significance of factors like cloud concentration (62%) and tensions between China and the West (56%) were the most frequently cited risks during the second quarter of 2023.
Gartner’s analysts warn that the misuse of AI-powered tools can have dire consequences to a company’s private intellectual property, data privacy and overall cybersecurity. For instance, employees can input sensitive and or confidential business information into a generative AI, which might inadvertently be exposed to other users through generated answers by the tool. Moreover, “utilizing the results of these tools could inadvertently infringe upon the intellectual property rights of others who have employed them," added Xu, potentially giving rise to legal and financial liabilities for businesses.
On the other hand, generative AI tools could inadvertently share user information with third parties without prior notification, potentially leading to violations of international privacy regulations. If sensitive or confidential data is used as input, there is a risk that the generated content might inadvertently leak sensitive information, leading to data privacy breaches and possible ransomware extortions.
Additionally, generative AI tools have vulnerabilities that hackers could exploit to gain unauthorized access, including the injection of malicious code. Recently, cybercriminals have used a new hacking technique known as “prompt injections.” This technique involves crafting a meticulously worded prompt to manipulate AI models into generating content — such as phishing emails—that serves their malicious purposes.
To mitigate the potential business risks associated with generative AI, Mexican deputy Ignacio Loyola introduced a proposal aimed at establishing a legal framework for the use and development of AI within the country. The proposal is expected to involve the creation of a new bureaucratic entity that will enable researchers, analysts and technology experts to develop official standards regarding ethical development of the AI industry in Mexico.
Despite its relatively brief existence, the Law for the Ethical Regulation of Artificial Intelligence and Robotics proposed by Loyola underscores Mexico's steadfast commitment to upholding data privacy and cybersecurity standards for businesses operating within the country. Furthermore, this legislative effort exemplifies the proactive stance that Mexico is adopting to navigate the intricacies of AI integration in the realm of business operations.