
Is Artificial Intelligence a Possible Enemy of My Company?

By Angel Israel Gutierrez - A3Sec
CTO


Wed, 03/15/2023 - 15:00


The applications of artificial intelligence (AI) are numerous, ranging from creating images to writing entire texts and even producing videos, all from basic instructions and with highly satisfactory results. The question remains, however: how safe is it for users? While these tools can create useful texts, could they also produce something harmful to an individual, company, or institution?

We are witnessing a boom in the use of AI. We saw how people jumped on the trend of creating their own images through platforms where they uploaded a series of personal photographs and personalized images were created in different environments and graphic styles. The impact was such that artists and even politicians created their own images using AI.

Now we are experiencing the use of ChatGPT, a chat system based on the GPT-3 language model developed by OpenAI. The model has more than 175 billion parameters and is trained on large amounts of text to perform language-related tasks, from translation to text generation.

These AI solutions could be used in conjunction with, or in place of, search services like Google or Bing, with the capability to understand questions through advanced language processing and deliver answers based on an analysis of many sources of information. From a cybersecurity perspective, however, every service is a potential entry point into an organization. This risk has risen significantly in recent years, as cybercriminals use third parties to reach their targets.

When ChatGPT itself was asked directly, this was the response:

How can cybercriminals misuse ChatGPT?

Cybercriminals can misuse ChatGPT in several ways, some of which include:

Malicious content generation: ChatGPT is capable of generating text autonomously, which means that cybercriminals can use it to generate malicious content, such as phishing emails or deceptive messages on social networks, that closely resemble messages written by humans. These messages can be used to trick people and steal personal or financial information.

Social engineering attacks: Cybercriminals can also use ChatGPT to carry out social engineering attacks. They can use the model to create fake social media profiles and make them look very realistic. They can then use these profiles to trick people into sharing personal information or to persuade them to click on malicious links.

Creating malicious bots: ChatGPT can also be used to create malicious chatbots that can trick people into sharing personal or financial information. These bots can be used to spread spam or to carry out phishing attacks.

To prevent these types of threats, it's important to use up-to-date security software, avoid clicking on suspicious links or downloading attachments from unknown sources and keep an eye out for messages that seem suspicious or unrealistic.
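As a concrete illustration of what "keeping an eye out for suspicious messages" can look like in practice, here is a minimal sketch of a heuristic link checker. The rules, domain lists, and the `looks_suspicious` helper are illustrative assumptions, not a substitute for real, up-to-date security software:

```python
import re
from urllib.parse import urlparse

# Illustrative heuristics only; real phishing detection is far more involved.
SUSPICIOUS_TLDS = {"zip", "xyz", "top"}     # assumption: examples of often-abused TLDs
URL_SHORTENERS = {"bit.ly", "tinyurl.com"}  # assumption: a tiny sample of shorteners

def looks_suspicious(url: str) -> bool:
    """Flag URLs with traits often seen in phishing links."""
    parsed = urlparse(url)
    host = parsed.hostname or ""
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):
        return True  # raw IP address instead of a domain name
    if host in URL_SHORTENERS:
        return True  # shorteners hide the real destination
    if host.split(".")[-1] in SUSPICIOUS_TLDS:
        return True
    if "@" in parsed.netloc:
        return True  # "user@host" trick used to disguise the real host
    return False

print(looks_suspicious("http://192.168.0.1/login"))     # True
print(looks_suspicious("https://bit.ly/abc123"))        # True
print(looks_suspicious("https://www.example.com/docs")) # False
```

A filter like this catches only the crudest lures; the point is that "be suspicious of links" can be partly automated, even though human judgment remains the last line of defense.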

The use of AI is a great opportunity to improve our results in a variety of professional fields, but it could also be used to obtain unethical or illegal benefits. Its improper use could result in significant risks for an organization.

These technologies are like many content-creation platforms that involve human interaction, and sharing information with them must be done with the same care as with any other application. Ultimately, the organizations that control these applications will be able to view, analyze, and interpret the information that is shared with them. It is therefore very important to be careful with the information exposed to these AI applications.

There will always be opportunities for efficiency in the implementation of new technologies, but attacks occur when those technologies are turned to other purposes or when their design is exploited for malicious or illegal ends. The opportunity for an attack therefore always exists, just as the technology can always serve as another useful tool in the lives of people and organizations.

Given the use of AI-based tools, here are some important factors to keep in mind. 

Most AI solutions are in a trial period. Both developers and the public are interacting with them and understanding their risks. For now, the idea is to use them to their fullest potential and understand the benefits this technology can offer.

Of course, many will use AI for their own gain, but ethics in the use of technology must take priority. One key recommendation is not to share sensitive or confidential data, such as pieces of source code.
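One practical way to follow that recommendation is to redact obvious secrets before any text leaves the organization. This is a minimal sketch; the patterns and the `redact` helper are hypothetical examples and would need to be tailored to the secrets your own company actually handles:

```python
import re

# Hypothetical example patterns; adapt these to your organization's real secrets.
SECRET_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),          # email addresses
    (re.compile(r"\b(?:AKIA|ASIA)[A-Z0-9]{16}\b"), "[AWS_KEY]"),  # AWS access key IDs
    (re.compile(r"(?i)(password|api[_-]?key)\s*[:=]\s*\S+"), r"\1=[REDACTED]"),
]

def redact(text: str) -> str:
    """Replace recognizable secrets with placeholders before sharing text externally."""
    for pattern, replacement in SECRET_PATTERNS:
        text = pattern.sub(replacement, text)
    return text

prompt = "Debug this: password=hunter2, contact admin@example.com"
print(redact(prompt))
```

A simple pre-filter like this does not make sharing safe, but it reduces the chance that credentials or personal data end up stored and analyzed on a platform outside your control.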

Today, many AI platforms are designed for recreational or educational purposes, and we must take that into account before uploading, sharing, or handling information within these platforms since there are issues of privacy, copyright, and management that have not yet been defined. 

We must also remember that security means keeping our digital processes reliable, available, and accessible when, and by whom, they are needed. AI is not infallible: it gives wrong answers and makes wrong assumptions. This is itself a security risk, since we could be relying on a system that cannot yet guarantee accurate information or grasp context the way humans do.

Platforms based on artificial intelligence, and their uses, will continue to develop. Much of what is born on the internet is not fully regulated, and such regulations will most likely take time to be conceived and implemented. Knowing what is allowed or prohibited will fall to developers and users. At the moment, personal and business cybersecurity are facing a new challenge. Have you and your company thought about this?

Photo by: Angel Israel Gutierrez
