Artificial Intelligence: Risk Factor for Terrorist Attacks?

Photo by:   Creative Commons
By Diego Valverde | Journalist & Industry Analyst - Wed, 07/24/2024 - 11:30

AI presents itself as a revolutionary tool, but also as an emerging threat to different industries should it be used maliciously. While AI has improved efficiency and safety, it has also left critical systems vulnerable to manipulation, which could lead to sophisticated terrorist attacks, experts highlight.

In the aviation industry, AI is transforming how passenger experiences and flight operations are managed. From biometric identification and automatic baggage handling to efficient flight management and inter-aircraft communication, AI is optimizing numerous aspects of the industry, reports A21. One example of this positive use is seen at Mexico City International Airport, where an AI-based flight management center analyzes data in real time, facilitating more efficient decision-making and improving the traveler experience. These technologies, according to A21, make airports less stressful and more agile, transforming the passenger experience and operational efficiency in air travel.

Despite the benefits, during the V International Aviation Security Congress, the Mexican aviation industry expressed concerns over the potential use of AI as a tool for armed conflict, arguing that while technological advances have allowed AI to improve route optimization and air traffic management, these same advances also pose a significant threat under different circumstances.

Ricardo Escobedo, Corporate Security Regional Manager, Delta Air Lines, noted that cybersecurity is an emerging field with many unknowns and that AI, being relatively new, has not yet been widely discussed in terms of its risks. Escobedo stressed the importance of forums such as the congress to address these concerns and find effective solutions.

While armed conflicts have recently been more common in Europe and the United States, criminal cells in Latin America are also exploring vulnerabilities in aviation, underscoring the need for constant vigilance and international cooperation. José Maria Peral, Regional Aviation Security and Facilitation Specialist, added that security also depends on the constant updating of standards and regulations by bodies such as the International Civil Aviation Organization (ICAO).

These concerns are not about hypothetical scenarios but are based on the ways different armed groups are already using AI in both offensive and defensive operations.

Military and Paramilitary Use of AI

The US military has integrated AI into its airstrike operations in the Middle East, using technology to identify and select targets, Bloomberg reported. In the words of Schuyler Moore, CTO, US Central Command, “machine learning algorithms have been instrumental in the reduction and accuracy of more than 85 airstrikes conducted in Iraq and Syria on Feb. 2, 2024.”

“Algorithms help to identify potential threats through computer vision, allowing US forces to conduct more accurate and effective strikes against enemy installations,” Moore told Bloomberg.

The US military's use of AI has also been applied in locating rocket launchers and surface ships in various regions, such as Yemen and the Red Sea. AI has facilitated the detection of threats related to Houthi militias in Yemen and the targeting of commercial shipping in the Red Sea, contributing to the destruction of rockets and ships in several strikes conducted in February.

Project Maven, launched in 2017, has been key in the adoption of AI in the US Department of Defense. This project focuses on the use of AI to support defense intelligence and has become a critical tool in the fight against Islamic State militants and other threats. Algorithms developed under this project are used to analyze satellite imagery and other data to identify and locate targets, being extensively tested in exercises prior to application in actual operations.

As MBN previously reported, the Israeli military has used AI in the Lavender program to mark targets in the Gaza Strip, calculate collateral damage and authorize attacks. This program, +972 Magazine noted in its analysis, has been crucial in identifying and targeting suspected members of Hamas and Palestinian Islamic Jihad (PIJ).

In addition, the Israel Defense Forces have used The Gospel, an AI system that automates and accelerates the selection of bombing targets in Gaza. The system processes large volumes of data from sources such as drone imagery and intercepted communications, allowing it to generate up to 100 targets per day, a significant increase from the previous 50 targets per year, according to The Guardian.

"AI technologies provide us with a significant advantage on the battlefield, enabling quick responses and preemptive measures against cyberattacks," said a spokesperson for Israel's Ministry of Defense.

For its part, Hamas is using AI to enhance its battlefield capabilities by integrating advanced technologies into drones and combat systems. These drones perform reconnaissance, surveillance, and electronic warfare tasks, improving the accuracy and effectiveness of its attacks by providing crucial data for target identification, the International Counterterrorism Center (ICCT) reported.

“While the use of autonomous vehicle bombs has yet to materialize, the possibility of groups such as Hamas exploring this technology represents a future threat,” reads the ICCT article on the subject. “In addition, Hamas and other non-state actors could also be using AI to enhance their cyberspace skills through advanced attack and defense techniques.”

The use of AI has had significant implications, increasing the lethality and accuracy of attacks. However, the magnitude of its impact could be even greater. Gladstone AI's Action Plan to increase the safety and security of advanced AI, a report commissioned by the US State Department, highlights two main dangers: the possibility of AI being used as a weapon to cause irreversible damage and the risk of losing control over advanced AI systems, which could lead to devastating consequences for global security.

As reported by CNN, experts warn that, like nuclear weapons, AI has the potential to destabilize global security, with the risk of an “arms race” in this technology that could trigger conflicts and major accidents.

The Gladstone AI report underscores the “clear and urgent need” for the US government to take drastic action to address these risks, suggesting the creation of a new AI agency and the imposition of regulatory restrictions. Jeremie Harris, CEO, Gladstone AI, warned that the rapid advancement of AI often occurs “at the expense of security,” which could lead to the most advanced AI systems being “stolen” and used as weapons against the United States.

Future of AI

AI's ability to produce tailored and compelling content allows armed groups to develop propaganda more efficiently, personalize messages, and spread extremist ideologies to a global audience. Recent examples include the manipulation of conflict imagery, such as in Gaza, where altered photos were used to foment violence.

“This technology facilitates the creation of false or manipulated material that can influence people's perceptions and attitudes, exacerbating disinformation online,” the ICCT notes.

In addition to propaganda, armed groups are beginning to use generative AI for "interactive recruiting." AI-powered chatbots can interact with potential recruits, offering personalized information and attracting individuals with specific vulnerabilities. This strategy, also described by the ICCT, allows groups to build more personalized and effective relationships, facilitating recruitment and radicalization through automated interactions.

Despite these risks, AI can also enhance surveillance, detect extremist content, and perform predictive analytics to identify potential threats before they occur. “The key will be to balance the use of AI in security with the preservation of fundamental freedoms, ensuring that its implementation does not compromise democratic values or individual privacy,” the center notes.

Escobedo urged ongoing education and training for security personnel to handle emergencies and threats, as well as integrating historical experience into the training of new generations to meet future challenges. Meanwhile, Peral said that legislation and regulation are crucial to implement effective security measures and adapt to new threats, stressing the importance of proper change management to maintain aviation security in an evolving technological environment.

An Amnesty International report notes that the AI arms race between nations increases the likelihood of future conflicts and reduces opportunities for dialogue and cooperation. The lack of clear and consensual regulations on the use of AI in military contexts could lead to greater uncertainty and risk in the international arena.

"It is imperative that the international community works together to establish legal and ethical frameworks on the use of artificial intelligence in armed conflict, to avoid a future where machines determine the fate of humanity," said António Guterres, Secretary General, United Nations.

"If we do not act now to regulate and control the use of AI, we could face a future where machines facilitate terrorism at unprecedented levels," concluded Agnès Callamard, Secretary General, Amnesty International in the report.
