AI’s Growing Role in Nuclear Industry Sparks Security Concerns
By Diego Valverde | Journalist & Industry Analyst
Mon, 08/11/2025 - 09:00
AI and nuclear systems are converging, following agreements such as the one between OpenAI and Los Alamos National Laboratory in the United States, which marks a turning point in the management and security of atomic arsenals. The development ushers in an era of automation in the domain most critical to global security.
“As threats to the nation become more complex and more pressing, we need new approaches and advanced technologies to preserve America’s security,” said Thom Mason, director of Los Alamos National Laboratory, in January 2025. “OpenAI will allow us to do this more successfully, while also advancing our scientific missions to solve some of the nation’s most important challenges.”
AI’s entry into the nuclear sphere responds to the need to process massive volumes of data for security and threat detection at speeds beyond human capability. In an environment of growing geopolitical and technological complexity, the automation of early warning systems is seen as a necessary evolution to maintain deterrence capabilities.
The collaboration between OpenAI and US national laboratories specifically seeks to strengthen control over nuclear materials and optimize risk prediction through advanced data analysis, Wired reports. The initiative applies AI models to simulate scenarios and anticipate incidents before they occur. Proponents of this integration say the objective is to reduce the risk of nuclear conflict and protect hazardous materials through superior technological oversight.
The New Arms Race and the Dilemma of Speed
The root of this trend lies in the strategic competition among global powers. Modern military doctrine posits that the advantage in a future conflict will depend on the speed and quality of decision-making, a field where AI offers unprecedented capabilities, reports Wired. This technological imperative drives nations to explore autonomy in defense systems under the assumption that not doing so would represent an insurmountable strategic disadvantage.
Algorithmic systems have supported defense management since the Cold War. The sophistication of modern AI, however, introduces the possibility of lethal autonomous weapons systems (LAWS) capable of selecting and attacking targets without direct human intervention. This represents a fundamental qualitative shift.
Risks and Future Projections
The implementation of AI in nuclear command and control carries a series of high-impact risks that are the subject of intense international debate. Experts and organizations such as the International Committee of the Red Cross (ICRC) and the United Nations warn that AI systems, despite their processing power, could amplify errors, harbor hidden algorithmic biases, or prove vulnerable to cyberattack, with potentially catastrophic consequences.
A central risk is the erosion of human control. Delegating critical decisions to machines, even when a human remains “on the loop” with supervisory and veto power, drastically compresses response times, potentially to the point where effective human intervention becomes infeasible.
“Humans remain the most essential element of nuclear command and control. While some nations have committed to human control over nuclear weapons, phrases like ‘meaningful human control’ or ‘appropriate levels of human judgment’ better focus attention on the desired human role,” according to the Texas National Security Review.
The international community faces the urgent need to establish a global governance framework for AI in the military sphere. Figures such as António Guterres, Secretary-General of the United Nations, have called for a ban on lethal autonomous weapons by 2026. Guterres warns that their development could trigger a destabilizing arms race and violate fundamental principles of international humanitarian law, such as distinction and proportionality.
“There is no place for lethal autonomous weapon systems in our world,” says Guterres. “Machines that have the power and discretion to take human lives without human control should be prohibited by international law.”