AI Errors, Shadow Use Expose Readiness Gaps for Employers
By Aura Moreno | Journalist & Industry Analyst
Tue, 02/10/2026 - 11:28
Employers are absorbing rising costs from AI errors as workers adopt tools faster than organizations build governance, training, and data discipline, according to recent surveys and research spanning the United States and Mexico.
“Most AI-related mistakes stem from over-trust and under-scrutiny,” says Kara Dennison, Head of Career Advising, Resume.org. “Employees treat AI outputs as finished work rather than as a starting point. AI is reliable when used as an assistant, not a decision maker.”
A January 2026 survey by Resume.org found that 70% of US managers observed at least one AI-related mistake by a direct report in the past year, with some incidents resulting in financial losses above US$50,000. The poll of 1,146 managers suggests AI risk has moved beyond IT departments into daily people management, affecting workflows, client relationships, and brand credibility.
Managers cited repeated errors rather than isolated incidents. Twelve percent said they had seen AI-related mistakes many times, while 43% reported several occurrences. Resume.org described the pattern as an “AI slop” problem, referring to unchecked or low-quality AI output entering final work products.
The most common issues involved incorrect facts and missing context. Among managers who observed errors, 58% reported staff submitting work containing factual inaccuracies generated by AI tools, and more than half said outputs failed to reflect key contextual factors. Other problems included low-quality content, weak recommendations, and communication issues. Nearly three in 10 managers reported AI-related mistakes that raised confidentiality, privacy, or compliance concerns.
While the Resume.org data focuses on the United States, similar pressures are emerging in Mexico, where AI adoption is accelerating even as organizational readiness lags. Research from Google Workspace, IDC, and Provokers shows that 67% of workers in Mexico already use noncorporate AI assistants for daily tasks, while only 35% say their employer provides official access. The practice, often described as Shadow AI, reflects a gap between employee behavior and corporate oversight.
Google’s Work:InProgress report, based on more than 3,500 interviews across Latin America including 767 in Mexico, found that employees turn to personal AI tools because they are easier to access, perceived as safer, or preferred to corporate options. Only 30% of Mexican companies reported having clear AI use policies, and just 31% encourage experimentation with approved tools.
The rise of unsupervised AI use compounds risks already identified by managers. Resume.org reports that 59% of managers spent extra time correcting AI-related errors, while 53% said direct reports had to redo work and 45% said other colleagues were pulled into remediation. Missed deadlines were reported by 25% of managers, and 28% cited damage to credibility or brand. Nearly one in five managers said AI mistakes cost their company more than US$10,000, and 5% reported losses exceeding US$50,000.
Legal and employment specialists warn that unchecked AI use can expose employers to reputational and intellectual property risks. Hannah Mahon, Partner, Eversheds Sutherland, and Rebecca Denvers, Principal Associate, Eversheds Sutherland, say inaccurate or hallucinated AI content can undermine trust among clients and colleagues and raise IP concerns if content is not original.
In Mexico, business leaders argue that the root issue is not access to AI tools but organizational maturity. Carolina Ruiz, Chief Executive Officer, Brier & Thorn, says many companies assume AI adoption will resemble earlier cloud migrations, which often delivered benefits without deep operational change. AI, she says, is a general-purpose technology comparable to electricity or the internet, requiring upgrades in processes, data governance, security, and culture.
“AI learns from everything inside a company,” Ruiz says. “If it finds structure, it amplifies it. If it finds disorder, it amplifies that too.”
Data supports that assessment. Ruiz points to a 2023 IDC Latin America report finding that 67% of Mexican companies still rely on manual or semimanual processes for core operations. EY Mexico reports that only about one-third of organizations have a formal digital transformation strategy, while GSMA data shows more than half of Mexican SMEs adopt digital tools without redesigning underlying processes.
Data governance remains a central constraint. Deloitte's Latin America data maturity research finds that fewer than 30% of Mexican companies rate their data quality as high, and KPMG Mexico reports that nearly half lack formal data ownership. Weak data structures increase the likelihood of AI-generated errors, biased outputs, and flawed recommendations, particularly in HR, legal, and client-facing roles.
Cybersecurity adds another layer of exposure. Mexico ranks among the most targeted countries in Latin America for cybercrime, and AI adoption expands the digital attack surface. Security gaps can compromise the integrity of the data AI systems rely on, turning automation into a liability rather than an efficiency tool.
Workforce expectations are also reshaping the landscape. Research from The Conference Board shows that 85% of workers globally expect AI to improve their jobs over the next two years, even as about 40% anticipate workforce reductions. Ninety-one percent said AI has already changed their tasks, and most reported productivity gains. Analysts describe a workforce that is operationally ready for AI but operating within organizations that lack unified strategies, clear accountability, and sufficient training.
In Mexico, the disconnect is visible in labor markets. Michael Page research indicates that 37% of professionals use AI tools daily and two-thirds report productivity gains, yet job descriptions rarely list AI skills explicitly. Salesforce’s Global AI Readiness Index places Mexico below the global average, with particularly low scores for workforce preparedness.
Together, the findings suggest that AI-related mistakes and shadow use are symptoms of broader readiness gaps. As AI moves from experimentation to routine use, employers face growing pressure to define governance models, invest in training, and strengthen data and security foundations. Without those measures, efficiency gains risk being offset by rework, financial losses, and erosion of trust.