
Digitalization, Automation, and AI: How Did We Get Here? (Part 2)

By Jaime Castro Palma - BPF
General Manager


Tue, 08/12/2025 - 07:00


Do we need artificial intelligence? Will it solve all our problems? Can we trust a computer and its algorithms programmed by humans to make highly critical decisions? What are the consequences of increasingly relying on these technologies? If the use of these technologies has serious undesirable consequences, who will be legally responsible? Who really controls artificial intelligence and under what ethical principles?

These and other questions generate fears, worries, and doubts among those who still do not know which path to take. Knowledge is power, and the scale of the knowledge managed and stored by these new technologies is still unimaginable. The truth is that the survivors will not be those who implement these technologies fastest, but those who implement them best.

The essential problem with these risks, however, does not center on the imperfection of computer systems and tools such as AI, or on some sudden malice driving them to wipe out humanity. The problem lies in something much simpler: the imperfections of us human beings ourselves, which we could transfer to our creation.

We have developed computer systems, digitized information, and even designed artificial thinking emulations from the belief that these would be the solution to our problems, without considering that they are only tools to enhance our capabilities.

Bruce Schneier, a world-renowned American cryptographer and computer security expert, once said: “If you think technology can solve your security problems, then you don’t understand the problems and you don’t understand the technology.”

Technology is just a tool, an ally that must be deployed in an appropriate environment to give the best results. The migration from Industry 3.0 to Industry 4.0 is not just a matter of a management decision or budget approval; it is major surgery on our organization, in which we must change our business mindset and culture, review the ethical foundations that sustain it, redesign and standardize our processes, and create new paradigms.

As we've mentioned before, the fuel for these technologies is information, and that information rests on well-designed processes. When processes are designed poorly or suffer from imperfections, such as inefficiencies or a lack of controls and indicators, it is difficult to standardize them, let alone automate them.

Until the Second Industrial Revolution, processes and their production volumes were relatively simple and tolerated their inherent failures without major consequences; with the arrival of the computer age, however, processes must be better designed and standardized, with smaller margins of variability and fewer possibilities of failure. Trying to standardize an inefficient process will only lead to the repetitive standardization of inefficiency.

“The first rule of any technology used in a business is that automation applied to an efficient operation will magnify the efficiency. The second is that automation applied to an inefficient operation will magnify the inefficiency,” said Bill Gates.

And if we also want to implement some level of AI so that a process makes decisions, corrects errors, maintains itself, and continuously improves, we must map, know, and ensure the quality of the information that will feed it. Good decisions and inferences depend to a large extent on the quality of the information available; there is a risk that AI will replicate our own biases if we do not clean them out from the beginning.

On the other hand, the lack of clear rules, regulations, and laws opens the door to the malicious use of information. One of the great concerns today is that, at a certain point, we will depend on and trust artificial intelligence so much that we accept as valid results that are erroneous, or maliciously or incompetently configured, making us victims of political manipulation, identity theft, fake news, wars, financial disasters, or wrong decision-making.

As long as governments and regulatory agencies continue to treat these regulatory aspects as a second-tier priority, the risk of these technologies slipping out of our control increases. We care more about creating and adopting new technologies than about thinking intelligently about what we need to keep them from becoming a risk. The regulatory framework must precede technology, so that with each technological advance the rules of operation are established from the beginning, limiting the risks and impacts of its uncontrolled use.

“The real problem is not whether machines think but whether men do,” as B.F. Skinner observed.

Another major concern about the use of technology is that it eliminates jobs, as activities currently carried out by humans are taken over by machines. The problem, in this case, is much more complex.

It is not only about the inherent capabilities of machines to execute jobs faster, more accurately, and more efficiently, but also about the absence of a plan for the "relocation" of all the human capabilities and strengths that automation could displace. Training and the creation of new skills are required to "tame" these technologies, so that the human being remains the final factor who makes or approves the critical decisions proposed by the machines, taking advantage of their speed and efficiency. This would allow greater productivity and a broader distribution of wealth, freeing people's time for more intellectual pursuits, such as science and art.

There are many fears regarding the rapid advance of these technologies, for and against, and both sides have strong arguments that deserve concern and attention.

Modern pop culture has provided numerous distressing, even terrifying, stories about these technologies, which have permeated the popular imagination and created an underlying resistance to their use and adoption. In cinema alone there are many examples where robots, computers, and artificial intelligence become threats to humanity: from Metropolis (1927) to Ex Machina (2014) and M3gan (2022), by way of classics like Terminator (1984) and The Matrix (1999), among many others.

In this sense, one of the field's greatest references, Andrew Ng, an artificial intelligence researcher and former director of the Stanford AI Lab, has a less catastrophic vision: “I want an AI-powered society because I see so many ways that AI can make human life better. We can make so many decisions more systematically or automate away repetitive tasks and save so much human time.”

To which Professor Oren Etzioni, founding CEO of the Allen Institute for Artificial Intelligence, adds: “A calculator is a tool for humans to do math more quickly and accurately than they could ever do by hand; similarly, AI computers are tools for us to perform tasks too difficult or expensive for us to do on our own, such as analyzing large data sets or keeping up to date on medical research.”

If we want to be successful in the use of these new technologies, whether in the personal, corporate, or governmental sphere, we must, as a society and as a species, change our own conception of how we function and how we have conceived our processes and our culture until now. 

We must evolve into a higher form of reasoning, in which our knowledge management, policymaking, critical thinking, planning, order, and ethical principles rise to the level of the extraordinary technologies we have created.

We must evolve along with technology to be extraordinary beings who use technology, and not the other way around. 

 
