
The Exceptional Value of Data

By Mario Gamboa, Founder & CEO, Intelimétrica - Fri, 04/23/2021 - 09:04


Data is to the 21st century what oil was to the 20th century: an engine of growth and change. Torrents of data have driven the building of new information infrastructure, new businesses, new monopolies, new policies and, fundamentally, new economies.

Digital information is different from any previous resource: it’s extracted, refined, priced, bought and sold in different ways. Data changes the rules of markets and demands new approaches from regulators.

IDC, a market research firm, predicts that the “digital universe” (the data created and copied each year) will reach 180 zettabytes (180 followed by 21 zeros) by 2025. It would take more than 450 million years to transmit all this information over a broadband internet connection.
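As a rough sanity check of that figure, the arithmetic can be sketched in a few lines. The sustained broadband speed of about 100 Mbps used here is an illustrative assumption, not a number from IDC:

```python
# Back-of-the-envelope check of the "450 million years" figure.
# Assumption (not from IDC): a sustained broadband speed of ~100 Mbps.
ZETTABYTE = 10**21                        # bytes
data_bits = 180 * ZETTABYTE * 8           # 180 ZB expressed in bits
speed_bps = 100e6                         # assumed 100 Mbps link
seconds = data_bits / speed_bps
years = seconds / (365.25 * 24 * 3600)
print(f"{years / 1e6:.0f} million years") # roughly 456 million years
```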

To assimilate it all, companies are rapidly building data refineries. In 2016, Amazon, Alphabet and Microsoft together racked up nearly $32 billion in capital expenditures and hardware leases to mine data, 22 percent more than the previous year, according to The Wall Street Journal.

Data Quality

The quality of its data determines whether a company can, for example, predict the product a customer will buy next week, understand what kind of customer it is dealing with, and gain deep insight into all of its clients rather than knowing nothing about them. None of this is a switch you simply flip. Data must be structured, worked and deeply understood.

The approach to data in any organization is much like the way a vaccine is produced: in a scientific laboratory, knowledge is generated incrementally, building on previous developments, and major innovations happen when a series of findings, which may or may not seem related, are finally connected.

Today, data is no longer just bits of digital information, such as databases of names and other well-defined personal details like age, gender or income. The new economy analyzes rapid streams of often unstructured, real-time data: the photos and videos users post on social media, the vast amount of information any driver produces on the way to work, or the flood of readings from hundreds of sensors installed everywhere.

Bringing Silos Together

That was the case for a bank in Mexico with which Intelimétrica has collaborated. One of its main activities is originating loans. The bank suspected that certain stages of the origination process consumed more time than they should and therefore created a significant bottleneck. These delays led to customer dissatisfaction: the lack of a quick response caused the bank to lose 20 percent of the potential clients who initially applied for a loan.

Like all banks, this institution produces a great deal of information, yet there was little clarity about the time consumed by each stage of the credit production line. That is not surprising, since many of these stages are supervised and executed by different areas of the institution.

The bank had the initiative, and the openness, to centralize information that already existed but was neither available to all areas nor accessible in real time. To do it right, it turned to Intelimétrica. We started working with its data, leveraging machine learning and artificial intelligence, to help it transform digitally.

Our team helped the bank build a single repository of information, connect the different silos in the organization, understand which stages generated the bottleneck and even determine what reasonable benchmarks look like. With no additional marketing or other efforts, the financial institution adjusted its processes, cleared the bottlenecks and got back on track to win back the 20 percent of customers it had been losing.
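To give a flavor of the kind of analysis involved, here is a minimal sketch in Python of how dwell times per stage can be computed from a timestamped event log to flag a bottleneck. The stage names, timestamps and figures are purely illustrative assumptions, not the bank's actual process or data:

```python
from collections import defaultdict
from datetime import datetime
from statistics import mean

# Illustrative event log: (application_id, stage, timestamp of entering the stage).
# Stage names and timestamps are hypothetical, not the bank's actual records.
events = [
    ("A1", "application",   "2021-03-01 09:00"),
    ("A1", "credit_review", "2021-03-01 11:30"),
    ("A1", "approval",      "2021-03-04 16:00"),
    ("A1", "disbursement",  "2021-03-05 10:00"),
    ("A2", "application",   "2021-03-02 10:15"),
    ("A2", "credit_review", "2021-03-02 12:00"),
    ("A2", "approval",      "2021-03-08 09:30"),
    ("A2", "disbursement",  "2021-03-08 15:45"),
]

# Group the events by application and sort them in time.
by_app = defaultdict(list)
for app_id, stage, ts in events:
    by_app[app_id].append((datetime.strptime(ts, "%Y-%m-%d %H:%M"), stage))

# Time spent in each stage = gap until the next stage begins.
durations = defaultdict(list)
for app_events in by_app.values():
    app_events.sort()
    for (start, stage), (end, _) in zip(app_events, app_events[1:]):
        durations[stage].append((end - start).total_seconds() / 3600)

# The stage with the longest average dwell time is the bottleneck candidate.
for stage, hours in sorted(durations.items(), key=lambda kv: -mean(kv[1])):
    print(f"{stage:>13}: {mean(hours):6.1f} h on average")
```

In practice, the same idea runs over the centralized, real-time repository rather than a hard-coded list, and the benchmarks come from comparing each stage's dwell time against historical or industry norms.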

Are You Open to Failure?

Large organizations that aspire to innovate and transform their business through data analysis, artificial intelligence or machine learning commonly face a learning curve: the processes involved are not traditional ones, and their teams are not accustomed to them because they do not usually exploit their own information.

In other words, many companies accumulate large amounts of data on sales, human resources, business units and more, but they have never structured that information or enabled their own units to access it. Companies often struggle to exploit their own information, primarily because they keep it in silos. As a result, any attempt to mine that data, whether to understand customers better, move more nimbly toward new consumer trends or gauge what the next successful product will be, becomes a challenge, and the company continues to operate blindly.

In this context, the company must avoid the trap of letting decisions rest with people who, although they may be experts with strong opinions, do not necessarily ground those opinions in data, or who, even if they do, may have only a partial view of the situation.

It is important to dare to experiment, to go beyond the obvious paths and, most importantly, to be open to learning from failure.
