Getting value from your data shouldn’t be this hard


The potential impact of the ongoing worldwide data explosion continues to stir the imagination. A 2018 report estimated that every person generates, on average, 1.7 MB of data per second, every second of the day, and annual data creation is expected to more than double by 2025. A report from the McKinsey Global Institute estimates that skillful use of open data could unlock more than $3 trillion in economic activity, enabling applications as diverse as self-driving cars, personalized health care, and traceable food supply chains.

But pouring all this data into systems has also created confusion about how to find it, use it, manage it, and share it legally, safely, and efficiently. Where did a specific dataset come from? Who owns it? Who is allowed to see which parts? Where does it live? Can it be shared? Can it be sold? Can its use be tracked?

As data applications grow and become more widespread, producers, consumers, owners, and stewards of data are finding that they have no playbook to follow. Consumers want to connect with data they trust so they can make the best possible decisions. Producers need tools to get their data safely to those who need it. But technology platforms have fallen short, and there is no common source of truth to connect the two sides.

How do we find the data? When do we move it?

In a perfect world, data would flow freely, like a utility accessible to all. It could be packaged and sold like a raw material. It would be easy to view, without complication, by anyone with permission to see it. Its origins and movement could be traced, removing any concerns about misuse somewhere along the line.

Of course, the world does not work like this today. The massive data explosion has created a long list of issues and opportunities that make sharing chunks of information difficult.

With data being created almost everywhere, inside and outside an organization, the first challenge is identifying what is being collected and how to organize it so that it can be found.

From there, the obstacles pile up:

  • A lack of transparency and sovereignty over stored and processed data, and the infrastructure it runs on, raises issues of trust.
  • Moving data from multiple technology stacks to centralized locations is expensive and inefficient.
  • The absence of open metadata standards and broadly accessible application programming interfaces (APIs) can make data hard to access and consume (a minimal metadata sketch follows this list).
  • Sector-specific data sources can make it difficult for people outside that sector to benefit from new data.
  • With many stakeholders and no governance model, hard-to-access data services are difficult to share.
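
To make the metadata point concrete, here is a minimal sketch of the kind of record an open metadata standard might capture for a single dataset, enough to answer the earlier questions about origin, ownership, and permission. The field names and values are hypothetical illustrations, not taken from any particular standard or from HPE's platform. The example is in Python:

from dataclasses import dataclass, field
from typing import List

# A minimal, hypothetical dataset metadata record; not a real standard.
@dataclass
class DatasetRecord:
    name: str                     # human-readable dataset name
    owner: str                    # who is accountable for the data
    origin: str                   # where the dataset came from
    location: str                 # where it physically lives
    shareable: bool               # may it be shared outside the organization?
    allowed_readers: List[str] = field(default_factory=list)  # who may see it
    lineage: List[str] = field(default_factory=list)          # processing history

record = DatasetRecord(
    name="vehicle-telemetry-2021",
    owner="data-platform-team@example.com",
    origin="onboard-sensor-ingest",
    location="eu-central object store",
    shareable=True,
    allowed_readers=["analytics", "partner-oem"],
    lineage=["raw-ingest", "anonymized", "aggregated-daily"],
)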

Europe took the lead

Despite the issues, data-sharing projects are moving forward. A European Union-backed nonprofit group has created an interoperable data exchange called Gaia-X, in which businesses can share data under the protection of Europe’s strict data-privacy laws. The exchange is envisioned as a vehicle for sharing data across industries and as a repository of information about data services related to artificial intelligence (AI), analytics, and the internet of things.

Hewlett Packard Enterprise recently announced a solution framework to support the participation of companies, service providers, and public organizations in Gaia-X. The dataspaces platform, now in development and based on open standards and cloud-native architecture, democratizes access to data, data analytics, and AI by making them more accessible to domain experts and everyday users. It provides a place where experts in a domain can easily identify trusted datasets and securely perform analytics on operational data, without always requiring expensive movement of data to centralized locations.

By using this framework to integrate complex data sources across IT landscapes, businesses can deliver data transparency at scale, so everyone, data scientist or not, knows what data they have, how to access it, and how to use it in real time.

Data-sharing initiatives are also near the top of business agendas. One important priority is gaining visibility into the data used to train internal AI and machine learning models. AI and machine learning are already being used by most businesses and industries to drive continuous improvement in everything from product development to recruiting to manufacturing. And we’re just getting started. IDC projects that the overall AI market will grow from $328 billion in 2021 to $554 billion in 2025.

To unlock the true potential of AI, governments and businesses need to better understand the collective lineage of all the data feeding these models. How do AI models make their decisions? Are they biased? Are they reliable? Could untrusted parties gain access to, or modify, the data a business uses to train its models? Connecting data producers to data consumers more transparently and efficiently can help answer some of these questions.
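
One small, concrete piece of that trust problem, detecting whether training data has been modified, can be addressed with content fingerprints. Below is a minimal Python sketch, assuming a hypothetical file name and a digest recorded when the dataset was published; it is an illustration of the idea, not HPE’s implementation:

import hashlib

# Compute a SHA-256 fingerprint of a training file so it can be compared
# against the fingerprint recorded when the dataset was published.
# A mismatch means the data changed somewhere along the line.
def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical placeholders: the file name and the published digest.
recorded_digest = "0000000000000000000000000000000000000000000000000000000000000000"
if sha256_of("training_data.csv") != recorded_digest:
    raise RuntimeError("training data does not match its recorded fingerprint")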

Data-sharing maturity

Businesses can’t figure out how to unlock all their data overnight. But they can prepare to take advantage of technologies and management concepts that foster a data-sharing mentality. That way, they develop the maturity to consume or share data strategically and effectively rather than on an ad hoc basis.

Data producers can prepare for broader data sharing by taking a series of steps. First, they need to understand where their data is and how it is collected. Then they need to make sure the people consuming the data can access the right datasets at the right times, as the sketch below illustrates. That is the beginning.
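
As a toy illustration of that access step, a catalog can map each dataset to the groups allowed to read it. The dataset names and groups below are hypothetical, and a real deployment would rely on an identity and policy system rather than an in-memory dictionary:

# Hypothetical catalog mapping dataset names to groups allowed to read them.
CATALOG = {
    "vehicle-telemetry-2021": {"analytics", "partner-oem"},
    "hr-salaries": {"hr"},
}

def can_read(user_groups: set, dataset: str) -> bool:
    """Return True if any of the user's groups may read the dataset."""
    return bool(user_groups & CATALOG.get(dataset, set()))

assert can_read({"analytics"}, "vehicle-telemetry-2021")
assert not can_read({"analytics"}, "hr-salaries")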

Then comes the harder part. If a data producer has consumers, either inside or outside the organization, it needs to connect them to the data. That is both an organizational and a technological challenge. Many organizations want to govern how data is shared with other organizations. Data democratization, making data at least discoverable across an organization, is a question of organizational maturity. How do they govern it?

Companies in the auto industry, for example, actively share data with vendors, partners, and subcontractors. It takes a lot of parts, and a lot of coordination, to assemble a car. Partners readily share information on everything from engines to wheels to web-enabled repair channels. An automotive dataspace can serve as many as 10,000 vendors. Other industries, however, can be more insular. Some large companies may not want to share sensitive information even among their own business units.

Creating a data-sharing mentality

Companies on either side of the data producer-consumer marketplace can build their data-sharing mentality by asking themselves these strategic questions:

  • If the business builds AI and machine learning solutions, where do its teams get their data? How do they connect to that data? And how do they trace its history to ensure its reliability and accuracy?
  • If the data has value to others, what monetization path is the team taking today to capture that value, and how can it be governed?
  • If the company is already exchanging or monetizing data, can it enable a much larger set of services across multiple platforms, both on-premises and in the cloud?
  • For organizations that need to share data with vendors, how are those vendors coordinating around the same datasets and updates today?
  • Do producers want to copy their data out, or require consumers to bring their models to the data? Datasets can be so large that replicating them is impractical. Should a company host software developers on the platform where its data lives and move only models and outputs in and out?
  • How can employees in a data-intensive line of business influence the long-term data-management practices of their organization?

The takeaway

The data revolution is creating business opportunities, along with a great deal of confusion about how to find, collect, manage, and draw insights from data strategically. Data producers and data consumers are becoming ever more interconnected. HPE has created a platform that supports both on-premises and public-cloud environments, using open source as the foundation and solutions such as the HPE Ezmeral Software Platform to provide the common ground both sides need to make the data revolution work for them.

Read the original article at Enterprise.nxt.

This content was produced by Hewlett Packard Enterprise. It was not written by the editorial staff of the MIT Technology Review.


