The pace at which organisations move to cloud services is increasing, and this trend will accelerate as we head towards 2023. As organisations evaluate software, platform and infrastructure services from cloud providers, the reality is that most don’t put all their system eggs in one cloud basket. Organisations choose the best platform for each specific business outcome.
As a result, their data is spread across multiple platforms hosted by multiple providers. The promise of a centralised platform for all their applications and data is replaced by the same situation faced in the days of on-prem systems: data spread across multiple systems, each with its own formats, security rules and business logic. The result is a fractured view of the customer.
A single view of the customer is key to delivering excellent service and minimising the risk of customer churn. Consolidating everything you know about a customer, from the products and services they search for through to their purchase history and interactions with customer service, ensures you can anticipate their needs, manage stock and supply levels more accurately, and curtail issues before they escalate.
Data warehouses and data lakes were seen as the way to build this single view of the customer. This involved a complex process of extracting data from the source systems, transforming it to fit into a centralised data repository and then loading it into the data warehouse or data lake.
This ETL (Extract, Transform and Load) process centralised data to present a single view. However, not every data source can easily be subjected to an ETL process. And if a new data source is identified, or an existing one changes, the ETL process must be reworked, often demanding significant resources and investment and taking weeks or months of effort, by which time the data is most likely vastly out of date.
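The ETL flow described above can be sketched in a few lines. This is a minimal, illustrative example, not any vendor's implementation; the sources, schemas and function names are all assumptions made up for the sketch:

```python
# Minimal ETL sketch: extract from each source, transform into one
# common schema, load into a central store. All names are illustrative.

def extract(source):
    """Pull raw records from a source system."""
    return source["records"]

def transform(rows):
    """Normalise records into the warehouse schema (lower-case keys here)."""
    return [{k.lower(): v for k, v in row.items()} for row in rows]

def load(warehouse, rows):
    """Append transformed rows into the central repository."""
    warehouse.extend(rows)

# Two hypothetical sources with differing schemas
crm = {"records": [{"ID": 1, "Name": "Alice"}]}
billing = {"records": [{"ID": 1, "Balance": 42.0}]}

warehouse = []
for src in (crm, billing):
    load(warehouse, transform(extract(src)))
```

The brittleness is visible even at this scale: any schema change in the CRM or billing source forces a change to `transform()`, and every query runs against a copy of the data rather than the live systems.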
Rather than bring the data to where the query is executed, it is now possible to invert the process completely by taking the query to all your different data sources, regardless of where they are. This relies on two distinct elements. First, business experts determine the best way to share data in a timely and secure manner. Data owners are empowered to decide how to share data and to set business rules, such as access control, on that data.
Then, a virtual data warehouse uses the rules set by those data owners, together with metadata, to identify the data sources and determine how data from different sources can be joined. This decentralised data platform allows organisations to continue working with the analytics tools that are familiar to them. And when a new data source is needed, it can be added to the decentralised data platform in hours, not the months it can take with ETL solutions.
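The two elements above can be sketched together: owner-set access rules checked first, then a join performed at query time across sources that stay where they are. This is a hedged, toy illustration under assumed names (`crm`, `orders`, `customer_360`), not any vendor's API:

```python
# Federated ("virtual warehouse") query sketch: the query is sent to each
# source and results are joined at query time, with no central copy.
# Access rules set by data owners are checked before a source is read.

crm = {1: {"name": "Alice", "segment": "retail"}}
orders = [{"customer_id": 1, "sku": "A-100"},
          {"customer_id": 2, "sku": "B-200"}]

# Business rules set by the data owners: which roles may read each source
access_rules = {"crm": {"analyst"}, "orders": {"analyst", "support"}}

def allowed(role, source):
    return role in access_rules[source]

def customer_360(customer_id, role):
    """Join a CRM profile with order history at query time."""
    if not (allowed(role, "crm") and allowed(role, "orders")):
        raise PermissionError("role lacks access to a required source")
    profile = dict(crm[customer_id])            # sub-query to the CRM
    profile["orders"] = [o for o in orders      # sub-query to orders
                         if o["customer_id"] == customer_id]
    return profile
```

Adding a new source here means registering it and its access rule, not rebuilding a pipeline, which is the practical difference from the ETL sketch.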
As well as being operationally more elegant and offering vastly improved performance at lower cost and complexity, a decentralised data platform brings security benefits. Because data is no longer extracted and copied into a centralised repository, organisations taking this approach no longer create a centralised honeypot of data. And because data is not being copied across networks, there is less chance of it being intercepted by unauthorised parties.
As the number of data sources and technologies used by organisations continues to expand into the cloud, a consolidated platform that lets users conduct complex analysis regardless of where data resides is increasingly important. While the cloud has enabled more agile development, a decentralised data platform delivers the holy grail: a true single view of the customer, with rapid insights, faster decision-making and vastly reduced cost, time and complexity compared to traditional data management tools and platforms.
Vinay Samuel, Founder and CEO, Zetaris