Tuesday 24 October 2023

Why Data Governance is here to stay

Even more than the fairly stable Google Trends index, what proves that Data Governance issues won’t go away is the fact that “Johnny-come-lately-but-always-catches-up-in-the-end” Microsoft is seriously investing in its data governance software. After leaving the playing field to innovators like Ataccama, Alation, Alex Solutions and Collibra, Microsoft is ramping up the functionality of its data catalogue product, Purview.

 

Google Trends index for "Data Governance"

The reason for this is twofold: emerging multicloud architectures and the advent of the data mesh architecture, both driving new data ecosystems for complex data landscapes.

Without firm data governance processes and software supporting these processes, the return on information would produce negative figures.

In the next blog post, Defining a Data Mesh, I will explain what a data mesh is, and in the following articles I will suggest a few measures needed to avoid data swamps. Stay tuned!

Tuesday 21 February 2023

How will ChatGPT affect the practice of business analysis and enterprise architecture?

 

ChatGPT (Chat Generative Pre-trained Transformer) is a language-model-based chatbot developed by OpenAI that enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. Many of my colleagues are assessing the impact of Artificial Intelligence products on their practice and the jury is still out: some consider it a threat that will wipe out their business model, others see it as an opportunity to improve the productivity and effectiveness of their practice.



I have a somewhat different opinion. Large language models use gigantic amounts of data for training, but I am afraid that if you rely on Internet data, you certainly have a massive amount of data, yet of dubious and not always verifiable quality.

General Internet data is polluted with commercial content, hoaxes and ambiguous statements that require a strong cultural background to make sense of.

Data of better quality than general Internet data is almost always protected by copyright; using it without permission is therefore not exactly gentlemanlike, to say the least.

Another source of training data is the whitepapers and other information packages you get in exchange for your personal data: e-mail address, function, company,… These documents often start by stating a problem in a correct and useful way but then steer you towards the solution delivered by the vendor’s product.

The best practices in business analysis and enterprise architecture are, I am afraid, not on the Internet. They are like news articles behind a paywall. So if you ask ChatGPT a question like “Where can I find information to do business analysis for analytics and business intelligence?”, you get superficial answers that at best provide a starting point to study the topic.


A screenshot of the shallow and casual reply. It goes on with riveting advice like “Stay Informed”, “Training and Certification”, “Networking”, “Documentation”,…

And the question “What are Best Practices in business intelligence?” leads to the same level of platitudes and triteness:

  • “Align with business goals” who would have thought that?
  • “User involvement and collaboration” Really?
  • “Data Quality and Governance” Sure, but how? And when and where?

In conclusion: a professional analyst or enterprise architect has nothing to fear from ChatGPT. 


At best, it provides a somewhat more verbose and polished answer to a question, saving you the time to plough through over a billion results from Google.


Monday 5 December 2022

Data Architecture as a Consequence of Organisation Design

 

Lingua Franca was involved in the data architecture of an organisation whose name and type are of no interest to the case I am making, namely that the way an organisation functions and is structured determines the data architecture. It is a textbook example of many organisations today.

The organisation was a merger of various business units which all used their own proprietary business processes, data standards and data definitions.

The CIO had a vision of well governed, standardised processes that would create a unified organisation that operated in a predictable and transparent manner.

Harmonised End-to-End Processes Are the Basis of Transparent Decision Making

Common dimensions and common facts

Shared facts and dimensions ensure a scalable and manageable analytics architecture

The case for a Kimball approach in data warehousing was clear: if every department and every knowledge unit used the same processes, a shared-facts-and-conformed-dimensions architecture was a no-brainer.

As the diagram suggests, it takes effort to make sure everybody is on the same page about the metrics and the dimensions, but once this is established, new iterations go smoothly and build trust in the data.
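To make the idea of conformed dimensions concrete, here is a minimal Python sketch; the products, sales and returns figures are invented. Two fact tables from different business processes share one product dimension, so drill-across reporting by category is just a pair of joins.

```python
import pandas as pd

# Hypothetical conformed product dimension, shared by every data mart
dim_product = pd.DataFrame({
    "product_key": [1, 2, 3],
    "sku": ["A-100", "A-200", "B-300"],
    "category": ["Widgets", "Widgets", "Gadgets"],
})

# Two fact tables, each representing a different business process
fact_sales = pd.DataFrame({"product_key": [1, 2, 2, 3], "units_sold": [10, 4, 6, 2]})
fact_returns = pd.DataFrame({"product_key": [2, 3], "units_returned": [1, 1]})

# Because both facts reference the same dimension, drill-across reporting
# by category is just two joins and two aggregations
sales_by_cat = fact_sales.merge(dim_product, on="product_key").groupby("category")["units_sold"].sum()
returns_by_cat = fact_returns.merge(dim_product, on="product_key").groupby("category")["units_returned"].sum()
print(pd.concat([sales_by_cat, returns_by_cat], axis=1).fillna(0))
```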

For more than four years, resistance to change wore out the CIO, the data warehouse team and finally the data architect, until the CIO left the organisation. The new CIO decided not to continue the fight for harmonised processes and concluded there was less need for a data warehouse: if every business unit used its own operational reporting, it would produce rapid results at a far lower cost than a data warehouse foundation delivering the reports. A new crew was onboarded: two ETL developers, two front-end developers and a data architect.

Satisfying Clients in Their Operational Silos Creates Technical Debt

A third normal form data model for operational reporting

Cutting corners for fast delivery creates technical debt that needs to be repaid

As this diagram suggests, the client defines his particular needs, asking for a report not at SKU level because he is only interested in product sets. The sets require special handling, so they are linked to specific shippers, each with their own delivery areas. Although this schema poses no problems for the front-end developer producing a nice-looking report, consolidating the information at corporate level will take time and effort.
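For illustration, a minimal sketch of such an operational schema in SQLite; the table and column names are hypothetical, not the client’s actual model. Note that the grain is the product set, not the SKU, which is exactly why corporate consolidation later becomes painful.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE product_set (
    set_id   INTEGER PRIMARY KEY,
    set_name TEXT NOT NULL
);
CREATE TABLE shipper (
    shipper_id    INTEGER PRIMARY KEY,
    shipper_name  TEXT NOT NULL,
    delivery_area TEXT NOT NULL
);
CREATE TABLE set_shipment (
    shipment_id INTEGER PRIMARY KEY,
    set_id      INTEGER NOT NULL REFERENCES product_set(set_id),
    shipper_id  INTEGER NOT NULL REFERENCES shipper(shipper_id),
    shipped_on  TEXT NOT NULL
);
""")

# The report this business unit asked for: shipments per set and delivery area.
# Because the grain is the product set, not the SKU, SKU-level consolidation
# at corporate level is impossible without reworking the model later.
report_sql = """
SELECT ps.set_name, s.delivery_area, COUNT(*) AS shipments
FROM set_shipment sh
JOIN product_set ps ON ps.set_id = sh.set_id
JOIN shipper s ON s.shipper_id = sh.shipper_id
GROUP BY ps.set_name, s.delivery_area;
"""
rows = conn.execute(report_sql).fetchall()
```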

Reality will prove otherwise, of course. If every business unit uses its own definitions, metrics and dimensions, there is no chance of having correct, aggregated information for strategic decision making. To remedy this shortcoming, the new data architect will have to go back to 2008, the publication date of Bill Inmon’s DW 2.0. The idea is to create the operational report as fast as possible and, after delivering the product, refactor the underlying data to make it compatible with data used in previous reports.

The result is a serious governance effort, lots of rework and an ever-growing DW 2.0 in the third normal form that one day may contain sufficient enterprise-wide data to produce meaningful aggregates for strategic direction. The Corporate Information Factory (CIF) revisited, so to speak.

Why the CIF Never Realised Any Value

In Inmon’s world, the recommendation was to build the entire data warehouse before extracting any data marts. These data marts are aggregates based on user profiles or functions in the organisation: groupings of detailed data that may change over time.

This led to many problems on the sites I have visited during my career as a business analyst and data architect.

First and foremost: by the time you have covered the entire scope of the CIF, the world has changed and you will have to refactor entire parts of the data model and reload quite a lot of data to keep in sync with new realities. Doing this on a 3NF data schema can be complex, time-consuming and resource-consuming. And then there is the data mart management problem: if requirements for aggregations change over time, keeping track of historical changes in aggregations and trends is a real pain.


About DW 2.0: the Data Quagmire



To anyone who hasn’t read this book: it’s the last attempt of the “father of data warehousing” to defend his erroneous Corporate Information Factory (CIF), adding some text data to a structured data warehouse in the third normal form. The book is full of conceptual drawings but that is all they are; not one implementation direction follows up on the drawings. Compare this to the Kimball books where every architectural concept is translated into SQL scripts and clear instructions and you know where the real value is.

With DW 2.0, the organisation is trying to salvage some of the operational reports’ value, but at a cost significantly higher than respecting the principle “Do IT right the first time”. The only good thing about this new approach is that nobody will notice the cost overrun, because it is spread over numerous operational reports over time. Only when the functional data marts need rebuilding may some people notice the data quagmire the organisation has stepped into.

Conclusion, to paraphrase A.D. Chandler: Data structure follows strategy



Monday 28 June 2021

Managing a Data Lake Project Part III: Architecture Drives the Project Method

Remember the old days when the data warehouse was the single source of facts and answered almost any business question, provided the data were available in the source systems? Today, more and more data is generated from sources beyond our control: “control” in the sense of precooked structures and well-documented, well-governed data objects. And only the data lake can facilitate comprehensive analytics on this data.

To make clear how the architecture of a data lake drives the project approach, we need to review the three major data warehouse architectures and their project approaches before presenting the new methods needed in a data lake environment.

 The Kimball architecture and its project approach


Ralph Kimball’s star schema approach is the most used and, as far as I am concerned, the most pragmatic, low-threshold approach to data warehousing. Each dimension is constructed with an enterprise view and shared across the appropriate data marts, and each data mart represents a business process. For project managers, this means that an enterprise scan is needed to define the dimensions, followed by a ranking on “information value times feasibility” to pick the order of execution, as sketched below.
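As an illustration of that ordering exercise, a minimal sketch in Python; the candidate data marts and their scores are invented.

```python
# Candidate data marts (one per business process) with invented scores
candidate_marts = [
    {"business_process": "Order fulfilment", "value": 9, "feasibility": 7},
    {"business_process": "Procurement",      "value": 6, "feasibility": 9},
    {"business_process": "Customer churn",   "value": 8, "feasibility": 4},
]

# Priority = information value times feasibility; highest score goes first
for mart in candidate_marts:
    mart["priority"] = mart["value"] * mart["feasibility"]

for mart in sorted(candidate_marts, key=lambda m: m["priority"], reverse=True):
    print(f'{mart["business_process"]}: priority {mart["priority"]}')
```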

The Linstedt architecture and its project approach


The great advantage of a data vault is its flexibility to adapt to new situations, new data sources and other changes in the data landscape. Like the Kimball method, it focuses on business processes and models these in a highly normalised way, using hashes to “freeze” temporal links between objects and their attributes. What this means for the project approach is obvious: we postpone the materialisation of a queryable schema until we are sure about the data persistence. In many of the projects we managed, a seamless transition from data vault to star schema was made. For project managers, this means a heavy focus on the business processes, a flexible way of representing all of them, and delivering queryable data whenever the business expresses the need for it.
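To illustrate the hashing idea, a minimal Python sketch of a hub key and a link key; it follows the common data vault convention of hashing normalised business keys, but it is a simplified assumption, not Linstedt’s full specification.

```python
import hashlib
from datetime import date

def hash_key(*business_keys: str) -> str:
    """Deterministic hash over normalised business keys, used for hub and link keys."""
    normalised = "||".join(k.strip().upper() for k in business_keys)
    return hashlib.md5(normalised.encode("utf-8")).hexdigest()

# Hub row for a customer, and a link row "freezing" the customer-order relation
hub_customer = {
    "customer_hk": hash_key("CUST-001"),
    "customer_bk": "CUST-001",
    "load_date": date.today().isoformat(),
}
link_customer_order = {
    "link_hk": hash_key("CUST-001", "ORD-42"),
    "customer_hk": hub_customer["customer_hk"],
    "order_hk": hash_key("ORD-42"),
    "load_date": date.today().isoformat(),
}
```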


The Corporate Information Factory architecture from Inmon and its project approach



The Inmon approach is something completely different from the previous methods. Since the early 1990s, Inmon has made his case for a Corporate Information Factory (CIF) that would take every data source in scope and build a target model in the third normal form (3NF); only once this Herculean task was completed would it finally be time to deliver. In his method, functional data marts provide extracts from the CIF: think of an HR data mart, a marketing data mart, a finance data mart, etc… Needless to say, this can only work in very stable environments where external factors don’t influence the approach to analytics too much. In all the projects my colleagues and I have been involved in, this was the Never-ending Project. Please don’t go there. And if, by any chance, there is a business case for this approach, allow for sufficient time and resources. You will need them.


The data lake architecture and its project approach

A data lake project is a completely different story from the previous three: no more up-front analysis of concepts, objects, entities and attributes that contribute to these concepts before building the data stores.

In a nutshell, a data lake project is about looking for cheap and simple storage like S3 on Amazon Web Services or ADLS on Azure, making sure the ingestion pipelines are in place to receive all sorts of data and, once these data have landed, making sure they are ready for exploitation. For project managers, this means a totally different project management flow. Contrary to the three previous architectures, there is no constant synching between business and tech: after a high-level business analysis, the technical track provides data storage, data access and data cataloguing to make the data exploitable for the business.
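A minimal sketch of the “land it first, exploit it later” ingestion step, using boto3 against S3; the bucket name, prefix convention and file are hypothetical, and credentials are assumed to be configured.

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

s3 = boto3.client("s3")

def land_raw_file(local_path: str, source_system: str, filename: str) -> None:
    """Drop an incoming file as-is into the raw zone of the lake, partitioned
    by source system so the catalogue can pick it up and classify it later."""
    key = f"raw/{source_system}/{filename}"
    s3.upload_file(local_path, "my-data-lake-raw", key)  # hypothetical bucket

land_raw_file("/tmp/clickstream_2021-06-28.json", "webshop", "clickstream_2021-06-28.json")
```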


Wednesday 16 June 2021

Managing a Data Lake Project Part II: A Compelling Business Case for a Governed Lake

 

In Part I, A Data Lake and its Capabilities, we already hinted at a business case, but in this post we make it a little more explicit.




A recap from Part I: the data lake capabilities

The business case for a data lake has many aspects, some of which present sufficient rationale on their own, although that of course depends on the actual situation and context of your organisation. Therefore I mention about eleven rationales, but feel free to add yours in the comments.

We are mixing on-premise data with Cloud-based systems, which creates new silos

The Cloud providers deliver software for easily switching your on-premise applications and databases to Cloud versions, but there are cases where this can’t be done in one fell swoop:

  • Some applications require refactoring before moving them to the Cloud;
  • Some are under such strict information security constraints that even the best Cloud security can’t be relied on. I know of a retailer who keeps his excellent logistics system in something close to a bunker!
  • Sometimes the budget or the available skills are insufficient to support a 100 % Cloud environment, etc…

This already provides a very compelling business case for a governed data lake: a catalogue that manages lineage and meaning will make the transition smoother and safer.

Master data is a real pain in siloed data storage, as is governance...

A governed data lake can improve master data processes by involving the end users in intuitively evaluating what’s in the data store. By using predefined data quality rules and machine learning to detect anomalies and implicit relationships in the data, as well as defining the golden record for objects like CUSTOMER, PRODUCT, REGION,… the data lake can unlock data even in technical and physical silos.
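As an illustration, a minimal Python sketch of one golden-record survivorship rule for CUSTOMER plus one predefined data quality check; the rule (most complete, then most recent row wins) and the sample data are assumptions, not a prescription.

```python
import pandas as pd

# Duplicate customer records from two silos (invented sample data)
customers = pd.DataFrame({
    "customer_id": ["C1", "C1", "C2"],
    "email": ["ann@example.com", None, "bob@example.com"],
    "updated_at": ["2021-05-01", "2021-06-01", "2021-04-15"],
})

# Survivorship: rank duplicates by completeness, then recency, and keep the winner
ranked = customers.assign(completeness=customers.notna().sum(axis=1)) \
                  .sort_values(["completeness", "updated_at"], ascending=False)
golden = ranked.drop_duplicates("customer_id", keep="first").drop(columns="completeness")

# A predefined data quality rule: every golden record needs an e-mail address
violations = golden[golden["email"].isna()]
print(golden)
print(violations)
```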

We now deal with new data processing and storage technologies other than ETL and relational databases: NoSQL, Hadoop, Spark or Kafka to name a few

NoSQL has many advantages for certain purposes but from a governance point of view it is a nightmare: any data format, any level of nesting and any undocumented business process can be captured by a NoSQL database.

Streaming (unstructured) data is unfit for a classical ETL process, which supports structured data analysis, so we need to combine the flexibility of a data lake ingestion process with the governance capabilities of a data catalogue, or else we will end up with a data swamp.
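For example, a nested event such as a NoSQL document or a streamed JSON message can be flattened on ingestion so the catalogue can classify it; a minimal pandas sketch with an invented order event:

```python
import pandas as pd

# An invented nested order event, as it might arrive from a document store or a stream
event = {
    "order_id": "ORD-42",
    "customer": {"id": "C1", "country": "BE"},
    "lines": [{"sku": "A-100", "qty": 2}, {"sku": "B-300", "qty": 1}],
}

# One row per order line, with the nested customer fields promoted to columns
flat = pd.json_normalize(
    event,
    record_path="lines",
    meta=["order_id", ["customer", "id"], ["customer", "country"]],
)
print(flat)
```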

We don't have the time, nor the resources to analyse up front what data are useful for analysis and what data are not

There is a shortage of experienced data scientists. Initiatives like applications to support data citizens may soften the pain here and there, but let’s face it: most organisations lack the capabilities for continuous sandboxing to discover what data, in what form, can be made meaningful. It’s easier to accept all data into the data lake indiscriminately and let the catalogue do some of the heavy lifting.

We need to scale horizontally to cope with massive, unpredictable bursts of data

Larger online retailers, event organisations, government e-services and other public-facing organisations can use the data lake as a buffer for ingesting massive amounts of data and sort out its value at a later stage.

We need to make a rapid and intuitive connection between business concepts and the data that contribute to, alter, define or challenge these concepts

This has been my mission for about three decades: to bridge the gap between business and IT, and as far as “classical” architectures go, this craft was humanly possible. But in the world of NoSQL, Hadoop and graph databases, it would be an immense task if not supported by a data catalogue.

Consequently, we need to facilitate self-service data wrangling, data integration and data analysis for the business users

A governed data lake ensures trust in the data and trust in what the business can and can’t do with it. This can speed up data literacy in the organisation by an order of magnitude.

We need to get better insight into the value and the impact of the data we create, collect and store.

Reuse of well-catalogued data will enable this: end users will contribute to the evaluation of data and automated meta-analysis of data in analytics will reinforce the use of the best data available in the lake. Data lifecycle management becomes possible in a diverse data environment.

We need to avoid fines like those stipulated in the EU’s GDPR, which can amount to up to 4% of annual turnover!

Data privacy regulations require functionality that supports “security by design”, which a governed data lake delivers. Data pseudonymisation, data obfuscation or anonymisation come in handy when these functions are linked to security roles and user rights.
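A minimal sketch of keyed pseudonymisation for one privacy-sensitive column; in a real lake the secret would live in a managed vault and be tied to security roles.

```python
import hmac
import hashlib

# In a real lake this secret would live in a managed vault, tied to security roles
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymise(value: str) -> str:
    """Deterministic pseudonym: the same input always yields the same token,
    so joins keep working, but the original value cannot be read back."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

print(pseudonymise("jane.doe@example.com"))
```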

We need a clear lineage of the crucial data to comply with stringent laws for publicly listed companies

Sarbanes-Oxley and Basel III are examples of legislation that require accountability at all levels and in all business processes. Data lineage is compulsory in these legal contexts.
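As an illustration, a minimal sketch of the lineage trail a catalogue could keep for one regulatory figure; the source, transformation and report names are invented.

```python
from dataclasses import dataclass

@dataclass
class LineageStep:
    source: str
    transformation: str
    target: str

# Invented lineage chain from source system to a published regulatory report
credit_exposure_lineage = [
    LineageStep("loans.contract", "filter active contracts", "staging.active_loans"),
    LineageStep("staging.active_loans", "aggregate exposure per counterparty", "dwh.credit_exposure"),
    LineageStep("dwh.credit_exposure", "publish", "report.basel_iii_exposure"),
]

# An auditor can walk the chain from the published figure back to the source
for step in reversed(credit_exposure_lineage):
    print(f"{step.target} <- {step.transformation} <- {step.source}")
```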

But more than all of the above IT-based arguments, there is one compelling business case for C-level management: speeding up the decision cycle and increasing the organisation’s agility in the market.

Whether this market is a profit generating market or a non-profit market where the outcomes are beneficial to society, speeding up decisions by tightening the integration between concepts and data is the main benefit of a governed data lake.

Anyone who has followed the many COVID-19 kerfuffles, the poor response times and the quality of the responses to the pandemic sees the compelling business case:

  • Rapid meta-analysis of peer reviewed research papers;
  • Social media reporting on local outbreaks and incidents;
  • Second use opportunities from drug repurposing studies;
  • Screening and analysing data from testing, vaccinations, diagnoses, death reports,…

I am sure medical professionals can come up with more rationales for a data lake, but you get the gist of it.

So, why is there a need for a special project management approach to a data lake introduction? That is the theme of Part III. But first, let me have your comments on this blog post.






Wednesday 9 June 2021

Managing a Data Lake Project Part I: A Data Lake and its Capabilities

A data lake can provide us with the technology to cope with the challenges of various data formats arriving in massive amounts, too fast and too diverse for a classic data pipeline ending in a data warehouse. As the data warehouse is optimised for analysis of structured data, the inflow of unstructured strings, entire documents, JSONs with n levels of nesting, binaries, etc… is simply too much for it.

A data lake is an environment that manages any type of data from any type of source or process in a way that is transparent for the business. In tandem with a data catalogue, a lake provides data governance and facilitates data wrangling, trusted analytical capabilities and self-service analytics, to name a few.

If we zoom in on these capabilities, we can list the basic requirements for a minimum viable product:

  • Automated discovery, cataloguing and classification of ingested data;
  • Collaborative options for evaluating the ingested data;
  • Governance of quality, reliability, security and privacy aspects as well as lifecycle management;
  • Data preparation facilities for analytical purposes in projects as well as for unsupervised, spontaneous self-service analytics;
  • An intuitive search and discovery platform for the business end users;
  • Archiving of data where and when necessary.

 

Generic data processing map
Data comes from events that lead to business processes as well as from outside events that may become part of the business processes

Some vendors launch the term “data marketplace” to stress the self-service aspects of a data lake. But this position depends on the analytical maturity of the organisation. If introduced too early it may provide further substantiation for the claim that:

“Analytics is a process of ingesting, transforming and preparing data for publication and analysis, to end up in Excel sheets used as “proof” for a management hypothesis”.

What makes a data lake ready for use?

Metadata: data describing the data in the lake: its provenance, the data format(s), the business and technical definitions,…;

Governance: business and IT control over the meaning, application and quality of data, as well as compliance with information security and data privacy regulations;

Cataloguing: either by machine learning or precooked categories and rule engines, data is sorted and ordered according to meaningful categories for the business.

Structuring: data increases in meaning if relationships with other concepts are modelled in hierarchies, taxonomies and ontologies;

Tagging: both governed and ungoverned tags (i.e. user tags) dramatically improve the usability of the ingested data. If these tags are evaluated on practical use by the user community they become part of a continuous quality improvement process;

Hierarchies: as with tagging, there may be governed and personal hierarchies in use;

Taxonomies: systematic hierarchies, based on scientific methods;

Ontologies: a set of concepts and categories in a subject area or data domain that shows their properties and the relations between them to model the way the organisation sees the world.
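Pulling these readiness criteria together, here is a minimal Python sketch of what a catalogue entry could record per data set in the lake; the fields and the example values are illustrative only, not a vendor’s data model.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CatalogEntry:
    name: str
    provenance: str                 # source system or process the data came from
    data_format: str                # e.g. "parquet", "json", "csv"
    business_definition: str
    owner: str                      # governance: who is accountable for the data set
    taxonomy_path: List[str] = field(default_factory=list)  # governed hierarchy
    user_tags: List[str] = field(default_factory=list)      # ungoverned, personal tags

entry = CatalogEntry(
    name="webshop_clickstream",
    provenance="webshop event stream",
    data_format="json",
    business_definition="Raw page views and clicks from the public webshop",
    owner="Digital sales",
    taxonomy_path=["Sales", "Online", "Behavioural"],
    user_tags=["raw-zone", "gdpr-relevant"],
)
```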


Saturday 29 May 2021

Managing a Data Lake Project

With the massive growth of online-generated data and IoT data, unstructured and semi-structured data now constitute the bulk of the data that needs to be analysed. Whereas a 50-gigabyte data warehouse facilitating analysis of structured data was quite an achievement until now, this number pales compared to the unstructured and semi-structured data avalanche.

Data Avalanche?


Yes, because compared to the steady stream of data from transaction processing systems, we now have to deal with irregular flows and massive bursts of incoming data that need to be adequately processed to give that data meaning.
New data sources emerge beyond social media and IoT data, like smart machines and machine learning systems generating new data based on existing sources. Managing various data types and metadata in impressive volumes is just one of the technical aspects which can be solved by technology. The HR, legal and organisational aspects are a level more complex, but these aspects are not in scope of this series of blog posts.
We are adding extra process- and event-based decision support to our management capabilities, and that alone is worth the cost, the trouble and the change management effort of introducing a data lake.

See you at the Webinar!

On Wednesday 9 June you can tune in to a short webinar hosted by the Great IT Professional. You can still register via this link. The webinar will be followed by a series of articles on how to manage the Data Lake project. Stay tuned!

Bert Brijs Webinar on Managing a Data Lake Project