
Monday 2 December 2024

About Data Governance, Trends and Interpretations

The following graphs illustrate the opinions of 270 ICT professionals, ranging from CEOs and CIOs to enterprise, solution and data architects, lead programmers, digital transformation managers, BI engineers, data scientists, DBAs and metadata managers. You name it, just about every job title is present in the survey. Of course, this is just a snapshot taken between February and October 2024: reaching the relevant interviewee profiles was hard work, but here are the results. Your remarks are welcome!

The graph below is music to my governance ears: a large majority supports the duopolistic governance model, which is crucial if you want to survive in a volatile market. Arguments like “efficiency” and “authority” still prop up the business and ICT monarchies, because to those governance models mutual adjustment is just a waste of time, at least in their perception…

ICT governance's guiding principles

If you take a “follow the money” approach, you get confirmation: a majority of ICT professionals considers funding a duopolistic issue:

Funding is a governance issue


Half of the respondents consider AI’s introduction inevitable. Although much of its use is still “autocomplete on steroids” replacing a Google search, and money is being burnt faster than in any previous ICT innovation, the first use cases are coming to market. (Stay tuned, we’re also working on one.)

Artificial Intelligence: hype or reality?


Again, a strong majority of ICT professionals see data and applications moving to the Cloud. That is not to ignore the privacy issues with data hosted by the major US Cloud providers. But EU-based initiatives like Open Telecom Cloud, or OVH Cloud in France, which counts high-profile customers like Auchan, Louis Vuitton and Société Générale, are beginning to show up on the radar. We also notice multi-cloud strategies emerging to avoid calamities like the Office 365 outage of 25 November this year.


Applications and data move to the Cloud
OpenTelecom on Cloud sovereignty

Although most of the interviewees have heard about robotic process automation (RPA), the combination “Process and Task Mining” was not as high on the agenda as expected. With an ageing workforce and a demographic collapse in the near future, the EU should invest every penny in automation.


Process and Task Mining


With tools like Mendix, but also process and task mining tools like Celonis, and analytical applications like KNIME Analytics Platform or Dataiku, one would think that more ICT professionals would appreciate the productivity gains of low-code tools. In this snapshot, they’re somewhat sitting on the fence. Is it because developers remain faithful to their tools, unwilling to abandon their skill set to acquire a new one?

Mendix, KNIME Analytics Platform: low-code tools


Amazing: most professionals embrace the Cloud, but they’re not prepared to accept the logical consequence of Cloud architectures. Although complex to implement, zero trust is well suited for remote work, cloud-based networking and hybrid environments.

ZTNA: zero trust network architectures


I confess, this last question was a bit of a lie detector. And judging from the answers, not too many respondents were making things up.

Quantum computing adoption


Of course, the Low Countries account for a disproportionately large share of the respondents, but still, 63% are from outside our home market:

Countries of the respondents

About the survey

Between February and October 2024, we had to invest heavily in contacting the right ICT professionals to get meaningful answers. It took quite a few mails, phone calls and even visits to obtain these results. Are the results representative? I am not sure. Are they significant? Maybe. Are they inspiring? Most certainly, as we use them in our ICT Literacy course to get the conversation going about business-ICT alignment. Give us your opinion in the comments.







Thursday 4 April 2024

Why XAI will be the Next Big Thing

Nothing new under the sun

Or should I use the expression “plus ça change, plus c’est la même chose”? Because what’s at stake in large language models (LLMs) like ChatGPT-4 and others is the trade-off between the model’s fit and accuracy on the one hand, and its transparency, interpretability and explainability on the other. This dilemma is as old as classical statistics: a simple regression model may be less accurate, but it is easily readable for end users without a strong background in statistics.

Anyone can read from a graphical representation that there is a correlation between office surface area, location class and office rent. But high-dimensional results from a neural network are far less transparent and interpretable, let alone explainable.
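To make that concrete, here is a minimal sketch of the kind of readable model meant here: a linear regression whose entire “explanation” fits in a handful of coefficients. The figures and the scikit-learn usage are purely illustrative, not real market data.

from sklearn.linear_model import LinearRegression

# Features per office: [surface in m2, location class (1 = prime, 3 = peripheral)]
X = [[120, 1], [80, 2], [200, 1], [60, 3], [150, 2]]
y = [3600, 1900, 5900, 1100, 3400]  # monthly rent in EUR (invented figures)

model = LinearRegression().fit(X, y)

# The whole "explanation" fits in three numbers anyone can read:
print("EUR per extra m2:         ", round(model.coef_[0], 1))
print("EUR per location class:   ", round(model.coef_[1], 1))
print("Baseline rent (intercept):", round(model.intercept_, 1))

An end user can question each coefficient directly; try doing that with the weight matrices of a deep network.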

The same goes for LLMs: there is no way a human domain expert can fathom the multitude of weight matrices used to determine the syntactic relationships between words.

About hard to detect hallucinations

Anyone can spot nonsense coming out of ChatGPT, such as responses inconsistent with the prompt. But what about pure fiction presented in a factually consistent and convincing way? If the end user is not a domain expert, he or she will have trouble recognising the output as fiction.

The mitigation is called RAG (Retrieval-Augmented Generation). It’s a technique that enables experts to add their own data to the prompt and ensure more precise generative AI output, as sketched below. But… then we’re missing the whole point of generative AI: enabling a broader audience than domain experts to do tasks for which they had little or no training or education.
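A minimal sketch of the RAG idea, assuming a toy keyword-overlap retriever; the expert documents and the final call to a language model are placeholders, not a real API.

# Toy retriever: score documents by word overlap with the question.
def retrieve(question, documents, k=2):
    q_words = set(question.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

# Build an augmented prompt: expert-curated context first, then the question.
def build_prompt(question, context):
    joined = "\n".join("- " + c for c in context)
    return ("Answer using ONLY the context below.\n"
            "Context:\n" + joined + "\n"
            "Question: " + question)

expert_docs = [
    "Drug X interacts with anticoagulants and requires renal dose adjustment.",
    "Drug X is contraindicated in patients under 12 years of age.",
    "Our refund policy allows returns within 30 days of purchase.",
]

question = "What are the contraindications of drug X?"
prompt = build_prompt(question, retrieve(question, expert_docs))
print(prompt)  # this augmented prompt would then be sent to the language model

The catch is visible in the sketch itself: someone with domain knowledge still has to curate and maintain the expert documents.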

Domain expertise is needed in most cases

Generating marketing and advertising content may work for low-level copy like catalogue texts, but I doubt it will deliver the sort of ads you find on Ads of the World https://www.adsoftheworld.com/

I grant you the use case of enhancing the shopping experience as a “domainless” knowledge generator. But most use cases, like drug discovery, healthcare, finance and stock market trading or urban design, to name a few, require domain knowledge to prevent accidents from happening.

ChatGPT has serious issues with accuracy
Only 7% of the citations were accurate!


Take healthcare: a study by Bhattacharyya et al. in 2023[1] identified an astonishing number of errors in references to medical research. Among these references, 47% were fabricated, 46% were authentic but inaccurate, and only 7% were authentic and accurate. My friend, a medical practitioner, was already frustrated by people googling their symptoms and walking into his practice with the diagnosis and the treatment; with this tool I fear his frustrations will only increase… Many more examples can be found in other domains[2].

Hallucinations galore in Generative AI

Another evolution in AI is the move away from tagging by experts, replacing that process with self-supervised learning (SSL; no, not the network encryption protocol). Today’s applications in medicine produce impressive results, but again, this approach still requires medical expertise. In the context of generative AI, self-supervised learning can be particularly useful for pre-training models on large amounts of unlabelled data before fine-tuning them on specific tasks. By learning to predict certain properties or transformations of the data, such as predicting missing parts of an image (inpainting) or reconstructing corrupted text (denoising), the model can develop a rich understanding of the data distribution and capture meaningful features that can then be used for generating new content.
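To illustrate the denoising objective mentioned above, here is a toy sketch that only prepares (corrupted input, target) pairs by masking tokens; the model training itself is omitted and every detail is illustrative.

import random

MASK = "<mask>"

def mask_tokens(sentence, mask_prob=0.3, seed=42):
    """Corrupt a sentence by masking tokens; keep the originals as targets."""
    rng = random.Random(seed)
    tokens = sentence.split()
    corrupted, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_prob:
            corrupted.append(MASK)
            targets.append((i, tok))  # what the model must learn to reconstruct
        else:
            corrupted.append(tok)
    return " ".join(corrupted), targets

corrupted, targets = mask_tokens("the radiologist compared the new scan with last year's images")
print(corrupted)  # the input with masked tokens
print(targets)    # positions and tokens to predict; no human labelling involved

The labels come for free from the data itself, which is exactly what makes the approach attractive when expert tagging is scarce or expensive.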

Enter XAI

The European Regulation on Artificial Intelligence (AI Act)[3], which is in the final stages of implementation, is a serious argument for avoiding sorcerer’s apprentices. Especially in high-risk AI applications, such as those used in healthcare, transportation and law enforcement, the AI Act imposes strict requirements, including data quality, transparency, robustness and human oversight. Additionally, the Act prohibits certain AI practices deemed unacceptable, such as social scoring systems that manipulate human behaviour or exploit vulnerabilities.

This will foster the use of explainable AI, at least in domains where existing legislation already requires transparency, e.g. Sarbanes-Oxley, HIPAA and others. Professionals in banking and insurance, public servants deciding on subsidies and grants, and HR professionals evaluating CVs are just a few of the primary beneficiaries of XAI.

They will need models where humans can understand how the algorithm works and can tweak it to test its sensitivity. By doing so, they get a better understanding of how the model came up with a certain result.
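As a minimal illustration of such a sensitivity check, the sketch below perturbs one input of a deliberately simple, hypothetical scoring model and shows how much the outcome moves; all weights and feature names are invented.

# Deliberately transparent scoring model: every weight can be read and questioned.
def credit_score(income_keur, debt_ratio, years_employed):
    return 0.4 * income_keur - 55 * debt_ratio + 2.5 * years_employed

baseline = credit_score(income_keur=45, debt_ratio=0.35, years_employed=6)

# Sensitivity check: vary one input, hold everything else constant.
for debt in (0.25, 0.35, 0.45):
    delta = credit_score(45, debt, 6) - baseline
    print("debt_ratio=%.2f  change in score vs baseline: %+.1f" % (debt, delta))

A loan officer or an auditor can run exactly this kind of what-if analysis on an explainable model; on an opaque one, the best they get is a black-box answer.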

In short, XAI models may be simpler, but they are better governed, and they will grow in usability as new increments are added to the existing knowledge base. As we speak, sector-specific general models are being developed, ready to be enhanced with your specific domain knowledge.



[1] Bhattacharyya M, Miller VM, Bhattacharyya D, Miller LE. High Rates of Fabricated and Inaccurate References in ChatGPT-Generated Medical Content. Cureus. 2023 May 19;15(5):e39238. doi: 10.7759/cureus.39238. PMID: 37337480.

[2] Athaluri SA, Manthena SV, Kesapragada VSRKM, Yarlagadda V, Dave T, Duddumpudi RTS. Exploring the Boundaries of Reality: Investigating the Phenomenon of Artificial Intelligence Hallucination in Scientific Writing Through ChatGPT References. Cureus. 2023 Apr 11;15(4):e37432. doi: 10.7759/cureus.37432. PMID: 37182055; PMCID: PMC10173677.

Tuesday 21 February 2023

How will ChatGPT affect the practice of business analysis and enterprise architecture?

 

ChatGPT (Chat Generative Pre-trained Transformer) is a language-model-based chatbot developed by OpenAI that enables users to refine and steer a conversation towards a desired length, format, style, level of detail and language. Many of my colleagues are assessing the impact of Artificial Intelligence products on their practice, and the jury is still out: some consider it a threat that will wipe out their business model, others see it as an opportunity to improve the productivity and effectiveness of their practice.



I have a somewhat different opinion. Language models are trained on gigantic amounts of data, but I am afraid that if you want to use Internet data, you certainly have a massive amount of it, yet of dubious and not always verifiable quality.

General Internet data is polluted with commercial content, hoaxes and ambiguous statements that need a strong cultural background to make sense of.

The data that has better quality than general Internet data is almost always protected by copyright; therefore, using it without permission is not always gentlemanlike, to say the least.

Another source of training data is the whitepapers and other information packages you get in exchange for your personal data: e-mail address, job function, company,… These documents often start by stating a problem in a correct and useful way, but then steer you towards the solution delivered by the vendor’s product.

The best practices in business analysis and enterprise architecture are, I am afraid, not on the Internet. They’re like news articles behind a paywall. So if you ask ChatGPT a question like “Where can I find information to do business analysis for analytics and business intelligence?”, you get superficial answers that, at best, provide a starting point for studying the topic.


A screenshot of the shallow and casual reply. It goes on with riveting advice like “Stay Informed”, “Training and Certification”, “Networking”, “Documentation”,…

And the question “What are best practices in business intelligence?” leads to the same level of platitudes and triteness:

  • “Align with business goals”: who would have thought that?
  • “User involvement and collaboration”: really?
  • “Data Quality and Governance”: sure, but how? And when and where?

In conclusion: a professional analyst or enterprise architect has nothing to fear from ChatGPT. 


At best, it provides a somewhat more verbose and better-edited answer to a question, saving you the time of ploughing through over a billion results from Google.


Monday 30 September 2019

Enterprise Architectures for Artificial Intelligence (II)


A generic model for primary processes



Every organisation is unique, but most organisations share some basic principles in the way they operate. Business processes enjoy some degree of support (between 5 and 100%) from online transaction processing (OLTP) systems. Business drivers like consumer demand, government regulations, special interest groups, technological evolutions, the availability of raw materials and labour, and many others influence the business processes intended to deliver a product or service that meets market demand within a set of constraints. These constraints can range from enforcement by regulatory bodies to voluntary self-regulation and measures inspired by public relations objectives.
This is a high-level view of how AI can support business processes.


AI and enterprise architecture
High level generic architecture

Business drivers are at the basis of business processes that realise certain business goals and deliver products for an internal or external customer. These processes are supported by applications, the so-called online transaction processing (OLTP) systems.
Business process owners formulate an a priori scoring model that is constantly adapted by both microscopic transaction data and historic trend data from the data warehouse (DWH). Both data sources can blend into decision support data, suited for sharply defined data requirements as well as for vague assumptions about their value for decision making. The decisions at hand can be either microscopic or macroscopic.
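To make this flow tangible, here is a hedged sketch (not a reference implementation) of such an a priori scoring model: its weights are nudged per transaction (microscopic) and periodically recalibrated with trend weights derived from the DWH (macroscopic). All names and numbers are illustrative.

class ScoringModel:
    """A priori scoring model, adjusted by transactions and DWH trend data."""

    def __init__(self, weights):
        self.weights = dict(weights)  # initial weights set by the process owner

    def score(self, features):
        return sum(self.weights.get(k, 0.0) * v for k, v in features.items())

    def update_from_transaction(self, features, outcome, lr=0.001):
        """Microscopic adjustment: one small gradient step per observed transaction."""
        error = outcome - self.score(features)
        for k, v in features.items():
            self.weights[k] = self.weights.get(k, 0.0) + lr * error * v

    def recalibrate_from_dwh(self, trend_weights, blend=0.2):
        """Macroscopic adjustment: blend in weights derived from historic trends."""
        for k, w in trend_weights.items():
            self.weights[k] = (1 - blend) * self.weights.get(k, 0.0) + blend * w

model = ScoringModel({"basket_value": 0.5, "visits_per_month": 1.2})
model.update_from_transaction({"basket_value": 80, "visits_per_month": 3}, outcome=55)
model.recalibrate_from_dwh({"basket_value": 0.6, "visits_per_month": 0.9})
print(model.weights)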

Introducing AI in the business processes


As an architect, one of the first decisions to make is whether and when AI becomes relevant enough to become part of routine business processes. There are many AI initiatives in organisations, but the majority are still in R&D mode or, at best, in project mode. It takes special skills to determine when the transition to routine process management can provide some form of sustainable added value.
I am not sure these skills are all identified and present in the body of knowledge of architects, but here are some proposals for the ideal set of competences.
  • A special form of requirements management, which you can only master if the added value as well as the pitfalls of AI in business processes are thoroughly understood,
  • As a consequence, the ability to produce use cases for the technology,
  • Mastery of the various taxonomies to position AI correctly and make sure you obtain maximum value from the technology (more on this in the next post),
  • Clear insights into the lifecycle management of the various analytical solutions in terms of data persistency, tuning of the algorithm and translation into appropriate action(s).


In the next post, I will elaborate a bit more on the various taxonomies to position AI in the organisation. 



Thursday 19 September 2019

Enterprise Architectures for Artificial Intelligence (I)


In the past three decades, I have seen artificial intelligence (AI) come and go a couple of times. From studying MYCIN, via speech technology in Flanders Language Valley, to today’s machine learning and heuristics as used by Textgain from Antwerp University: this time the technology is here to stay.
Why? Because the cost of using AI has fallen dramatically, not just in terms of hardware and software but also in terms of acquiring the necessary knowledge to master the discipline.
Yet most AI initiatives are still very much in the R&D phase or are used in a limited scope. But here and there, e.g. in big (online) retail and telecommunications, AI is gaining traction at enterprise level. And through APIs, open data and other initiatives, AI will become available to smaller organisations in the near future.
To make sure this effort has a maximum chance of success, CIOs need to embed this technology in an enterprise architecture covering all aspects: motivations, objectives, requirements and constraints, business processes, applications and data.
Being fully aware that I am treading on uncharted territory, this article is, for now, my state of the art.

Introducing AI in the capability map

AI will enhance our capabilities in all areas of Treacy & Wiersema’s model, probably in a certain order. First comes operational excellence as processes and procedures are easier to describe, measure and monitor. Customer intimacy is the next frontier as the existing discipline of customer analytics lays the foundation for smarter interactions with customers and prospects.
The toughest challenge is in the realm of product leadership. This is an area where creativity is key to success. There is an approximation of creativity using what I call “property exploration”: a dimensional model of all possible properties of a product, a service, a marketing or production plan is mapped, and an automatic Cartesian product of all levels or degrees of each property with all the other properties is evaluated for cost and effectiveness, as sketched below. Sales pitch: if you want more information about this approach, contact us.
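A minimal sketch of that Cartesian-product exploration, with invented properties, levels and scoring heuristics; a real evaluation would of course use domain-specific cost and effectiveness models rather than these stand-ins.

from itertools import product

# Invented properties and levels of a hypothetical product plan.
properties = {
    "material": ["recycled plastic", "aluminium", "bamboo"],
    "channel": ["online only", "retail", "hybrid"],
    "price_tier": ["budget", "mid", "premium"],
}

def evaluate(combo):
    """Return (cost, effectiveness) for one combination; stand-in heuristics only."""
    cost = ({"recycled plastic": 2, "aluminium": 5, "bamboo": 4}[combo["material"]]
            + {"online only": 1, "retail": 4, "hybrid": 3}[combo["channel"]])
    effectiveness = ({"budget": 2, "mid": 3, "premium": 5}[combo["price_tier"]]
                     + (2 if combo["channel"] == "hybrid" else 0))
    return cost, effectiveness

# Cartesian product of every level of every property with all the others.
combos = [dict(zip(properties, levels)) for levels in product(*properties.values())]
ranked = sorted(combos, key=lambda c: evaluate(c)[1] / evaluate(c)[0], reverse=True)
for combo in ranked[:3]:
    print(combo, "cost/effectiveness:", evaluate(combo))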

Capabilities and AI
Capabilities where state-of-the-art AI can play a significant role
Examples of capabilities where AI can play a defining role. Some of these capabilities are already well supported, to name a few: inventory management (automatic replenishment and dynamic storage), cycle time management (optimising man-machine interactions), quality management (visual inspection systems), churn management (churn prediction and avoidance in CRM systems), yield management (price, customer loyalty, revenue and capacity optimisation) and talent management (mining competences from CVs).

Areas where AI is coming of age: loyalty management and competitive intelligence, R&D management and product development.

In the next post I will discuss a generic architecture for AI in support of primary processes. Stay tuned and… share your insights on this topic!