The
inimitable Yogi Berra already knew it: “Predictions are very hard,
especially about the future.” So please accept my apologies in advance, as
my prediction will likely be inaccurate in terms of both timing and scope when
it comes to the transition we are about to witness in the field of Generative
AI and its associated market.
The launch
of GPT-5 has demonstrated that “parameter escalation” is subject to the law
of diminishing returns. Greenfield startups such as OpenAI are burning cash at
an astonishing rate, and at some point a shake-out will inevitably occur. The
giants, OpenAI (backed by Microsoft and integrated into its products) and Google (with Gemini), are
well positioned to survive. Their model performance is increasingly shaped not
merely by parameter counts, but also by architecture, training data quality,
deployment efficiency, and—crucially—the fact that their AI functions are
embedded into widely used software ecosystems, creating high entry barriers for
competitors. What will become of the many other models hosted on Hugging Face
remains an open question.
LLM users
are already aware that parameter escalation alone cannot eliminate the
remaining 1.5–2% hallucination rate. They are also keenly aware that their
interactions contribute intellectual property to the models, often without
compensation. Furthermore, they know that open models are vulnerable to prompt
injections, nonsensical outputs, and coordinated reputation attacks by
adversaries.
There is,
therefore, a market for closed models built on validated data curated by
experts. Beyond pattern recognition, such systems will deliver genuine
problem-solving capabilities. Subject matter experts will play a central role
in improving data quality—by uploading validated content, stress-testing
outputs with thousands of questions, and leveraging Retrieval-Augmented
Generation (RAG) to enhance reliability. The logical next step will be to
integrate rule-based algorithms for decision support, bringing us full circle to
early expert systems such as Mycin, the pioneering 1970s rule-based system
for recommending antibiotic therapies.
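To make that architecture concrete, here is a minimal, hypothetical sketch in plain Python (standard library only) of the pattern described above: answers grounded in expert-validated documents via retrieval, with a rule-based gate deciding whether the system may answer at all. The document store, scoring method, threshold, and names are illustrative assumptions, not any vendor's actual pipeline; a production system would use embeddings for retrieval and an LLM to compose the final answer.

```python
from dataclasses import dataclass

@dataclass
class CuratedDocument:
    doc_id: str
    text: str
    validated_by: str  # the subject matter expert who approved the content

# Expert-curated knowledge base (stands in for a vetted document store).
KNOWLEDGE_BASE = [
    CuratedDocument("kb-001", "Amoxicillin is a penicillin-class antibiotic.", "expert-01"),
    CuratedDocument("kb-002", "Penicillin allergy is a contraindication for amoxicillin.", "expert-01"),
]

def retrieve(query: str, top_k: int = 2) -> list[tuple[CuratedDocument, float]]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    query_terms = set(query.lower().split())
    scored = []
    for doc in KNOWLEDGE_BASE:
        overlap = len(query_terms & set(doc.text.lower().split()))
        scored.append((doc, overlap / max(len(query_terms), 1)))
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

def answer(query: str) -> str:
    """RAG plus a rule-based gate: defer rather than guess when evidence is weak."""
    hits = retrieve(query)
    best_score = hits[0][1] if hits else 0.0
    # Rule-based decision support: only answer when retrieval confidence
    # clears a threshold; otherwise defer to a human expert.
    if best_score < 0.3:
        return "Insufficient validated evidence; deferring to a human expert."
    context = " ".join(doc.text for doc, _ in hits)
    # In a real system the retrieved context would be passed to an LLM;
    # here we simply return the grounded evidence to keep the sketch self-contained.
    return f"Grounded answer based on curated sources: {context}"

if __name__ == "__main__":
    print(answer("Is amoxicillin safe with a penicillin allergy?"))
```

The point of the rule layer is that the system refuses to answer when the curated evidence is weak, rather than hallucinate, which is exactly the reliability argument made above.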
Ultimately,
these systems will evolve toward mimicking human reasoning and judgment through
first-principles thinking.
Beneath the surface of press releases aimed at inflating incumbents’ P/E ratios or maximizing the IPO valuations of new entrants, something more substantial is brewing.
My bold prediction is this: in the long run, closed models will generate more value than today’s “gorillas,” who are largely providing the infrastructure for them.
And so, one day, another of Yogi Berra’s paradoxical dicta may well come true: “The future ain’t what it used to be.”