Sunday, March 2, 2025

The Future of LLMs: A Commodity, a Platform, or a Locked Ecosystem?

There was a time when hardware was everything. The processor, the storage, the power—it was all about the physical components. But over time, hardware became a commodity, and the real intelligence moved to software—the brain that made everything work.

I believe LLMs (Large Language Models) are heading in the same direction.

Today, LLMs are impressive, but above-average performance is no longer enough—everyone has it. Soon, just like hardware in smart buildings, LLMs will be necessary but not the differentiator. The real power will lie not in the model itself, but in how we use and move our data between models.


Imagine This: Seamless LLM Portability

What if, instead of training a new LLM from scratch every time you switch, you could export all your training, memory, and preferences from one model and load it into another?

  • Think of it like exporting browser bookmarks from Chrome to Edge—simple, quick, and frictionless.
  • Or how smart building automation isn’t about having the best sensors, but the best software to control them.
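Nothing like this exists today, but to make the idea concrete, here is a minimal sketch of what a vendor-neutral "AI memory" export might look like. Every name, field, and format here is purely hypothetical—a thought experiment, not any provider's actual API:

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PortableMemory:
    """A hypothetical, vendor-neutral container for a user's AI 'memory'."""
    user_id: str
    preferences: dict = field(default_factory=dict)   # e.g. tone, language
    memories: list = field(default_factory=list)      # distilled facts about the user
    schema_version: str = "0.1"

    def export(self) -> str:
        """Serialize to JSON, ready to hand to another provider."""
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def load(cls, payload: str) -> "PortableMemory":
        """Rebuild the memory object from an exported JSON payload."""
        return cls(**json.loads(payload))

# Export from "provider A"...
mem = PortableMemory(
    user_id="u-123",
    preferences={"tone": "concise", "language": "en"},
    memories=["Works in smart-building automation", "Prefers bullet points"],
)
payload = mem.export()

# ...and import into "provider B" with nothing lost.
restored = PortableMemory.load(payload)
assert restored == mem
```

The hard part, of course, isn't the file format—it's getting competing providers to agree on one, which is exactly the moat problem below.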

If someone solves this seamless transfer problem, it would be a game-changer. Suddenly, I, as the user, would have full control over which LLM I want to use. It would be a free market of AI models, where I decide which one serves me best—not the other way around.

But here’s the challenge: Why would companies allow this?


The Moat Problem: Will AI Stay Closed or Go Open?

Right now, the biggest LLM companies—OpenAI, Anthropic, DeepSeek—are building moats, just like Apple did with its App Store and music ecosystem. They don’t want you to leave. They don’t want you to take your trained data elsewhere.

Why? Because data is the real moat. If I allow my users to transfer their AI memories and training to a competitor, I lose my stickiness. I lose my power. I lose my users.

So, will AI follow the Apple model (closed ecosystem) or the Android model (open, interchangeable, user-centric)?

Right now, it’s looking more like Apple—companies are keeping their data walled off, because why would they give up control?

But here’s the twist—eventually, they may not have a choice.


A Future Where AI Models Become Interchangeable?

Once the market gets saturated, and there are too many competing LLMs, the biggest differentiator will no longer be the models themselves. It will be how well they work together.

At that point, the pivot to interoperability will be inevitable. Some LLM company—or maybe an entirely new startup—could rise up as the AI platform that enables seamless training across all models.

  • Think of what Android did to Apple. Instead of one locked system, they enabled an open ecosystem where users had freedom of choice.
  • Think of what API-first companies did to traditional software. They created layers that made everything compatible, and suddenly, closed ecosystems lost their advantage.
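To show what such a compatibility layer means in practice, here is a toy sketch of one interface fronting multiple interchangeable backends. The vendor classes are stand-ins I invented for illustration; real adapters would wrap actual provider SDKs:

```python
from typing import Protocol

class ChatModel(Protocol):
    """The single interface the layer exposes, regardless of vendor."""
    def complete(self, prompt: str) -> str: ...

# Hypothetical stand-ins for vendor SDKs.
class VendorA:
    def complete(self, prompt: str) -> str:
        return f"[A] {prompt}"

class VendorB:
    def complete(self, prompt: str) -> str:
        return f"[B] {prompt}"

REGISTRY: dict[str, ChatModel] = {"a": VendorA(), "b": VendorB()}

def ask(model: str, prompt: str) -> str:
    """Route a prompt to whichever backend the *user* chooses."""
    return REGISTRY[model].complete(prompt)

# Switching providers becomes a one-character change, not a migration project.
print(ask("a", "Hello"))  # [A] Hello
print(ask("b", "Hello"))  # [B] Hello
```

Once such a layer exists, the model behind it becomes swappable—which is precisely what makes closed ecosystems lose their advantage.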

Could this happen with AI? Maybe not yet—but in a few years, as competition increases, it might be the only way forward.


What About AI Agents? Are They Moving Towards Openness or Isolation?

I’ll admit—I don’t know enough about AI agents to say for sure. But I do wonder:

  • Are they being developed to connect models and enable interoperability?
  • Or are they reinforcing more isolation, making it even harder to move from one LLM to another?

Poe (Quora’s AI marketplace) kind of does this by offering multiple models under one roof. But as far as I know, it doesn’t allow seamless data exchange between them—it just acts as a layer that provides memory to different LLMs.

So, are we headed towards more walled gardens or an open AI economy?


Dreaming of the Wild Possibilities

Maybe I’m naïve. Maybe a smart AI engineer would think I’m nuts for even suggesting this.

But if we don’t imagine new possibilities, we will never disrupt.

The future is always built by those who dare to think differently. And if seamless, user-controlled AI memory becomes reality, hopefully, you’re the one who builds it before your competition does.

What do you think—are we moving towards LLM portability, or will the walled gardens stay? 🚀



Friday, January 31, 2025

Bias, Disruption, and the Speed of Change: Are We Really That Surprised?



The world is having a collective "OMG, AI has bias?!" moment. And honestly, I am sitting here wondering: are we really that shocked?

We have known this for a while. AI models inherit the biases of their developers, the data they are trained on, and the policies governing their use (and we haven't even talked about what happens when governments set even more definitive policies and boundaries for AI; I believe the field is called AI ethics, risks, and governance, if I am not mistaken). We saw this when Google first rolled out its AI models and faced controversy over how they handled image recognition. Ask AI to generate an image of a CEO, and chances are, you will see white men before you see a black or brown person—or even a white woman. Why? Because it reflects the biased data it was trained on, which in turn mirrors our own societal biases.

Now, people are surprised that DeepSeek, an AI model trained in a particular country, avoids discussing certain "sensitive" topics. Why is that surprising? It was designed that way—whether explicitly, through the unconscious (or conscious) decisions of its developers, or through gag orders on certain topics dictated by local laws (and I am sure every country has topics it doesn't want aired out in the open). Just like humans, AI mirrors what it has been exposed to.

The irony? We humans also carry inherent biases, but we conveniently ignore them while being quick to point fingers at machines. It’s as if we expect AI to be neutral while we, the ones building it, are anything but. So, is AI biased? Yes. Are we? Also yes.

Unless AI models are built on the principles of blockchain transparency—which, let’s be honest, only a select few technical experts truly understand—bias will always creep in. And even then, there’s still human influence shaping what goes into the so-called “unbiased” blockchain ledger.

But bias isn’t the only thing accelerating in AI. The pace of disruption is also reaching new speeds.


The Disruption Curve Just Got Steeper

If you’re in foresight, you already know: big corporations eventually get disrupted by smaller, faster, risk-taking startups. It’s a classic cycle. Big companies get comfortable, slow down, and assume their dominance will last forever—until someone comes along, takes the risk they weren’t willing to, and flips the industry upside down.

What’s fascinating now is how fast it’s happening.

Take, for example, the Sam Altman moment that’s making waves. Someone asked him what would happen if another company could build AI models at OpenAI’s scale, and apparently, he laughed it off. His response? "Good luck trying."

Well… someone did try.

And it shocked the industry.

But should it have? Was no one in tech aware that someone was working on something like this? Or did they know but, like most big corporations, let their ego convince them they were untouchable? The same old pattern:

  • They’ll never catch up to us.
  • They can’t move as fast as we do.
  • We are years ahead of them.

Until they do catch up. Until they move even faster. Until, suddenly, the world wakes up to the realization that disruption doesn’t just happen—it happens at breakneck speed.

Now, the same thing is happening in AI. The barriers to entry are lower than ever, and people are finally realizing that technological dominance isn’t permanent. It never was. That’s why it is important to stay agile and keep changing, regardless of how big or small you are.


Did We See This Coming?

Funny enough, I recently had a conversation about the massive inefficiencies and energy consumption in AI. And for what? It isn't even that good yet! Then again, a baby born into this world doesn't start running in a day, or even in six months: it crawls, then walks, then runs. I am optimistic that AI will get better and that costs will absolutely come down; look at how solar panels have dropped in price and become far more normalized than they were in, say, 2008. In that spirit, I was speculating about a future where AI queries wouldn’t need to be processed individually—where prompts could be streamlined and shared to optimize resources.

Then boom, DeepSeek emerges, and now I am wondering: did I accidentally predict a key element of their model? I'm not saying I should be proud, and I'm not even sure their model is based on those principles, but as a foresight practitioner, I’m definitely getting better at mapping the future. (More on that in another blog!)


Are Big Tech Companies Blinded by Ego?

This brings me back to ego.

How often do industry leaders dismiss new challengers simply because they believe they are untouchable? How many times do we see legacy companies ignore signals until it’s too late?

  • Kodak ignored digital cameras.
  • Nokia dismissed smartphones.
  • Blockbuster laughed at Netflix.
  • OpenAI laughed at the idea of competition… until someone proved them wrong.

The pattern repeats, and yet, every time, people act shocked when disruption happens.


Where Do We Go From Here?

This moment should serve as a wake-up call for anyone still underestimating how fast AI is evolving. Bias isn’t new. Disruption isn’t new. What’s new is the speed.

So now the question is:

  • Who’s next?
  • Who is ignoring signals right now, thinking they are invincible?
  • Which industry is about to be flipped upside down?

And the ultimate question:

Are we truly paying attention? Or are we just waiting for the next shock to hit?


Final Thought: Have You Tested ChatGPT vs. Deepseek?

I’d love to hear your thoughts—have you compared ChatGPT with DeepSeek yet? What differences stand out? And more importantly, do you think OpenAI’s dominance is about to get disrupted faster than we think? Will they and other tech companies try to throw up obstacles to prevent DeepSeek from taking over and having its own ChatGPT moment, or will DeepSeek show its resilience and overtake them quickly?

Let’s talk—because if there’s one thing I know for sure, it’s that the future never waits.