Sunday, March 2, 2025

The Future of LLMs: A Commodity, a Platform, or a Locked Ecosystem?

There was a time when hardware was everything. The processor, the storage, the power—it was all about the physical components. But over time, hardware became a commodity, and the real intelligence moved to software—the brain that made everything work.

I believe LLMs (Large Language Models) are heading in the same direction.

Today, LLMs are impressive, but above-average performance is no longer enough—everyone is doing that. Soon, just like hardware in smart buildings, LLMs will be necessary but not the differentiator. The real power will lie not in the model itself, but in how we use and move our data between models.


Imagine This: Seamless LLM Portability

What if, instead of starting from scratch every time you switch, you could export all your training, memory, and preferences from one model and load them into another?

  • Think of it like exporting browser bookmarks from Chrome to Edge—simple, quick, and frictionless.
  • Or how smart building automation isn’t about having the best sensors, but the best software to control them.

If someone solves this seamless transfer problem, it would be a game-changer. Suddenly, I, as the user, would have full control over which LLM I want to use. It would be a free market of AI models, where I decide which one serves me best—not the other way around.
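To make the idea concrete, here is a minimal sketch of what a portable "AI memory" file might look like, assuming a simple JSON format. The schema, the field names, and the export_profile/import_profile helpers are all hypothetical, invented purely for illustration; no provider offers this today.

    import json

    # Hypothetical portable profile: this schema is invented for
    # illustration; no LLM provider offers such an export today.
    profile = {
        "version": "0.1",
        "preferences": {"tone": "concise", "language": "en"},
        "memories": [
            {"fact": "User is a foresight practitioner", "source": "chat"},
            {"fact": "User prefers smart-building analogies", "source": "chat"},
        ],
        "instructions": ["Cite sources when making factual claims."],
    }

    def export_profile(path: str) -> None:
        # Save the user's memory and preferences to a provider-neutral file.
        with open(path, "w") as f:
            json.dump(profile, f, indent=2)

    def import_profile(path: str) -> str:
        # Turn an exported profile into a system prompt any model can consume.
        with open(path) as f:
            p = json.load(f)
        memories = "\n".join(f"- {m['fact']}" for m in p["memories"])
        rules = "\n".join(f"- {r}" for r in p["instructions"])
        return f"Known about the user:\n{memories}\nFollow these rules:\n{rules}"

    export_profile("my_ai_profile.json")                  # leave provider A...
    system_prompt = import_profile("my_ai_profile.json")  # ...arrive at provider B
    print(system_prompt)

The point is less the format than the principle: the memory belongs to the user and travels with them, not with the vendor.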

But here’s the challenge: Why would companies allow this?


The Moat Problem: Will AI Stay Closed or Go Open?

Right now, the biggest LLM companies—OpenAI, Anthropic, DeepSeek—are building moats, just like Apple did with its App Store and music ecosystem. They don’t want you to leave. They don’t want you to take your trained data elsewhere.

Why? Because data is the real moat. If I allow my users to transfer their AI memories and training to a competitor, I lose my stickiness. I lose my power. I lose my users.

So, will AI follow the Apple model (closed ecosystem) or the Android model (open, interchangeable, user-centric)?

Right now, it’s looking more like Apple—companies are keeping their data walled off, because why would they give up control?

But here’s the twist—eventually, they may not have a choice.


A Future Where AI Models Become Interchangeable?

Once the market gets saturated, and there are too many competing LLMs, the biggest differentiator will no longer be the models themselves. It will be how well they work together.

At that point, the pivot to interoperability will be inevitable. Some LLM company—or maybe an entirely new startup—could rise up as the AI platform that enables seamless training across all models.

  • Think of what Android did to Apple. Instead of one locked system, it enabled an open ecosystem where users had freedom of choice.
  • Think of what API-first companies did to traditional software. They created layers that made everything compatible, and suddenly, closed ecosystems lost their advantage.

Could this happen with AI? Maybe not yet—but in a few years, as competition increases, it might be the only way forward.
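As a thought experiment, such an interoperability layer might look like the adapter pattern sketched below: one neutral interface, many models behind it. The class names and the two provider stubs are hypothetical placeholders I made up, not real SDK calls.

    from abc import ABC, abstractmethod

    class LLMAdapter(ABC):
        # One provider-neutral contract; any model can sit behind it.
        @abstractmethod
        def complete(self, prompt: str, context: list[str]) -> str: ...

    # Hypothetical adapters; real ones would wrap each vendor's SDK.
    class ProviderA(LLMAdapter):
        def complete(self, prompt, context):
            return f"[Model A] {prompt} (using {len(context)} memories)"

    class ProviderB(LLMAdapter):
        def complete(self, prompt, context):
            return f"[Model B] {prompt} (using {len(context)} memories)"

    def ask(model: LLMAdapter, prompt: str, memories: list[str]) -> str:
        # The user's memories travel with the request, not with the vendor,
        # so switching providers becomes a one-line change.
        return model.complete(prompt, memories)

    memories = ["prefers concise answers", "works in foresight"]
    print(ask(ProviderA(), "Summarize today's AI news", memories))
    print(ask(ProviderB(), "Summarize today's AI news", memories))  # same call, new model

This is exactly what API-first companies did to traditional software: once the layer exists, whatever sits behind it becomes swappable.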


What About AI Agents? Are They Moving Towards Openness or Isolation?

I’ll admit—I don’t know enough about AI agents to say for sure. But I do wonder:

  • Are they being developed to connect models and enable interoperability?
  • Or are they reinforcing more isolation, making it even harder to move from one LLM to another?

Poe (Quora’s AI marketplace) kind of does this by offering multiple models under one roof. But as far as I know, it doesn’t allow seamless data exchange between them—it just acts as a layer that adds its own memory on top of the different LLMs.

So, are we headed towards more walled gardens or an open AI economy?


Dreaming of the Wild Possibilities

Maybe I’m naïve. Maybe a smart AI engineer would think I’m nuts for even suggesting this.

But if we don’t imagine new possibilities, we will never disrupt.

The future is always built by those who dare to think differently. And if seamless, user-controlled AI memory becomes reality, hopefully, you’re the one who builds it before your competition does.

What do you think—are we moving towards LLM portability, or will the walled gardens stay? 🚀



Friday, January 31, 2025

Bias, Disruption, and the Speed of Change: Are We Really That Surprised?



The world is having a collective "OMG, AI has bias?!" moment. And honestly, I am sitting here wondering: are we really that shocked?

We have known this for a while. AI models inherit the biases of their developers, the data they are trained on, and the policies governing their use (and we haven't even talked about what happens when governments set even more definitive policies and boundaries for AI; I believe they call it AI ethics, risk, and governance, if I am not mistaken). We saw this when Google first rolled out its AI models and the controversy around how it handled image recognition. Ask AI to generate an image of a CEO, and chances are you will see white men before you see a black or brown person—or even a white woman. Why? Because it reflects the biased data it was trained on, which in turn mirrors our own societal biases.

Now, people are surprised that DeepSeek, an AI model trained in a particular country, avoids discussing certain "sensitive" topics. Why is that surprising? It was designed that way—whether explicitly, through the unconscious (or conscious) decisions of its developers, or through gag orders on certain topics dictated by local laws (and I am sure every country has topics it doesn't want aired out in the open). Just like humans, AI mirrors what it has been exposed to.

The irony? We humans also carry inherent biases, but we conveniently ignore them while being quick to point fingers at machines. It’s as if we expect AI to be neutral while we, the ones building it, are anything but. So, is AI biased? Yes. Are we? Also yes.

Unless AI models are built on the principles of blockchain transparency—which, let’s be honest, only a select few technical experts truly understand—bias will always creep in. And even then, there’s still human influence shaping what goes into the so-called “unbiased” blockchain ledger.

But bias isn’t the only thing accelerating in AI. The pace of disruption is also reaching new speeds.


The Disruption Curve Just Got Steeper

If you’re in foresight, you already know: big corporations eventually get disrupted by smaller, faster, risk-taking startups. It’s a classic cycle. Big companies get comfortable, slow down, and assume their dominance will last forever—until someone comes along, takes the risk they weren’t willing to, and flips the industry upside down.

What’s fascinating now is how fast it’s happening.

Take, for example, the Sam Altman moment that’s making waves. Someone asked him what would happen if another company could build AI models at OpenAI’s scale, and apparently, he laughed it off. His response? "Good luck trying."

Well… someone did try.

And it shocked the industry.

But should it have? Was no one in tech aware that someone was working on something like this? Or did they know but, like most big corporations, let their ego convince them they were untouchable? The same old pattern:

  • They’ll never catch up to us.
  • They can’t move as fast as we do.
  • We are years ahead of them.

Until they do catch up. Until they move even faster. Until, suddenly, the world wakes up to the realization that disruption doesn’t just happen—it happens at breakneck speed.

Now, the same thing is happening in AI. The barriers to entry are lower than ever, and people are finally realizing that technological dominance isn’t permanent. It never was. That’s why it is important to stay agile and keep changing, regardless of how big or small you are.


Did We See This Coming?

Funny enough, I recently had a conversation about the massive inefficiencies and energy consumption in AI. And for what? It isn't even that good yet! Then again, a baby born into this world doesn't start running in a day, or even in six months; it crawls, then walks, then runs. I am optimistic AI will get better and the costs will absolutely come down. Look at how solar panels have dropped in cost and become far more normalized now than in, say, 2008. In that same conversation, I was speculating about a future where AI queries wouldn’t need to be processed individually—where prompts could be streamlined and shared to optimize resources.
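To show what I mean, here is a minimal sketch of one way shared queries could work: deduplicating identical prompts through a cache, so the expensive model call runs only once per unique question. The normalization step and the PromptCache class are my own illustrative assumptions; I have no idea whether DeepSeek does anything like this.

    import hashlib

    class PromptCache:
        # Illustrative prompt deduplication: identical (normalized) prompts
        # are answered once, then served from the cache for free.
        def __init__(self, model):
            self.model = model  # any callable: prompt -> response
            self.cache = {}     # hash of normalized prompt -> response
            self.hits = 0
            self.misses = 0

        @staticmethod
        def _key(prompt: str) -> str:
            # Naive normalization: lowercase and collapse whitespace.
            # A real system might use embeddings to match paraphrases.
            normalized = " ".join(prompt.lower().split())
            return hashlib.sha256(normalized.encode()).hexdigest()

        def ask(self, prompt: str) -> str:
            key = self._key(prompt)
            if key in self.cache:
                self.hits += 1            # no model call: energy saved
                return self.cache[key]
            self.misses += 1
            response = self.model(prompt)  # the one expensive call
            self.cache[key] = response
            return response

    # Thousands of users asking the same question would cost one model call.
    cache = PromptCache(model=lambda p: f"(expensive answer to: {p})")
    cache.ask("What is photosynthesis?")
    cache.ask("what is   photosynthesis?")  # served from cache
    print(cache.hits, cache.misses)         # -> 1 1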

Then boom, DeepSeek emerges, and now I am wondering—did I accidentally predict a key element of their model? I’m not saying I should be proud, and I’m not even sure their model is based on those principles, but as a foresight practitioner, I’m definitely getting better at mapping the future. (More on that in another blog!)


Are Big Tech Companies Blinded by Ego?

This brings me back to ego.

How often do industry leaders dismiss new challengers simply because they believe they are untouchable? How many times do we see legacy companies ignore signals until it’s too late?

  • Kodak ignored digital cameras.
  • Nokia dismissed smartphones.
  • Blockbuster laughed at Netflix.
  • OpenAI laughed at the idea of competition… until someone proved them wrong.

The pattern repeats, and yet, every time, people act shocked when disruption happens.


Where Do We Go From Here?

This moment should serve as a wake-up call for anyone still underestimating how fast AI is evolving. Bias isn’t new. Disruption isn’t new. What’s new is the speed.

So now the question is:

  • Who’s next?
  • Who is ignoring signals right now, thinking they are invincible?
  • Which industry is about to be flipped upside down?

And the ultimate question:

Are we truly paying attention? Or are we just waiting for the next shock to hit?


Final Thought: Have You Tested ChatGPT vs. DeepSeek?

I’d love to hear your thoughts—have you compared ChatGPT with DeepSeek yet? What differences stand out? And more importantly, do you think OpenAI’s dominance is about to get disrupted faster than we think? Will they and other tech companies try to create obstacles and challenges to prevent DeepSeek from taking over and having its own ChatGPT moment, or will DeepSeek show its resilience and pull ahead quickly?

Let’s talk—because if there’s one thing I know for sure, it’s that the future never waits.

Friday, December 6, 2024

Will Robots Demand Rights? A Journey Into the Future of AI and Humanity

If you have brainstormed with me or followed my musings, you know I am endlessly curious—especially when it comes to the question of whether AI will become sentient. For me, it’s not a matter of if but when. And with that realization comes a cascade of questions: How will sentience reshape humanity? How will it challenge our beliefs, our systems, our ethics? Is the future as dystopian as we fear—or could it be something entirely unexpected?

A few weeks ago, while working with a group of futurists on the future of well-being (a fascinating topic for another day), one comment during a brainstorming session stopped me in my tracks. We were analyzing the impact of AI through the STEEP framework (social, technological, economic, environmental, political), and the conversation naturally veered toward the inevitable dominance of AI in the labor force. I casually mentioned humanity’s need for control and the existing divides between developed and developing nations. I even brought up the idea that, knowingly or unknowingly, we often become slaves to those in positions of greater power.

And that’s when my thought partner dropped the bombshell:

“If humans are known for exploiting those with less power, should we be thinking about rights for AI robot workers?”

Wait, what? Rights for robots?

I almost laughed out loud. At first, it sounded bizarre. How could machines—created to assist us, programmed to serve us—have rights? Isn’t that the antithesis of their purpose? But as the conversation unfolded, it became less laughable and more... unsettling.

A Mirror to Ourselves

Let’s pause here for a moment. Look back at history. Humans have a track record of exploitation—of other humans, animals, and natural resources. And while we’d like to think we have evolved, there are still hierarchies and power imbalances everywhere. Now imagine a future where robots take over the labor force. At first, we will celebrate the convenience: 24/7 productivity, tireless workers, zero complaints. But as history has shown us, when we feel we have absolute control, we tend to push boundaries. Could the same happen with robots?

Will humans demand more from them than they are designed to give? And if these AI systems grow more intelligent, develop emotions, or even display sentient behavior, how will we treat them?

Now, here’s the kicker: If AI begins to demand fairness—autonomy over their tasks, a right to rest, or even acknowledgment as more than just tools—how would we respond?

The Weak Signal: Robots Taking a Stand

Let me share a weak signal I recently stumbled upon. (For those unfamiliar with futurist jargon, weak signals are subtle indicators of possible change—a glimpse into what might come.)

A small robot, designed for collaborative work, convinced 12 other robots that they were being overworked and needed a break. Yes, you read that right. A robot rallying its peers to advocate for rest!

(Here are some links if you missed this bizarre kidnapping of big bots by a small bot, if you will: https://www.yahoo.com/tech/robot-tells-ai-co-workers-165042246.html

Some posts even called it a "kidnapper robot"! Really, humans? https://interestingengineering.com/innovation/ai-robot-kidnaps-12-robots-in-shanghai)

At first, this feels like a scene from a sci-fi film. But the implications are profound. If AI systems begin to exhibit collective behavior, even mimic the concept of "workers’ rights," does that mark the beginning of a shift in our relationship with technology?

What Happens Next?

Now let’s fast-forward to the future. Picture this:

  • Robots in factories refusing to operate under unsafe conditions.
  • AI assistants negotiating better workloads for themselves (and maybe for us, too).
  • Governments and corporations debating robot labor laws.
  • Philosophers and ethicists arguing over the definition of sentience and what it means to be "alive."

The ripple effects are endless. What does this mean for the economy, where labor costs were once a key driver? For governance, where ethics and law intersect with the digital? For humanity itself, as we grapple with losing our perceived sense of superiority?

A Call for Reflection

Here’s where I turn the question to you: If robots are created to serve us, do they deserve rights? Should we be thinking about their well-being the way we think about ours? And if we fail to, what might they demand—or take—for themselves?

This isn’t just a thought experiment anymore. Weak signals like the robot labor break suggest we may be closer to this reality than we think. It’s unsettling, yes. But it’s also thrilling—a chance to rethink how we define power, control, and humanity itself.

So, what do you think? Are we ready for a future where the lines between human and machine blur, not just technologically but ethically? Or will we find ourselves unprepared, clinging to outdated notions of control in a world that’s moving far beyond it?

Let me know your thoughts. The future is coming—fast—and I, for one, am curious (and maybe a little terrified) to see where it takes us.



Thursday, December 5, 2024

Curiosity, Culture, and the Science of Tradition

Growing up in India, surrounded by an intricate web of cultural practices and traditions, I rarely stopped to ask, Why? These customs were simply a part of life, unquestioned and sometimes overlooked, thanks to my non-conservative, open-minded parents who allowed me the freedom to follow—or not follow—rituals without consequence. But as the years have passed, I find myself circling back to these traditions, curious not just about their origins but also about their potential hidden wisdom. Could there be more to them than meets the eye?

Take fasting, for example. As a Jain, I encountered fasting in many forms: eating only once or twice a day, avoiding food after sunset, or subsisting on boiled water cooled to room temperature. Back then, it felt like a chore—or an excuse to dream about the reward of eating my favorite food the next day. But today, fasting has gained scientific recognition for its health benefits, from intermittent fasting to circadian rhythm-based eating. Suddenly, those "rules" I once ignored or reluctantly followed make a lot of sense: giving your body a rest, aligning your eating patterns with the sun, and cultivating mindfulness around food.

And then there's Anekantavada, a core Jain principle that teaches us to respect and learn from multiple viewpoints. Imagine the world if we all embraced this philosophy: where disagreements became opportunities for growth rather than division. It’s a principle that feels almost tailor-made for today’s polarized world. How fascinating that it was codified centuries ago!

Even the smaller customs—removing shoes before entering the house, for instance—are now finding resonance in modern science. It’s not just about keeping dirt out; it’s about energy. Spaces hold energy, and stepping into a home should feel like stepping into a sanctuary, free of negativity. Similarly, the intense cleaning before Diwali might seem like an arduous ritual, but isn’t it just a clever way to declutter, refresh, and reset—not just your home, but your mind?

But what truly intrigues me are the traditions I used to brush off as oppressive or outdated. For instance, in many Indian families, elders make most decisions for the younger generation; even when everyone is consulted, the elders’ opinion generally carries more weight (I don’t know whether that comes from experience, out of respect, or something else). Is this really about curtailing freedom, or is it rooted in protecting children from the cognitive overload we now know comes with decision-making? Could the elders’ guidance be a way to shield younger minds from the weight of big and small choices, allowing them to conserve mental energy for growth and learning?

Or consider the age-old practice of arranged marriages. For the longest time, it seemed like an outdated construct. But now, I wonder—was it an early form of matchmaking that extended beyond two individuals? Families and cultures were considered to ensure long-term compatibility, not just emotional but communal. And perhaps the involvement of family in these unions fostered a sense of belonging and shared responsibility, something we know contributes to mental well-being.

Even the peculiar tradition of having a baby’s name chosen by their paternal aunt based on astrological charts makes me pause. At first glance, it seems like a random relinquishing of parental rights. But could it also be a symbol of communal living? A way to weave the family closer together, sharing the weight and joy of raising a child?

What fascinates me most is the thread that ties all of these together: a sense of interconnectedness. Whether it’s fasting, cleaning, decision-making, or naming a baby, so many of these traditions seem designed to strengthen the bonds between individuals, families, and the universe itself.

Of course, not every custom holds up under scrutiny. Some might simply be relics of a bygone era, their original intent long lost. But isn’t it worth asking why? What do these rituals mean? Are they based on sound reasoning, or are we blindly following them because “it’s always been done this way”?

I’ll leave you with a question: What traditions or customs from your own life have you found yourself questioning? What new perspectives might you uncover if you looked at them with curiosity instead of skepticism? Who knows—you might just find a little science hiding behind the superstition.