Thursday, July 17, 2025

Sustainability Should Not Spill

There are few things more disheartening than watching your lunch hit the pavement. Especially when it is not your fault. Especially when you are hungry, short on time, and really looking forward to those roasted veggies with perfectly spicy hot salsa, guac, and sour cream, all bursting with flavor from Chipotle.

The first time it happened, I brushed it off. Maybe the bag had a tear. Maybe it was a fluke. But when the same thing happened again—and then a third time—I could no longer convince myself this was a one-off. The pattern was clear: Chipotle’s new “sustainable” paper bags and the associated paper bowl containers were failing. And they were failing fast.

For context, I live less than a mile from my local Chipotle. A short five-minute drive. No hills. No potholes. Nothing dramatic. Yet every single time, that brown paper bag came back soggy at the bottom, stained and weakened by the heat and moisture trapped inside. And it was not just the bag—it was the paper container inside it. The box that held the burrito bowl was equally unfit for the job, especially when it came to holding hot salsa or just carrying the weight of the meal. Together, they formed a perfect storm of seepage and failure.

If it were just the bag giving way, I might still have salvaged the meal. But when the container itself starts soaking through and giving up, there is no coming back. The bag collapses, the contents spill—and suddenly, dinner becomes disappointment.

I have had this food spill in my garage. Once, on my apartment walkway. I am lucky it has not happened inside the car. But each time, the aftermath is the same: wasted food, wasted money, wasted time, and a growing sense of frustration with a brand that claims to champion sustainability. And do not even get me started on the cleanup, done while absolutely starving, and hangry!

Now, I am someone who believes deeply in sustainable design. I just recently founded a repair enablement platform, FixTogether. I have spent time in manufacturing, where packaging goes through rigorous tests—compression, drop, humidity. If you are going to label something “sustainable,” should it not also be functional? Should it not consider the real-life journey of a takeout order—from kitchen counter to customer’s hand?

Because here is the thing: sustainability is not just a checkbox on a product label. It is a system. And when that system fails, it creates more waste than it saves. A bag that breaks and spills food is not sustainable—it is performative. A box that cannot hold its contents without leaking? That is not innovation. That is oversight.

It makes me wonder: are companies testing these “green” solutions in real-life conditions? Are they pressure-testing them not just in climate-controlled labs but in warm kitchens, humid cars, and short drives that somehow mimic a small monsoon inside a brown bag? Are they weight testing them at all?

When I worked in manufacturing, we ran packaging through every scenario we could imagine. We dropped it. We shook it. We heat-tested, weight-tested, moisture-tested. Why? Because packaging is not a formality—it is a promise. A promise that the product inside will reach you whole.

Food, arguably, carries even more sensitivity. It is personal. It is comfort. It is need. And when packaging fails—not just once but repeatedly—it is not just a bad design. It is a breach of trust.

Which brings me to the question I cannot stop thinking about:
Who is sustainability really serving here—people or marketing?

Because when sustainable packaging leads to more food being wasted, more customers left scrambling, more moments of frustration rather than ease... is it really sustainable? Or is it just a feel-good label slapped on something no one bothered to stress-test?

And perhaps more importantly:
Are these companies actually listening to customer feedback? Or have they created the illusion of listening—surveys, help chatbots, PR statements—without the will to act?

True sustainability is not just about biodegradable materials. It is about systems thinking. Collaboration. Co-designing with the user in mind. It is asking: does this work for the customer, for the environment, and for the economics of waste?

Right now, it feels like Chipotle—and perhaps others—are skipping that last mile of thoughtfulness. And that last mile, quite literally, is where the bag breaks.

I get it—change is hard. And sustainability is messy. There is no perfect solution. But perfection is not what customers are asking for. We are asking for awareness. For responsiveness. For products that consider the entire lifecycle—from the manufacturing floor to the apartment floor where someone might be picking up spilled rice and guacamole, wondering if dinner is still salvageable.

If you claim to care about the planet, then you must care about people too. Because the two are not separate. They are deeply intertwined. Real sustainability holds both in balance: the ecological and the emotional, the systems and the stories.

To any company listening: your customer is not just a data point. They are a person standing in their garage, burrito bowl spilled, hungry, tired, and feeling let down by a decision you made in the name of “green.”

And to every reader who has had a similar moment—maybe with a compostable fork that snapped mid-meal, a compostable straw that did not survive two sips of your drink, or a recycled bottle that leaked in your bag—you are not alone. You are not too picky. You are not the exception.

You are the reality check that brands need.

So let us ask the harder questions. Let us not be pacified by the word “sustainable” until it is backed by design that works. Let us speak up—kindly, clearly, persistently—until companies realize that if sustainability is going to stick, it should not spill.






Sunday, July 13, 2025

Seats of Empathy

It was not the delayed flight that disappointed me most, or the irony that the snacks—an entire box of Pringles, surprisingly generous—were among the best I had seen on a U.S. domestic airline.

It was something deeper. A fracture that went unnoticed by most, but impossible for me to ignore.

I had not flown United in nearly a decade. American Airlines had earned my loyalty over the years—with preferred status, seamless rebookings, and, on international partners like Qatar, small gestures that made me feel remembered. Valued. Seen.

But this flight with United reminded me how quickly trust can unravel when a brand’s culture is not lived out by the very people who carry it forward.


Before I even reached the airport, United’s app was refreshingly transparent: our flight would be delayed by thirty minutes due to the crew needing mandatory rest. A reasonable delay, and one I appreciated being informed about in advance. That level of proactive communication gave me hope—perhaps things had changed since my last flight with them.

But what happened at the gate revealed something else entirely.

Five minutes before boarding, a new delay was announced. The flight attendants, it turns out, were not scheduled to be picked up from their hotel until after our original departure time. A logistical oversight that puzzled me: how could an airline miss such a basic detail? Do these scheduling gaps happen often? More importantly, who holds the accountability?

I watched time stretch and uncertainty thicken. Another delay followed—this time due to weather. What began as a short delay ballooned to over two and a half hours. I would miss my meeting. My day’s purpose was gone.

And yet, what struck me most was not the operational chaos. It was what happened next.

The plane finally boarded. I noticed several empty seats in the Economy Plus section. Some passengers—perhaps hoping for a small reprieve after hours of delay—had quietly moved forward, easing themselves into those unused spots. And then, the announcement came. Not as a welcome. Not as an accommodation. But as a warning, a correction.

“If you are seated beyond Row 21, please return to your assigned seat. Upgrades to the front are available for purchase.”

It was not just an upsell. It was a warning. A subtle but sharp reminder: empathy was not part of this equation.

That moment? That was the culture speaking. Loud and clear.


I understand business models. I understand incentives. But I also understand people.

That announcement—on the heels of a frustrating series of events—landed like a slap. It told passengers that even after we had endured delays, missed connections, and a clear breakdown in scheduling communication, we were still being asked to pay more. No empathy. No acknowledgment. Just a script. Just a quota.

The airline might argue it was policy. But what is policy without wisdom?

Had the crew instead invited passengers with the tightest connections to move forward—to offer even the smallest chance at reclaiming lost time—it would have transformed the tone of the entire experience. Even if the delay was out of their hands, the empathy would not have been.

That is the moment when culture shows itself. Not in the livery, not in the lounge, but in the quiet, consequential decisions frontline employees make under stress.


As a strategist, I could not help but reflect.

There is a profound disconnect when your people are not aligned with your brand’s values and empowered to live them. When KPIs reward the wrong behaviors, you are not just losing revenue opportunities—you are eroding trust. Alienating loyalty. Turning passengers into skeptics.

This is not just about airlines. This is a mirror for every business leader.

Are your metrics inadvertently encouraging short-term thinking over long-term brand affinity? Are your employees equipped—and trusted—to make decisions that reflect your company’s deeper promise?

Culture is not a plaque on the wall. It is a decision made at Row 21, seat by seat.


So here is the question I leave you with:

If your frontline employee had to choose between earning a few dollars through a policy-driven upsell or saving the trust of a customer through a moment of empathy—which would they choose?

And more importantly—what have you trained them to value?

Because sometimes, the real upgrade a customer is seeking is not a better seat. It is a better experience. A brand that remembers why people fly in the first place: to get somewhere that matters.

On time. With care. And just enough humanity to feel like we are more than just a boarding group.




Sunday, March 2, 2025

The Future of LLMs: A Commodity, a Platform, or a Locked Ecosystem?

There was a time when hardware was everything. The processor, the storage, the power—it was all about the physical components. But over time, hardware became a commodity, and the real intelligence moved to software—the brain that made everything work.

I believe LLMs (Large Language Models) are heading in the same direction.

Today, LLMs are impressive, but above-average performance is no longer enough—everyone is doing that. Soon, just like hardware in smart buildings, LLMs will be necessary but not the differentiator. The real power will lie not in the model itself, but in how we use and move our data between models.


Imagine This: Seamless LLM Portability

What if, instead of training a new LLM from scratch every time you switch, you could export all your training, memory, and preferences from one model and load it into another?

  • Think of it like exporting browser bookmarks from Chrome to Edge—simple, quick, and frictionless.
  • Or how smart building automation isn’t about having the best sensors, but the best software to control them.

If someone solves this seamless transfer problem, it would be a game-changer. Suddenly, I, as the user, would have full control over which LLM I want to use. It would be a free market of AI models, where I decide which one serves me best—not the other way around.
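To make the idea concrete, here is a minimal sketch of what a provider-neutral “export your AI memory” bundle could look like. Everything here is an assumption for illustration—the `PortableMemory` class, its fields, and the whole format are hypothetical; no vendor exposes anything like this today.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class PortableMemory:
    """Hypothetical provider-neutral bundle of user context.

    Illustrates what "exporting your training, memory, and
    preferences" from one LLM to another could mean in practice.
    """
    user_id: str
    preferences: dict = field(default_factory=dict)   # tone, verbosity, etc.
    facts: list = field(default_factory=list)         # remembered statements
    conversation_summaries: list = field(default_factory=list)

    def to_json(self) -> str:
        # Serialize to a plain, vendor-agnostic format.
        return json.dumps(asdict(self), indent=2)

    @classmethod
    def from_json(cls, raw: str) -> "PortableMemory":
        # Any other model could reconstruct the same context from JSON.
        return cls(**json.loads(raw))

# Export from "model A", import into "model B" -- like moving bookmarks.
exported = PortableMemory(
    user_id="u-123",
    preferences={"tone": "concise"},
    facts=["User is vegetarian"],
).to_json()

imported = PortableMemory.from_json(exported)
print(imported.preferences["tone"])
```

The hard part, of course, is not the serialization—it is getting competing vendors to agree on what the fields mean.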

But here’s the challenge: Why would companies allow this?


The Moat Problem: Will AI Stay Closed or Go Open?

Right now, the biggest LLM companies—OpenAI, Anthropic, DeepSeek—are building moats, just like Apple did with its App Store and music ecosystem. They don’t want you to leave. They don’t want you to take your trained data elsewhere.

Why? Because data is the real moat. If I allow my users to transfer their AI memories and training to a competitor, I lose my stickiness. I lose my power. I lose my users.

So, will AI follow the Apple model (closed ecosystem) or the Android model (open, interchangeable, user-centric)?

Right now, it’s looking more like Apple—companies are keeping their data walled off, because why would they give up control?

But here’s the twist—eventually, they may not have a choice.


A Future Where AI Models Become Interchangeable?

Once the market gets saturated, and there are too many competing LLMs, the biggest differentiator will no longer be the models themselves. It will be how well they work together.

At that point, the pivot to interoperability will be inevitable. Some LLM company—or maybe an entirely new startup—could rise up as the AI platform that enables seamless training across all models.

  • Think of what Android did to Apple. Instead of one locked system, they enabled an open ecosystem where users had freedom of choice.
  • Think of what API-first companies did to traditional software. They created layers that made everything compatible, and suddenly, closed ecosystems lost their advantage.

Could this happen with AI? Maybe not yet—but in a few years, as competition increases, it might be the only way forward.


What About AI Agents? Are They Moving Towards Openness or Isolation?

I’ll admit—I don’t know enough about AI agents to say for sure. But I do wonder:

  • Are they being developed to connect models and enable interoperability?
  • Or are they reinforcing more isolation, making it even harder to move from one LLM to another?

Poe (Quora’s AI marketplace) kind of does this by offering multiple models under one roof. But as far as I know, it doesn’t allow seamless data exchange between them—it just acts as a layer that gives memory to different LLMs.
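The “layer that gives memory to different LLMs” idea can be sketched as a toy router: several backends behind one interface, with a shared memory that the router—not the models—maintains. This is purely illustrative; the class, its behavior, and the way real marketplaces like Poe work internally are all assumptions on my part.

```python
class SharedMemoryRouter:
    """Toy multi-model router: the memory lives in the layer,
    so every backend sees the same user context. Hypothetical."""

    def __init__(self, backends):
        self.backends = backends   # name -> callable(prompt) -> str
        self.memory = []           # shared across all backends

    def ask(self, model_name, prompt):
        # Prepend the shared memory so any backend gets the same context.
        context = " | ".join(self.memory)
        reply = self.backends[model_name](f"[context: {context}] {prompt}")
        self.memory.append(prompt)  # remember the exchange in the layer
        return reply

# Stand-in "models" -- in reality these would be API calls.
router = SharedMemoryRouter({
    "model_a": lambda p: f"A says: {p}",
    "model_b": lambda p: f"B says: {p}",
})
router.ask("model_a", "my name is Sam")
print(router.ask("model_b", "what is my name?"))
```

Note what this does not give you: the memory stays inside the router’s walls, which is exactly the walled-garden question above.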

So, are we headed towards more walled gardens or an open AI economy?


Dreaming of the Wild Possibilities

Maybe I’m naïve. Maybe a smart AI engineer would think I’m nuts for even suggesting this.

But if we don’t imagine new possibilities, we will never disrupt.

The future is always built by those who dare to think differently. And if seamless, user-controlled AI memory becomes reality, hopefully, you’re the one who builds it before your competition does.

What do you think—are we moving towards LLM portability, or will the walled gardens stay? 🚀



Friday, January 31, 2025

Bias, Disruption, and the Speed of Change: Are We Really That Surprised?



The world is having a collective "OMG, AI has bias?!" moment. And honestly, I am sitting here wondering: are we really that shocked?

We have known this for a while. AI models inherit the biases of their developers, the data they are trained on, and the policies governing their use (and we haven't even talked about what happens when governments impose even more definitive policies and boundaries on AI—I think they call it AI ethics, risk, and governance, if I am not mistaken). We saw this when Google first attempted its AI models and the controversy around how they handled image recognition. Ask AI to generate an image of a CEO, and chances are you will see white men before you see a black or brown person—or even a white woman. Why? Because it reflects the biased data it was trained on, which in turn mirrors our own societal biases.

Now, people are surprised that DeepSeek, an AI model trained in a particular country, avoids discussing certain “sensitive” topics. Why is that surprising? It was designed that way—whether explicitly, through the unconscious (or conscious) decisions of its developers, or through gag orders on certain topics dictated by local laws (and I am sure every country has topics it does not want aired out in the open). Just like humans, AI mirrors what it has been exposed to.

The irony? We humans also carry inherent biases, but we conveniently ignore them while being quick to point fingers at machines. It’s as if we expect AI to be neutral while we, the ones building it, are anything but. So, is AI biased? Yes. Are we? Also yes.

Unless AI models are built on the principles of blockchain transparency—which, let’s be honest, only a select few technical experts truly understand—bias will always creep in. And even then, there’s still human influence shaping what goes into the so-called “unbiased” blockchain ledger.

But bias isn’t the only thing accelerating in AI. The pace of disruption is also reaching new speeds.


The Disruption Curve Just Got Steeper

If you’re in foresight, you already know: big corporations eventually get disrupted by smaller, faster, risk-taking startups. It’s a classic cycle. Big companies get comfortable, slow down, and assume their dominance will last forever—until someone comes along, takes the risk they weren’t willing to, and flips the industry upside down.

What’s fascinating now is how fast it’s happening.

Take, for example, the Sam Altman moment that’s making waves. Someone asked him what would happen if another company could build AI models at OpenAI’s scale, and apparently, he laughed it off. His response? "Good luck trying."

Well… someone did try.

And it shocked the industry.

But should it have? Was no one in tech aware that someone was working on something like this? Or did they know but, like most big corporations, let their ego convince them they were untouchable? The same old pattern:

  • They’ll never catch up to us.
  • They can’t move as fast as we do.
  • We are years ahead of them.

Until they do catch up. Until they move even faster. Until, suddenly, the world wakes up to the realization that disruption doesn’t just happen—it happens at breakneck speed.

Now, the same thing is happening in AI. The barriers to entry are lower than ever, and people are finally realizing that technological dominance isn’t permanent. It never was. That’s why it is important to stay agile and keep changing, regardless of how big or small you are.


Did We See This Coming?

Funny enough, I had a recent conversation about the massive inefficiencies and energy consumption in AI—and for what? It isn’t even that good yet! But a baby born into this world doesn’t start running in a day, or even in six months; it crawls, then walks, then runs. I am optimistic AI will get better and the costs will absolutely come down—look at how solar panels have dropped in cost and become so much more normalized now than in, say, 2008. Along those lines, I was speculating about a future where AI queries wouldn’t need to be processed individually—where prompts could be streamlined and shared to optimize resources.
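The “streamlined and shared” intuition can be illustrated with a toy prompt cache: identical (normalized) prompts are answered once and the answer is reused, rather than burning inference compute on every duplicate query. This is my own sketch of the general caching idea—I am not claiming this is how DeepSeek or any production LLM service actually works.

```python
import hashlib

class PromptCache:
    """Toy prompt deduplication: answer each distinct prompt once."""

    def __init__(self, model_fn):
        self.model_fn = model_fn  # the expensive call we want to avoid
        self.store = {}
        self.hits = 0

    def _key(self, prompt: str) -> str:
        # Normalize whitespace and case so trivial variants share a key.
        normalized = " ".join(prompt.lower().split())
        return hashlib.sha256(normalized.encode()).hexdigest()

    def ask(self, prompt: str) -> str:
        key = self._key(prompt)
        if key in self.store:
            self.hits += 1            # served from cache: near-zero cost
        else:
            self.store[key] = self.model_fn(prompt)
        return self.store[key]

# Pretend model -- in reality this would be a costly inference call.
cache = PromptCache(lambda p: f"answer to: {p}")
cache.ask("What is the capital of France?")
cache.ask("what is  the capital of FRANCE?")  # same key after normalization
print(cache.hits)  # → 1
```

Real systems would need semantic matching rather than exact hashing, but even this crude version shows where the duplicated compute goes.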

Then boom, DeepSeek emerges, and now I am wondering—did I accidentally predict a key element of their model? I am not saying I should be proud, and I am not even sure it is based on those principles, but as a foresight practitioner, I’m definitely getting better at mapping the future. (More on that in another blog!)


Are Big Tech Companies Blinded by Ego?

This brings me back to ego.

How often do industry leaders dismiss new challengers simply because they believe they are untouchable? How many times do we see legacy companies ignore signals until it’s too late?

  • Kodak ignored digital cameras.
  • Nokia dismissed smartphones.
  • Blockbuster laughed at Netflix.
  • OpenAI laughed at the idea of competition… until someone proved them wrong.

The pattern repeats, and yet, every time, people act shocked when disruption happens.


Where Do We Go From Here?

This moment should serve as a wake-up call for anyone still underestimating how fast AI is evolving. Bias isn’t new. Disruption isn’t new. What’s new is the speed.

So now the question is:

  • Who’s next?
  • Who is ignoring signals right now, thinking they are invincible?
  • Which industry is about to be flipped upside down?

And the ultimate question:

Are we truly paying attention? Or are we just waiting for the next shock to hit?


Final Thought: Have You Tested ChatGPT vs. Deepseek?

I’d love to hear your thoughts—have you compared ChatGPT with DeepSeek yet? What differences stand out? And more importantly, do you think OpenAI’s dominance is about to get disrupted faster than we think? Will they and other tech companies try to create obstacles to prevent DeepSeek from taking over and having its own ChatGPT moment, or will DeepSeek show its resilience and overtake them quickly?

Let’s talk—because if there’s one thing I know for sure, it’s that the future never waits.