
The next commerce UI is no UI: why shopping should start before you reach for your phone

…and how we’re building the last mile of it at Mujo AI.

We still treat the smartphone like a magic wand. You notice you’re down to two decent plates. Or your run ends with the same cracked bottle you’ve been tolerating for months. The ritual starts: unlock, search, scroll, compare, read reviews, add to cart, get distracted, abandon.

Most of the time, it’s not that you don’t want the thing. It’s that the system makes you work for it.
This isn’t a “users are lazy” problem. It’s a system-design problem. Our tools sit next to our lives instead of living inside them.

LLMs didn’t fix shopping. They just moved the friction.

Open any chat model and type:

“Pick a TV for me.”

You’ll get a plausible answer… that quietly assumes a bunch of things it doesn’t know:

  • your room size and viewing distance,

  • how much glare you get during the day,

  • what ports your existing gear needs,

  • whether you hate motion smoothing and soap-opera effect,

  • if you care more about sports than movies.

You can prompt forever:

  • “Add constraint X.”

  • “Make it cheaper.”

  • “I already have a Sonos setup.”

Or the system could just know you.

Two big gaps keep us stuck in “phone + search + tabs” land:

  1. Sensing. Devices don’t really see what we see. They don’t know that your 2018 foot massager is dying, that your kid has outgrown their car seat, or that your only frying pan is permanently warped.

  2. Actuation. Even when something knows, it can’t act. You still get a search results page, not a decision. You still have to compare, filter, and format the choice into something you trust.

LLMs helped with language. They didn’t change the surrounding system.


We’re already testing the new interface — it just looks awkward

The next interface for commerce won’t be “a better app”.
It will be ambient systems: small devices plus models that see, listen and suggest in the background.

You can see early prototypes already:

  • AI pins that promise “AI on your lapel” instead of in your hand — and then struggle with latency, reliability and basic UX.

  • Smart glasses that quietly add a camera and multimodal AI, so they can recognize objects and answer questions about what you’re looking at while you walk.

  • Headsets and wearables that are starting to treat “what you see and do all day” as a primary signal, not an afterthought.

  • On the pure software side, shopping assistants inside marketplaces that answer “what’s a good option for X?” instead of making you type keyword puzzles.

And when third-party agents started automating shopping in the browser “as a human”, some marketplaces moved fast to shut them down.

The direction of travel is obvious:

  • less screen, more agent

  • less typing, more context

  • fewer “sessions”, more continuous understanding

We’re just still in the ugly-prototype era.

What ambient commerce actually has to do

If you strip the hype away, an ambient commerce system has four jobs:

  1. Observe your environment — with consent.
    Glasses, pins, watches, home sensors: anything that can notice objects, their condition, location, and usage patterns.

  2. Infer your preferences.
    Your real price bands. Brands you avoid. Materials you won’t tolerate. Constraints like “dishwasher-safe, no plastic”, “carry-on size only”, “works in a small apartment”.

  3. Forecast your needs.
    Replacement cycles, seasonality, upcoming events, life changes. Not just “you bought a tent, here’s more tents” — but “you bought a tent, your calendar shows a trip, your shoes are worn out, here’s what’s missing”.

  4. Compose a decision, not a results page.
    Instead of “here are 812 water bottles”, it should say:

    “You’re running with a cracked bottle. Here are three that match your style, fit your usual budget, clip to your bag, and don’t retain smell. Here’s the quick why for each.”

You glance. You nod. Done.

That’s ambient algorithms with taste.
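
To make those four jobs concrete, here is a minimal TypeScript sketch of how such a loop could be wired together. Every name in it (Observation, PreferenceProfile, composeDecision and so on) is a hypothetical illustration for this post, not an actual device or Mujo AI API.

```typescript
// Hypothetical shapes for an ambient-commerce loop (illustration only).

interface Observation {
  object: string;                         // e.g. "water bottle"
  condition: "ok" | "worn" | "broken";
  usagePerWeek: number;
}

interface PreferenceProfile {
  priceBand: { min: number; max: number };
  avoidBrands: string[];
  hardConstraints: string[];              // e.g. "no plastic", "clips to a bag"
}

interface Need {
  object: string;
  reason: string;                         // why a replacement seems due
}

interface DecisionCard {
  need: Need;
  options: { title: string; price: number; why: string }[]; // three at most, each with a short "why"
}

// 1. Observe: sensors turn raw signals into observations (stubbed here).
function observe(): Observation[] {
  return [{ object: "water bottle", condition: "broken", usagePerWeek: 3 }];
}

// 2 + 3. Infer and forecast: things that are worn or broken but still in use become needs.
function forecastNeeds(observations: Observation[]): Need[] {
  return observations
    .filter(o => o.condition !== "ok" && o.usagePerWeek > 0)
    .map(o => ({ object: o.object, reason: `${o.object} is ${o.condition} and used every week` }));
}

// 4. Compose: a short, explained decision card instead of a ranked results page.
function composeDecision(need: Need, prefs: PreferenceProfile): DecisionCard {
  return {
    need,
    options: [
      { title: "Option A", price: prefs.priceBand.max, why: "meets every hard constraint, matches your usual brands" },
      { title: "Option B", price: prefs.priceBand.min, why: "cheapest option that still fits the constraints" },
    ],
  };
}

const prefs: PreferenceProfile = { priceBand: { min: 15, max: 40 }, avoidBrands: [], hardConstraints: ["no plastic"] };
forecastNeeds(observe()).forEach(need => console.log(composeDecision(need, prefs)));
```

The point is the shape: observations plus a preference profile go in, and a short, explained decision card comes out, not a results page.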


Why this matters for commerce (beyond “cool gadgets”)

There are entire categories you’ll never search for because you don’t even know they exist — or you only remember them when it’s already annoying.

  • If the system sees you run three times a week, carry a bag with a carabiner, and hate bottles that smell, it can propose a better bottle before you articulate the problem.

  • If it infers wear and tear from how often you use that foot massager, it can flag the upgrade model that actually solves your pain points.

  • If it notices you only have two intact plates and a dinner on the calendar, it can nudge you with a short list that fits your kitchen, dishwasher, and taste.

The conversion uplift doesn’t come from louder ads or bigger hero banners.

It comes from timing + relevance + almost zero effort.

But even if all this sensing and inference works, there’s a boring, brutal problem left.


The last mile is not search. It’s creative.

Even if an ambient agent knows exactly what you need, something still has to persuade you:

  • visuals that match your taste and constraints,

  • copy that answers your objections instead of repeating the spec sheet,

  • formatting that fits marketplace rules and passes AI/computer-vision checks.

That’s the “last mile” we’re focused on at Mujo AI:

taking a product signal and generating a complete, on-brand listing — gallery + copy — that’s actually ready to ship.

Not:

  • a random pretty picture,

  • or text that ignores layout,

  • or a template that breaks on mobile.

A proper listing is a mini-funnel:

Hero → Benefits → Proof / Comparison → Lifestyle → Variants,
with titles and bullets that fit, read easily on mobile, obey rules, and don’t hallucinate.

Marketplace algorithms and shoppers both respond to that structure, whether they consciously notice it or not.
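
As a rough illustration of that structure (the type names below are invented for this post, not Mujo AI’s actual schema), the mini-funnel can be treated as typed data with a simple completeness check rather than a loose pile of images:

```typescript
// Hypothetical shape of a listing treated as a mini-funnel (illustration only).

type SlideRole = "hero" | "benefits" | "proof" | "comparison" | "lifestyle" | "variants";

interface Slide {
  role: SlideRole;
  imageUrl: string;
  overlayText?: string;   // must respect safe areas and text-on-image contrast rules
}

interface ListingCopy {
  title: string;          // marketplace length limits apply
  bullets: string[];      // short enough to scan on mobile
  description: string;
}

interface Listing {
  sku: string;
  copy: ListingCopy;
  gallery: Slide[];       // ordered: hero first, variants last
}

// A trivial structural check: the funnel only counts as complete if the
// gallery opens with a hero and covers each stage at least once.
function isCompleteFunnel(listing: Listing): boolean {
  const roles = new Set(listing.gallery.map(s => s.role));
  const core: SlideRole[] = ["hero", "benefits", "lifestyle", "variants"];
  const hasProofOrComparison = roles.has("proof") || roles.has("comparison");
  return listing.gallery[0]?.role === "hero" && hasProofOrComparison && core.every(r => roles.has(r));
}
```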

A few concrete vignettes (you’ll recognize these)

  • Running shoes: Your watch sees your mileage creeping up and your gait data shows mild over-pronation, so instead of 200 models the system serves a 3-option card tuned to your distance, gait and price band.

  • Home office kit: Your laptop camera has watched your neck angle all day and your watch flags wrist strain, so it assembles a small ergonomic bundle—stand, mouse, light bar—already laid out in a listing that explains what actually gets better for your body.

  • Kids’ car seat: Photos and height logs show your kid has outgrown their seat and local rules say it’s time to change, so you get a short, compliant comparison of three models that fit your car, budget and safety preferences.

In each vignette, the creative (images + copy) does the last bit of work your brain still needs: “Yes, this is for me.”


Where Mujo fits in (and why we started “from the end”)

We began with the output, not the sensors:

From one product photo → a complete listing (gallery + copy), ready for Amazon/Shopify/Etsy.
No prompt marathons. No stitching tools. No guessing.

Why start here? Because the minute ambient systems exist, they’ll need a reliable way to compose persuasive, compliant, on-brand creatives at scale. That’s a harder problem than it sounds:

  • Copy ↔ layout fit: Headlines that actually fit; bullets that scan on mobile.

  • Visual legibility: Text-on-image contrast and safe areas that pass AI/computer-vision checks.

  • Variant lock: Strawberry vs. peach stays identical in angle, light and geometry; only the label and color change.

  • Exports that “download & post”: 1:1, 4:5, A+ modules, filenames, ZIP structure—all done.

That’s the engine we’re building.
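
To give a flavour of the export side, here is an illustrative sketch of what a “download & post” bundle boils down to; the field names, ratios and filename pattern are assumptions made up for this example, not Mujo AI’s real export format.

```typescript
// Illustrative export spec for a "download & post" bundle (not the real format).

interface ExportFormat {
  name: string;                 // e.g. "main-gallery"
  ratio: "1:1" | "4:5";
  widthPx: number;
}

interface ExportSpec {
  sku: string;
  formats: ExportFormat[];
  includeAPlusModules: boolean; // whether to also emit A+ content blocks
  filenamePattern: string;      // e.g. "{sku}_{slot}_{ratio}.jpg"
}

// Expand the spec into the flat file list that would land inside the ZIP.
function plannedFiles(spec: ExportSpec, slots: string[]): string[] {
  return spec.formats.flatMap(format =>
    slots.map(slot =>
      spec.filenamePattern
        .replace("{sku}", spec.sku)
        .replace("{slot}", slot)
        .replace("{ratio}", format.ratio.replace(":", "x"))
    )
  );
}

const spec: ExportSpec = {
  sku: "BOTTLE-01",
  formats: [
    { name: "main-gallery", ratio: "1:1", widthPx: 2000 },
    { name: "story", ratio: "4:5", widthPx: 1600 },
  ],
  includeAPlusModules: true,
  filenamePattern: "{sku}_{slot}_{ratio}.jpg",
};

console.log(plannedFiles(spec, ["hero", "benefits", "comparison"]));
// -> ["BOTTLE-01_hero_1x1.jpg", "BOTTLE-01_benefits_1x1.jpg", ...]
```

The useful property is that the spec is declarative: once the structure is pinned down, “all done” just means the engine fills it in for every SKU.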


How Mujo AI works today

  • Understand the product & buyer via Agent: detect use cases, audiences, and the real benefits behind the specs.

  • Write titles, bullets and descriptions that fit: generate and refine marketplace-compliant copy in a structured copy editor, tuned for length, clarity and platform guidelines.

  • Generate the gallery as a funnel in seconds: hero, benefits, comparison, lifestyle and more — not just a random photo dump.

  • Edit in the Mujo Design Editor: tweak layouts, swap scenes and adjust copy in an e-commerce-first, drag-and-drop editor where everything stays layered and reusable.

  • Reuse at scale with Bulk: save a project as a template and apply it to dozens of SKUs — Mujo AI rewrites the copy and regenerates scenes per product while keeping structure, fonts, colors and tone of voice (sketched roughly after this list).

  • Export: download marketplace-ready sets with the right ratios, formats and file structure, or keep iterating inside Mujo AI.

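A rough sketch of what the Bulk step implies structurally, assuming hypothetical names and assuming a template fixes layout, fonts, colors and tone of voice while copy and scenes are regenerated per product:

```typescript
// Illustrative sketch of template reuse across SKUs (not Mujo AI's actual API).

interface Template {
  layoutId: string;       // slide structure, fonts and colors stay fixed
  toneOfVoice: string;    // e.g. "playful", "clinical"
}

interface Product {
  sku: string;
  name: string;
  photoUrl: string;
}

interface GeneratedListing {
  sku: string;
  layoutId: string;       // inherited from the template
  title: string;          // rewritten per product
  sceneBrief: string;     // what the regenerated scene should show
}

// Apply one template to many SKUs: structure is shared, content is per product.
function applyTemplate(template: Template, products: Product[]): GeneratedListing[] {
  return products.map(p => ({
    sku: p.sku,
    layoutId: template.layoutId,
    title: `${p.name} (copy rewritten in a ${template.toneOfVoice} tone)`, // placeholder for generated copy
    sceneBrief: `regenerate scenes for ${p.name} from photo ${p.photoUrl}`,
  }));
}
```
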
And yes, on the roadmap (and already partly in testing): richer multi-photo input, deeper brand kits, smarter Bulk flows for collections and bundles, team roles and permissions, and social-first exports.

https://mujoai.com


posted to Mujo AI

Comments

  1. Looking ahead, this feels like a big leap for commerce UX. I wonder if there are regions or user segments where this will lag (older users, privacy-sensitive segments). Thanks for sharing the prototype direction and for pulling the curtain back on the ‘last mile’ work.

  2. Great read. Love the idea that real commerce begins long before a screen is involved. The sensing–actuation gap is such an under-discussed bottleneck.

     1. Yeah, that point hit me too. We talk so much about UI polish, but barely about the gap between noticing a need and acting on it.