If you’ve ever tried building a serious AI shopping assistant, you’ve probably hit the same wall I did:
the model “helps,” but the format drifts
you refine one constraint, and it rewrites everything
alternatives disappear
comparisons turn into essays
the moment you need links, things get hand-wavy
This project is my attempt to fix that using a ledger-first mindset (MindsEye) and Algolia Agent Studio as the perception + retrieval layer.
What is MindsEye?
MindsEye is my ledger-first way of thinking about AI systems:
Every interaction is not “just chat” — it’s a trace.
Every refinement is not a reset — it’s a branch.
Every session should preserve intent evolution, not just the last message.
So instead of treating AI as a black box that spits out answers, MindsEye treats AI as a system that records how it reasons—even if the “ledger” is lightweight at first.
That’s the big idea:
Search isn’t retrieval. Search is pattern recognition under intent pressure — and that shape deserves to persist.
What I learned about search (and why Algolia is perfect here)
Most systems treat search as:
“Find me X.”
But real searching is:
“Watch how my intent evolves while I try to find X.”
When you browse products, you don’t jump straight to the final item. You pass through:
near matches
alternatives
“almost right but wrong color / wrong material”
price boundaries
comfort vs style trade-offs
Algolia already supports the real version of search:
fast retrieval
facets
alternative matches
contextual filtering
So I used Algolia as the perception engine—the system that surfaces the search space properly—then layered MindsEye on top as the structure + memory discipline.
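To make that concrete, here is a minimal sketch of what a perception-layer query can look like, assuming a "products" index and the algoliasearch v4 JavaScript client; the attribute names (category, color, price) are illustrative, not the actual index schema:

```ts
// A minimal sketch of the perception layer, assuming an Algolia "products"
// index and the algoliasearch v4 JS client. Attribute names are illustrative.
import algoliasearch from "algoliasearch";

const client = algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_API_KEY");
const index = client.initIndex("products");

async function perceive(query: string) {
  const { hits, facets } = await index.search(query, {
    hitsPerPage: 10,               // keep results scannable
    facets: ["category", "color"], // surface the shape of the search space
    filters: "price < 200",        // contextual filtering, not just matching
  });
  // hits = the direct matches; facets = the alternatives living nearby
  return { hits, facets };
}
```

The point of returning facets alongside hits is exactly the "preserve alternatives" idea: the agent can show near-matches instead of collapsing the search space to one answer.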
What I built for this challenge
A consumer-facing online shopping agent that behaves like a decision-support engine, not a chatty assistant.
The agent does 3 things aggressively well:
Finds products fast using indexed retrieval
Preserves alternatives + near-misses so refinements don’t erase the search space
Outputs in stable, predictable structures (tables/cards/grids) so users don’t fight formatting
The core problem I targeted: Output Drift
Even when retrieval is solid, most agents fail in the UI layer:
tables turn into paragraphs
the agent stops being scannable
comparisons become narrative
users have to ask: “can you put that in a table?”
That’s friction.
So I introduced Output Cards: structured “state outputs” that control formatting before the model speaks.
The state-driven approach (Ping Engine mindset)
Instead of treating every message like a fresh response, the agent runs as a simple state machine:
1) DISCOVERY_CARD (Browse / Search)
outputs a table
max 10 rows
predictable columns
preserves variety
2) DECISION_CARD (Compare / Shortlist)
outputs a comparison grid
columns = products
rows = attributes
3) CHECKOUT_CARD (Buy / Links)
outputs link-forward purchase options
retailer + price + availability + URL
never invents links
This is basically MindsEye’s “ledger-first” discipline translated into something users can actually feel: no chaos, no rewrites, no vibes.
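For flavor, here is a minimal TypeScript sketch of that state machine; every name here is mine (illustrative), not an Agent Studio API:

```ts
// A minimal sketch of the Output Card state machine. All names are
// illustrative, not Agent Studio APIs.
type CardState = "DISCOVERY_CARD" | "DECISION_CARD" | "CHECKOUT_CARD";

interface OutputCard {
  state: CardState;
  payload: unknown; // table rows, comparison grid, or purchase links
}

// Intent classification decides the state; the state decides the format.
function transition(intent: "browse" | "compare" | "buy"): CardState {
  switch (intent) {
    case "browse":  return "DISCOVERY_CARD"; // table, max 10 rows, fixed columns
    case "compare": return "DECISION_CARD";  // grid: columns = products, rows = attributes
    case "buy":     return "CHECKOUT_CARD";  // retailer + price + availability + URL
  }
}

// Ledger-first: cards are appended, never overwritten, so a refinement
// doesn't erase the search space it came from.
const ledger: OutputCard[] = [];
function emit(state: CardState, payload: unknown): OutputCard {
  const card = { state, payload };
  ledger.push(card);
  return card;
}
```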
What worked… and what didn’t (real limitation I hit)
I added client-side tools (like set_output_card) to enforce these Output Cards at runtime.
But in Algolia’s Dashboard preview, I kept getting this message:
“Client-side tools are not implemented in the Dashboard preview mode, which is only there for demonstration and debugging purposes.”
That wasn’t the tool “breaking.”
It meant the preview UI won’t execute my client-side logic.
So I adapted the system in a way that actually made it stronger:
✅ Preview mode
enforced via instruction-level state contracts
behaves like a strict state machine even without tool execution
✅ Deployed app
the exact same rules are enforced via:
client-side tools + UI renderer
structured output components (InstantSearch Chat widget)
This lets judges see the logic now, and shows that the architecture is deployable for real.
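Sketching the deployed path, this is roughly what the client-side enforcement looks like; the handler and renderer names are hypothetical, only the tool name set_output_card comes from this post:

```ts
// A minimal sketch of the deployed-app enforcement path: the client-side
// tool set_output_card hands layout control to the UI renderer, so the
// model can't drift the format. Renderer names are hypothetical.
type SetOutputCardArgs = {
  card: "DISCOVERY_CARD" | "DECISION_CARD" | "CHECKOUT_CARD";
};

function renderTable(rows: unknown[]) { /* fixed-column product table */ }
function renderGrid(grid: unknown) { /* products-as-columns comparison */ }
function renderLinks(offers: unknown[]) { /* retailer + price + availability + URL */ }

function handleSetOutputCard(args: SetOutputCardArgs, data: { rows?: unknown[]; grid?: unknown; offers?: unknown[] }) {
  switch (args.card) {
    case "DISCOVERY_CARD": return renderTable(data.rows ?? []);
    case "DECISION_CARD":  return renderGrid(data.grid);
    case "CHECKOUT_CARD":  return renderLinks(data.offers ?? []);
  }
}
```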
How people can access my agent
Here’s the Agent ID and API endpoint. These are what you use to connect via API or the InstantSearch Chat widget.
Agent ID:
65a466c4-cde7-4174-996e-1ea42648ca85
Agent API URL:
https://yqwe6h2zw0.algolia.net/agent-studio/1/agents/65a466c4-cde7-4174-996e-1ea42648ca85/completions?compatibilityMode=ai-sdk-5
And here’s the integration pattern I used with React InstantSearch Chat (as provided by Agent Studio), where tools can be wired for real deployments:
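A minimal sketch of that wiring, shown with the Vercel AI SDK v5 client (which the endpoint's compatibilityMode=ai-sdk-5 targets) rather than the InstantSearch Chat widget itself; the auth header names and the exact tool-call handling here are assumptions, not confirmed Agent Studio specifics:

```tsx
"use client";
// A minimal sketch, assuming the ai-sdk-5 compatibility mode means the
// endpoint can be driven by @ai-sdk/react's useChat. Header names and the
// client-side tool-call API are assumptions.
import { useChat } from "@ai-sdk/react";
import { DefaultChatTransport } from "ai";

const AGENT_URL =
  "https://yqwe6h2zw0.algolia.net/agent-studio/1/agents/65a466c4-cde7-4174-996e-1ea42648ca85/completions?compatibilityMode=ai-sdk-5";

export function ShoppingAgentChat() {
  const { messages, sendMessage } = useChat({
    transport: new DefaultChatTransport({
      api: AGENT_URL,
      headers: {
        "x-algolia-application-id": "YOUR_APP_ID", // assumed standard Algolia auth
        "x-algolia-api-key": "YOUR_API_KEY",
      },
    }),
    onToolCall({ toolCall }) {
      // Client-side tool: the UI enforces the Output Card, not the model.
      if (toolCall.toolName === "set_output_card") {
        // dispatch to handleSetOutputCard(...) from the sketch above
      }
    },
  });

  return (
    <div>
      {messages.map((m) => (
        <div key={m.id}>
          {m.parts.map((p, i) =>
            p.type === "text" ? <span key={i}>{p.text}</span> : null
          )}
        </div>
      ))}
      <button onClick={() => sendMessage({ text: "running shoes under $120" })}>
        Ask
      </button>
    </div>
  );
}
```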
Top comments (4)
Wow, this MindsEye Algolia Agent Studio project looks really impressive! I love how you’re approaching search as a structured, ledger-first shopping journey—it feels very forward-thinking. I wonder if integrating more AI-driven personalization could make the experience even smoother for users. From my perspective, this could be a game-changer for how people interact with search and shopping tools. Definitely excited to see where you take this next!
Thanks — appreciate that a lot 🙏
One thing I was very intentional about with this agent is that it doesn’t try to be “smarter” by being more conversational or more personalized upfront.
Instead, the focus was on structuring the decision trail itself.
In most shopping or search assistants, personalization means:
“guess the user faster and jump to a recommendation.”
MindsEye flips that a bit.
The idea is that search is already a cognitive process, not just retrieval:
users explore
reject near-matches
adjust constraints
compare trade-offs
and only later converge
So the ledger-first part here isn’t about logging for the sake of logging — it’s about preserving that evolution instead of collapsing it into a single “best answer.”
That’s why I leaned hard on:
preserving alternatives instead of overwriting them
forcing stable output structures (tables / grids / cards)
treating refinements as state transitions, not restarts
Algolia handles the perception layer really well (fast retrieval, facets, near-matches). MindsEye sits on top as the discipline layer — making sure intent doesn’t get lost just because the user asked a follow-up.
On personalization specifically: I see it as something that should emerge after the decision trail is clear, not before. Once you have a ledger of:
what was considered
what was rejected
what constraints tightened over time
…personalization becomes explainable instead of magical.
Curious how others here think about this:
Do you see more value in assistants that optimize answers early, or ones that optimize the path to a decision even if it takes an extra step?
Would love to hear different takes on that 👀
This is a really thoughtful breakdown — I love how intentional you were about capturing the decision trail rather than just jumping to “smart” recommendations. I agree that preserving alternatives and tracking state transitions can make the reasoning behind choices so much clearer, and it’s a perspective I don’t see discussed often. In my experience, giving users the space to explore and refine their decisions tends to lead to more confident outcomes, even if it adds a step or two. I’m especially intrigued by the ledger-first approach; it feels like it could make explainable personalization much more reliable. Definitely curious to see how others balance early optimization versus optimizing the full decision path!
Yo I really appreciate this — and you nailed the exact trade-off I was obsessing over.
Most “smart” shopping agents try to win by skipping steps: predict fast → recommend fast → move on.
But in real life, people don’t buy like that. They buy by eliminating. They need to see the near-misses, the almost-rights, the “this is good but not for me” options — because that’s literally how intent sharpens.
That’s why I keep calling it ledger-first. Not because I’m in love with the word “ledger” 😭 — but because once you preserve the trail, you get something most assistants can’t do:
refinements don’t erase the world
comparisons don’t turn into essays
personalization stops being magic and starts being earned
Like… if you want explainable personalization, you need the “why” history. Otherwise it’s just vibes pretending to be intelligence.
Also I’m with you: adding a step or two is worth it if it means the user ends up confident, not just “sure I guess.”
Curious though — if you were designing it, where do you think the line is?
At what point does “decision trail” become “too much friction”? And what UI pattern would you use to keep it scannable without turning it into a spreadsheet simulator? 👀