
How I built an AI stylist that curates and visualizes outfits — with Algolia and genAI

I’m Oscar Meunier, a Solutions Engineer at Algolia, and recently I set out to build something a little different — an AI stylist that curates and visualizes fashion outfits automatically. 

It started with a simple question: could AI help merchandisers design on-brand curations of outfits faster, without losing creative control?

If you want to follow along, you can also watch my presentation and live demo on this topic from this year’s Algolia DevCon:

The challenge of “on brand”

If you’ve ever worked with a merchandising team, you know how much creative effort goes into maintaining a cohesive look. They’re curating category pages, pinning items, building seasonal sets — and doing it all while balancing brand rules, product visibility, and aesthetics.

These rules can be subtle (“don’t mix Adidas shoes with Nike socks”) or situational (“show swimwear in summer, jackets in winter”). They’re easy for a human to see but hard to formalize for a machine.
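Even so, some of these rules can be pinned down in code. Here is a minimal sketch of how a pairing rule like "don't mix Adidas shoes with Nike socks" might be formalized — the rule table and product fields are hypothetical, not an Algolia API:

```python
# Hypothetical brand-pairing rules: each frozenset is a pair of brands
# that should never appear in the same outfit.
INCOMPATIBLE_BRAND_PAIRS = {
    frozenset({"Adidas", "Nike"}),
}

def outfit_respects_brand_rules(items):
    """Return True if no two items in the outfit violate a pairing rule."""
    brands = [item["brand"] for item in items]
    for i in range(len(brands)):
        for j in range(i + 1, len(brands)):
            if frozenset({brands[i], brands[j]}) in INCOMPATIBLE_BRAND_PAIRS:
                return False
    return True
```

Situational rules ("swimwear in summer") could be expressed the same way with a date or season check — but "this outfit looks good" still resists this kind of formalization, which is exactly the gap.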

So, automation sounds great — until you try to define what looks good.

Why “frequently bought together” isn’t enough

A common approach is the “frequently bought together” model. It’s useful, but messy. Real-world purchases don’t always make sense together – I might buy a phone charger, a T-shirt, and some milk in one order. That doesn’t mean they belong in the same outfit.

So I decided to build something that could actually understand style.

Building the AI stylist

I realized I already had everything I needed — Algolia’s AI Search and retrieval power, plus modern generative AI models, all connected by Agent Studio.

Here’s what I built: a demo e-commerce site where I can search for an item — say, flip-flops — and instantly see outfits built around it. Each outfit includes an image showing how those items might look together: a summer outfit, a casual outfit, and so on.

I can even generate new outfits on demand. Behind the scenes, an AI agent looks at the selected item, interprets the style request, queries Algolia for relevant products, and then generates a visual image for the look.

It’s not instant — image generation currently takes about a minute — but these models are improving fast. It’s likely the slowest it’ll ever be.

How it works 

The system works in three steps:

  1. The agent constructs the search and filters based on the prompt (“summer look”, “streetwear”).

  2. Algolia retrieves matching items from the product catalog.

  3. The agent selects the best combination and, finally, generates an image of the outfit.
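The three steps above can be sketched in code. This is a toy, self-contained version — `search_catalog` stands in for a real Algolia query and the tiny in-memory catalog is made up; in the real system step 3 would also call an image-generation model:

```python
# A hypothetical three-item catalog standing in for a real product index.
CATALOG = [
    {"name": "flip-flops", "category": "footwear", "season": "summer"},
    {"name": "linen shirt", "category": "tops", "season": "summer"},
    {"name": "wool coat", "category": "outerwear", "season": "winter"},
]

def search_catalog(query, filters):
    """Stand-in for an Algolia search: match on name, then apply filters."""
    results = []
    for product in CATALOG:
        if query and query not in product["name"]:
            continue
        if any(product.get(attr) != value for attr, value in filters.items()):
            continue
        results.append(product)
    return results

def build_outfit(anchor_item, style_prompt):
    # Step 1: the agent turns the style prompt into a query + filters.
    filters = {"season": "summer"} if "summer" in style_prompt else {}
    # Step 2: retrieval returns candidate items from the catalog.
    candidates = search_catalog("", filters)
    # Step 3: the agent picks a combination (here: everything that matched,
    # minus the anchor itself) and would then generate an image of the look.
    return [anchor_item] + [c for c in candidates
                            if c["name"] != anchor_item["name"]]
```

The real selection step is where the LLM earns its keep — it reasons about which candidates actually cohere as an outfit rather than taking every match.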

You can think of it like this:

  • The agent handles the creative direction.

  • Algolia enforces the guardrails — filtering, merchandising rules, and relevance.
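To make the guardrails concrete: one common pattern is to have the agent emit structured intent and then translate it into an Algolia filter string, so the agent can only express constraints the catalog actually supports. The intent shape below is hypothetical; the output string follows Algolia's documented `attribute:value` / `AND` / `NOT` filter syntax:

```python
def intent_to_filters(intent):
    """Translate a structured intent dict into an Algolia-style filter string.

    Example intent (hypothetical shape):
        {"require": {"season": "summer"}, "exclude": {"brand": "Nike"}}
    """
    clauses = []
    for attr, value in intent.get("require", {}).items():
        clauses.append(f"{attr}:{value}")
    for attr, value in intent.get("exclude", {}).items():
        clauses.append(f"NOT {attr}:{value}")
    return " AND ".join(clauses)
```

Because the filter string is built from a fixed vocabulary of attributes, a hallucinated constraint simply fails to match anything instead of corrupting the results.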

I also used the Generative Experiences toolkit, which lets non-technical users manage prompts and preview results before publishing. It’s built right into the dashboard, reducing dependence on developers.

Algolia is the retrieval layer

Retrieval is often the hardest part of building AI experiences. At Algolia, that’s our core strength.

Our search engine is fast, understands fuzzy intent, and incorporates popularity metrics via AI Ranking — helping find not just what’s relevant but what’s desirable. Vector search combined with traditional keyword search makes it even stronger: it helps the agent expand beyond literal keyword matches to understand semantic intent. Since LLMs are verbose and expressive, they work beautifully with hybrid vector and keyword retrieval. We give customers the ability to define what’s relevant and to clean and enrich their data, while we handle latency and scale — so they can focus on building what’s best for their businesses.

In short, Algolia brings structure, performance, and relevance to the chaos of generative output.

Who this helps 

Big fashion brands may already have creative teams and high production budgets, yet they may still want to experiment to stay on the leading edge of merchandising. For brands with massive catalogs and slim margins, the economics are different: generating styled product images on demand could make a huge difference and provide an edge against big-box retailers or marketplaces.

And it’s not just fashion. The same concept could apply to:

  • Furniture: generating a room layout based on your preferences.

  • Tech: visualizing a custom setup.

  • Automotive or cosmetics: building “looks” instead of single products.

Challenges and next steps

The biggest limitation today is cost. Building this demo cost around $40–50 in image generation — not bad compared to a photo shoot, but still a factor.

Interface design is another frontier. We’re still figuring out what “good UX” looks like in generative experiences. My demo works, but I’d love to see experiments where users can click an image to add all products to cart or browse through entire looks.

Finally, context is everything. The more context we give the agent — user preferences, search history, current filters — the more personal and frictionless the experience becomes.

Final thoughts

We’re all experimenting in this space. Search experts are reinventing what “search” even means in the age of generative AI. There’s no clear roadmap — and that’s exciting. The best thing we can do is keep building, testing, and learning.

If this project sparks an idea for your own AI-powered experience, great — that’s the point. Give it a try yourself with a free account, or get in touch with us to be an early user of Agent Studio.
