
Agent Studio — Building converting, business-aware agents with Algolia


Algolia customers will find a new beta of an AI Assistant in their dashboard, built with Agent Studio, a new AI toolkit now in beta. Many AI assistants are helpful, but they can also "hallucinate" answers. One nice thing about Agent Studio is that it avoids many hallucinations by grounding responses in your up-to-the-minute data.

For example, last month I asked it: “What’s new with Algolia?”

whats-new-with-algolia.webp

The answers it supplied were actual, recent developments.

How does the AI know about all of this? It’s retrieving information from a structured search index and augmenting its generated responses with that data, an approach appropriately called Retrieval Augmented Generation, or RAG.
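To make that loop concrete, here's a toy sketch of the retrieve-then-generate pattern in TypeScript. The in-memory index, the naive keyword matcher, and the prompt builder are all illustrative stand-ins, not Algolia APIs:

```typescript
// Toy RAG loop: retrieval grounds the prompt before any model sees it.
// Everything here is a stand-in for illustration, not an Algolia API.
type Doc = { title: string; body: string };

const index: Doc[] = [
  { title: 'Agent Studio beta', body: 'Agent Studio is now in beta.' },
  { title: 'Shipping policy', body: 'Orders ship within two business days.' },
];

// Naive keyword match standing in for hybrid keyword + vector search.
function retrieve(query: string): Doc[] {
  const terms = query.toLowerCase().split(/\s+/);
  return index.filter((doc) =>
    terms.some((t) => `${doc.title} ${doc.body}`.toLowerCase().includes(t)),
  );
}

// The grounded prompt a real pipeline would hand to the LLM.
function buildPrompt(query: string, docs: Doc[]): string {
  const context = docs.map((d) => `- ${d.body}`).join('\n');
  return `Answer using only this context:\n${context}\n\nQuestion: ${query}`;
}

const prompt = buildPrompt("What's new with Agent Studio?", retrieve('Agent Studio'));
// `prompt` now carries the retrieved facts, so the model answers from real data.
```

In a real deployment the index lives in Algolia, retrieval is hybrid search, and the prompt goes to your chosen LLM; only the shape of the loop stays the same.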

Where Algolia’s Agent Studio comes in

You might have read about RAG elsewhere and tried it yourself — it’s a genuinely fun technology to work with. But here’s what makes Algolia’s perspective unique: we’ve spent years behind the scenes perfecting every component of this workflow. Think about it — in production, a RAG pipeline has four distinct moving parts:

  1. Ingestion — Storing structured information that the LLM should know
  2. Retrieval — Evaluating records by hybrid keyword and vector similarity
  3. Ranking — Reordering potential search results based on business signals
  4. Answering — Generating responses from real, grounded knowledge

Algolia’s stack of search and discovery tools maps onto this exactly.

business-aware-RAG-pipeline.webp

The first three have been our bread and butter for years now:

  1. Ingestion — Algolia’s connectors help you ingest data from wherever you’re currently storing it into a searchable index.
  2. Retrieval — NeuralSearch is a hybrid keyword and vector search solution that understands the idea behind a query, instead of only surfacing exact matches (which is important, because it’s unlikely that the LLM is going to find exactly the right words for every query).
  3. Ranking — Our AI Ranking feature lets you customize the order of relevant search results by real business priorities, like amplifying products with higher profit margins.

Completing the pipeline

Step 4 is what we’ve been working on most recently. Algolia’s Agent Studio combines our other industry-leading search tools with the LLM of your choice to build AI assistants that are grounded, current, and aligned with business priorities. It generates responses to user queries that incorporate context from the earlier steps in the pipeline, so the tweaks you make to improve the typical user search experience on your site ripple downstream into the AI assistant.

That sounds great in theory — it looks good in practice too. Here’s a full Next.js component with real code that just works out of the box:

'use client';

import { useState } from 'react';
import { DefaultChatTransport } from 'ai';
import { useChat } from '@ai-sdk/react';
import { ArrowUpIcon } from 'lucide-react';
import { Button } from '@/components/ui/button';
import { Tooltip, TooltipContent, TooltipProvider, TooltipTrigger } from '@/components/ui/tooltip';
import { AutoResizeTextarea } from '@/components/autoresize-textarea';

export default function Page() {
  const { messages, sendMessage, status } = useChat({
    transport: new DefaultChatTransport({
      api: `https://APPLICATION_ID.algolia.net/agent-studio/1/agents/AGENT_ID/completions?stream=true&compatibilityMode=ai-sdk-4`,
      headers: {
        'x-algolia-application-id': 'APPLICATION_ID',
        'x-algolia-api-key': 'SEARCH_API_KEY',
      }
    })
  });
  const [input, setInput] = useState('');

  const handleSubmit = (e: React.FormEvent<HTMLFormElement>) => {
    e.preventDefault();
    if (input.trim()) {
      sendMessage({ text: input });
      setInput('');
    }
  }

  const handleKeyDown = (e: React.KeyboardEvent<HTMLTextAreaElement>) => {
    if (e.key === "Enter" && !e.shiftKey) {
      e.preventDefault()
      handleSubmit(e as unknown as React.FormEvent<HTMLFormElement>)
    }
  }

  return (
    <TooltipProvider>
      <main
        className={"ring-none mx-auto flex h-svh max-h-svh w-full max-w-[35rem] flex-col items-stretch border-none"}
      >
        <div className="flex-1 content-center overflow-y-auto px-6">{messages.length ? (
          <div className="my-4 flex h-fit min-h-full flex-col gap-4">
            {messages.map((message, index) => (
              <div
                key={index}
                data-role={message.role}
                className="max-w-[80%] rounded-xl px-3 py-2 text-sm data-[role=assistant]:self-start data-[role=user]:self-end data-[role=assistant]:bg-gray-100 data-[role=user]:bg-blue-500 data-[role=assistant]:text-black data-[role=user]:text-white"
              >
                {message.parts.map((part, index) =>
                  part.type === 'text' ? <span key={index}>{part.text}</span> : null,
                )}
              </div>
            ))}
          </div>
        ) : (
          <header className="m-auto flex max-w-96 flex-col gap-5 text-center">
            <h1 className="text-2xl font-semibold leading-none tracking-tight">Basic AI Chatbot Template</h1>
            <p className="text-muted-foreground text-sm">
              This is an AI chatbot app template built with <span className="text-foreground">Next.js</span>, the{" "}
              <span className="text-foreground">Vercel AI SDK</span>, and <span className="text-foreground">Vercel KV</span>.
            </p>
            <p className="text-muted-foreground text-sm">
              Connect an API Key from your provider and send a message to get started.
            </p>
          </header>
        )}</div>
        
        <form
          onSubmit={handleSubmit}
          className="border-input bg-background focus-within:ring-ring/10 relative mx-6 mb-6 flex items-center rounded-[16px] border px-3 py-1.5 pr-8 text-sm focus-within:outline-none focus-within:ring-2 focus-within:ring-offset-0"
        >
          <AutoResizeTextarea
            onKeyDown={handleKeyDown}
            onChange={v => setInput(v)}
            value={input}
            placeholder="Enter a message"
            className="placeholder:text-muted-foreground flex-1 bg-transparent focus:outline-none"
          />
          <Tooltip>
            <TooltipTrigger asChild>
              <Button
                variant="ghost"
                size="sm"
                className="absolute bottom-1 right-1 size-6 rounded-full"
                disabled={status !== 'ready'}
              >
                <ArrowUpIcon size={16} />
              </Button>
            </TooltipTrigger>
            <TooltipContent sideOffset={12}>Submit</TooltipContent>
          </Tooltip>
        </form>
      </main>
    </TooltipProvider>
  );
}

Want to try it yourself? Clone the template, run npm install, swap in your credentials, and host it wherever you’d like.

RAG, but business-aware

Most search engines aim to surface what’s relevant, but semantic similarity only scratches the surface. In an e-commerce context, for example, the goal is sales, so whatever helps a potential customer feel understood and catered to is just as relevant. In other use cases, like documentation search, the goal might be to hand users exactly what they're looking for and minimize the number of queries per session. In a production setting with real-world goals and metrics, RAG built on a generic search engine fetches what’s technically accurate, but not necessarily what’s important. With Algolia, you decide what relevance means for your application, on top of semantic similarity.

This is why it’s a huge relief to have your RAG system live right alongside your data storage and search tools in the same ecosystem. Algolia’s AI Ranking lets you inject business priorities directly into the retrieval process, skipping the brittle integrations that come with stitching together third-party tools.

What does this mean in practice? All from one platform, you can get your AI assistant to:

  • Downplay out-of-stock items in favor of in-stock ones
  • Prioritize high-margin products
  • Surface recent docs over outdated ones
  • Boost engagement-driven content over low-value content
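
As a sketch of the first bullet, Algolia supports query-time optionalFilters that boost matching records without excluding the rest. This assumes a hypothetical `inStock` boolean facet on your records (not part of this article's example schema); the actual search call is commented out because it needs real credentials:

```typescript
// Hypothetical request shape (v5 JavaScript client): prefer in-stock items
// without hiding out-of-stock ones. Assumes an `inStock` boolean facet.
const request = {
  indexName: 'products',
  query: 'superphone',
  // optionalFilters boost matching records instead of filtering others out.
  optionalFilters: ['inStock:true'],
};

// With a configured v5 client, this would run as:
// import { algoliasearch } from 'algoliasearch';
// const client = algoliasearch('APPLICATION_ID', 'SEARCH_API_KEY');
// const { results } = await client.search({ requests: [request] });
```

Because it boosts rather than filters, out-of-stock items still appear — just lower down, which is exactly the behavior you want for honest alternatives.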

After all, management wanted an AI assistant on the site to help customers and boost sales in the first place, right? So what’s the point in adding one if it isn’t aligned with those business goals? Algolia puts that all at your fingertips.

How to get your business-aware RAG workflow running today

Step 1: Index metadata

Your records should contain data that represents business signals, like recency, popularity, or profitability. These values quantify how much your business wants a user to see a particular search result — it’s how the search engine knows what’s really important. (Learn more in this blog post on how and why events drive conversions.)

A word of advice: keep this additional data lean. DIY implementations of RAG often naively shove every piece of possibly-helpful info into the AI prompt, but that just doesn’t work — LLMs are easily confused by too much information. Name the keys descriptively and stick to a consistent, minimal structure so that the LLM makes better decisions. For example, instead of this:

{
  "id": "prod_9182",
  "productInfo": "SuperPhone X - Released July 2025 - In Stock: 17 - Margin: 42% - 4.5/5 stars from 18,442 reviews",
  "description": "The SuperPhone X is the best phone in our line, designed for professionals and gamers alike. Outperforms competitors.",
  "metadata": {
    "lastUpdated": "07/21/2025 at 3:30PM",
    "tags": "smartphone,high margin,profitable,flagship,inventory17,review score 4.5, release date 2025-07"
  },
  "relatedTextDump": "See more at our ecommerce portal, includes shipping policies, warranty text, and unrelated FAQs...",
  "internalNotes": "Marketing wants to boost this aggressively, ignore low competitor pricing."
}

Try something like this:

{
  "id": "prod_9182",
  "title": "SuperPhone X",
  "body": "A high-performance smartphone released July 2025, designed for professionals and gamers.",
  "url": "<https://example.com/products/superphone-x>",
  "updatedAt": "2025-07-21T15:30:00Z",
  "stock": 17,
  "marginTier": "high",
  "popularity": 0.82,
  "rating": 4.5
}

The first example is full of cluttered, unparsed, or irrelevant information and it gives the LLM a lot of heavy lifting to do. The second example is atomized, lean, and descriptive, so the LLM has everything it needs right away.
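
As a sketch of how you might produce that lean shape programmatically, here's a hypothetical mapping step. The raw field names are illustrative; the output mirrors the record above, and the commented-out lines show how the result would be pushed with Algolia's v5 JavaScript client:

```typescript
// Hypothetical raw shape from an upstream catalog (field names are illustrative).
type RawProduct = {
  id: string;
  name: string;
  summary: string;
  url: string;
  updatedAt: string;
  unitsInStock: number;
  marginTier: 'low' | 'mid' | 'high';
  popularity: number;
  rating: number;
};

// Map a raw row into the lean, descriptive record the LLM reasons over.
function toRecord(p: RawProduct) {
  return {
    objectID: p.id, // Algolia's required unique identifier
    title: p.name,
    body: p.summary,
    url: p.url,
    updatedAt: p.updatedAt,
    stock: p.unitsInStock,
    marginTier: p.marginTier,
    popularity: p.popularity,
    rating: p.rating,
  };
}

// Pushing the mapped records with a configured v5 client would look like:
// import { algoliasearch } from 'algoliasearch';
// const client = algoliasearch('APPLICATION_ID', 'WRITE_API_KEY');
// await client.saveObjects({ indexName: 'products', objects: rows.map(toRecord) });
```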

Step 2: Specify the basis for reranking

Algolia’s AI Ranking blends business signals with semantic relevance, reshuffling the results list according to what you’ve defined as most important. NeuralSearch already takes care of quantifying semantic matches, but you need to actually tell Algolia which business signals matter. Go to your new index’s configuration tab in the Algolia dashboard and click Ranking and Sorting. This short guide will give you some tips about how to get the most out of the custom ranking options.
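
Those dashboard settings can also be applied via the API. As a sketch, assuming the numeric signals from the record example in Step 1, a v5 setSettings payload might look like this (the client call is commented out since it needs real admin credentials):

```typescript
// Custom ranking breaks ties between equally relevant results using the
// business signals indexed in Step 1 (attribute names from the record above).
const indexSettings = {
  customRanking: [
    'desc(popularity)', // most-engaged products first
    'desc(rating)',     // then highest-rated
    'desc(stock)',      // then best-stocked
  ],
};

// With a configured v5 client, this would be applied as:
// import { algoliasearch } from 'algoliasearch';
// const client = algoliasearch('APPLICATION_ID', 'ADMIN_API_KEY');
// await client.setSettings({ indexName: 'products', indexSettings });
```

Note that custom ranking attributes work best as numbers or booleans; string tiers like "high" sort alphabetically, so prefer numeric signals where you can.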

Something to remember: Algolia’s AI Ranking doesn’t act like a pushy salesman, ignoring what the customer wants and forcing upsells. It just gently prioritizes what benefits the business most, as you’ve defined it. When a user’s request conflicts with business priorities, you can tell the LLM to disclose its constraints and tell the truth, like “That item’s out of stock; here are in-stock alternatives.”

Step 3: Feed it into Agent Studio

Create an agent using the straightforward UI in the Agent Studio dashboard. You can even use predefined configurations for specific use cases if that suits you.

create-agent-screenshot.webp

You get to define the LLM’s system prompt and get as specific with it as you’d like.

agent-prompt-screenshot.webp

You can even make it speak in Pig Latin and end every conversation with dad jokes, if you’d like. We don’t recommend using this in oduction-pray.

The LLM sees those reranked results in its context window alongside your custom prompt, so every answer it produces is grounded in hard data and business-optimized. Try including specific directives to enforce citing its sources and defaulting to “I don’t know” instead of hallucinated information, like this:

- Always answer from Algolia search results.
- Cite each claim with record.url. If no relevant results, say "I can't find that" and redirect to other relevant items.
- Prefer in-stock and recent items; never hide out-of-stock—offer alternatives.

Now, you can wire this API into whatever frontend you’d like! The code sample above used the Vercel AI SDK, since it’s a very common chat UI toolkit. All we did was swap the API URL for our Algolia backend — one little switch that adds real depth and usefulness to an existing implementation.

You can get started right now

LLMs alone can’t keep up with your business — that’s why they hallucinate and spit out stale, useless information. Algolia fixes that by grounding answers in your own real data, reranking results by real business priorities, and making your assistant genuinely helpful.

With Connectors, NeuralSearch, AI Ranking, and now Agent Studio, you get a production-ready pipeline where the model retrieves, reasons, and stays aligned with what matters.

Register for a free Algolia account to start building with the Agent Studio beta today.
