
Our secret recipe for speed: From search-as-you-type to AI search


Speed has always been part of Algolia’s DNA. From the moment we started building our engine, we made a commitment: every keystroke, every query, and every result should feel instantaneous. That promise of “search-as-you-type”—results updating with every character—wasn’t just a nice UX trick. It was a massive engineering challenge. Behind every single keypress, millions of documents are scanned, gigabytes of data are traversed, and all of it happens in less than 50 milliseconds.

Over the past decade, I’ve worked as part of Algolia’s engine team—responsible for maintaining and evolving the algorithms that make this speed possible. In this post, I’ll take you behind the scenes of our journey: how we went from the early days of blazing-fast text search to the cutting-edge world of AI search, all while keeping speed a first-class citizen.

Last month, I gave a presentation on this topic, which you can watch below, or keep reading to learn more. 

Building for speed from day one

When we set out to build Algolia, we could have used off-the-shelf solutions. It would’ve been simpler, faster to ship—but not faster to search. To truly control performance, we had to write our own engine, our own algorithms, and our own data structures.

Today, that means around 200,000 lines of modern C++ power every search, every facet, every vector. That’s a lot of code to maintain, but it’s also what allows us to fine-tune every byte and every clock cycle to hit our performance targets.

Speed isn’t just about raw computation. It’s about how you organize and access data. And for that, we rely heavily on a few fundamental structures that make search fast.

The data structures behind instant search

At the core of our engine are data structures optimized for one thing: finding what you need as quickly as possible. Three tree-based structures form the foundation of fast lookup.

Tries: Building words letter by letter

A trie is a tree where each node represents a letter. Instead of storing entire words, we build them letter by letter. For example, if you store “cat” and “cab,” the “c” and “a” nodes are shared. Only the last letters differ.

This structure lets us instantly check if a word exists and enables features like prefix search and typo tolerance. It’s the reason we can handle queries like “televi…” before the user finishes typing “television.”

Ternary search trees: Compact and efficient

We also use ternary search trees, which store letters in a way that saves space—especially for rare words. Each node can have up to three children: left (for smaller letters), middle (for the next letter in a word), and right (for larger letters).

This allows us to pack more data into memory without sacrificing lookup speed.

Radix trees: Compressing for space and speed


Finally, radix trees take things a step further by compressing paths. Instead of having one node per letter, a node can represent multiple letters at once. That’s ideal when words share long prefixes or when you’re dealing with lots of unique terms.

At Algolia, we don’t pick one—we use all of them.

The hybrid radix tree: Our secret ingredient

We developed what we call a hybrid radix tree, combining the strengths of tries, ternary search trees, and radix trees. Our engine dynamically chooses the best structure for each branch based on how many words it needs to store and how similar they are.

This adaptability gives us the best of all worlds: minimal space usage, optimal search depth, and lightning-fast lookups.

These hybrid trees are everywhere in our stack—used not only for full-text search, but also for facets, synonyms, rules, and even vector search.

Inverted lists: Finding the right records

Once we know how to find words quickly, the next challenge is connecting those words to the records they appear in. That’s where inverted lists come in.

An inverted list maps each word to the list of documents containing it. So when you search for “television,” the engine jumps straight to the list of all matching records. These lists are sorted according to the ranking formula you’ve configured, which makes combining and comparing them fast and efficient.

When a query has multiple terms—say “Philips television 32”—our engine fetches each inverted list in parallel and intersects them to find the records that contain all three terms. Millions of records are scanned and ranked in real time, yet results still appear almost instantly.

Extending speed to AI search

The same data structures that made Algolia famous for text search now power our AI search capabilities.

In vector search, we turn text into a list of floating-point numbers—vectors—that capture the meaning or “concepts” behind the text. These vectors are then indexed just like words in our hybrid trees and inverted lists.

This is the key insight: AI vectors are just another form of data that can be indexed, sorted, and retrieved efficiently. By applying our proven speed-first architecture, we bring the same sub-50-millisecond experience to semantic and AI-driven search.

In fact, we go a step further and compress vectors into binary hashes which retain the meaning but make results much, much faster. We also combine vector results with keyword results in a solution called NeuralSearch. As users type a query, we can return results that match intent much faster than a standard vector search engine.

What we learned

Over the years, we’ve learned that speed doesn’t come from magic—it comes from meticulous engineering. From designing efficient data structures to writing our own algorithms, every decision we make is guided by one principle: speed enables everything else.

Sorting data efficiently is the foundation not only of fast search, but also of personalization, reranking, and now, AI search. The same building blocks that powered search-as-you-type continue to power our most advanced capabilities today.

Even as AI changes what search can do, our commitment to speed remains the same. It’s not just a performance metric—it’s part of who we are.

More reading

If you’d like to dive deeper into Algolia’s engine internals, I highly recommend the blog by our founder, Julien Lemoine, titled Inside the Algolia Engine. It’s full of fascinating insights into the algorithms that still power our products today.
