In the world of AI search and discovery, events are the fuel that powers accuracy and optimization. AI models heavily rely on vast amounts of high-quality event data to learn, make accurate predictions, and drive meaningful improvements.
This article explains how the machine learning models behind Algolia NeuralSearch use events to optimize search results.
If you’re new to search, it’s worth pausing for a moment to learn how search works. Every query is processed in three steps: query understanding, retrieval, and ranking.
Historically, keyword search engines used term frequency to determine relevance and ranking for a given query. Newer machine learning models move beyond keyword matching to query understanding. In AI search, each term is converted into a mathematical representation called a vector embedding, and queries are vectorized the same way. The machine learning models can then mathematically compare a search query with a search record to understand its meaning.
Vector search uses these vector embeddings to find related objects with similar characteristics, relying on machine learning models that detect semantic relationships between objects in an index. Picture a simplified 3D vector space where similar objects cluster close together; real-world vectors can have hundreds of dimensions.
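To make the comparison step concrete, here's a minimal sketch. This is an illustration, not Algolia's implementation; the toy 3D vectors and the cosine_similarity function stand in for real embedding models with hundreds of dimensions.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Score how closely two embeddings point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy 3D embeddings; real-world embeddings have hundreds of dimensions.
query_vec = [0.9, 0.1, 0.3]  # hypothetical vector for the query "blue jacket"
records = {
    "Royal blue windbreaker jacket": [0.8, 0.2, 0.25],
    "Memory foam queen mattress": [0.1, 0.9, 0.6],
}

# Retrieve records ranked by semantic closeness to the query.
for name, vec in sorted(records.items(),
                        key=lambda kv: cosine_similarity(query_vec, kv[1]),
                        reverse=True):
    print(f"{name}: {cosine_similarity(query_vec, vec):.3f}")
```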
AI search algorithms can understand the searcher’s intent, but ordering results from most to least relevant is harder. For example, if someone searches your online clothing site for a “blue top,” an AI search engine will understand that “top” is a synonym for “shirt” or “sweater,” but how it ranks the results matters just as much: your visitors don’t want to comb through pages of content to find what they’re looking for. Events help improve that relevance.
Events can be used to determine which fields best represent the meaning of a record (and index), and with what weighting. When I say “fields,” I’m referring to the fields of a record in an index, such as in the example below. Each field can be assigned a weight that boosts or buries a result for any given search query. Technically, we calculate the relationship between the query and the events (as signals) to establish how significant each field is in determining the outcome; that is, which fields should be considered to optimize for the outcome the event represents (for example, a conversion).
| Field | Value |
| --- | --- |
| name | Polyester windbreaker jacket |
| description | Made of 100% taffeta polyester; body lining is 60% cotton/40% polyester jersey; Sleeve lining made of 100% polyester taffeta Rib-knit sleeve cuffs and hem made of 97% polyester/3% spandex. Detachable hood and inner locker loop Full zip closure Slant welt pockets Imported |
| color | BLUE |
| auxdescription | ROYAL BLUE |
| categoryPath1 | Uniform Shop > Unisex |
| categoryPath2 | Jackets & Coats > School Uniforms Jackets |
| categoryPath3 | Jackets |
This process trains an ‘expression’ of fields and associated weightings, which is then used to ‘vectorize’ each record. The expression must be provided for the engine to perform the vectorization process.
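As a rough sketch of what that looks like, an expression can be treated as a map from field names to weights, and a record’s vector as a weighted blend of its per-field embeddings. The embed_text function below is a hypothetical stand-in for a real embedding model, and the blending logic is a simplification of the actual vectorization pipeline.

```python
import hashlib

def embed_text(text: str, dims: int = 8) -> list[float]:
    """Hypothetical stand-in for a real embedding model: deterministic toy vectors."""
    digest = hashlib.sha256(text.encode("utf-8")).digest()
    return [b / 255.0 for b in digest[:dims]]

def vectorize_record(record: dict, expression: dict[str, float]) -> list[float]:
    """Blend per-field embeddings into one record vector, weighted by the expression."""
    combined = [0.0] * 8
    for field, weight in expression.items():
        value = record.get(field)
        if not value:
            continue  # fields missing from the record contribute nothing
        vec = embed_text(str(value))
        combined = [c + weight * v for c, v in zip(combined, vec)]
    return combined

# The conversion-trained expression discussed below, applied to the jacket record above.
expression = {
    "name": 0.51401407, "categoryPath3": 0.4297026, "categoryPath2": 0.3915629,
    "categoryPath1": 0.33121085, "color": 0.20838235, "auxdescription": 0.17819962,
}
record = {"name": "Polyester windbreaker jacket", "color": "BLUE",
          "auxdescription": "ROYAL BLUE", "categoryPath3": "Jackets"}
print(vectorize_record(record, expression))
```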
Couldn’t you just write an expression like this by hand? Technically, yes. An expression is simply a list of fields (from the record) and associated weightings (numerical values between 0.0 and 1.0). However, determining which fields to use and with what weighting is extremely difficult for a person. Achieving a near-optimal expression is practically impossible, and even generating an expression that yields reasonable results poses many challenges.
Consider the following real example: an expression trained on conversion events, for the record excerpt shown above.
name:0.51401407, categoryPath3:0.4297026, categoryPath2:0.3915629, categoryPath1:0.33121085, color:0.20838235, auxdescription:0.17819962
The selected fields appear reasonable enough, as does the ordering of the weightings. However, note that the description field is not included in the expression, even though, to a person, it might intuitively best represent the meaning of the record. Also bear in mind that, compared with many customers’ data, this is a better-structured record.
Now consider a second real example: an expression trained on click events, using a record excerpt from another customer’s index with (typically) messier data.
categoryLevel3Name:0.22846536, variantTopStyle:0.22119533, categoryLevel4Name:0.21810511, tagName:0.2164325, variantFirmness:0.21558715, tagKeyWords:0.19900157, allTagNames:0.19536306, h1:0.17995015, saleKeyWords:0.17452367
| Field | Value |
| --- | --- |
| title | MEMORY FOAM Queen Double King Single Mattress Bed |
| description | Sleep deeply with the All-New Memory Foam Sleep Mattress made from all natural fibers. With new and improved 7-zone back-healthy pocket-spring system combined with 7 different support levels, it will relieve stern and long-lasting back pain. |
| allTagNames | Furniture Mattresses Single Mattress Double Mattress King Single Mattress All Natural Mattress Top Selling Mattresses King Size Mattress Queen Mattress |
| categoryLevel1Name | Furniture |
| categoryLevel2Name | Mattresses |
| categoryLevel3Name | Bedroom |
| categoryLevel4Name | Couples Mattress |
| saleKeyWords | Natural Sleep king mattress king size mattress queen mattress queen size mattress double mattress single mattress king single mattress |
| h1 | Single Mattresses |
| tagKeyWords | mattress Mattress matress Matress bedroom mattress matresses Matresses single mattress Mattress Natural mattress Organic mattress Mattress matress single bed |
| tagName | Organic Mattress |
| variantFirmness | Soft |
Again, in this example, description is not used, but neither is title. tagKeyWords and saleKeyWords include many repeated words, and tagName and h1 carry similar information. The inclusion of variantFirmness as a relatively important field may also come as a surprise.
These two examples illustrate the difficulties of training an optimal expression by hand. With events, we can remove this complexity and automatically determine which fields should be considered when training the expression, and with what weighting.
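To give a flavor of how event signals can drive this, here is a deliberately simplified toy heuristic; it is not NeuralSearch’s actual training algorithm. The idea: credit a field whenever its text overlaps with a query that led to a conversion, then normalize the credits into weights between 0.0 and 1.0.

```python
from collections import Counter

def train_expression(events: list[dict], records: dict[str, dict]) -> dict[str, float]:
    """Toy heuristic: credit a field each time its text overlaps a converting query,
    then normalize the credits into weights between 0.0 and 1.0."""
    credit: Counter = Counter()
    for event in events:
        if event["eventType"] != "conversion":
            continue
        query_terms = set(event["query"].lower().split())
        record = records[event["objectID"]]
        for field, value in record.items():
            field_terms = set(str(value).lower().split())
            credit[field] += len(query_terms & field_terms)
    top = credit.most_common()
    if not top:
        return {}
    max_credit = top[0][1]
    return {field: round(count / max_credit, 2) for field, count in top if count > 0}

# Hypothetical event stream and a matching record excerpt.
events = [
    {"eventType": "conversion", "query": "blue jacket", "objectID": "sku-1"},
    {"eventType": "conversion", "query": "school uniform jacket", "objectID": "sku-1"},
]
records = {"sku-1": {"name": "Polyester windbreaker jacket",
                     "color": "BLUE",
                     "categoryPath2": "Jackets & Coats"}}
print(train_expression(events, records))  # e.g. {'name': 1.0, 'color': 0.5}
```

The real models learn from far richer signals (clicks, conversions, positions, query context) and regularize against noise, but the shape of the problem is the same: events tell the system which fields actually predict good outcomes.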
One question we get is why we need machine learning to determine the importance of each field. After all, couldn’t a person just evaluate each field and decide which ones are most important?
We learned the answer first hand when building NeuralSearch. Initially, neural expressions were hand-crafted by our team, drawing on years of experience with customer datasets and search configurations. Even in those highly capable hands, the resulting expressions differed significantly from those trained on events.
Consider the two customer examples from above:
| Trained by Events | Human Expert |
| --- | --- |
| name:0.51401407, categoryPath3:0.4297026, categoryPath2:0.3915629, categoryPath1:0.33121085, color:0.20838235, auxdescription:0.17819962 | name:1.0, categoryPath3:0.6, categoryPath2:0.6, categoryPath1:0.4 |
Most of the selected fields were appropriately identified, and in the same weighted order; however, the relative weightings differ. The nDCG@10 (normalized discounted cumulative gain at rank 10, a standard measure of relevance for a query/results pair) for the expression trained on events was measured at ~0.6; for the expression configured by the human expert, it was ~0.4. That is a substantial difference in search performance, attributable solely to the expression.
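For reference, nDCG@10 is computed from judged relevance grades for the top-ranked results of a query. A minimal implementation of the standard formula looks like this (the relevance grades in the example are made up):

```python
import math

def dcg_at_k(relevances: list[float], k: int = 10) -> float:
    """Discounted cumulative gain: each result's relevance is discounted by position."""
    return sum(rel / math.log2(pos + 2) for pos, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances: list[float], k: int = 10) -> float:
    """Normalize DCG by the best achievable ordering, yielding a score in [0, 1]."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Judged relevance grades for one query's top results (3 = perfect, 0 = irrelevant).
print(ndcg_at_k([3, 2, 3, 0, 1, 2, 0, 0, 1, 0]))
```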
| Trained by Events | Human Expert |
| --- | --- |
| categoryLevel3Name:0.22846536, variantTopStyle:0.22119533, categoryLevel4Name:0.21810511, tagName:0.2164325, variantFirmness:0.21558715, tagKeyWords:0.19900157, allTagNames:0.19536306, h1:0.17995015, saleKeyWords:0.17452367 | dealTitle:1.0, categoryLevel4Name:0.4, allTagNames:0.4, categoryLevel3Name:0.3, dealDescription:0.3, categoryLevel2Name:0.2, categoryLevel1Name:0.1 |
The differences here are even more significant: most of the fields selected by the human expert do not appear in the event-trained expression at all, and the weighting scales are not close.
Additionally, NeuralSearch improves continuously, and field weights are adjusted automatically over time. Search trends keep changing, new long-tail queries appear, and products and pages are added to or removed from your index, all of which necessitates automatic updating behind the scenes.
Current Algolia customers who already have events connected can transition to NeuralSearch seamlessly, provided they have collected sufficient data to provide feedback to the machine learning algorithms. New customers will need to set up events and generate enough data to determine the best field weights and overcome the cold-start problem.
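Setting up events amounts to sending small JSON payloads when users click or convert. The sketch below posts a conversion event to Algolia’s Insights events endpoint using Python’s requests library; the credentials, index name, and identifiers are placeholders, and in practice you’d typically use one of Algolia’s Insights client libraries and confirm the current event schema in the documentation.

```python
import requests

APP_ID = "YOUR_APP_ID"       # placeholder credentials
API_KEY = "YOUR_API_KEY"

# A conversion event tied back to the search that produced it via queryID.
event = {
    "eventType": "conversion",
    "eventName": "Product Purchased",
    "index": "products",
    "userToken": "anonymous-42",
    "objectIDs": ["sku-1"],
    "queryID": "query-id-from-search-response",
}

response = requests.post(
    "https://insights.algolia.io/1/events",
    headers={
        "X-Algolia-Application-Id": APP_ID,
        "X-Algolia-API-Key": API_KEY,
    },
    json={"events": [event]},
)
response.raise_for_status()
```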
Sign up today to join the waitlist for the self-service edition of Algolia NeuralSearch. By starting today, you can configure events and be ready to jump in with AI-powered search when it’s available!
Emma Wilson
Director of Product Management