1. Set up Click and Conversion Analytics. (This step is performed by the engineering team)
2. Set up index or query for A/B testing. There are two ways of creating A/B tests:
The first is to show results from the same index with different parameters added. This lets you test settings such as personalization, re-ranking, Rules, facet re-ordering, and many others.
The second is to show results from a different index. If you want to test changes to the base relevance, such as which attributes are searchable or which custom ranking factors to use, you will need a separate index, usually a replica.
3. Run the A/B test:
Use the A/B test tab of the Dashboard to create your test.
Start the A/B test.
4. Access A/B testing analytics on the Dashboard to view detailed analytics for each variant independently.
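The two setup styles described above can be sketched as request payloads for Algolia's A/B Testing API. This is a minimal sketch, not a definitive implementation: the index names and dates are invented, and field names such as "variants", "trafficPercentage", and "customSearchParameters" should be verified against the current API reference.

```python
# Way 1: same index in both variants; variant B adds a query parameter.
same_index_test = {
    "name": "Test personalization on main index",
    "variants": [
        {"index": "products", "trafficPercentage": 50},
        {"index": "products", "trafficPercentage": 50,
         "customSearchParameters": {"enablePersonalization": True}},
    ],
    "endAt": "2024-07-01T00:00:00Z",  # hypothetical end date
}

# Way 2: different index for variant B, usually a replica of variant A.
replica_test = {
    "name": "Test salesRank-sorted replica",
    "variants": [
        {"index": "products", "trafficPercentage": 50},
        {"index": "products_salesRank_desc", "trafficPercentage": 50},
    ],
    "endAt": "2024-07-01T00:00:00Z",  # hypothetical end date
}

# In both styles, the traffic percentages must add up to 100.
for test in (same_index_test, replica_test):
    assert sum(v["trafficPercentage"] for v in test["variants"]) == 100
```

Either payload would be sent when creating the test; the rest of this guide walks through the same choices in the Dashboard.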
Step 1: Add a selected custom attribute (for example, a salesRank attribute) to your main catalog index (this is index A in your A/B test).
For example, a numberOfLikes attribute allows sorting the search results based on the “likes” data gathered from site users, while a salesRank attribute allows sorting search results based on internally gathered sales data.
For step-by-step instructions, refer to the guide “Configuring Custom Ranking for Business Relevance”.
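As a rough illustration (the record fields and values here are invented examples, not your actual catalog data), the custom attribute lives on each record and is then referenced from the index's customRanking setting:

```python
# Example record carrying the new custom attribute (all values invented).
record = {
    "objectID": "sku-123",
    "name": "Wireless Mouse",
    "salesRank": 87,  # internally gathered sales data
}

# Index settings that use the attribute as a custom ranking criterion.
# Custom ranking is applied after Algolia's textual relevance criteria,
# so it acts as a business-relevance tie-breaker.
settings = {"customRanking": ["desc(salesRank)"]}
```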
Step 2: Create index B as a replica of A.
1. Load the dashboard section “Index”
2. Select the correct index
3. In the top left section click on the “New” button
4. In the drop down menu select “Replica”
5. In the pop-up window “Create a new replica”, type in the replica name (index B)
6. Select the type: “standard” or “virtual”
7. Review before saving the replica index
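The same replica creation can be expressed as a settings change on the primary index. A sketch, assuming the standard "replicas" setting and the virtual(...) modifier that Algolia uses to declare virtual replicas (index names are invented examples):

```python
# Declaring a standard replica on the primary index's settings.
# A standard replica is a full copy of the data with its own settings.
primary_settings = {
    "replicas": ["products_salesRank_desc"],
}

# A virtual replica would instead be declared with the virtual() modifier:
virtual_primary_settings = {
    "replicas": ["virtual(products_salesRank_desc)"],
}
```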
Step 3: Adjust index B’s settings by sorting its records based on the selected custom attribute (for example, salesRank)
Load the dashboard section “Index”
Verify the correct replica index is selected
In the top navigation menu select “Configuration” tab
Click on “Ranking and Sorting”
Click on the button “Add sort-by attribute” and select the relevant attribute. (For example: salesRank)
Note: the ranking attributes need to be pre-configured by the data engineering team.
Once you add a sort-by attribute, it appears at the top of the “Ranking and sorting” list. Select the desired order for this attribute, ascending or descending, and save your changes.
Note: While each ranking and sorting attribute can be repositioned by dragging it higher or lower on the list, doing so is not recommended.
Review the changes before saving
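On a standard replica, the dashboard steps above amount to placing the sort-by attribute at the top of the replica's ranking formula, ahead of Algolia's default relevance criteria. A sketch using the salesRank example from this guide:

```python
# Ranking formula for the sorted replica: the sort-by attribute comes
# first, followed by Algolia's default ranking criteria in their
# documented default order.
replica_settings = {
    "ranking": [
        "desc(salesRank)",   # the sort-by attribute, descending
        "typo", "geo", "words", "filters",
        "proximity", "attribute", "exact", "custom",
    ],
}
```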
Step 4: Create an A/B test
Tips for creating an A/B test:
Load the dashboard section “A/B testing”
Verify the correct application is selected
In the top right section click on the “New test” button
Select an informative, self-explanatory name for the test
Select “Variant A”: the index currently used for the live search
Select “Variant B”: a replica of the index chosen as Variant A, with the configuration changes you would like to test applied to it
Define the percentage of traffic that will be split and directed to each variant
Set a test duration. The recommended minimum duration is 30 days.
Review the changes before saving
The new A/B test will appear in the A/B testing overview section of the dashboard, where you can view and analyse it once user data starts aggregating
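Put together, the test configured in Step 4 can be sketched as a single payload. The name, index names, split, and date are illustrative; verify the field names against the A/B Testing API reference:

```python
# The A/B test from Step 4: live index (A) vs. its sorted replica (B).
ab_test = {
    "name": "salesRank sort vs. default relevance",  # informative name
    "variants": [
        {"index": "products", "trafficPercentage": 50,
         "description": "current live search"},
        {"index": "products_salesRank_desc", "trafficPercentage": 50,
         "description": "replica sorted by salesRank"},
    ],
    # Recommended minimum duration is 30 days; endAt is an invented date.
    "endAt": "2024-07-31T00:00:00Z",
}
```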
Step 1: Add a new custom attribute (for example, a short_description attribute) to your main catalog index (this is index A in your A/B test). You want to test whether setting a new attribute (in this example, a new short description for each of your records) as a searchable attribute improves your relevance.
Tip: Because you are testing a search-time setting for your index, you can use index A as both variants in your test.
Step 2: Create and run an A/B test
Load the dashboard section “A/B testing”
Verify the correct application is selected
In the top right section click on the “New test” button
Select an informative, self-explanatory name for the test. For example, “Testing the new short description”
Create your A/B test with index A as both variants
Set the selected searchable attribute (in this example, short_description) as the varying searchable attribute.
Define the percentage of traffic that will be split and directed to each variant. For example, give variant B only 30% of the traffic (a 70/30 split) because of the uncertainty: you’d rather not risk degrading an already good search with an untested attribute.
Set a test duration. The recommended minimum duration is 30 days.
Review the changes before saving
The new A/B test will appear in the A/B testing overview section of the dashboard, where you can view and analyse it once user data starts aggregating
Once you reach 95% confidence, you can weigh the improvement against the cost of implementation to decide whether this change is beneficial.
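The 95% confidence figure corresponds to standard significance testing on conversion rates between the two variants. As a self-contained illustration of the idea (a textbook two-proportion z-test, not necessarily Algolia's exact methodology, and with made-up numbers):

```python
import math

def conversion_significance(conv_a, users_a, conv_b, users_b):
    """Two-sided confidence that variant B's conversion rate genuinely
    differs from variant A's, via a two-proportion z-test."""
    p_a, p_b = conv_a / users_a, conv_b / users_b
    pooled = (conv_a + conv_b) / (users_a + users_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / users_a + 1 / users_b))
    z = (p_b - p_a) / se
    # Two-sided confidence derived from the normal CDF.
    return math.erf(abs(z) / math.sqrt(2))

# Made-up example: A converts 500 of 10,000 users, B converts 600 of 10,000.
confidence = conversion_significance(500, 10000, 600, 10000)
significant = confidence > 0.95
```

With these invented numbers the difference clears the 95% threshold; with a smaller gap or fewer users it would not, which is why the 30-day minimum duration matters.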
Step 1: Create Rules for your main catalog index (this is A in the test).
For step-by-step instructions, please refer to the “Category merchandising” and “Search merchandising” sections of this guide.
For example: promote your new Kindle product record using rules
Step 2: Create and run an A/B test
Load the dashboard section “A/B testing”
Verify the correct application is selected
In the top right section click on the “New test” button
Select an informative, self-explanatory name for the test. For example, “Testing the newly-released Kindle merchandising”
Create your A/B test with index A as both variants
As the varying query parameter, add enableRules.
Define the percentage of traffic that will be split and directed to each variant. For example, give variant B only 30% of the traffic (a 70/30 split) because of the uncertainty: you’d rather not risk degrading an already good search with untested rules.
Set a test duration. The recommended minimum duration is 30 days.
Review the changes before saving
The new A/B test will appear in the A/B testing overview section of the dashboard, where you can view and analyse it once user data starts aggregating
Once you reach 95% confidence, you can weigh the improvement against the cost of implementation to decide whether this change is beneficial.
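This same-index rules test can be sketched as a payload where the variants differ only in the enableRules query parameter (enableRules is a real Algolia query parameter; the index name and split are examples, and "customSearchParameters" as the per-variant field should be verified against the A/B Testing API reference):

```python
# Rules test: same index in both variants. Variant A disables the newly
# created Rules at query time to act as the control; variant B keeps
# them enabled and gets the smaller traffic share.
rules_test = {
    "name": "Testing the newly-released Kindle merchandising",
    "variants": [
        {"index": "products", "trafficPercentage": 70,
         "customSearchParameters": {"enableRules": False}},
        {"index": "products", "trafficPercentage": 30,
         "customSearchParameters": {"enableRules": True}},
    ],
}
```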