In this article, I’ll dive into a technical showcase from the Algolia DevBit on June 25th, 2025. I set out to demonstrate how the Algolia MCP server, combined with a workflow automation tool like n8n, can accelerate the development of practical applications. This led me to build a LinkedIn bookmark manager, a project that serves as a proof of concept of how anyone can leverage AI and our new MCP server to transform a simple idea into a fully functional tool with unprecedented speed.
In professional circles, a lot of the best information and articles are shared on LinkedIn.
If you’re like me – too busy to read everything and need to save articles to read and share later – then this big list of “Saved Posts” probably looks familiar.
On LinkedIn, it’s easy to save. But there’s no way to search, filter, or sort your saved content.
After 11+ years at Algolia helping customers build fast, delightful Search and Discovery experiences, I’ve developed a deep sensitivity to friction in content navigation. So when I land on a website that doesn’t let me filter or search with facets, keyword suggestions, or instant results, I can’t help but feel frustrated. As a software engineer at heart and a product person obsessed with well-crafted UX, I expect better. Search should feel effortless and responsive, not like a scavenger hunt.
I realized that with a bit of code and the help of an LLM (e.g. Claude 4.0), I could fix this issue by making my saved LinkedIn posts searchable with Algolia. How? By creating a small app that:

- Adds a “Save” button to LinkedIn via a Chrome extension
- Sends each saved post to an n8n workflow that enriches it with AI-generated metadata
- Indexes the enriched post in Algolia through the MCP server
- Provides a front end for searching and filtering the saved posts
Algolia’s new MCP server was my inspiration for this application. Like a personal assistant for all things Algolia, the MCP server makes it easy to build indexes in Algolia using natural language and AI chat prompts. The LLM takes on this responsibility, leveraging the Algolia MCP server and making the integration with Algolia easier than ever. Although the Algolia docs are easy to navigate, with this method you don’t even need to understand how to interact with the Algolia API.
To make my prototype work with LinkedIn, I built a Chrome extension I call “LinkedIn Post Saver.”
For the back end, I needed something to push my saved content to Algolia to leverage its search capabilities. For this, I used an AI-powered workflow on the n8n platform, a popular low-code/no-code workflow automation tool that lets you orchestrate robust backend logic with minimal effort: you can write code, or you can just drag and drop blocks.
The AI Agent node on n8n lets you plug in an AI model, like OpenAI’s GPT models or Anthropic’s Claude, and interact with it directly inside your workflow. Instead of coding everything yourself, you can instruct the AI Agent to generate code, then review and iterate on it, saving time and effort.
With your n8n workflow in listening mode, any time you “Save” a LinkedIn post, a request payload containing the post’s content is sent to your n8n workflow and travels across multiple nodes, including nodes that use AI to enrich the LinkedIn post with metadata that enhances the search experience.
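The exact payload depends on what the extension scrapes; here’s a hypothetical example of the request body. The `action`/`payload` structure matches what the workflow reads later (`$json.body.action` and `$json.body.payload`), but the fields inside `payload` are illustrative:

```javascript
// Hypothetical request body the extension POSTs to the n8n webhook.
// The fields inside "payload" are illustrative, not the project's exact schema.
const requestBody = {
  action: "save", // or "remove"
  payload: {
    objectID: "urn:li:activity:1234567890", // must be a string for Algolia
    author: "Jane Doe",
    postedAt: "2025-06-25T10:00:00Z",
    url: "https://www.linkedin.com/feed/update/urn:li:activity:1234567890/",
    text: "Full text of the saved post...",
  },
};
```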
When the n8n workflow runs to completion, the post content passes through the LLM-powered agent to the Algolia MCP server, which indexes it in Algolia. Every time a new post is sent through the workflow, it’s added to the index.
For my app, I went a few steps further and built a nice front end for the search experience. With my LinkedIn post content indexed and running on Algolia, I can search my saved LinkedIn content by date, author, keywords, and other filters.
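As a sketch of what such a front end could look like with InstantSearch.js (the app ID, search-only API key, container IDs, and attribute names are placeholders; only the index name `bookmark_manager` comes from the workflow):

```javascript
// Minimal InstantSearch.js configuration sketch; credentials, containers,
// and attribute names are placeholders, not the project's actual front end.
import algoliasearch from "algoliasearch/lite";
import instantsearch from "instantsearch.js";
import { searchBox, hits, refinementList } from "instantsearch.js/es/widgets";

const search = instantsearch({
  indexName: "bookmark_manager",
  searchClient: algoliasearch("YOUR_APP_ID", "YOUR_SEARCH_ONLY_API_KEY"),
});

search.addWidgets([
  searchBox({ container: "#searchbox" }),
  // Facet on an attribute the enrichment step produced, e.g. the author
  refinementList({ container: "#author-filter", attribute: "author" }),
  hits({
    container: "#hits",
    templates: {
      item: (hit) => `<a href="${hit.url}">${hit.text}</a>`,
    },
  }),
]);

search.start();
```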
The first step is to add a “Save” button to LinkedIn with a Chrome extension. I built the extension using Claude Desktop with the following two prompts. (Try it yourself or get the Chrome extension code on GitHub).
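A minimal sketch of what such a content script might look like (the webhook URL, CSS selectors, and payload fields below are placeholders; LinkedIn’s markup changes often, so see the GitHub repo for the actual generated code):

```javascript
// content-script.js — hypothetical sketch of the "LinkedIn Post Saver" logic.
// The webhook URL, selectors, and payload fields are placeholders.
const WEBHOOK_URL = "https://your-n8n-host/webhook/linkedin-save";

function injectSaveButtons() {
  document.querySelectorAll("div.feed-shared-update-v2").forEach((post) => {
    if (post.querySelector(".lps-save-btn")) return; // already injected
    const btn = document.createElement("button");
    btn.textContent = "Save";
    btn.className = "lps-save-btn";
    btn.addEventListener("click", () => {
      // Send the post content to the n8n webhook as { action, payload }
      fetch(WEBHOOK_URL, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({
          action: "save",
          payload: {
            objectID: post.dataset.urn ?? String(Date.now()),
            text: post.innerText,
            url: location.href,
          },
        }),
      });
    });
    post.appendChild(btn);
  });
}

// Re-scan as LinkedIn lazily renders the feed (browser only).
if (typeof window !== "undefined") {
  new MutationObserver(injectSaveButtons).observe(document.body, {
    childList: true,
    subtree: true,
  });
}
```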
The n8n workflow is composed of three functional parts:
In the first block, the Webhook node receives the HTTP request with the body of the saved LinkedIn post, and the Switch node routes the workflow to the next functional area.
This block contains the AI Agent node for AI data enrichment. This is where you plug in Claude 4 Sonnet (or another AI model). Use prompts to instruct the AI to return structured metadata in the format you want.
Here’s the prompt I used: “You are an expert content classifier helping enrich bookmarks from LinkedIn. You should analyze post content and return structured metadata in a valid JSON format.”
Specify which fields you want extracted and include them in your prompt.
At this stage, the input is the payload: the LinkedIn post. The output is the structured data according to the specified fields. Two final nodes in this block, Merge and Reshape JSON, merge the new metadata with the original content and use JavaScript (generated with Claude’s help) to reshape the JSON.
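The reshape step can be sketched as a small JavaScript helper. The field names and the n8n Code-node wiring shown in the comment are illustrative, not the project’s exact code:

```javascript
// Hypothetical sketch of the "Reshape JSON" step: merge the AI enrichment
// output back into the original post before indexing.
function mergeEnrichment(post, enrichment) {
  return {
    ...post,
    ...enrichment,    // e.g. topics, category, summary returned by the AI Agent
    isDeleted: false, // default flag, later flipped by partialUpdateObject on remove
  };
}

// In an n8n Code node, this would look roughly like:
// return [{ json: mergeEnrichment($json.payload, $json.enrichment) }];
```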
The workflow now advances to the final block, the indexing stage. Here I used an AI Agent again, but this time, it’s linked to the Algolia MCP server. This new protocol pioneered by Anthropic standardizes the way LLMs talk to APIs. It’s like a USB for APIs, letting LLMs talk to different APIs without needing custom integrations.
The MCP server aligns perfectly with Algolia’s developer-first focus and API-first design. We’ve released two Algolia MCP servers on GitHub, one in JavaScript and one in Go.
With the MCP server, I don’t need to know anything about Algolia’s API endpoints, how to pass credentials, or any other parameters. It all happens with prompts. When I type, “You have access to the saveObject and partialUpdateObject methods,” the AI Agent knows how to call them for me.
Here’s a sample prompt I used that includes workflow steps for the AI Agent to execute:
```
You are an AI agent inside an n8n workflow. Your task is to route a request to the correct Algolia MCP tool based on the <action> field. Follow the instructions below precisely.

---

### Workflow Steps

0. Operate on the "bookmark_manager" index
1. Read the <action> value from the input JSON: "{{ $json.body.action }}"
2. Select the appropriate Algolia tool:
   - If the action is "save", use the saveObject tool
   - If the action is "remove", use the partialUpdateObject tool
     - Update the attribute "isDeleted" to true
     - Make sure the objectID value is a string
3. Call the selected tool with the following parameters:
   - Use the full "payload" JSON as the requestBody
4. Handle unsupported or missing actions:
   - If the action is not "save" or "remove", return the following message:
     "Unsupported or missing action: '<action>'"

<input JSON payload>
{{ JSON.stringify($json.body.payload) }}
</input JSON payload>

Note:
- On error, return success: false, with the error message in the error output attribute
```
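For clarity, here’s the routing logic this prompt describes, expressed as plain JavaScript. This is a sketch of the decision the agent makes, not code that runs inside the workflow:

```javascript
// Sketch of the routing the prompt asks the AI Agent to perform:
// "save" → saveObject, "remove" → partialUpdateObject with isDeleted: true.
function routeAction(body) {
  const { action, payload } = body;
  if (action === "save") {
    return {
      tool: "saveObject",
      requestBody: { ...payload, objectID: String(payload.objectID) },
    };
  }
  if (action === "remove") {
    return {
      tool: "partialUpdateObject",
      requestBody: { objectID: String(payload.objectID), isDeleted: true },
    };
  }
  // Unsupported or missing action
  return { success: false, error: `Unsupported or missing action: '${action}'` };
}
```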
This method works with fewer instructions, too. However, the more precise your prompt is, the fewer errors you’ll encounter.
Once the AI Agent fulfills this workflow, you can see the results: from the input, it forms a valid Algolia saveObject request, then passes it to Algolia for indexing.
Both the n8n platform and the Algolia MCP server make it simple to build and test prototypes quickly and get apps up and running fast. With an easy way to connect LLMs to your API workflows, there’s no reason not to get experimental, creative, and innovative. You can use this workflow to leverage Algolia’s powerful search capabilities and unlock intelligent, real-time user experiences for all kinds of use cases.
If you would like to find out more about my LinkedIn bookmark manager, watch my full DevBit presentation, Prototype an AI-powered bookmark manager with n8n and Algolia MCP server. Or take a look at the project on GitHub.
To continue the conversation, you can find me on Discord.
To learn more about Algolia’s MCP server and the next generation of AI agents, visit https://www.algolia.com/developers/lp-mcp.
Alexandre Collins
Business Strategy and Optimization @ Algolia