πŸš€ MCP Guide

Introduction

The Model Context Protocol (MCP) πŸ”Œ is an open protocol for connecting LLM applications to external tools and data sources, primarily used for exchanging information (context) or performing actions.

πŸš€ enVector supports MCP with various AI applications, so you can interact with enVector through natural-language inputs to LLMs such as Claude.

This guide explains in detail how to get started with the enVector MCP server.

Prerequisites

Note: The base MCP library requires Python 3.10+, while pyenvector supports Python 3.9+; you therefore need Python 3.10 or later to run the MCP server.

Supported tools

  1. get_index_list : Get the list of indexes in enVector.

  2. get_index_info : Get information about a specific index in enVector.

  3. create_index : Create an index in enVector.

  4. insert : Insert vectors and their corresponding metadata into an enVector index. An embedding model can be specified to generate the vectors to insert.

  5. search : Perform a vector search and retrieve metadata from enVector. An embedding model can be specified to embed the query. An example tool call is sketched after this list.
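In practice your AI application issues these calls for you, but under the hood each tool is invoked through the standard MCP tools/call request. A minimal sketch for search (the argument names index_name, query, and top_k are hypothetical; the actual schema is reported by the server's tool listing):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "search",
    "arguments": {
      "index_name": "my_docs",
      "query": "What is enVector?",
      "top_k": 5
    }
  }
}
```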

Getting Started

  1. Run the enVector server via Docker Compose or enVector Cloud.

  2. Clone the envector-mcp-server repository and set up the Python environment.

    See more details in the envector-mcp-server manual.

  3. (Optional) Run the MCP server with the following example command:
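    A minimal sketch, assuming the repository's entry script is named server.py (check the repository for the actual entry point):

    ```bash
    # Start the MCP server over HTTP on the default 127.0.0.1:8000,
    # pointing it at a running enVector instance.
    python server.py --mode http --envector-address {host}:{port}
    ```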

    This step is optional if you run the MCP server via an AI application (e.g. Claude Desktop). See more details in Run MCP Server via AI application.

  4. Connect your AI application (e.g. Claude) to the MCP server by configuring its MCP settings, and chat with enVector πŸš€

    See more details in Connect MCP Server.

Connect MCP Server

Connect to Remote MCP Server

  1. Run the MCP server in your Python environment with the following example command:
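    A minimal sketch, again assuming a server.py entry script; binding to 0.0.0.0 makes the server reachable from remote clients:

    ```bash
    # Expose the MCP server on all interfaces so remote AI applications can connect.
    python server.py --mode http --host 0.0.0.0 --port 8000 --envector-address {host}:{port}
    ```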

    See CLI Options below for the full list of server flags.

  2. Configure your MCP settings (.some-mcp-config.json) in your application, as sketched below.
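A hypothetical sketch of such a configuration file, assuming a client that accepts a url field for remote HTTP MCP servers (the exact field names and endpoint path vary by application; consult your client's documentation):

```json
{
  "mcpServers": {
    "envector": {
      "url": "http://<remote-host>:8000"
    }
  }
}
```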

Run MCP Server via AI application

Some AI applications, such as Claude Desktop, can run the MCP server directly. Configure your MCP settings (.some-mcp-config.json) in your application, as sketched below.
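For Claude Desktop, the server can be launched in stdio mode from claude_desktop_config.json. A sketch, assuming a server.py entry script (adjust the path to your clone):

```json
{
  "mcpServers": {
    "envector": {
      "command": "python",
      "args": ["/path/to/envector-mcp-server/server.py", "--mode", "stdio"]
    }
  }
}
```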

MCP Server Options

CLI Options

Arguments for running the MCP server script:

  • πŸ’» MCP execution

    • --mode: MCP execution mode, supporting http (default) and stdio transports.

    • --host: MCP HTTP bind host. The default is 127.0.0.1.

    • --port: MCP HTTP bind port. The default is 8000.

    • --address: MCP HTTP bind address. Overrides --host and --port if provided.

    • --server-name: MCP server name. The default is envector_mcp_server.

  • πŸ”Œ enVector connection

    • --envector-address: enVector endpoint address ({host}:{port}, or an enVector Cloud endpoint ending with .clusters.envector.io).

    • --envector-cloud-access-token: Access token for enVector Cloud.

  • πŸ”‘ enVector options

    • --envector-key-id: enVector key ID (identifier).

    • --envector-key-path: path to enVector key files.

    • --envector-eval-mode: enVector FHE evaluation mode. The default rmp mode is recommended for more flexible usage.

    • --encrypted-query: whether to encrypt the query vectors. The index is encrypted by default.

    ⚠️ Note: The MCP server holds the key for homomorphic encryption, since the MCP server acts as an enVector client.

  • βš™οΈ Embedding options

    • --embedding-mode: Mode of the embedding model. Supports femb (FastEmbed; the default), hf (Hugging Face), sbert (SBERT; sentence-transformers), and openai (OpenAI API). For openai, the OPENAI_API_KEY environment variable must be set.

    • --embedding-model: Name of the embedding model to use. The default is sentence-transformers/all-MiniLM-L6-v2, whose embedding dimension is 384.
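A sketch combining several of the options above, assuming a server.py entry script and a hypothetical ENVECTOR_TOKEN shell variable holding your Cloud access token:

```bash
# Connect to an enVector Cloud cluster with SBERT embeddings.
python server.py \
  --mode http --host 127.0.0.1 --port 8000 \
  --server-name envector_mcp_server \
  --envector-address <cluster>.clusters.envector.io \
  --envector-cloud-access-token "$ENVECTOR_TOKEN" \
  --envector-key-id my-key --envector-key-path ./keys \
  --envector-eval-mode rmp \
  --embedding-mode sbert --embedding-model sentence-transformers/all-MiniLM-L6-v2
```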

Use environment variables

Copy .env.example to .env and configure it as needed.
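A hypothetical .env sketch, assuming the variable names mirror the CLI flags; check .env.example for the actual names:

```bash
# Hypothetical variable names; see .env.example for the real ones.
ENVECTOR_ADDRESS={host}:{port}
ENVECTOR_KEY_ID=my-key
ENVECTOR_KEY_PATH=./keys
# Required only when the embedding mode is openai.
OPENAI_API_KEY=sk-...
```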

A detailed description of each option can be found in CLI Options.

Next Steps
