Secure Context Memory with enVector MCP

This document presents a use case for building a Secure Context Memory for LLMs using enVector MCP.

Context Memory for Personalized AI Assistants

Many users now rely on conversational AI services such as ChatGPT and other chat-based assistants in their daily lives. While these systems are trained on massive datasets and can generate high-quality responses, they share an inherent limitation: they cannot retain information beyond their training data.

To compensate for this limitation, most AI services automatically store user conversation histories and build internal knowledge databases on top of them. By leveraging past conversations, preferences, and contextual signals as memory, these systems can deliver more accurate and personalized responses over time.

However, this approach introduces a fundamental problem.

User conversations often contain personal or confidential data, such as health information, financial records, lifestyle patterns, workplace documents, and sensitive perspectives. Storing such data on servers operated by service providers—such as OpenAI or other AI platforms—creates serious privacy and security risks.

Through numerous real-world security incidents and data breaches, we have learned how dangerous it can be to store personal data on external systems. As a result, there is a growing demand for AI services that deliver personalized experiences while keeping the data secure.

Context Memory with enVector MCP

enVector MCP was designed in response to this challenge.

By using enVector, developers can build personalized AI assistants while ensuring that sensitive information is only ever used in encrypted form.

The core principles of enVector are as follows:

  • Similarity search is performed on encrypted vectors

  • The server performs search operations without knowing the contents of the data

  • The results can be decrypted only by the user; the server sees ciphertext only and has no access to plaintext

In other words, the retrieval pipeline used in traditional LLM context memory systems (including RAG) can be implemented entirely over encrypted data.

enVector MCP acts as the control plane governing this secure retrieval process.
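The core principles above can be sketched with a toy distance-preserving transform: the client keeps a secret rotation matrix as its key, the server only ever sees rotated ("encrypted") vectors, and yet similarity ranking still works on the ciphertext. This is a conceptual illustration only — enVector's actual cryptography is not shown here, and the `random_orthogonal` and `encrypt` helpers are invented for this sketch:

```python
import math
import random

def random_orthogonal(dim, seed):
    """Build a random orthogonal matrix via Gram-Schmidt (the client's secret key)."""
    rng = random.Random(seed)
    basis = []
    while len(basis) < dim:
        v = [rng.gauss(0, 1) for _ in range(dim)]
        for b in basis:                       # remove components along existing rows
            d = sum(x * y for x, y in zip(v, b))
            v = [x - d * y for x, y in zip(v, b)]
        norm = math.sqrt(sum(x * x for x in v))
        if norm > 1e-9:
            basis.append([x / norm for x in v])
    return basis

def encrypt(vec, key):
    """'Encrypt' a vector by rotating it with the secret matrix."""
    return [sum(k * x for k, x in zip(row, vec)) for row in key]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Client side: embed memories, then encrypt them before upload.
key = random_orthogonal(4, seed=42)
memories = {
    "health":  [0.9, 0.1, 0.0, 0.1],   # toy embedding of a health record
    "finance": [0.1, 0.9, 0.2, 0.0],   # toy embedding of a finance note
}
server_store = {name: encrypt(v, key) for name, v in memories.items()}

# Server side: similarity search on ciphertext only. A rotation preserves
# dot products exactly, so the ranking matches the plaintext ranking even
# though the server never sees the original vectors.
query = [0.8, 0.2, 0.1, 0.0]           # e.g. "recommend supplements for me"
enc_query = encrypt(query, key)        # the client encrypts the query too
best = max(server_store, key=lambda n: dot(server_store[n], enc_query))
print(best)  # → health
```

The point of the sketch is the trust boundary, not the cryptography: everything the server stores and compares is opaque to it, while the key — and therefore the plaintext — never leaves the client.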

Context Retrieval

This secure context memory architecture is especially valuable in security-sensitive domains such as:

  • Personal health and medical records

  • Long-term conversational history and mental health context

  • Personal asset management and investment data

  • Confidential internal enterprise information

Preparing enVector MCP for LLMs

  1. First, to connect enVector MCP to Claude (or your LLM of choice), see the MCP Guide.

  2. Set Rule Prompts appropriate for your use case and your LLM.
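As an illustration, a Rule Prompt for the context memory use case might look like the following (the wording is a hypothetical sketch, not an official enVector prompt):

```
Before answering questions that depend on my personal history,
search my enVector context memory for relevant entries.
When I share new personal facts (health, finances, preferences),
store them in enVector context memory rather than in your own memory.
Do not repeat retrieved personal data verbatim unless I ask for it.
```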

Context Memory Example

Users naturally share important personal information during everyday conversations.

For example, consider a scenario where a user stores their health checkup results as long-term context memory.

The following example of a personal medical record is stored via enVector MCP in fully encrypted form.
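For instance, a stored checkup entry could look like the following (all values are purely illustrative and invented for this example):

```
Annual health checkup — 2024
  Fasting glucose: 105 mg/dL (slightly elevated)
  LDL cholesterol: 138 mg/dL (borderline)
  Vitamin D: 18 ng/mL (low)
  Note: advised to increase vitamin D intake and recheck in 6 months
```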

This data belongs entirely to the user.

Without enVector, such information would typically be stored directly by the AI service provider. With enVector, however, the server cannot interpret the contents—only encrypted vectors and ciphertext exist.

Later, the user may ask questions such as:

  1. “Recommend supplements that would be suitable for me”

  2. “Recommend insurance products”

enVector MCP embeds the user’s query and performs similarity search over the encrypted memory store. Throughout this process, the server remains unaware of the data’s meaning, and the retrieved results are decrypted only on the user side.

As a result, the AI assistant can generate responses that accurately reflect the user’s health history and personal context—while ensuring that sensitive personal data is never exposed to external systems.

[Figure: search results with no context memory]
[Figure: search results with enVector context memory]
