A new open-source project from Signal's creator aims to solve the privacy problem inherent in cloud-based AI assistants by encrypting prompts and responses before they leave the user's device.
The fundamental tension of modern AI assistants is that they require sending your private thoughts, code, and data to a third-party server to function. Moxie Marlinspike, the creator of Signal, is attempting to resolve this with Confer, an open-source project that implements end-to-end encryption for AI conversations.

What's Claimed
Confer is an open-source AI assistant framework designed to keep user data private through encryption. The core promise is that conversations with an AI model remain encrypted even while being processed by a cloud service. The user sends encrypted prompts, the service processes them, and returns encrypted responses—all without the service provider ever seeing the plaintext content.
This isn't Marlinspike's first attempt at solving this problem. His previous project, Warrant Canary, explored similar concepts but never gained significant traction. Confer appears to be a more mature implementation of the same privacy-first philosophy applied to the current AI landscape.
What's Actually New
The technical innovation here is applying Signal's proven Double Ratchet algorithm to AI chat interactions. In traditional E2EE messaging, both parties need to participate in the encryption protocol. With Confer, the AI model itself becomes a participant in the encryption scheme.
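The Double Ratchet combines a Diffie-Hellman ratchet with a symmetric-key ratchet. The symmetric half, which gives each message a fresh one-time key, can be sketched in a few lines. This is an illustrative toy based on the published Signal specification, not Confer's code:

```python
import hashlib
import hmac

def ratchet_step(chain_key: bytes) -> tuple[bytes, bytes]:
    # Derive a one-time message key, then advance the chain.
    # Signal's spec uses HMAC with constant inputs 0x01 and 0x02;
    # the chain only moves forward, so an attacker who steals
    # today's chain key cannot recover earlier message keys.
    message_key = hmac.new(chain_key, b"\x01", hashlib.sha256).digest()
    next_chain = hmac.new(chain_key, b"\x02", hashlib.sha256).digest()
    return message_key, next_chain

ck = b"\x00" * 32  # initial chain key (would come from a DH exchange)
keys = []
for _ in range(3):
    mk, ck = ratchet_step(ck)
    keys.append(mk)

assert len(set(keys)) == 3  # every message gets a distinct key
```

In a full Double Ratchet, the chain key itself is periodically replaced via new Diffie-Hellman exchanges, which is what provides post-compromise security.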
Here's how it works:
- Client-side encryption: The user's device encrypts prompts using a session key derived from the user's credentials and the AI service's public key.
- Secure processing: The encrypted prompt is sent to the AI service, which processes it in an encrypted state or decrypts it within a trusted execution environment.
- Encrypted response: The AI's response is immediately encrypted with the same session key before being sent back.
- Zero-knowledge architecture: The service provider never has access to the plaintext conversation, only the encrypted blobs.
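The four steps above can be sketched end to end. This is an illustrative toy, not Confer's protocol: the random shared secret stands in for a real X25519 key agreement with the service's public key, and the SHA-256 counter keystream stands in for a real AEAD cipher such as AES-GCM.

```python
import hashlib
import hmac
import secrets

def hkdf_sha256(secret: bytes, info: bytes) -> bytes:
    # Simplified HKDF: extract with a zero salt, single expand block.
    prk = hmac.new(b"\x00" * 32, secret, hashlib.sha256).digest()
    return hmac.new(prk, info + b"\x01", hashlib.sha256).digest()

def keystream(key: bytes, nonce: bytes, n: int) -> bytes:
    # Toy keystream: SHA-256 in counter mode. NOT a real cipher.
    out, ctr = b"", 0
    while len(out) < n:
        out += hashlib.sha256(key + nonce + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return out[:n]

def seal(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    nonce = secrets.token_bytes(12)
    ct = bytes(a ^ b for a, b in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct

def unseal(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(ct, keystream(key, nonce, len(ct))))

# Both endpoints derive the same session key from the shared secret.
shared_secret = secrets.token_bytes(32)  # stands in for an X25519 exchange
session_key = hkdf_sha256(shared_secret, b"confer-session")

nonce, blob = seal(session_key, b"review this contract for risks")
# ...only the ciphertext blob is visible to the network and the host...
assert unseal(session_key, nonce, blob) == b"review this contract for risks"
```

The same `seal`/`unseal` pair covers step three: the service encrypts its response under the session key before it leaves the enclave.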
The project leverages hardware security features like Intel SGX or AMD SEV to create secure enclaves where decryption and AI processing can occur without exposing data to the host system. This approach mirrors Signal's private contact discovery, which performs contact matching inside an SGX enclave so the server never sees users' address books in plaintext.
The Hard Problems
Confer faces several significant technical challenges that explain why this hasn't been solved already:
Performance overhead: Encryption adds latency. For large language models that already take seconds to generate responses, adding cryptographic operations creates a noticeable delay. The project needs to balance security with usability.
Model updates and fine-tuning: If the AI model is constantly being updated, how do you maintain encryption guarantees? The secure enclave needs to load new model weights while preserving the integrity of the encryption scheme.
Input size limitations: LLMs process massive context windows, so a single interaction can involve megabytes of text. Bulk symmetric encryption itself is fast on modern CPUs; the bottleneck is moving that much data through an enclave's limited protected memory. Confer likely uses streaming encryption to keep memory use bounded, but this adds complexity.
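A chunked scheme like the following shows why streaming helps: ciphertext chunks can be sent as they are produced, so a multi-megabyte context never has to be buffered in full. This is a toy illustration (SHA-256 counter keystream in place of a real AEAD), not Confer's implementation:

```python
import hashlib
import hmac
import io
import secrets

CHUNK = 64 * 1024  # 64 KiB chunks

def chunk_key(key: bytes, index: int) -> bytes:
    # Derive an independent keystream key per chunk, so chunks can be
    # encrypted and decrypted out of order or in parallel.
    return hmac.new(key, index.to_bytes(8, "big"), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher: SHA-256 in counter mode. NOT a real cipher.
    out, ctr = bytearray(), 0
    while len(out) < len(data):
        out += hashlib.sha256(key + ctr.to_bytes(8, "big")).digest()
        ctr += 1
    return bytes(a ^ b for a, b in zip(data, out))

def encrypt_stream(key: bytes, source):
    # Yields (index, ciphertext) pairs one chunk at a time.
    for i, chunk in enumerate(iter(lambda: source.read(CHUNK), b"")):
        yield i, xor_stream(chunk_key(key, i), chunk)

key = secrets.token_bytes(32)
context = b"A" * 200_000  # stand-in for a large prompt context
ciphertext = dict(encrypt_stream(key, io.BytesIO(context)))
recovered = b"".join(
    xor_stream(chunk_key(key, i), c) for i, c in sorted(ciphertext.items())
)
assert recovered == context
```

A production design would additionally authenticate each chunk and bind the chunk sequence together so an attacker cannot reorder or drop chunks undetected.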
Trusted hardware requirements: The security model depends on trusting hardware vendors (Intel, AMD) and their implementation of secure enclaves. These implementations have had serious vulnerabilities in the past, such as Foreshadow and Plundervolt, both of which broke SGX's confidentiality guarantees.
Practical Applications
Despite the challenges, Confer could enable several use cases:
- Enterprise AI adoption: Companies could use cloud AI services for sensitive code review, financial analysis, or legal document processing without exposing proprietary data.
- Healthcare: Medical professionals could use AI for diagnosis assistance while maintaining HIPAA compliance.
- Journalism: Reporters could use AI to help structure stories or check facts without revealing sources or unpublished information.
- Personal productivity: Individuals could use AI for journaling, therapy-like conversations, or personal financial planning with actual privacy.
Current State and Limitations
Confer is currently in early development. The GitHub repository shows active commits but no stable release. The project documentation acknowledges several limitations:
- Only supports specific hardware with trusted execution environments
- Limited to certain AI model architectures that can run in secure enclaves
- No formal security audit yet
- Performance degradation of 20-50% compared to unencrypted AI calls
The project also faces a fundamental trust question: even with E2EE, users must trust that the AI model itself isn't compromised or biased. The encryption protects data in transit and at rest on the server, but doesn't address what the AI does with that data once decrypted.
The Broader Context
Confer enters a crowded field of privacy-preserving AI technologies. Microsoft has been working on encrypted AI processing through Azure Confidential Computing. OpenAI and others offer business tiers with data processing agreements, but these rely on legal protections rather than technical ones.
What distinguishes Confer is its open-source nature and its lineage. Marlinspike has a track record of building privacy tools that actually work and gain adoption. Signal's success came from making encryption invisible to users—Confer aims to do the same for AI interactions.
The project also raises questions about the future of AI architecture. If privacy becomes a first-class requirement rather than an afterthought, it could push the industry toward federated learning, on-device processing, or entirely new distributed AI paradigms.
For now, Confer represents an important experiment in whether we can have both powerful AI assistants and meaningful privacy. The technical challenges are substantial, but the alternative—accepting that all our AI interactions must be surveilled—is increasingly untenable.
Project Resources:
- Confer GitHub Repository
- Signal's Technical Documentation (for context on encryption protocols)
- Intel SGX Documentation (underlying trusted execution technology)
