A Privacy-First AI Assistant? Early Observations on Proton’s Lumo
- Tiffany Quach
- Nov 2, 2025
- 2 min read
Most consumer AI assistants include some version of the same fine print: user inputs may be retained and used to improve models. For individuals and companies handling sensitive information, that tradeoff is not always acceptable.
Lumo, an AI assistant recently launched by the Swiss company behind Proton Mail, takes a different approach. It is designed as a privacy-first AI tool, with encryption and data-handling practices that diverge from many mainstream consumer AI services.
With the caveat that this is based on early use, a few aspects of Lumo stand out.
Zero-Access Encryption by Design
According to Proton, Lumo uses zero-access encryption, meaning that only the user can access conversation content. Even Proton’s own engineers are not able to read prompts or responses.
This model mirrors Proton’s broader approach to email and storage services and is materially different from AI systems that retain plaintext prompts or logs internally.
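To make the idea concrete, here is a minimal sketch of the general zero-access pattern (not Proton's actual implementation, and the toy cipher here is for illustration only): the key is derived on the client from a user secret, so the service only ever stores ciphertext it cannot decrypt.

```python
import hashlib
import secrets

def derive_key(passphrase: bytes, salt: bytes) -> bytes:
    # Key derivation happens client-side; the passphrase never leaves the device.
    return hashlib.pbkdf2_hmac("sha256", passphrase, salt, 200_000)

def encrypt(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    # Toy stream cipher: XOR with a SHAKE-256 keystream (illustrative only;
    # real systems use an authenticated cipher such as AES-GCM).
    nonce = secrets.token_bytes(16)
    keystream = hashlib.shake_256(key + nonce).digest(len(plaintext))
    return nonce, bytes(a ^ b for a, b in zip(plaintext, keystream))

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    keystream = hashlib.shake_256(key + nonce).digest(len(ciphertext))
    return bytes(a ^ b for a, b in zip(ciphertext, keystream))

salt = secrets.token_bytes(16)
key = derive_key(b"user passphrase", salt)
nonce, ct = encrypt(key, b"a private prompt")

# The server stores only (salt, nonce, ct). Without the user's passphrase,
# neither the operator nor its engineers can recover the plaintext.
assert decrypt(key, nonce, ct) == b"a private prompt"
```

The contrast with conventional AI services is that there, the server holds (or can derive) the key, so plaintext prompts are readable server-side.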
No Use of Prompts for Model Training
Many consumer AI platforms rely on user inputs to train or refine large language models, profile users, or improve downstream services.
Lumo explicitly states that prompts and responses are not used for model training. For privacy-sensitive use cases (whether personal, professional, or regulated), this distinction matters.
Open-Source Models Hosted in Europe
Lumo runs on a mix of open-source AI models hosted on Proton’s European infrastructure. From a data protection perspective, this may be relevant for users concerned about data localization, cross-border transfers, or alignment with European privacy norms.
While infrastructure location alone does not resolve compliance questions, it is an increasingly common factor in AI vendor evaluations.
Tradeoffs: Capability vs. Privacy
Lumo is not positioned as a replacement for the most powerful general-purpose AI tools on the market. In terms of raw capability, it currently lags behind leading commercial models.
That said, for scenarios where privacy is the primary constraint, having a less powerful but more privacy-protective option can be useful.
Minimal Friction to Use
Two additional design choices stand out:
- Lumo does not require users to create an account
- The interface is intentionally simple
Removing the account requirement limits identity linkage, which meaningfully reduces both data collection and downstream risk.
A Broader Trend Toward Privacy-Conscious AI
Lumo reflects a broader shift in the AI ecosystem. As awareness grows around data retention, model training, and secondary use of inputs, some users are actively seeking tools that opt out of the default “train on everything” model.
Whether privacy-first AI assistants gain broader adoption remains to be seen. But their emergence signals that data practices are becoming a competitive differentiator, not just a compliance footnote. (And I personally like the purple cat mascot.)