Most personal AI products use one model. Mandaire uses two, doing different jobs.

When you chat with a single AI assistant about your life, the assistant must see your raw data to be useful. The data is the input; the conversation is the output. Once the assistant has the data, you have lost control of it. Mandaire splits this into two roles, on purpose.

The reasoning model

An AI model running inside your Mandaire holds the full picture of your communications, calendar, and contacts. It is the only model that sees your raw data. You choose the provider: a local model running on your own hardware, or a cloud provider you select directly.

This is the model that does the work. It runs inside the boundary you control.

The rendering model

The chat client you already use, whether Claude, ChatGPT, Gemini, or any MCP-compatible client, is the rendering model. It never sees your raw data. It only sees the disclosure-filtered briefings the reasoning model produces. You bring the chat client. You keep the chat client. You can switch chat clients without changing your Mandaire.

This is the model that talks to you. It only ever sees what your Mandaire has decided is appropriate to expose.

Between the reasoning model and the outside world, a deterministic policy decides what goes through.

Every query that reaches your Mandaire from outside passes through a disclosure layer before any answer leaves. Audience tier, query purpose, and recipient relationship all factor into the policy. The disclosure layer is deterministic code, not another AI model. It cannot be talked into bypassing its own rules. That guarantee is specifically about the disclosure engine; the reasoning model upstream is still an LLM, and the threat model for it is described below under prompt injection.
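To make "deterministic code, not another AI model" concrete, here is a minimal sketch of what a disclosure decision can look like as a pure function. The tier names, categories, and policy table are illustrative assumptions, not Mandaire's actual schema.

```python
# Illustrative sketch of a deterministic disclosure decision.
# Tier names and the policy table are hypothetical, not Mandaire's real schema.
from dataclasses import dataclass

# Ordered audience tiers: higher index = more trusted.
TIERS = ["public", "colleague", "friend", "confidant"]

# Hypothetical policy: minimum tier required per information category.
POLICY = {
    "availability": "colleague",   # free/busy windows
    "location": "friend",
    "health": "confidant",
}

@dataclass(frozen=True)
class Query:
    audience_tier: str
    category: str

def allow(query: Query) -> bool:
    """Pure function of the query; no model output can change the verdict."""
    required = POLICY.get(query.category)
    if required is None:
        return False  # unknown categories are denied by default
    return TIERS.index(query.audience_tier) >= TIERS.index(required)

print(allow(Query("colleague", "availability")))  # True
print(allow(Query("colleague", "health")))        # False
```

Because the verdict is a function of the query and the policy table alone, there is no prompt that can argue it into a different answer.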

Per-audience filtering

What you tell your closest confidant is not what you tell a colleague. Mandaire builds the audience model into the policy, so the same underlying picture produces different outputs for different recipients without your having to redact each one by hand. The audience for any given conversation is set by which chat client is connected: Mandaire's MCP server exposes a different tier to a personal chat than to a shared one. Per-recipient policies refine this further, and you can edit them.
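One way to picture "the same underlying picture, different outputs": a single fact rendered differently per tier. The rendering rules below are invented for illustration.

```python
# Hypothetical example: one underlying fact, different renderings per tier.
FACT = {"event": "doctor appointment", "time": "Tue 15:00"}

def render(fact: dict, tier: str) -> str:
    if tier == "confidant":
        return f"{fact['event']} at {fact['time']}"   # full detail
    if tier in ("friend", "colleague"):
        return f"busy at {fact['time']}"              # availability, no reason
    return "no details available"                     # public tier

print(render(FACT, "confidant"))  # doctor appointment at Tue 15:00
print(render(FACT, "colleague"))  # busy at Tue 15:00
```

A colleague's chat learns you are busy Tuesday afternoon; only a confidant's chat learns why.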

Read-only against connected services by default

Mandaire reads from Gmail, Calendar, iMessage, WhatsApp, and the other sources you connect; it does not send mail, modify calendars, or write to your message threads on those services without an explicit per-action approval step. The disclosure engine enforces this regardless of what the reasoning model suggests. The read-only-by-default scope is about writing back to your source-of-truth services. Mandaire continuously emits structured outputs into the rendering channel you choose, whether chat client, Matrix, or outbound disclosure. That is the product working.
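The per-action approval step can be sketched as a two-phase gate: the reasoning model may only propose a write, and nothing executes without an explicit approval. All names here are hypothetical.

```python
# Sketch of a per-action approval gate (function and field names are illustrative).
import uuid

PENDING: dict[str, dict] = {}

def propose_action(action: dict) -> str:
    """The reasoning model can only propose writes; nothing is sent yet."""
    token = str(uuid.uuid4())
    PENDING[token] = action
    return token  # surfaced to the user for review

def execute(token: str, user_approved: bool) -> bool:
    action = PENDING.pop(token, None)
    if action is None or not user_approved:
        return False  # no approval, no write
    # ... here a connector would perform the write against the source service
    return True

t = propose_action({"type": "send_email", "to": "alice@example.com"})
print(execute(t, user_approved=False))  # False: nothing leaves without approval
```

Because the token is consumed on use, an approval authorizes exactly one action exactly once.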

Audit trail

Every external query and every disclosure decision is logged as metadata: who asked, when, what audience tier, what categories of information were exposed, what tool was called, what the status code was. The bodies of queries and responses are not retained, in line with the same minimal-retention principle described in the privacy policy. The audit log lets you see what your Mandaire has done and with whom, without itself becoming a second copy of the conversations you wanted private.
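A metadata-only audit record might look like the sketch below. The field names are assumptions, and the body digest is one possible way to get tamper-evidence without retaining content; the source states only that bodies are not retained.

```python
# Sketch of a metadata-only audit record; field names are assumptions.
import hashlib
import json
import time

def audit_record(asker: str, tier: str, categories: list[str],
                 tool: str, status: int, body: str) -> dict:
    return {
        "ts": int(time.time()),
        "asker": asker,
        "tier": tier,
        "categories": categories,   # what kinds of info were exposed
        "tool": tool,
        "status": status,
        # the body itself is never stored; a digest is kept for tamper-evidence
        "body_sha256": hashlib.sha256(body.encode()).hexdigest(),
    }

rec = audit_record("bob", "colleague", ["availability"], "calendar.query",
                   200, "Is Dana free Tuesday afternoon?")
print(json.dumps(rec, indent=2))
```

The record answers "who asked what kind of thing, and what went out" without ever becoming a transcript.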

Prompt injection and untrusted inputs

The reasoning model reads inbound channels (email, messaging, calendar invites) that can contain adversarial content. The primary defense is structural: the reasoning model produces only typed, schema-validated intermediate representations, not free-form text. Each field in the schema declares what categories of content it can carry; the disclosure engine validates the structure, and applies audience-tier policy on each field, before any value reaches the rendering model. The reasoning model has no path to send free-form output past this validation. As a backstop, certain content classes (raw OAuth tokens, encryption keys, payment credentials) are explicitly forbidden from any field in any output regardless of input. Per-user isolation means cross-user injection is architecturally impossible.
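A toy version of the structural defense: a typed output field plus a denylist of forbidden content classes checked before anything leaves. The schema shape and regex patterns below are invented for illustration, not Mandaire's published schema.

```python
# Sketch of schema-validated output with forbidden content classes.
# The field layout and patterns are illustrative, not Mandaire's real schema.
import re
from dataclasses import dataclass

FORBIDDEN = [
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # encryption keys
    re.compile(r"\bya29\.[\w.-]+"),                     # OAuth-token-like strings
    re.compile(r"\b\d{13,19}\b"),                       # payment-card-like numbers
]

@dataclass
class Briefing:
    summary: str        # text derived from sources: treated as untrusted evidence
    categories: tuple   # declared content categories, checked against policy

def validate(b: Briefing) -> Briefing:
    """Reject any output field carrying a forbidden content class."""
    for pattern in FORBIDDEN:
        if pattern.search(b.summary):
            raise ValueError("forbidden content class in output field")
    return b

validate(Briefing("Dana is free Tuesday afternoon.", ("availability",)))
```

The backstop is content-based, so even an injected instruction that reaches a summary field cannot smuggle a credential through it.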

Honest about what this does and does not prevent. Schema validation prevents arbitrary tool calls and output shapes, not all semantic injection. An attacker who places adversarial instructions inside an email body or a calendar invite can still have that text carried through a typed summary or evidence field and potentially influence the rendering model that reads it. Mandaire treats all source-derived text as untrusted evidence and labels it as such when exposed to rendering models, so the rendering model can apply its own caution. The schema and the threat model are part of the open-source release. The threat is real and well known; the defenses are imperfect but specific.

Key custody and recovery

Your encryption key is derived from a secret only you hold. You back the secret up the way you back up any other irreplaceable secret: a password manager, a written copy in a safe, a hardware key, or a recovery sheet you store somewhere physical. If you lose the secret with no backup, your data is unrecoverable, the same way a lost hardware-wallet seed makes a wallet unrecoverable. Mandaire cannot reset it for you, and we cannot recover it on your behalf. This is the trade-off that makes the sovereignty pitch real. Multi-factor authentication is required for the user-to-Mandaire connection itself, separately from the data-encryption key, and authentication events are recorded in your audit log.
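Key derivation from a user-held secret can be sketched with a standard memory-hard KDF; the parameters here are illustrative, and scrypt is one reasonable choice rather than a statement about what Mandaire uses. The important property is visible in the assertions: the same secret always yields the same key, and without the secret there is no path back to it.

```python
# Sketch of deriving a data-encryption key from a user-held secret with scrypt.
# Parameters are illustrative; losing the secret makes the key underivable.
import hashlib
import os

def derive_key(secret: str, salt: bytes) -> bytes:
    return hashlib.scrypt(secret.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = os.urandom(16)   # stored alongside the ciphertext; the salt is not secret
key = derive_key("correct horse battery staple", salt)

assert derive_key("correct horse battery staple", salt) == key  # deterministic
assert derive_key("wrong secret", salt) != key                  # no recovery path
```

There is nothing server-side that can regenerate the key, which is exactly why the backup advice above is not optional.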

Where the key lives during operation. The reasoning model is a long-running daemon and needs the derived key in memory while it processes inbound channels. Your secret is held on a small local agent on a device you control, typically a Mac. The agent forwards the derived key to the server over a mutual-TLS channel at session start, the daemon holds it in process memory only, and the key is wiped from memory whenever the daemon stops or restarts. The key is never written to disk on the server. The threat model this addresses is cold-storage compromise (lost or seized disk, snapshot of a stopped VM, backup recovery): an attacker with the disk gets ciphertext only. The threats this does not address, and which you should price into your decision: hypervisor-level RAM inspection by the cloud provider during an active session; insider access by Mandaire operations staff during an active SSH session, logged in your audit trail but not prevented. The escape valve for both is self-host on hardware you control, where there is no cloud-provider hypervisor and no Mandaire ops access.

The source of truth stays at the source. Mandaire indexes; it does not duplicate.

For each connected service, Mandaire keeps a derived index that makes your data fast to query, and a pointer back to where the original lives. Email bodies live in your mail provider. Messages live in their bridges. Calendar events live in your calendar. Mandaire knows where everything is and how it connects, but the canonical copy stays at the canonical source.
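A derived index entry under this model is searchable metadata plus a pointer home, with no body field at all. The structure below is a guess at the shape, not Mandaire's actual index format.

```python
# Sketch of a derived index entry: searchable metadata plus a pointer home.
# Field names and values are illustrative; the canonical body stays at the source.
index_entry = {
    "source": "gmail",
    "pointer": {"account": "you@example.com", "message_id": "msg-17c2a9"},
    "subject": "Dinner Thursday?",
    "participants": ["you@example.com", "alice@example.com"],
    "ts": 1760000000,
    # no "body" field: the content is rebuildable from, and resolvable at,
    # the source service via the pointer above
}
print(index_entry["pointer"]["message_id"])
```

Deleting this entry loses nothing canonical; re-indexing the mailbox recreates it.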

Why it matters

If a Mandaire index is corrupted or lost, it can be rebuilt from the canonical source. If a canonical source is corrupted, your data is still in the original service, untouched. Mandaire's indexes are derived; they are never the only copy of anything.

Binary attachments (photos, videos, audio, PDFs) are referenced, not copied. When you genuinely need the bytes, Mandaire fetches them live from the source for that request and does not retain them. The metadata stays with Mandaire; the bytes stay with the source.
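The fetch-and-forget pattern for attachment bytes might look like this; the function names and the fake PDF payload are hypothetical.

```python
# Sketch of fetch-on-demand for attachment bytes (names are hypothetical).
def with_attachment(pointer: dict, fetch, use):
    """Fetch bytes live from the source, use them once, retain nothing."""
    data = fetch(pointer)    # connector call to the source service
    try:
        return use(data)     # e.g. render a preview for this one request
    finally:
        del data             # no copy written to the index or to disk

size = with_attachment({"source": "gmail", "attachment_id": "att-1"},
                       fetch=lambda p: b"%PDF-1.7 ...",  # stand-in payload
                       use=len)
print(size)
```

The index keeps only the pointer and metadata; the bytes exist in Mandaire's memory for the duration of one request.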

The legal owner of your data and the legal owner of your cloud account are the same entity: you.

Most cloud services hold your data on their infrastructure under their billing relationship. Even when they promise not to look, they are the legal custodian. Mandaire is structured differently.

You own the cloud account directly

If you choose the managed configuration, the cloud account at DigitalOcean, Hetzner, Vultr, or similar is in your name. You pay the cloud provider directly. Mandaire never holds your cloud bill, never has billing access, and is never the legal custodian of your data.

Mandaire operates the box, you own the box

Think of it less like a SaaS and more like managed hosting. Mandaire provides the software, the operational expertise, and the support. You own the server, the data on it, and the legal relationship with the cloud provider.

You can leave at any time

If Mandaire stops operating tomorrow, you keep your cloud account, your data, your encryption keys, and a working software stack. The exit cost is zero. There is no lock-in mechanism we could use against you because the architecture does not give us one.

Self-host is always an option

If you want to run Mandaire entirely on hardware you own, you can. The reference architecture is documented on this page; the encryption module is on track for release as open source with reproducible builds in Q3 2026, and the repository link will be published when it is. The managed configuration exists for convenience, not because the self-hosted path is second-class.

What "Mandaire operates the box" actually means

In the managed configuration, Mandaire's ops team has SSH access to the server in your cloud account in order to install software, apply security patches, and respond when things break. This is the same operational arrangement as managed WordPress or managed Postgres. The honest framing is: Mandaire holds an operational key to a server you legally own, and every action that key authorizes is logged in the audit trail you can review. The reasoning model, when running, holds plaintext data in process memory; when no query is in flight, the data on disk is encrypted with a key derived from a secret you hold. If you want zero operational access by Mandaire staff to your data, the self-hosted path provides that.

Mandaire itself is free during private beta. Your cloud and AI costs are not zero.

A category of AI products advertises "free" while hiding the underlying compute and API costs. Mandaire does not do this. Here is what running a personal Mandaire actually costs, with realistic 2026 ranges.

One Pro subscription, one provider, the most common path

A $20-per-month Pro subscription on Claude, ChatGPT, or Gemini covers both the reasoning model that runs inside your Mandaire and the rendering model you talk to in chat. Same provider, same key, same bill. Add roughly $5 to $20 per month for a small VPS at Hetzner, DigitalOcean, or Vultr where Mandaire runs. Realistic total for a typical active user during beta: under $40 per month, all in.

Local reasoning model on the same cloud box

Roughly $40 to several hundred dollars per month for a GPU-class VPS instance running an open-weights model on the server you own. No third-party AI provider sees your data. This is the privacy-maximizing path that does not require you to host hardware yourself. You may still pay $20 per month for a chat client subscription on the rendering side, or use a free-tier MCP client.

Self-host on hardware you already own

$0 in cloud costs, $0 in AI costs if you use a local model on both sides, plus the time to administer the server. Right for you if you have spare hardware (a Mac mini or a NAS with enough memory) and you are comfortable maintaining a long-running Linux service. Mandaire ops staff have no path to your data in this configuration.

Mandaire itself

Mandaire's software (encryption layer, reasoning engine, disclosure engine, connectors, support) is free for individual use during private beta. The core self-hosted software will remain free for individuals after beta. A small flat management fee for the operated stack — covering setup, updates, security patches, monitoring, and backups — will apply on the managed-hosting path when Mandaire moves out of beta; pricing will be published before general availability. Paid professional and team plans will offer richer disclosure controls and shared-context capabilities for users who want them. We do not monetize your data and we do not run ads.

A memory feature inside a single provider is different from a sovereign personal data fabric.

Major AI providers now offer memory features that retain context across conversations. These are useful inside the provider's surface. They differ from Mandaire in three structural ways.

One provider versus all your providers

A provider's memory only sees what you told that provider. Mandaire reads across email, messaging, calendar, contacts, photos, notes, docs, and files, and holds the unified picture. The provider's memory is a single-provider silo. Mandaire is a personal data fabric.

Vendor server versus your sovereignty

A provider's memory lives on the provider's servers under the provider's policy. Mandaire lives on infrastructure you own under encryption keys you hold. Policy guarantees and architectural guarantees are not the same thing.

Disclosure mediation, not just retention

A memory feature stores facts about you. Mandaire decides what to expose, to whom, and in what form. The reasoning model and the rendering model are separated specifically so the rendering model never has to be trusted with your raw data.

A personal AI that knows you, on infrastructure you own.

If you want a thinking partner that holds your full picture without surrendering it, Mandaire is in private beta and free for individuals.

Request early access
Private beta