Privacy Shield sits between you and any cloud AI service (Claude, Gemini, ChatGPT). Before your text is sent to the AI, Privacy Shield removes personal information and replaces it with harmless placeholder tokens. When the AI responds, Privacy Shield swaps the tokens back to the real values.
This works for any conversation — emails, chat, document analysis, code review, anything.
Cloud AI services process your data on their servers. Even with good privacy policies, sending personal names, addresses, phone numbers, and other identifying information to external services is a risk. Privacy Shield closes that gap by replacing identifying data before it leaves your machine.
You type:
Can you help me draft a reply to Sarah Jones at Acme Corp? She wants to meet at our office at 14 Oak Lane, Bristol next Thursday.
After Privacy Shield (what the AI sees):
Can you help me draft a reply to [PERSON_1] at [ORG_1]? She wants to meet at our office at [ADDRESS_1] next Thursday.
AI responds:
Sure! Here's a draft: "Hi [PERSON_1], Thanks for reaching out. We'd be happy to meet at [ADDRESS_1]. Would 2pm work for you?"
After de-obfuscation (what you see):
Sure! Here's a draft: "Hi Sarah Jones, Thanks for reaching out. We'd be happy to meet at 14 Oak Lane, Bristol. Would 2pm work for you?"
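The round trip above can be sketched in a few lines of Python. The mapping and token names below are illustrative, not Privacy Shield's actual internals:

```python
# Minimal sketch of the obfuscate -> AI -> de-obfuscate round trip.
# In practice the mapping is built from your config and from automatic
# detection; here it is hard-coded for illustration.
mapping = {
    "Sarah Jones": "[PERSON_1]",
    "Acme Corp": "[ORG_1]",
    "14 Oak Lane, Bristol": "[ADDRESS_1]",
}

def obfuscate(text: str) -> str:
    # Replace each real value with its placeholder before sending to the AI.
    for real, token in mapping.items():
        text = text.replace(real, token)
    return text

def deobfuscate(text: str) -> str:
    # Swap placeholders back to real values in the AI's response.
    for real, token in mapping.items():
        text = text.replace(token, real)
    return text

safe = obfuscate("Can you help me draft a reply to Sarah Jones at Acme Corp?")
print(safe)  # Can you help me draft a reply to [PERSON_1] at [ORG_1]?

reply = "Hi [PERSON_1], we'd be happy to meet at [ADDRESS_1]."
print(deobfuscate(reply))  # Hi Sarah Jones, we'd be happy to meet at 14 Oak Lane, Bristol.
```

Because the AI only ever sees the tokens, it can draft, summarise, and reason about the conversation without ever receiving the underlying personal data.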
Phase 1 — Known Data: You maintain a config file listing your personal information (your name, family names, addresses, emails, employer, etc.). The system replaces every listed value every time. This is fast and deterministic: anything on the list is always caught.
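Phase 1 replacement can be sketched as a category-to-values config that is turned into numbered tokens. The field names and file format here are assumptions for illustration, not the actual config schema:

```python
import json

# Hypothetical known-data config; the real file format and field names
# may differ. Each category maps to the values you want replaced.
CONFIG = json.loads("""
{
  "PERSON":  ["Sarah Jones", "Alex Smith"],
  "ORG":     ["Acme Corp"],
  "ADDRESS": ["14 Oak Lane, Bristol"]
}
""")

def build_mapping(config: dict) -> dict:
    # Assign numbered tokens per category: [PERSON_1], [PERSON_2], ...
    mapping = {}
    for category, values in config.items():
        for i, value in enumerate(values, start=1):
            mapping[value] = f"[{category}_{i}]"
    return mapping

print(build_mapping(CONFIG)["Sarah Jones"])  # [PERSON_1]
```

Because these are exact string matches against a fixed list, Phase 1 needs no model or heuristics, which is what makes it both fast and deterministic.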
Phase 2 — Automatic Detection: Microsoft Presidio (a lightweight PII detection engine, ~12MB) scans for anything Phase 1 missed — names of people you haven't listed, phone numbers mentioned in emails, locations, etc. It uses a confidence threshold to avoid replacing things that aren't actually personal data.
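The confidence-threshold idea can be illustrated with a toy, stdlib-only detector — this is not Presidio's API, just a sketch of how scored detections get filtered. The patterns, scores, and threshold value are all assumptions:

```python
import re
from dataclasses import dataclass

@dataclass
class Detection:
    text: str
    entity_type: str
    score: float  # detector's confidence, 0.0 - 1.0

# Toy detector standing in for Presidio: a UK-style phone number pattern
# scored high, and a bare capitalised-word pattern scored low (it matches
# far too much to be trusted on its own).
def detect(text: str) -> list:
    found = []
    for m in re.finditer(r"\b0\d{3} \d{3} \d{4}\b", text):
        found.append(Detection(m.group(), "PHONE_NUMBER", 0.95))
    for m in re.finditer(r"\b[A-Z][a-z]+\b", text):
        found.append(Detection(m.group(), "PERSON", 0.30))
    return found

THRESHOLD = 0.5  # illustrative value; the actual default may differ

def filtered(text: str) -> list:
    # Only detections above the threshold get replaced with tokens.
    return [d for d in detect(text) if d.score >= THRESHOLD]

hits = filtered("Ring me Thursday on 0117 946 0000")
print([d.text for d in hits])  # ['0117 946 0000']
```

The threshold is the knob that trades recall for precision: lower it and more borderline matches (like "Thursday" above) get replaced; raise it and only high-confidence detections are obfuscated.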
The token mapping is consistent: the same person stays [PERSON_1] throughout a conversation, so the AI understands relationships between repeated mentions.
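Consistent token assignment can be sketched as follows; the class and method names are illustrative, not Privacy Shield's actual internals:

```python
# Sketch of consistent token assignment: the same value always receives
# the same token within a conversation, so "[PERSON_1]" keeps referring
# to one person across every message.
class TokenMap:
    def __init__(self):
        self.tokens = {}    # value -> token, e.g. "Sarah Jones" -> "[PERSON_1]"
        self.counters = {}  # category -> highest number assigned so far

    def token_for(self, value: str, category: str) -> str:
        # Reuse the existing token if we've seen this value before.
        if value not in self.tokens:
            self.counters[category] = self.counters.get(category, 0) + 1
            self.tokens[value] = f"[{category}_{self.counters[category]}]"
        return self.tokens[value]

tm = TokenMap()
print(tm.token_for("Sarah Jones", "PERSON"))  # [PERSON_1]
print(tm.token_for("Alex Smith", "PERSON"))   # [PERSON_2]
print(tm.token_for("Sarah Jones", "PERSON"))  # [PERSON_1] again
```

Keeping the map for the whole conversation also gives de-obfuscation everything it needs: the reverse lookup is just the same dictionary read in the other direction.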