Very valid concern – this is the #1 question for any accessibility-based app.
Quick answer: MiniAi doesn't see your screen. It only reads text
when you trigger it explicitly. Here is the exact flow:
1. Select your text and press ⌥Space
2. At that very moment, the app reads:
– (a) the text you selected, and
– (b) ~2 sentences before + 2 sentences after from the same text field
The surrounding context helps the model understand your selection
(e.g. knowing what “it” refers to in a paragraph)
3. That bundle is sent over HTTPS to the cloud (claude-sonnet-4)
4. Nothing is stored – not locally, not on our servers. The bundle is
discarded right after the response.
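The context-window step (selection plus ~2 sentences on each side) can be sketched in a few lines. This is a minimal illustration assuming naive `.`/`!`/`?` sentence boundaries – the real app presumably uses the platform's tokenizer – and every name here is hypothetical, not MiniAi's actual code:

```python
def context_bundle(full_text: str, sel_start: int, sel_end: int,
                   window: int = 2) -> str:
    """Return the selection plus up to `window` sentences before and after."""
    # Sentence boundaries = offsets just past '.', '!', or '?'
    boundaries = [0] + [i + 1 for i, c in enumerate(full_text) if c in ".!?"]
    if boundaries[-1] != len(full_text):
        boundaries.append(len(full_text))

    def sentence_index(offset: int) -> int:
        # Index of the sentence containing this character offset
        return max(i for i, b in enumerate(boundaries[:-1]) if b <= offset)

    first = max(0, sentence_index(sel_start) - window)
    last = min(len(boundaries) - 2,
               sentence_index(max(sel_start, sel_end - 1)) + window)
    return full_text[boundaries[first]:boundaries[last + 1]].strip()
```

For example, selecting “Four.” inside “One. Two. Three. Four. Five. Six. Seven.” would bundle “Two. Three. Four. Five. Six.” – only text from that one field ever leaves the app.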
Important Scope Limitations:
– Text is read only from the currently focused text field –
not the entire screen, not other apps, not your clipboard
– No screenshots, no OCR, no background monitoring, no keystroke logging
– If you don’t press ⌥Space, nothing will be read.
Why the Accessibility permission: macOS requires it to read selected
text (plus the immediate surrounding context) from other apps. It's the
same mechanism Raycast, PopClip, TextExpander, etc. use.
What we put on our backend: only an anonymous session token and a
daily usage counter. No email, no account signup, no content.
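To make that concrete, here is a sketch of the only shape of record the claims above would imply on the server side – the field names are illustrative assumptions, not our actual schema:

```python
import uuid
import datetime

# Hypothetical per-session record: a random anonymous token and a
# daily usage counter. No email, no account, no message content.
usage_record = {
    "session_token": str(uuid.uuid4()),            # random, not tied to identity
    "date": datetime.date.today().isoformat(),      # for the daily counter reset
    "requests_today": 1,
}
```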
Full privacy policy: https://miniia.dev/en/privacy
The easiest way to verify: revoke the Accessibility permission in
System Settings. The app stops working entirely – without your explicit
selection, there is nothing for it to read. That's also the best proof
that it isn't silently recording anything.
Happy to go deeper into any of this – privacy is exactly the kind of
thing that deserves real scrutiny, not hand-waving.
Thanks for your comment! Have a nice day😄