This connects to something I keep running into with health software specifically: the default architecture often decides the trust model before the user ever gets a real choice.
Cloud APIs for AI feel low-friction until you map out what is actually leaving the device. For health data, legal evidence, or anything involving a vulnerable user, that exit point is not neutral. It changes the breach surface, the recovery options, and what happens to the user when the company changes its terms or gets acquired.
Local-first is not always the right answer for AI inference. But the question of what stays on device and what leaves should be an intentional design decision, not a default.
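To make "intentional, not a default" concrete: one way to encode the decision is an explicit egress policy that routes each sensitivity tier to local or cloud inference, so data only leaves the device when someone deliberately allowed it. This is a minimal sketch, not PainTracker's actual code; the tier names and `EgressPolicy` type are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Sensitivity(Enum):
    """Coarse data-sensitivity tiers; labels are illustrative."""
    PUBLIC = auto()
    PERSONAL = auto()
    HEALTH = auto()

@dataclass(frozen=True)
class EgressPolicy:
    """Tiers that may leave the device; everything else stays local."""
    allowed_offdevice: frozenset

    def route(self, tier: Sensitivity) -> str:
        # The exit point is a decision, not a default: anything not
        # explicitly allowed off-device goes to local inference.
        return "cloud" if tier in self.allowed_offdevice else "local"

# A deliberately conservative policy: only public data may leave.
policy = EgressPolicy(allowed_offdevice=frozenset({Sensitivity.PUBLIC}))

print(policy.route(Sensitivity.HEALTH))  # local
print(policy.route(Sensitivity.PUBLIC))  # cloud
```

The point of the type is that adding a tier to `allowed_offdevice` is a visible, reviewable change, rather than the silent consequence of calling a cloud SDK.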
I wrote about this from the health-data side today because I ran into the same problem building PainTracker. blog.paintracker.ca/stop-putting-health-data-in-t…