This model seems to me to presume that programmers, QA, and business unit testers are untrustworthy, either in fact or in perception.
It seems like good practice to me to rely on trust only where it's actually needed: when people don't need access to something, don't give it to them.
I don't think you can reasonably assume that among all the employees in a big corporation, there's nobody who is curious about his ex-girlfriend's medical record, or nobody who might have opened a questionable email attachment.
Believing that organizations can trust their internal employees with sensitive data also doesn't square with history, I feel. Various CIA spies, the Snowden leaks*, Facebook, Snapchat, and others... And those are just the big cases, not the individual employees looking up their neighbours.
*(I'm personally glad Snowden leaked that data, but from an information-security perspective, it's not something to hold up as an example.)
If that's unconvincing: it's also very likely plain illegal under the GDPR in Europe to give random programmers access to people's medical data. A really fast way to lose millions of euros.
TLDR: if your answer to the titular question is "yes", maybe don't post it online.
Another problem with using production data for tests, unrelated to privacy: if you're designing something new, there is no production data to test with yet.
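For that case (and as a privacy-friendly substitute in general), generating synthetic test data is usually enough. A minimal sketch in Python using only the standard library — the schema and field names here are invented for illustration, not taken from any real system:

```python
import random
import string

# Hypothetical schema: these field names and value pools are made up
# purely to illustrate the idea of synthetic test records.
FIRST_NAMES = ["Alice", "Bob", "Chiara", "Deniz", "Erik"]
CONDITIONS = ["hypertension", "asthma", "diabetes", "none"]

def fake_patient(rng: random.Random) -> dict:
    """Generate one synthetic patient record; no real person's data involved."""
    return {
        "patient_id": "".join(
            rng.choices(string.ascii_uppercase + string.digits, k=8)
        ),
        "name": rng.choice(FIRST_NAMES),
        "age": rng.randint(0, 99),
        "condition": rng.choice(CONDITIONS),
    }

# Seed the generator so test runs are reproducible.
rng = random.Random(42)
patients = [fake_patient(rng) for _ in range(100)]
print(len(patients))
```

Dedicated libraries do this better (realistic names, addresses, referential integrity between tables), but even something this crude gets you data you can hand to any developer without a compliance review.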