AB
I have been working on a project to prevent prompt injection in AI agents, but this kind of indirect prompt injection can and will bypass most of the security measures currently in place. Good work pointing it out. It gets me wondering, though: how can we secure our agents against such attacks?
