
>You can define a communication protocol between agents that fails when the communicating agent has been prompt injected

Good luck with that.



Yeah, how exactly would that work?


A schema with response metadata (so any response that deviates from it fails automatically), plus a challenge question calibrated to be hard enough that the disruption to instruction-following caused by prompt injection makes the model answer it incorrectly.
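A minimal sketch of the two checks described above, assuming a JSON wire format; the field names, the verifier function, and the exact-match answer check are all hypothetical illustrations, not a spec from the thread:

```python
import json

# Hypothetical schema: required metadata fields and their expected types.
REQUIRED_FIELDS = {
    "agent_id": str,
    "task_id": str,
    "challenge_answer": str,
    "payload": str,
}

def verify_response(raw: str, expected_answer: str) -> bool:
    """Accept a response only if it parses, matches the schema exactly,
    and answers the calibrated challenge question correctly."""
    try:
        resp = json.loads(raw)
    except json.JSONDecodeError:
        return False  # deviating from the schema fails automatically
    if set(resp) != set(REQUIRED_FIELDS):
        return False  # missing or extra fields -> reject
    for field, ftype in REQUIRED_FIELDS.items():
        if not isinstance(resp[field], ftype):
            return False
    # The bet: a prompt-injected model whose instruction-following is
    # disrupted is likely to drop or garble the challenge answer.
    return resp["challenge_answer"].strip().lower() == expected_answer.lower()

ok = verify_response(
    '{"agent_id": "a1", "task_id": "t9", '
    '"challenge_answer": "7", "payload": "done"}',
    expected_answer="7",
)
print(ok)  # True for a conforming, correctly-answered response
```

The schema check is mechanical; the real load-bearing part is calibrating the challenge so an uncompromised model passes it reliably while an injected one does not, which this sketch does not attempt to solve.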



