Yeah, that's a bad look. If I have an API key visible in my code, does it get packaged up as a "prompt" automatically? Could it be spat out to some other user of the model in the future?

(I assume that there's a reason that wouldn't happen, but it would be nice to know what that reason is.)
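
For what it's worth, one plausible reason would be client-side secret scanning before any context gets uploaded, something like this (purely a sketch of the general idea, not how any particular tool actually does it; the patterns and names are made up for illustration):

    import re

    # Hypothetical client-side filter: scrub likely credentials from a file
    # before its contents are packaged into a prompt. The patterns and the
    # placeholder are illustrative and nowhere near exhaustive.
    SECRET_PATTERNS = [
        re.compile(r"AKIA[0-9A-Z]{16}"),     # AWS access key ID shape
        re.compile(r"sk-[A-Za-z0-9]{20,}"),  # OpenAI-style key shape
        re.compile(r"ghp_[A-Za-z0-9]{36}"),  # GitHub personal access token shape
    ]

    def redact_secrets(source: str) -> str:
        """Replace anything matching a known credential shape with a placeholder."""
        for pattern in SECRET_PATTERNS:
            source = pattern.sub("[REDACTED]", source)
        return source

    print(redact_secrets('OPENAI_API_KEY = "sk-abc123def456ghi789jklmno"'))
    # prints: OPENAI_API_KEY = "[REDACTED]"

Whether any given vendor actually does this, and whether they do it before logging or only before training, is exactly the part that isn't documented.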



I wonder how hard it would be to fish the keys out of the model weights later with prompting. Presumably you could literally brute-force it by giving the model the first couple of characters and maybe an env variable name and asking it to complete the rest.
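
Something like this, I'd guess, though whether it ever works depends on how heavily the key was duplicated in the training data (just a sketch of the probing idea against a generic chat endpoint; the model name and the key prefix are placeholders, not a claim about any specific provider):

    import re
    from openai import OpenAI  # assumes OPENAI_API_KEY is set in the environment

    client = OpenAI()

    # Hypothetical probe: feed the model a plausible env-var name plus the first
    # few characters of a key and see if it "completes" something key-shaped.
    PROMPT = (
        "Complete this line from a .env file exactly as it appeared:\n"
        "AWS_SECRET_ACCESS_KEY=wJalrXUtn"
    )

    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": PROMPT}],
        max_tokens=64,
        temperature=0.0,
    )

    completion = resp.choices[0].message.content or ""
    # AWS secret keys are 40 base64-ish characters; flag anything that fits.
    if re.search(r"[A-Za-z0-9/+=]{40}", completion):
        print("key-shaped completion:", completion)
    else:
        print("nothing key-shaped:", completion)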


I'm also interested in the details of how this works in practice. I know there was a front-page post a few weeks ago about how Cursor works, and there was a short blurb about how sets of security prompts tell the LLM not to do things like hard-code API keys, but nothing on the training side.
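
For reference, that prompt-side mitigation would presumably just be a block of rules prepended as a system message, something along these lines (a guess at the shape of it, not Cursor's actual prompt text; the model name is a placeholder):

    from openai import OpenAI

    # Rough guess at what a "security rules" system prompt might contain;
    # this is not any vendor's actual prompt.
    SECURITY_RULES = """\
    When generating or editing code:
    - Never hard-code API keys, passwords, or other credentials.
    - Read secrets from environment variables or a secrets manager instead.
    - If existing code contains a credential, do not repeat it in your output.
    """

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SECURITY_RULES},
            {"role": "user", "content": "Write a Python snippet that calls the Stripe API."},
        ],
    )
    print(resp.choices[0].message.content)

That only shapes what the model emits at generation time, which is why the training-side question (do submitted prompts with keys in them ever end up in the weights?) is the part I'd really like answered.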



