
The key finding is that "compression" of doc pointers works.

It's barely readable to humans, but directly and efficiently usable by LLMs (direct reference -> referent, without the verbiage of natural language).

This suggests that some (compressed) index format that is always loaded into context will replace the current heuristics around agents.md/claude.md/skills.md.

So I would bet this year we get some normalization of both the indexes and the referenced documentation (esp. matching terms).
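
To make that concrete, a purely hypothetical entry format for such an index (the topics, terms, and paths here are made up):

  auth|OAuth2 flow, token refresh|docs/auth.md#refresh
  rate-limits|429 handling, backoff|docs/limits.md
  webhooks|signing, retries|docs/webhooks.md#sig

Each line is roughly topic | key terms | pointer to the full doc; the agent only expands the referenced file when it actually needs it.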

Possibly also a side issue: APIs could repurpose their test suites as validation harnesses to compare LLM performance on code tasks.
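
A rough sketch of what I mean, assuming a pytest-based suite and a hypothetical generate_with_llm() helper that produces a candidate implementation:

  import shutil
  import subprocess
  import tempfile
  from pathlib import Path

  def passes_existing_suite(repo: Path, candidate_impl: str, target: str) -> bool:
      """Drop an LLM-generated implementation into a copy of the repo and
      run the library's existing test suite against it."""
      workdir = Path(tempfile.mkdtemp())
      shutil.copytree(repo, workdir, dirs_exist_ok=True)
      (workdir / target).write_text(candidate_impl)  # e.g. "mylib/client.py"
      result = subprocess.run(["pytest", "-q"], cwd=workdir, capture_output=True)
      return result.returncode == 0

  # candidate = generate_with_llm(task_description)  # hypothetical helper
  # print(passes_existing_suite(Path("mylib"), candidate, "mylib/client.py"))

Run it per model and per task and the pass rate becomes the comparison metric, with no new benchmark to write.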

LLMs create huge adoption waves. Libraries/APIs will have to learn to surf them or be limited to use by humans.



That's not the only useful takeaway. I found this to be true:

  > "Explore project first, then invoke skill" [produces better results than] "You MUST invoke the skill".
I recently tried to get Antigravity to consistently adhere to my AGENTS.md (Antigravity uses GEMINI.md). The agent consistently ignored instructions in GEMINI.md like:

- "You must follow the rules in [..]/AGENTS.md"

- "Always refer to your instructions in [..]/AGENTS.md"

Yet, this works every time: "Check for the presence of AGENTS.md files in the project workspace."

This behavior is mysterious. It's like how, in earlier days, "let's think step by step" invoked chain-of-thought behavior but analogous prompts did not.


An idea: The first two are obviously written as second-person commands, but the third is ambiguous and could be interpreted as a first-person thought. Have you tried the first two without the "you must" and "your", to also change them to sort-of first-person in the same way?


Solid intuition. Testing this in Antigravity is a chore because I'm not sure whether I have to kill the background agent to force a refresh of the GEMINI.md file, so I just killed it every time anyway.

  +------------------+------------------------------------------------------+
  | Success/Attempts | Instructions                                         |
  +------------------+------------------------------------------------------+
  | 0/3              | Follow the instructions in AGENTS.md.                |
  +------------------+------------------------------------------------------+
  | 3/3              | I will follow the instructions in AGENTS.md.         |
  +------------------+------------------------------------------------------+
  | 3/3              | I will check for the presence of AGENTS.md files in  |
  |                  | the project workspace. I will read AGENTS.md and     |
  |                  | adhere to its rules.                                 |
  +------------------+------------------------------------------------------+
  | 2/3              | Check for the presence of AGENTS.md files in the     |
  |                  | project workspace. Read AGENTS.md and adhere to its  |
  |                  | rules.                                               |
  +------------------+------------------------------------------------------+

In this limited test, it seems like first person makes a difference.


This is a really interesting finding. It makes sense when you think about what the training data looks like — first person statements in a system prompt pattern-match to "internal monologue" or "chain of thought" examples, which the model has been heavily trained to follow through on. Second person commands pattern-match to user instructions, which the model has also been trained to sometimes push back on or reinterpret.

There's probably a related effect with imperative vs. declarative framing in skills too. "When the user asks about X, do Y" seems to work worse than "This project uses Y for X" in my experience. The declarative version reads like a fact about the world rather than a command to obey, and models seem to treat facts as more reliable context.

Would be curious if someone has tested this systematically across different models. The optimal framing might vary quite a bit between Claude, Gemini, and GPT.


Thanks for this (and to Izkata for the suggestion). I now have about 100 (okay, minor exaggeration, but not by as much as you'd like) AGENTS.md/CLAUDE.md files and agent descriptions for which I'll want to systematically validate whether shifting toward first person helps adherence...

I'm realising I need to start setting up an automated test-suite for my prompts...
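
Something like this is what I have in mind; run_agent() and the transcript check are stand-ins for whatever harness you actually use:

  # Minimal sketch: score each instruction variant by how often the agent
  # actually reads AGENTS.md across repeated runs.
  VARIANTS = {
      "imperative": "Follow the instructions in AGENTS.md.",
      "first_person": "I will follow the instructions in AGENTS.md.",
  }

  def adhered(transcript: str) -> bool:
      # Crude proxy: did the agent open the file at all?
      return "AGENTS.md" in transcript

  def score(run_agent, attempts: int = 3) -> dict[str, float]:
      results = {}
      for name, instruction in VARIANTS.items():
          wins = sum(adhered(run_agent(instruction)) for _ in range(attempts))
          results[name] = wins / attempts
      return results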


Those of us who've ventured this far into the conversation would appreciate if you'd share your findings with us. Cheers!


That's really interesting. I ran this scenario through GPT-5.1 and the reasoning it gave made sense. It essentially boils down to: in tools like Claude Code, Gemini Codex, and other "agentic coding" modes, the model isn't just generating text, it's running a planner, and the first-person form matches the expected shape of a step in a plan, whereas the other phrasings are more ambiguous.


My suggestion came from thinking about plain text generation and what the training data might look like (imagining a narrative in a story): commands between two people might not be followed right away or at all (and may even introduce rebellion and doing the opposite), while a first-person statement reads as self-motivation (almost guaranteed to be followed through) and may even narrate the action as it happens.


Interesting. It's almost like models don't like being ordered around rudely with this "must" language.

Perhaps what they've learned from training data is that "must" often occurs alongside bullshit red tape and other regulations: "You must read the terms and conditions before using this stuff," or something like that, which are actually best ignored.


ln -s


Would’ve been perfectly readable and no larger if they had used newline instead of pipe.


They say compressed... but isn't this just "minified"?


Minification is still a form of compression; it just leaves the file more readable than more powerful compression methods (such as ZIP archives) do.


I'd say minification/summarization is more like a lossy, semantic compression. It's only relevant to LLMs and doesn't really fit more classical notions of compression. Minification would definitely be a clearer term, even if compression _technically_ makes sense.
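
The distinction is easy to see with a quick size comparison (the doc text here is just a stand-in):

  import gzip

  doc = (
      "The authenticate() function accepts an API key and returns a session "
      "object. If the key is invalid, it raises an AuthError. See also refresh()."
  )
  # "Minified"/summarized by hand: lossy, but still readable in context.
  minified = "authenticate(api_key)->Session|raises AuthError|see refresh()"

  print(len(doc), len(minified))            # raw bytes: lossy shrink
  print(len(gzip.compress(doc.encode())),   # lossless compression of the original
        len(gzip.compress(minified.encode())))

The minified line throws information away on purpose; gzip keeps everything but produces bytes no model can read directly.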


Most llms.txt are very similar to the compressed docs.



