> That 20ms is a smoking gun - it lines up perfectly with the mysterious pattern we saw earlier!
Speaking of smoking guns, anybody else reckon Claude overuses that term a lot? Seems anytime I give it a debugging question, it'll claim some random thing, like a version number or whatever, is a "smoking gun".
Yes! While this post was written entirely by me, I wouldn't be surprised if I had "smoking gun" ready to go because I spent so much time debugging with Claude last night.
Serious question though, since AI seems to be so all-capable and intelligent: why couldn't it tell you the exact reason that I could tell you just by reading the title of this post on HN? It is failing even at the one thing it could probably do decently, which is being a search engine.
I've had Gemini tell me "We are debugging this problem here in İstanbul" and go on about an Istanbul evening, trying to give uplifting or familiar vibes while being creepy.
I think there was a setting about time and location which finally got rid of that behavior.
It's not just a coincidence, it's the emergence of spurious statistical correlations when observations happen across sessions rather than within sessions.
Or the "Eureka! That's not just a smoking gun, it's a classic case of LLMspeak."
Grok, ChatGPT, and Claude all have these tics, and even the pro versions will use their signature phrases multiple times in an answer. I have to wonder if it's deliberate, to make detecting AI easier?
Without knowing how LLM personality tuning works, I'd hazard a guess that the excitability (the tendency to use excited phrases) is turned up. "Smoking gun" must be highly rated as a term of excitability. The same should apply to other phrases like "outstanding!", "good find!", "You're right!", etc.
smoking gun, you're absolutely right, good question, em dash, "it isn't just foo, it's also bar", real honest truth, brutal truth, underscores the issue, delves into, more em dashes, <20 different hr/corporate/cringe phrases>.
Maybe we need a real AI which creates new phrases and teaches the poor LLMs?
Looking back, we already had similar problems, when we had to ask our colleagues, students, whoever: "Did you get your proposed solution from the answers part or the questions part of a Stack Overflow article?" :-0
That's the point though, it doesn't reflect human usage of the word. If "delve" were so commonly used by humans too, we wouldn't be discussing how it's overused by LLMs.
You might see certain phrases and mdashes ;-) rather often because … these programs are trained on data written by people (or by Microsoft's spelling correction) who overused them over the last n years. So what should these poor LLMs generate instead?
They love clichés, and they hate repeating the same words for something (repetition penalty), so they'll call something the "cause", then a "smoking gun", then something else.
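For anyone curious what that penalty actually does: a minimal sketch of a CTRL-style repetition penalty, where logits of tokens the model has already emitted get scaled down before sampling. The function name and toy vocabulary are illustrative, not any particular model's API.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Scale down logits of already-generated tokens.

    Positive logits are divided by the penalty and negative ones
    multiplied by it, so repeated tokens become less likely either way.
    """
    out = list(logits)
    for tok in set(generated_ids):
        if out[tok] > 0:
            out[tok] /= penalty
        else:
            out[tok] *= penalty
    return out

# Toy vocabulary: 0 = "cause", 1 = "smoking gun", 2 = "culprit".
logits = [2.0, 1.5, 1.0]
# The model already said "cause" (token 0), so its logit is halved
# and "smoking gun" now wins the argmax instead.
penalized = apply_repetition_penalty(logits, [0], penalty=2.0)
```

Which is exactly the pattern described above: each synonym gets penalized in turn, so the model keeps rotating to the next-favorite phrase.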