Hacker News

Does Claude Code perform any differently than browser Claude for writing tasks?

I recently wrote a long Markdown document, and asked Claude, ChatGPT, Grok, and Gemini to improve it.

Comparing the outputs, Gemini and Claude were very close, but I decided Claude's was slightly better written.



The models are the same, but the actual prompts sent to the model are likely somewhat different because of Claude Code's agentic loop, so I would expect (without having run the experiment) slight differences. It's unclear whether those differences would exceed the run-to-run variance you get from sending the same prompt repeatedly to a single surface (e.g., Claude.ai run-to-run variance vs. Claude Code run-to-run variance vs. the variance between Claude.ai and Claude Code). It would be an interesting controlled experiment to try!
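A minimal sketch of how that experiment could be scored, assuming you've already collected N responses per surface for the same prompt (the stand-in strings below are hypothetical; real ones would be pasted from each UI). It compares average within-surface similarity to cross-surface similarity using stdlib `difflib`:

```python
# Sketch of the controlled experiment: does the gap between Claude.ai and
# Claude Code exceed each surface's own run-to-run variance?
from difflib import SequenceMatcher
from itertools import combinations, product

def mean_similarity(pairs):
    """Average difflib ratio over (a, b) string pairs; 1.0 means identical."""
    pairs = list(pairs)
    return sum(SequenceMatcher(None, a, b).ratio() for a, b in pairs) / len(pairs)

def variance_report(surface_a, surface_b):
    """Within-surface similarity for each surface vs. cross-surface similarity."""
    return {
        "within_a": mean_similarity(combinations(surface_a, 2)),
        "within_b": mean_similarity(combinations(surface_b, 2)),
        "cross": mean_similarity(product(surface_a, surface_b)),
    }

# Toy stand-in responses (hypothetical). If "cross" is notably lower than both
# "within" numbers, the agentic prompt wrapping likely changes the output.
claude_ai = ["The quick brown fox jumps.", "The quick brown fox leaps."]
claude_code = ["A fast brown fox jumps over.", "A fast brown fox leaps over."]
print(variance_report(claude_ai, claude_code))
```

`difflib` ratios are a crude proxy for writing quality, of course; a blind human ranking of the shuffled outputs would be the stronger version of the test.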




