> The thing is, your mistake isn't just distrusting the language model, it's trusting the search engine.

There is a rather substantial difference between a search engine, which suggests sources that the reader can evaluate on their merits, and a language model, whose output may or may not be based on any sources at all, and which cannot accurately cite sources for the statements it makes.

> Similar degrees of caution and skepticism must be applied to results from both ML and traditional search engines.

This is a fairly ridiculous statement.



> This is a fairly ridiculous statement.

Really? Have you used Google lately -- say, in the past 6-12 months?


I personally use search engines on a daily basis. They link me to external websites that I can trust or distrust to varying degrees depending on my prior experience with them and the amount of further research I put in.

If a person is in the habit of using a search engine like a chat bot -- typing in questions AskJeeves-style and then believing whatever text pops up in the info cards above the ads (which are themselves above the search results) -- I can see how the distinction between chat bots and search engines might seem trivial.

The similarity between chat bots and search engines breaks down significantly once the user scrolls past the info cards and ads and clicks through to an external website. At that point the experience is no longer like chatting with a confident NPC.


