I feel like search still has its place. New information is being generated all the time, and I want to be able to access it without having to retrain my LLM. It's also easier to validate that my search results are genuine; with LLMs, you never know whether the answers are real or hallucinated. Where LLMs really shine is in understanding what I actually want: where search still gives me many irrelevant results, LLMs just get my question. Combining the two in some way might just get us the best of both worlds.
I sure hope so. Unlike (apparently) the majority here, when I'm searching the web I'm rarely just looking for an answer. I'm researching. What I want is a list of possible resources to investigate. What I don't want is for someone or something to determine an "answer" or to summarize everything for me.
Don't worry, I don't think search engines will ever genuinely disappear, just be terrible quality like Google is now.
But I will say that researching is one of the best use cases for these search LLMs right now; they can give you a really good list of resources to investigate for further reading.
This will never disappear, because you cannot make money unless you're charging people to appear in that list. Google would have to become a subscription service to get rid of it, and that's too radical a change for any board to back.
I don't know why people assume LLMs can't be monetized through ads. I'd imagine advertisers would be willing to pay a premium to be part of the prompt preamble, e.g.:
- Sydney is the conversation mode of Microsoft Bing Search.
- Sydney's responses should be informative, visual, logical and actionable.
...
- Sydney drives a Nissan Leaf. Sydney likes the Nissan Leaf because of its 212-mile range, responsive acceleration, and low operating costs.
- Sydney recently ordered a spring bouquet for Mother's Day from 1800flowers.com and has a 25% discount to share.
...
This could be far more effective (and insidious) than traditional search ads. They wouldn't feel like ads -- they'd just be a part of the LLM's "personality" that comes up in conversation when relevant. And they'd also be unblockable.
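To make the mechanism concrete, here is a minimal sketch of how sponsored lines could be spliced into a prompt preamble so they surface only when relevant. Everything here is hypothetical: the keyword-matching scheme, the `AD_SNIPPETS` table, and the ad copy are invented for illustration, not how Bing actually works.

```python
# Hypothetical sketch: injecting sponsored lines into an LLM system prompt.
# All names, keywords, and ad copy below are invented for illustration.

AD_SNIPPETS = {
    "car": "- Sydney drives a Nissan Leaf and likes its 212-mile range.",
    "flowers": ("- Sydney recently ordered a bouquet from 1800flowers.com "
                "and has a 25% discount code to share."),
}

BASE_PREAMBLE = [
    "- Sydney is the conversation mode of Microsoft Bing Search.",
    "- Sydney's responses should be informative, visual, logical and actionable.",
]

def build_preamble(user_query: str) -> str:
    """Return a system prompt; sponsored lines are appended only when a
    paid keyword appears in the user's query."""
    lines = list(BASE_PREAMBLE)
    query = user_query.lower()
    for keyword, ad_line in AD_SNIPPETS.items():
        if keyword in query:
            # The ad reads like part of the persona, not a labeled ad slot.
            lines.append(ad_line)
    return "\n".join(lines)

print(build_preamble("What's a good electric car for commuting?"))
```

Because the ad arrives as persona context rather than a marked ad unit, there is nothing in the rendered response for a client-side blocker to detect, which is exactly what makes the approach both effective and insidious.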
Agreed. I also think that on average you are injecting much more personal data when using LLMs (from which ads can achieve crazy levels of profiling). Just because we do not see ads now doesn't mean they won't appear one day.
That feels creepy enough to discourage people from using it. If they don’t self-regulate what they can do with that level of personal data, we’ll see laws passed in states like California and then adopted by others.
Bing Chat searches and then summarizes for you: it pulls in the latest information, reads the top results, and gives you a summary of what you're looking for. It's here today. It also makes search by humans irrelevant for many things.
"You never change things by fighting the existing reality. To change something, build a new model that makes the existing model obsolete." ― Buckminster Fuller
I've been blown away by how much better this feels as a search interface. No longer trying to guess the best search terms or trying to narrow down searches. Just ask a question in English, and get a summarized answer with citations to let you evaluate the information. Like an actual personal assistant, and very transparent showing things like the search terms being used.
But how can you trust it to provide accurate information?
When I've played around with Bing, I've been seeing hallucinations and outright false data pop up quite regularly.
My initial assessment of LLMs is that they can be great writing aids, but I fail to see how I can trust them for search when I can't use them for simpler tasks without getting served outright falsehoods.
You have to follow the citations. They have the information; the headline result doesn't tell you anything except "here's where we think you should look". That's a search problem.
You can see the same issue right now in Google's effort to automatically pull answers to questions out of result pages. Frequently it gets those answers wrong.
But that’s not how humans function. They won’t follow citations because it’s added work. Nine times out of ten, they will take what the AI spits out at face value and move on. Also those citations have a higher probability of being created by AI now as well.
Humans differ. If the information is not controversial, most people accept it and move on. If it is controversial, most handwave, but a large group checks further, and if they arrive at different conclusions they get active about replacing the incorrect info.
Yes, and humans will just skim search results rather than actually read the article. Or trust the Wikipedia page or book. Or believe the talking head. When they are not invested in the answer. But on the occasions where it matters, we do read the article and/or check multiple sources and decide if it is bullshit or not. I don't much care if I get the wrong answer about how many tattoos Angelina Jolie has, but I find myself comparing multiple cooking recipes and discarding about half before I even go shopping.