



No TF it's not. The AI can only output the hallucinations that are most statistically likely, and there's no way to sort the bad answers from the good. Google at least supplies a wide range of content to sort through to find the best result.
Interesting. LLMs have no ability to directly do anything but output text, so the tooling around the LLM is what's actually doing the searching. They probably use some API from Bing or something. Have you compared results with Bing's? I'd be interested to see how similar they are, or how much extra tooling is layered on top for search. I can't imagine they want to burn a lot of cycles generating only like 3 search queries per request, unless they have a smaller dedicated model for that. Would be interested to see the architecture behind it and how it differs from a normal search engine.
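For what it's worth, the loop people usually describe looks roughly like the sketch below: the model writes a few search queries as plain text, the surrounding tooling hits a search API, and the retrieved snippets get stuffed back into the prompt for the final answer. This is just a guess at the shape of it, not their actual stack; `call_llm` and `search_web` are made-up placeholders, not real OpenAI or Bing endpoints.

```python
# Hypothetical sketch of a search-augmented LLM loop.
# `call_llm` and `search_web` are placeholders for whatever model endpoint
# and search backend (Bing-style or otherwise) the real system uses.

from dataclasses import dataclass


@dataclass
class SearchResult:
    title: str
    url: str
    snippet: str


def call_llm(prompt: str) -> str:
    """Placeholder for the model call; returns generated text."""
    raise NotImplementedError("wire up your LLM endpoint here")


def search_web(query: str, top_k: int = 5) -> list[SearchResult]:
    """Placeholder for the search API (e.g. something Bing-like)."""
    raise NotImplementedError("wire up your search API here")


def answer_with_search(user_question: str) -> str:
    # 1. The LLM only emits text, so the first step is just generating a
    #    handful of search queries as plain lines. This step could also be
    #    handled by a smaller, cheaper dedicated model.
    query_prompt = (
        "Write up to 3 short web search queries, one per line, "
        f"that would help answer:\n{user_question}"
    )
    queries = [q.strip() for q in call_llm(query_prompt).splitlines() if q.strip()][:3]

    # 2. The tooling around the model (not the model itself) calls the
    #    search API and collects snippets.
    snippets = []
    for q in queries:
        for r in search_web(q):
            snippets.append(f"{r.title} ({r.url}): {r.snippet}")

    # 3. Retrieved text goes back into the prompt so the final answer is
    #    grounded in the search results rather than pure model recall.
    answer_prompt = (
        "Using only the sources below, answer the question and cite URLs.\n\n"
        + "\n".join(snippets)
        + f"\n\nQuestion: {user_question}"
    )
    return call_llm(answer_prompt)
```

If that's anywhere close to reality, the interesting differences from a normal search engine are all in steps 1 and 3: the query rewriting and the answer synthesis, not the retrieval itself.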