• @elrik
    23 days ago

    The context is not the same. A snippet is incomplete and often lacking important details. It’s minimally tailored to your query, unlike a response generated by an LLM. The obvious extension of this is conversational search, where clarification and additional detail still don’t require you to click on any sources; you simply ask follow-up questions.

    With Gemini?

    Yes. How do you think the Gemini model understands language in the first place?

    • @woelkchen
      23 days ago

      The context is not the same.

      It’s not the same, but it’s similar enough when, as the article states, it is solely about short summaries. The article may be wrong, Google may be outright lying, maybe, maybe, maybe.

      Google, as by far the web’s largest ad provider, has a business incentive to direct users toward websites, since that is how website operators end up paying Google money. Maybe I’m missing something, but I just don’t see the business sense in Google not doing that, and so far I haven’t seen anything approximating a convincing argument.

      Yes. How do you think the Gemini model understands language in the first place?

      Licensed and public-domain content, of which there is plenty, and maybe even content specifically created by Google to train the model. “The Gemini model understands language” is, in itself, hardly proof of any wrongdoing. I don’t claim to have perfect knowledge or memory, so it’s certainly possible that I missed more specific evidence, but “the Gemini model understands language” by itself definitely is not.