• Leon@pawb.social · 3 days ago

    That’s a misrepresentation of what LLMs do. You feed them a fuckton of data and, to oversimplify it a bit, they place these concepts on a multi-dimensional map. Then, based on the input, the model estimates an output by referencing that map. It doesn’t search for anything; it’s just mathematics.
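
    Here’s a minimal sketch of that “map”, assuming the sentence-transformers package (the model name is just a common small default). Every string becomes a point in a high-dimensional space, and answering a query is just measuring distances between points:

    ```python
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("all-MiniLM-L6-v2")

    # Each concept becomes a point (a ~384-dimensional vector) on the "map".
    concepts = ["dog", "puppy", "wolf", "daisy", "sunflower", "car"]
    vectors = model.encode(concepts)

    # "Referencing the map" is just measuring which points sit closest.
    query = model.encode("golden retriever")
    scores = util.cos_sim(query, vectors)[0].tolist()
    for concept, score in sorted(zip(concepts, scores), key=lambda p: -p[1]):
        print(f"{concept}: {score:.3f}")  # dog and puppy rank highest
    ```

    No search happens anywhere in there; it’s linear algebra from start to finish.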

    It’s particularly easy to demonstrate with image models, where you can take two separate concepts, say “eskimo dog” and “daisy”, and add them together.
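
    As a hedged sketch of that (again with sentence-transformers, this time its CLIP wrapper, which maps text and images into the same space; “clip-ViT-B-32” is one of the model names it ships):

    ```python
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")

    # "Add" the two concepts by averaging their vectors on the map.
    dog, daisy = model.encode(["eskimo dog", "daisy"])
    blend = (dog + daisy) / 2

    # The blend lands between the originals; an image generator would decode
    # a point like this into a picture. Here we just score some captions.
    captions = ["an eskimo dog", "a daisy", "a dog in a field of daisies", "a truck"]
    scores = util.cos_sim(blend, model.encode(captions))[0].tolist()
    for caption, score in sorted(zip(captions, scores), key=lambda p: -p[1]):
        print(f"{caption}: {score:.3f}")
    ```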

    When you query ChatGPT for something and it “searches” for it, one of two things is happening: either the model is fitted well enough to reproduce a link directly, or it calls a tool that performs an actual web search (likely using Bing) and compiles the results for you.

    You could do the same, just using an actual search engine.

    Hell, you could build your own “AI search engine” with an open-weights model and a little bit of time.
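
    A sketch of what that could look like, assuming the duckduckgo_search and ollama packages and a locally pulled open-weights model (llama3 here, but any would do):

    ```python
    from duckduckgo_search import DDGS
    import ollama

    def ai_search(question: str) -> str:
        # Step 1: an actual search engine does the actual searching.
        hits = DDGS().text(question, max_results=5)
        context = "\n\n".join(f"{h['title']}\n{h['href']}\n{h['body']}" for h in hits)

        # Step 2: the model never searches; it only compiles what it was handed.
        prompt = ("Answer the question using only these search results:\n\n"
                  f"{context}\n\nQuestion: {question}")
        reply = ollama.chat(model="llama3",
                            messages=[{"role": "user", "content": prompt}])
        return reply["message"]["content"]

    print(ai_search("best italian restaurant in trastevere, rome"))
    ```

    That’s the whole trick: the “AI” part is a summarizer bolted onto an ordinary search backend.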

  • korendian@lemmy.zip · 3 days ago

      It depends on the model, who made it, and what you’re asking. If it’s a well-known fact, like “who was president in 1992”, then it is math as you say; it can be wrong, but it’s right more often than not. But if it’s something more current and specific, like “what is the best Italian restaurant in my area”, then it does in fact do the search for you, using Google Maps, reviews, and other data.