• @[email protected]
    3
    45 minutes ago

    I find it amusing that everyone is answering with the assumption that the premise of OP’s question is correct. You’re all hallucinating the same way an LLM would.

    LLMs are rarely trained on a single source of data exclusively. All the big ones you find will have been trained on a huge dataset including Reddit, research papers, books, letters, government documents, Wikipedia, GitHub, and much more.

    Example datasets: Common Crawl, The Pile, C4.

    • @andrewta
      1
      21 minutes ago

      Rules of lemmy

      Ignore facts; don’t do research to see if the comment/post is correct; don’t look at other comments to see if anyone else has already corrected it; there is only one right side (and that is the side of the loudest group).

  • @[email protected]
    10
    3 hours ago

    “AI” is a parlor trick. Very impressive at first, then you realize there isn’t much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an “AI” that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).

    Let’s say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, “What is the relationship between sugar and obesity?” What will LLMs do with this question? They will simply attempt associations and construct reasonable-sounding sentences based on their set of research articles. They might even take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism.

    But they won’t be able to actually explain the underlying mechanisms, and they will fall flat on their faces when trying to discern nonsense funded by food lobbies from critical research. LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry. But they will be unable to actually criticize the poor work, or to summarize the relationship between sugar and obesity based on any real understanding that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). They can only copy and mimic.

    • @[email protected]
      1
      38 minutes ago

      Why does everyone keep calling them Markov chains? They’re missing all the required properties, including the eponymous Markovian property. Wouldn’t it be more correct to call them stochastic processes?

      • @[email protected]
        1
        33 minutes ago

        Because it’s close enough. Turn off beam search and redefine your state space, and the Markov property holds.
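
        To make that concrete, here’s a toy sketch (everything invented, no real model involved): if you define the state as the entire context window, next-token sampling depends only on the current state, which is exactly the Markov property. Beam search breaks it because it carries several candidate sequences across steps.

        ```python
        import random

        def next_token_probs(state):
            # Stand-in for a real LM forward pass: any fixed map from the
            # current context to a next-token distribution works here.
            vocab = ["the", "cat", "sat", "on", "mat", "."]
            rng = random.Random(hash(state))  # fixed per state, so P(next | state) is well defined
            weights = [rng.random() for _ in vocab]
            total = sum(weights)
            return dict(zip(vocab, [w / total for w in weights]))

        CONTEXT = 8  # the whole window counts as ONE Markov state

        def step(state):
            # One transition: the distribution depends only on the current
            # state, not on how we got there. That is the Markov property.
            probs = next_token_probs(state)
            tok = random.choices(list(probs), weights=list(probs.values()))[0]
            return (state + (tok,))[-CONTEXT:]  # slide the window: the new state

        state = ("the",)
        for _ in range(12):
            state = step(state)
        print(" ".join(state))
        ```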

    • @spongebue
      11
      6 hours ago

      Machine learning has some pretty cool potential in certain areas, especially in the medical field. Unfortunately the predominant use of it now is slop produced by copyright laundering shoved down our throats by every techbro hoping they’ll be the next big thing.

  • @Tabooki
    9
    5 hours ago

    They already do that. You’re being a troglodyte.

    • @[email protected]OP
      7
      4 hours ago

      Hmmm. Not sure if I’m being insulted. Is that one of those fish fossils that looks kind of like a horseshoe crab?

      • @Glytch
        9
        4 hours ago

        You’re thinking of a trilobite

      • @Tabooki
        -1
        4 hours ago

        Dictionary definition from Oxford Languages:

        troglodyte (noun): (especially in prehistoric times) a person who lived in a cave; a hermit; a person who is regarded as being deliberately ignorant or old-fashioned.

  • @[email protected]
    20
    7 hours ago

    You could feed all the research papers in the world to an LLM and it would still have zero understanding of what you trained it on. It would still make shit up; it can’t save the world.

  • lattrommi
    English
    2
    4 hours ago

    I think I read this post wrong.

    I was thinking the sentence “We could be saving the world!” meant ‘we’ as in humans only.

    No need to be training AI. No need to do anything with AI at all. Humans simply start saving the world. Our Research Papers can train on Reddit. We cannot be training, we are saving the world. Let the Research Papers run a train on Reddit AI. Humanity Saves World.

    No cynical replies please.

  • @[email protected]
    29
    8 hours ago

    Both are happening. For generating an article, though, samples of casual writing are more valuable than research papers.

    • FaceDeer
      6
      7 hours ago

      Yeah. Scientific papers may teach an AI about science, but Reddit posts teach AI how to interact with people and “talk” to them. Both are valuable.

      • geekwithsoul
        English
        6
        6 hours ago

        Hopefully not too pedantic, but no one is “teaching” AI anything. They’re just feeding it data in the hopes that it can learn probabilities for certain types of output. It “understands” neither the Reddit post nor the scientific paper.
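
        As a toy illustration of what “learning probabilities” means here (vocabulary and numbers made up): the training objective only rewards putting high probability on the next token ID; nothing in the loss refers to what any token means.

        ```python
        import math

        # From the model's side, text is just integer IDs.
        vocab = {"the": 0, "cat": 1, "sat": 2, ".": 3}
        ids = [vocab[t] for t in "the cat sat .".split()]  # [0, 1, 2, 3]

        def loss(predicted_probs, target_id):
            # Cross-entropy at one position: -log P(actual next ID).
            return -math.log(predicted_probs[target_id])

        # A model that puts 0.9 on the right next ID scores well whether the
        # sentence is true, false, or meaningless; meaning never enters the loss.
        print(loss({0: 0.02, 1: 0.90, 2: 0.05, 3: 0.03}, target_id=vocab["cat"]))  # ~0.105
        ```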

        • @Zexks
          -1
          6 hours ago

          Describe how you ‘learned’ to speak. How do you know what word comes next? Until you can describe this process in a way that isn’t exclusive to humans or biology, it’s no different. The only thing they can’t do is adjust their weights dynamically. But that’s a limitation we gave them, not one intrinsic to the system.

          • geekwithsoul
            English
            3
            6 hours ago

            I inherited brain structures that are natural language processors, along with the ability to understand and repeat any language’s sounds. Over time, my brain focused in on only the language sounds I heard most often, and through trial and repetition it learned how to understand and make those sounds.

            AI - as it currently exists - is essentially a babbling infant with none of the structures necessary to do anything more than repeat sounds back without understanding any of them. Anyone who tells you different is selling you something.

  • @[email protected]
    25
    7 hours ago

    Because AI needs a lot of training data to reliably generate something appropriate. It’s easier to get millions of reddit posts than millions of research papers.

    Even then, LLMs simply generate text but have no idea what the text means. It just knows those words have a high probability of matching the expected response. It doesn’t check that what was generated is factual.
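
    A toy bigram generator shows the point (corpus invented): fluent, high-probability text falls straight out of the counts, and no step anywhere checks a claim against the world.

    ```python
    import random
    from collections import Counter, defaultdict

    # "Training" on a corpus that repeats a falsehood: counting is all that happens.
    corpus = ("the moon is made of cheese . " * 5).split()
    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def generate(tok, n=6):
        out = [tok]
        for _ in range(n):
            c = counts[out[-1]]
            if not c:  # dead end: no observed continuation
                break
            tok = random.choices(list(c), weights=list(c.values()))[0]
            out.append(tok)
        return " ".join(out)

    print(generate("the"))  # confidently generates: the moon is made of cheese .
    ```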

  • Destide
    English
    12
    7 hours ago

    Redditors are always right, peer reviewed papers always wrong. Pretty obvious really. :D

  • @[email protected]
    English
    10
    7 hours ago

    Papers are, most importantly, documentation of exactly what procedure was performed and how; adding a vagueness filter over that is only going to decrease their value infinitely.

    The real question is why we’re using generative AI at all (it gets money out of idiot rich people).

  • cobysev
    English
    6
    7 hours ago

    We are. I just read an article yesterday about how Microsoft paid research publishers so they could use the papers to train AI, with or without the consent of the papers’ authors. The publishers also reduced the peer review window so they could publish papers faster and get more money from Microsoft. So… expect AI to be trained on a lot of sloppy, poorly-reviewed research papers because of corporate greed.

    • @[email protected]
      3
      7 hours ago

      What he was fighting for was an awful lot more important than a tool to write your emails while causing a ginormous tech bubble.