• @[email protected]
    link
    fedilink
    English
    60
    15 days ago

    Facts are not a data type for LLMs

    I kind of like this because it highlights the way LLMs operate: kind of blind and drunk, just really good at predicting the next word.

    • @CleoTheWizard
      link
      English
      28
      14 days ago

      They’re not good at predicting the next word, they’re good at predicting the next common word while excluding most unique choices.

      The result is essentially what you’d get if you made a Venn diagram of human language and then only ever used the center of it.
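
      A toy sketch of that point (purely illustrative; real decoding is more complicated than a frequency table): greedy selection always returns the single most common continuation and never emits the rarer, more unique choices.

```python
from collections import Counter

def greedy_next_word(counts: Counter) -> str:
    """Toy 'next-word predictor': always pick the most common option."""
    return counts.most_common(1)[0][0]

# The distinctive words exist in the table but can never be chosen.
continuations = Counter({"the": 120, "a": 80, "luminous": 1, "bananas": 1})
print(greedy_next_word(continuations))  # the
```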

      • @[email protected]
        link
        fedilink
        English
        15
        14 days ago

        Yes, thanks for clarifying what I meant! AI will never create anything unique unless prompted uniquely and even then it will tend to revert back to what you expect most.

  • @[email protected]
    link
    fedilink
    English
    45
    14 days ago

    ATTN: If you’re coming into this thread to say, “The output of AI is bad because your prompts suck,” I’m just proud that you managed to figure out how to use the internet at all. Good job, you!

    • @[email protected]
      link
      fedilink
      English
      14
      14 days ago

      remember remember, eternal september

      (not that I much agree with the classist overtones of the original, but fuck me does it come to mind often)

  • Sibbo
    link
    fedilink
    English
    30
    15 days ago

    Well, to be fair, AI can do it in seconds. Which beats humans.

    But whether that is relevant when the results are worthless is another question.

    • HubertManne
      link
      fedilink
      12
      14 days ago

      Yeah it changes the task from note taking or summarizing to proofreading.

      • @[email protected]
        link
        fedilink
        English
        5
        12 days ago

        And proofreading is notably more complex and has a worse failure state than just writing your own summary.

        • HubertManne
          link
          fedilink
          -4
          12 days ago

          Thing is, you can do it in real time and not pay as much attention to the goings-on as you write, or do it at the end and forget stuff. There is no harm in the AI summarization. You could instead write a summary and check via the AI whether you left anything out.

  • David GerardOPM
    link
    fedilink
    English
    21
    14 days ago

    how the hell did this of all the posts turn into a promptfondler shooting gallery

  • kbal
    link
    fedilink
    21
    14 days ago

    Made strange choices about what to highlight.

    They certainly do. For a while it was common to see AI-generated summaries under links to articles on lemmy, so I got a feel for them. Seems to me you would not need any fancy artificial intelligence to do equally well: Just take random excerpts, or maybe just read every third sentence.
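
    That baseline really is a couple of lines. A sketch of the “every third sentence” non-summary, with a deliberately crude sentence split, since the point is how little machinery is needed:

```python
def every_third_sentence(text: str) -> str:
    """'Summarise' by keeping sentences 1, 4, 7, ... of the text."""
    # Naive split on '.'; real prose would want a proper sentence tokenizer.
    sentences = [s.strip() for s in text.split(".") if s.strip()]
    return ". ".join(sentences[::3]) + ("." if sentences else "")
```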

  • David GerardOPM
    link
    fedilink
    English
    18
    14 days ago

    i have seen the light from the helpful posters here, made up bullshit alleged summaries of documents are great actually

  • @[email protected]
    link
    fedilink
    English
    14
    14 days ago

    Dang everyone here needs to look at a tree or a cat or something. Energy is wack in here

  • @[email protected]
    link
    fedilink
    English
    11
    13 days ago

    Could it be because a statistical relation isn’t the same as a semantic one? No, I must be prompting it wrong. I’ll just add “engineer” to my title and then everyone will take me seriously.

  • beefbot
    link
    fedilink
    English
    4
    14 days ago

    Is it only me, or is the linked article not super long on details & is reaching a conclusion from 2 examples? This is important & I need to hear more, & I’m generally biased against AI at this point— but the article isn’t doing enough to convince me

    • @[email protected]
      link
      fedilink
      English
      12
      14 days ago

      did you click through to any of the inline citations? David’s shorter articles on pivot mostly gather and summarize those, so if you need to read the original research and its conclusions that’s where to go

      • beefbot
        link
        fedilink
        English
        11
        14 days ago

        Ah, that’s better, yes. Thank you, no sarcasm :) now sleepy brain is more informed

  • @[email protected]
    link
    fedilink
    English
    3
    14 days ago

    I had GPT 3.5 break down 6x 45-minute verbatim interviews into bulleted summaries and it did great. I even asked it to anonymize people’s names and it did that too. I did re-read the summaries to make sure no duplicate info or hallucinations existed and it only needed a couple of corrections.

    Beats manually summarizing that info myself.

    Maybe their prompt sucks?

        • Steve
          link
          fedilink
          English
          14
          14 days ago

          “tools” doesn’t mean “good”

          good tools are designed well enough so it’s clear how they are used, held, or what-fucking-ever.

          fuck these simpleton takes are a pain in the arse. They’re always pushed by these idiots that have based their whole world view on fortune cookie aphorisms

          • @[email protected]
            link
            fedilink
            English
            10
            14 days ago

            it makes me feel fucking ancient to find that this dipshit didn’t seem to get the remark, and it wasn’t even that long ago

    • David GerardOPM
      link
      fedilink
      English
      25
      14 days ago

      I got AcausalRobotGPT to summarise your post and it said “I’m not saying it’s always programming.dev, but”

    • @HootinNHollerin
      link
      English
      16
      14 days ago

      Did you conduct or read all the interviews in full in order to verify no hallucinations?

    • @TexasDrunk
      link
      English
      -9
      14 days ago

      I also use it for that pretty often. I always double check and usually it’s pretty good. Once in a great while it turns the summary into a complete shitshow but I always catch it on a reread, ask a second time, and it fixes things up. My biggest problem is that I’m dragged into too many useless meetings every week and this saves a ton of time over rereading entire transcripts and doing a poor job of summarizing because I have real work to get back to.

      I also use it as a rubber duck. It works pretty well if you tell it what it’s doing and tell it to ask questions.

      • @[email protected]
        link
        fedilink
        English
        8
        12 days ago

        Isn’t the whole point of rubber duck debugging that the method works when talking to a literal rubber duck?

        • @[email protected]
          link
          fedilink
          English
          7
          12 days ago

          what if your rubber duck released just an entire fuckton of CO2 into the environment constantly, even when you weren’t talking to it? surely that means it’s better

      • @[email protected]
        link
        fedilink
        English
        2
        13 days ago

        Yup! I’ll feed in meeting transcripts and get a list of action steps to email out to everyone. If I was in project management, I’m pretty sure I’d outsource my entire job to LLMs.

  • Lvxferre
    link
    fedilink
    English
    -7
    edit-2
    15 days ago

    You could use them to know what the text is about, and if it’s worth your reading time. In this situation, it’s fine if the AI makes shit up, as you aren’t reading its output for the information itself anyway; and the distinction between summary and shortened version becomes moot.

    However, here’s the catch. If the text is long enough to warrant the question “should I spend my time reading this?”, it should contain an introduction for that very purpose. In other words, if the text is well-written, you don’t need this sort of “Gemini/ChatGPT, tell me what this text is about” in the first place.

    EDIT: I’m not addressing documents in this. My bad, I know. [In my defence I’m reading shit on a screen the size of an ant.]

    • David GerardOPM
      link
      fedilink
      English
      17
      14 days ago

      Both the use cases here are government documents. I’m baffled at the idea of it being “fine if the AI makes shit up”.

    • queermunist she/her
      link
      fedilink
      English
      15
      edit-2
      14 days ago

      ChatGPT gives you a bad summary full of hallucinations and, as a result, you choose not to read the text based on that summary.

      • Lvxferre
        link
        fedilink
        English
        -4
        14 days ago

        (For clarity I’ll re-emphasise that my top comment is the result of misreading the word “documents” out, so I’m speaking on general grounds about AI “summaries”, not just about AI “summaries” of documents.)

        The key here is that the LLM is likely to hallucinate the claims of the text being shortened, but not its topic. So provided that you care about the latter but not the former, in order to decide if you’re going to read the whole thing, it’s good enough.

        And that is useful in a few situations. For example, if you have a metaphorical pile of a hundred or so scientific papers, and you only need the ones about a specific topic (like “Indo-European urheimat” or “Argiope spiders” or “banana bonds”).

        That backtracks to the OP. The issue with using AI summaries for documents is that you typically know the topic at hand, and you want the content instead. That’s bad because then the hallucinations won’t be “harmless”.

        • queermunist she/her
          link
          fedilink
          English
          10
          14 days ago

          But the claims of the text are often why you read it in the first place! If you have a hundred scientific papers you’re going to read the ones that make claims either supporting or contradicting your research.

          You might as well just skim the titles and guess.

          • Lvxferre
            link
            fedilink
            English
            -7
            14 days ago

            But the claims of the text are often why you read it in the first place!

            By “not caring about the former” [claims], I mean in the LLM output, because you know that the LLM will fuck them up. But it’ll still somewhat accurately represent the topic of the text, and you can use this to your advantage.

            You might as well just skim the titles and guess.

            Nirvana fallacy.

              • Lvxferre
                link
                fedilink
                English
                -8
                14 days ago

                not reading the fucking sidebar

                Yeah, I get that this is a place to vent. And I get why to vent about this. LLMs and other A"I" systems (with quotation marks because this shite is not intelligent!) are being shoved down every bloody where, regardless of actual usefulness, safety, or user desire. Telling you to put glue on your pizza, to eat poisonous mushrooms, that “cherish” has five letters, that Latin had no [w], that the Chinese are inferior to Westerners.

                While a crowd of irrationals tell you “it is intelligent, you can’t prove otherwise! CHRUST IT YOU DIRTY SCEPTIC/INFIDEL/LUDDITE REEEE! LALALA I’M PRETENDING TO NOT SEE THE HALLUCINATION LALALA”.

                I also get the privacy nightmare that this shit is. And the whole deal behind “we’re using your content as training data, and then selling the result back to you”. Or that it’s eating electricity like there’s no tomorrow, in a planet where global warming is a present issue.

                I get it. I get it all. That’s why I’m here. And if you (or anyone else) think that I’m here for any other reason, by all means, check my profile - you’ll find plenty pieces of criticism against those stupid corporate AI takes from vulture capital. (And plenty instances of me calling HN “Redditors LARPing as Hax0rz”. )

                However. Pretending that there’s no use case ever for LLMs is the wrong way to go.

                and thinking this is high school debate club fallacy

                If calling it “nirvana fallacy” rubs you the wrong way, here’s an alternative: “this argument is fucking stupid, in a very specific way: it pretends that either something is perfect or it’s useless, with no middle ground.”

                The other user however does not deserve the unnecessary abrasiveness so I’ll keep simply calling it “nirvana fallacy”.

                • @[email protected]
                  link
                  fedilink
                  English
                  8
                  14 days ago

                  holy shit, imagine getting a second chance to not be a fucking debatelord and doubling down this hard

                  off you fuck

                • @[email protected]
                  link
                  fedilink
                  English
                  7
                  edit-2
                  14 days ago

                  this argument

                  I agree, you’re quite right, and I thank you for taking the time and putting in the effort on such a wonderfully thorough portrayal of why your argument is total horseshit

            • queermunist she/her
              link
              fedilink
              English
              9
              14 days ago

              Unless it doesn’t accurately represent the topic, which happens, and then a researcher chooses not to read the text based on the chatbot’s summary.

              Nirvana fallacy.

              All these chatbots do is guess. I’m just saying a researcher might as well cut out the hallucinating middleman.

    • @[email protected]
      link
      fedilink
      English
      6
      14 days ago

      if the text is well-written you don’t need this sort of “Gemini/ChatGPT, tell me what this text is about” in the first place.

      And if it’s badly written then the LLM will shit itself.

      Now let’s ask ourselves how much of the text in the world is “well-written”?

      Or even better, you could apply this to Copilot. How much code in the world is good code? The answer is fucking none, mate.

      • Lvxferre
        link
        fedilink
        English
        2
        15 days ago

        No, it’s just rambling. My bad.

        I focused too much on using AI to summarise in general and ended up not talking about it summarising documents, even though the text is about the latter.

        And… well, the latter is such a dumb idea that I don’t feel like telling people “the text is right, don’t do that”; it’s obvious.

        • David GerardOPM
          link
          fedilink
          English
          8
          14 days ago

          You’d think so, but guess what precise use case LLMs are being pushed hard for.

  • @z00s
    link
    English
    -8
    edit-2
    13 days ago

    The problem is not the LLMs, but what people are trying to do with them.

    They are currently spoons, but people are desperately wishing they were katanas.

    They work really well for soup, but they can’t cut steak. But they’re being hyped as super ninja steak knives, and people are getting pissed when they can’t cut steak.

    If you give them watery, soupy tasks they can do successfully, they can lighten your workload, as long as you’re aware of what they are and aren’t good at.

    What people want LLMs to be able to do, i.e. “Steak” tasks:

    • write complex documents

    • apply complex knowledge/rules to a situation

    • Write complex code and create entire programs based on vague description

    What LLMs can currently do, i.e. “Soup” tasks:

    • check this document and fix all spelling, punctuation and grammatical errors

    • summarise this paragraph as dot points

    • write a python program that sorts my photographs into folders based on the year they were taken
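
    For what it’s worth, that last “Soup” task is genuinely small. A sketch using the file’s modification year (reading the EXIF date would need a third-party library, so it is skipped here; all names and paths are made up):

```python
import shutil
from datetime import datetime
from pathlib import Path

def sort_photos_by_year(src: Path, dest: Path) -> None:
    """Move every .jpg in src into dest/<year>/, by modification time."""
    for photo in src.glob("*.jpg"):
        year = datetime.fromtimestamp(photo.stat().st_mtime).strftime("%Y")
        target = dest / year
        target.mkdir(parents=True, exist_ok=True)
        shutil.move(str(photo), str(target / photo.name))
```

    Run as e.g. `sort_photos_by_year(Path("~/photos").expanduser(), Path("~/sorted").expanduser())`.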

    Half of Lemmy is hyping katanas, the other half is yelling “Why won’t my spoon cut this steak?!! AI is so dumb!!!”

    Update: wow, the pure vitriol pouring out of the replies is just stunning. Seems there are a lot of you out there who have, in one way or another, tied your ego very strongly to either the success or failure of AI.

    Take a step back, friends, and go outside for a while.

    • @[email protected]
      link
      fedilink
      English
      18
      14 days ago

      What LLMs can currently do: summarise this paragraph as dot points

      The entire point here is that they can’t?

      • @[email protected]
        link
        fedilink
        English
        -12
        13 days ago

        Clearly this post is about LLMs not succeeding at this task, but anecdotally I’ve seen it work OK and also fail. Just like humans, which are the benchmark, except the LLMs are faster.

        • @[email protected]
          link
          fedilink
          English
          7
          13 days ago

          humans are clearly faster at generating utterly banal shit, as proven by your posts in this thread

    • @[email protected]
      link
      fedilink
      English
      14
      14 days ago

      they don’t do any of that soup shit reliably either and reading the article might have told you that

      • @z00s
        link
        English
        -8
        13 days ago

        They absolutely do, and I have no idea why you’re so angry

          • @z00s
            link
            English
            1
            13 days ago

            That phrase contains inappropriate language and could be seen as disrespectful or offensive. If you’re looking for a more polite or constructive way to express frustration or dismissal, you could say something like:

            • “Alright, I think we’re done here.”
            • “Okay, I’m going to move on now.”

            Would you like help rephrasing it further?

    • @[email protected]
      link
      fedilink
      English
      11
      14 days ago

      I’d offer congratulations on obfuscating a bad claim with a poor analogy, but you didn’t even do that very well.

    • @[email protected]
      link
      fedilink
      English
      10
      14 days ago

      good god this entire post is the most tortured believer whataboutism I’ve encountered this month and there’s extremely strong competition here

      are currently spoons, but people are desperately wishing they were katanas

      ie. “Steak” tasks

      you should make a youtube channel, The Katana Steak-Eater. I’d watch the shit out of that at least one saturday afternoon

    • @[email protected]
      link
      fedilink
      English
      10
      13 days ago

      Why did this immediately give me a flashback to Donald Trump yelling, “when it comes to great steaks, I’ve just raised the stakes!”

    • @[email protected]
      link
      fedilink
      English
      9
      edit-2
      14 days ago

      Food analogy

      This level of discourse wouldn’t fly on 4chan, how is it so popular with LLM fans?

      • David GerardOPM
        link
        fedilink
        English
        8
        13 days ago

        needs to be a car analogy

        • What people want LLMs to do, i.e. Corvette tasks
        • What LLMs actually do, i.e. Trabant tasks
        • @[email protected]
          link
          fedilink
          English
          6
          13 days ago

          What LLMs actually do, i.e. Trabant tasks

          more of a Power Wheels Barbie Jeep whose battery got left out in the sun too long, but I’ll allow it

      • @z00s
        link
        English
        -9
        13 days ago

        Thanks Donald, good luck in November

        • @[email protected]
          link
          fedilink
          English
          8
          13 days ago

          I get that this is some sort of attempt at an election related Epic Comeback, but it doesn’t make sense

          • @z00s
            link
            English
            1
            12 days ago

            Your initial comment sounded like the way Donald Trump rage talks.

            I’m not surprised you don’t understand.

  • @chemical_cutthroat
    link
    English
    -9
    edit-2
    14 days ago

    So, they used a year-old model, and what? Expected a miracle? Do you expect your ’87 Chrysler to have parallel parking assist?

    Here. If humans are better in every way, I’ll do you a favor and summarize the article.

    It’s bullshit, designed to continue the assault against AI with bad faith arguments.

    If you want to hate AI, go for it, but give me a good goddamned reason to support your cause. This shit is an insult to my intelligence.

    Edit: The Anti-AI brigade has arrived. You all gotta find a new hobby. Downvotes without discourse is just masturbation.

    • @rImITywR
      link
      English
      4
      15 days ago

      1987 was one year ago?

      Also, you want to talk about bad faith arguments, this was presented to parliament in May 2024. It was submitted in January 2024. Model selection and optimisation was done in October 2023.

      Llama3 was released April 2024. They did not use an old model to intentionally tank the results, as you are implying. Llama2 was the ‘latest-and-greatest’ at the time of the study.

      • @chemical_cutthroat
        link
        English
        -8
        15 days ago

        1987 was one year ago?

        I think we can agree that AI is evolving slightly faster than sedan technology.

        Also, you want to talk about bad faith arguments, this was presented to parliament in May 2024. It was submitted in January 2024. Model selection and optimisation was done in October 2023.

        And the article was posted today. I can post old data all day long. Got cancer? Just drink this heroin.

        Llama3 was released April 2024. They did not use an old model to intentionally tank the results, as you are implying. Llama2 was the ‘latest-and-greatest’ at the time of the study.

        Ok, fine, I’ll accept your correction, if you’ll accept my updated summary:

        The article was written in bad faith with outdated data in an attempt to turn AI disparagement into SEO into money.

  • @[email protected]
    link
    fedilink
    English
    -13
    14 days ago

    Ok? I don’t have another human available to skim a shitload of documents for me to find answers I need, and I don’t have time to do it myself. AI is my best option.

    • @[email protected]
      link
      fedilink
      English
      27
      14 days ago

      So long as you don’t care about whether they’re the right or relevant answers, you do you, I guess. Did you use AI to read the linked post too?

      • @[email protected]
        link
        fedilink
        English
        -13
        14 days ago

        Yep. Go ahead and ignore all the cases where it’s getting answers correct and actually helping. We’re all just hallucinating, it’s in no way my lived experience. Your reality is the prime reality and we’re the NPC’s.

        • @fruitdealer
          link
          English
          24
          14 days ago

          And I wish only my good grades counted in school too.

        • David GerardOPM
          link
          fedilink
          English
          17
          14 days ago

          sir has failed to achieve the reading comprehension level for this sub

        • @[email protected]
          link
          fedilink
          English
          11
          edit-2
          14 days ago

          Go ahead and ignore all the cases where it’s getting answers correct

          • Sir, half of the patients are dead!
          • Ye sure, just ignore the half that survived then!
          • @[email protected]
            link
            fedilink
            English
            4
            4 days ago

            Only it’s even worse because without redoing all the work yourself you can’t even tell which ones are dead or alive.

      • @[email protected]
        link
        fedilink
        English
        -18
        14 days ago

        I didn’t read the post at all because its premise is irrelevant to my situation. If I had another human to read documentation for me I would do that. I don’t so the next best thing is AI. I have to double check its findings but it gets me 95% of the way there and saves hours of work. It’s a useful tool.

        • @[email protected]
          link
          fedilink
          English
          25
          14 days ago

          I didn’t read the post at all

          rather refreshing to have someone come out and just say it. thank you for the chuckle

          • @[email protected]
            link
            fedilink
            English
            16
            14 days ago

            we really do need “my source is that I made it the fuck up” for people who aggressively don’t want to read any of the text they’re allegedly commenting on

        • @[email protected]
          link
          fedilink
          English
          10
          14 days ago

          This is hall of fame shit right here, someone should study the way you use the internet sir