Everybody loves Wikipedia, the surprisingly serious encyclopedia and the last gasp of Old Internet idealism!

(90 seconds later)

We regret to inform you that people write credulous shit about “AI” on Wikipedia as if that is morally OK.

Both of these are somewhat less bad than they were when I first noticed them, but they’re still pretty bad. I am puzzled as to how the latter even exists. I had thought that there were rules against just making a whole page about a neologism, but either I’m wrong about that or the “rules” aren’t enforced very strongly.

  • @[email protected]OP

    Reflection (artificial intelligence) is dreck of a high order. It cites one arXiv post after another, along with marketing materials directly from OpenAI and Google themselves… How do the people who write this shit dress themselves in the morning without pissing into their own socks?

    • @[email protected]

      I also really don’t enjoy the “AI boom” article.

      GPT-3 is a large language model that was released in 2020 by OpenAI and is capable of generating high-quality human-like text. […] An upgraded version called GPT-3.5 was used in ChatGPT, which later garnered attention for its detailed responses and articulate answers across many domains of knowledge.

      Who wrote this? OpenAI marketing?

      • @[email protected]OP

        Let’s see, it cites Scott Computers, a random “AI Safety Fundamentals” website, McKinsey (four times!), a random arXiv post…

    • @[email protected]

      and of course, not a single citation for the intro paragraph, which has some real bangers like:

      This process involves self-assessment and internal deliberation, aiming to enhance reasoning accuracy, minimize errors (like hallucinations), and increase interpretability. Reflection is a form of “test-time compute,” where additional computational resources are used during inference.

      because LLMs don’t do self-assessment or internal deliberation, nothing can stop these fucking things from hallucinating, and the only articles I can find for “test-time compute” are blog posts from all the usual suspects that read like ads and some arXiv post apparently too shitty to use as a citation

      • @[email protected]

        on the one hand, I want to try to find which vendor marketing material “research paper” that paragraph was copied from, but on the other… after yesterday’s adventures trying to get data out of PDFs and c.o.n.s.t.a.n.t.l.y getting “hey how about this LLM? it’s so good![0]” search results, I’m fucking exhausted

        [0]: also most of these are paired with pages of claims of competence and feature boasts, and then a quiet “psssst: also it’s a service and you send us your private data and we’ll do with it whatever we want” as hidden as they can manage