• YAMAPIKARIYA
    link
    fedilink
    English
    187
    6 months ago

    Dude. Couldn’t even proofread the easy way out they took

    • @Carrolade
      link
      English
      104
      6 months ago

      This almost makes me think they’re trying to fully automate their publishing process. So, no editor in that case.

      Editors are expensive.

      • YAMAPIKARIYA
        link
        fedilink
        English
        19
        6 months ago

        If they really want to do it, they can just run a local language model trained to proofread stuff like this. Would be way better

          • YAMAPIKARIYA
            link
            fedilink
            English
            1
            6 months ago

            I don’t think so. They are using AI from a 3rd party. If they train their own specialized version, things will be better.

            • @[email protected]
              link
              fedilink
              English
              11
              6 months ago

              Here is a better idea: have some academic integrity and actually do the work instead of using incompetent machine learning to flood the industry with inaccurate trash papers whose only real impact is getting in the way of real research.

              • YAMAPIKARIYA
                link
                fedilink
                English
                3
                6 months ago

                There is nothing wrong with using AI to proofread a paper. It’s just a grammar checker but better.

                • @[email protected]
                  link
                  fedilink
                  English
                  4
                  6 months ago

                  You can literally use tools to check grammar perfectly without using AI. What an LLM does is predict which word comes next in a sequence, and when the AI is wrong, as it often is, you’ve just attempted to publish a paper with hallucinations, wasting the time and effort of so many people because you’re greedy and lazy.
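
                  A toy sketch of that next-word mechanism (corpus and names invented here purely for illustration): tally which word follows each word in a tiny text, then always emit the most frequent follower.

                  ```python
                  from collections import Counter, defaultdict

                  # Toy next-word "model": count which word follows each word
                  # in a tiny corpus, then predict the most frequent follower.
                  corpus = "the patient was stable and the patient was discharged".split()

                  followers = defaultdict(Counter)
                  for prev, nxt in zip(corpus, corpus[1:]):
                      followers[prev][nxt] += 1

                  def predict_next(word):
                      # Most common follower seen in the corpus, or None if unseen.
                      counts = followers.get(word)
                      return counts.most_common(1)[0][0] if counts else None

                  print(predict_next("patient"))  # -> "was"
                  ```

                  A real LLM does this with learned probabilities over huge contexts, but the failure mode is the same: it emits a plausible next word, not a checked fact.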

                • @[email protected]
                  link
                  fedilink
                  English
                  4
                  edit-2
                  6 months ago

                  Proofreading involves more than just checking grammar, and AIs aren’t perfect. I would never put my name on something to get published publicly like this without reading it through at least once myself.

            • @[email protected]
              link
              fedilink
              English
              1
              6 months ago

              That’s not necessarily true. General-purpose 3rd-party models (ChatGPT, llama3-70b, etc.) perform surprisingly well on very specific tasks. While training or finetuning your own specialized model should indeed give you better results, the crazy amount of computational resources and specialized manpower needed to accomplish it makes it infeasible and impractical for many applications. If you can get away with an occasional “as an AI model…”, you are better off using existing models.

    • @TheFarm
      link
      English
      27
      6 months ago

      This is what baffles me about these papers. Assuming the authors are actually real people, these AI-generated mistakes in publications should be pretty easy to catch and edit.

      It does make you wonder how many people are successfully putting AI-generated garbage out there if they’re careful enough to remove obviously AI-generated sentences.

      • @[email protected]
        link
        fedilink
        English
        7
        edit-2
        6 months ago

        I definitely utilize AI to assist me in writing papers/essays, but never to just write the whole thing.

        Mainly use it for structuring or rewording sections to flow better or sound more professional, and always go back to proofread and ensure that any information stays correct.

        Basically, I provide any data/research and get a rough layout down, and then use AI to speed up the refining process.

        EDIT: I should note that I am not writing scientific papers using this method, and doing so is probably a bad idea.

          • @[email protected]
            link
            fedilink
            English
            5
            6 months ago

            Yeah, same. I’m good at getting my info together and putting my main points down, but structuring everything in a way that flows well just isn’t my strong suit, and I struggle to sit there for long periods of time writing something I could just explain in a few short points, especially if there’s an expectation for a certain length.

            AI tools help me to get all that done whilst still keeping any core information my own.

  • @clearedtoland
    link
    English
    154
    6 months ago

    Hold up. That actually got through to publishing??

    • @RedditWanderer
      link
      English
      110
      6 months ago

      It’s because nobody was there to highlight the text for them.

      • exscape
        link
        fedilink
        30
        6 months ago

        The entire abstract is AI. Even without the explicit mention in one sentence, the rest of the text should’ve been rejected as nonspecific nonsense.

        • @canihasaccount
          link
          English
          29
          6 months ago

          That’s not actually the abstract; it’s a piece from the discussion that someone pasted nicely with the first page in order to name and shame the authors. I looked at it in depth when I saw this circulate a little while ago.

          • exscape
            link
            fedilink
            8
            6 months ago

            Ah, that makes more sense. I looked up the original abstract and indeed it looks more like what you’d expect (hard to comprehend for someone that’s not in the field).

            Though to clarify (for others reading this) they still did use generative AI to (help?) write the paper, which is only part of why it was withdrawn.

        • @RedditWanderer
          link
          English
          4
          6 months ago

          Maybe a big red circle around the entire abstract would have helped

    • @[email protected]
      link
      fedilink
      English
      42
      6 months ago

      It’s Elsevier, so this probably isn’t even the lowest quality article they’ve published

      • Optional
        link
        English
        4
        6 months ago

        Yep. And AI will totally help.

        Ooh I mean not help. It’ll make it much worse. Particularly with the social sciences. Which were already pretty fuX0r3d anyway due to the whole “your emotions equal this number” thing.

    • @Cornelius_Wangenheim
      link
      English
      20
      6 months ago

      Many journals are absolute garbage that will accept anything. Keep that in mind the next time someone links a study to prove a point. You have to actually read the thing and judge the methodology to know if their conclusions have any merits.

      • @clearedtoland
        link
        English
        16
        6 months ago

        Full disclosure: I don’t intend to be condescending.

        Research Methods during my graduate studies forever changed the way I interpret just about any claim, fact, or statement. I’m obnoxiously skeptical and probably cynical, to be honest. It annoys the hell out of my wife but it beats buying into sensationalist headlines and miracle research. Then you get into the real world and see how data gets massaged and thrown around haphazardly…believe very little of what you see.

        • @Adalast
          link
          English
          9
          6 months ago

          I have this problem too. My wife gets so annoyed because I question things I notice as biases or statistical irregularities instead of just accepting that they knew what they were doing. I have tried to explain it to her: skepticism is not dismissal, and it is not saying I am smarter than them; it is recognizing that they are human, and that I may be more proficient than they were in the one spot where they made a mistake.

          I will acknowledge that laypeople need to stop trying to argue with scientists because “they did their own research”, but the actually informed and educated need to do a better job of calling each other out.

    • @dustyData
      link
      English
      14
      6 months ago

      We are in top dystopia mode right now. Students have AI write articles that are proofread and edited by AI, submitted to automated systems that are AI vetted for publishing, then posted to platforms where no one ever reads the articles posted but AI is used to scrape them to find answers or train all the other AIs.

      • VeganPizza69 Ⓥ
        link
        English
        5
        6 months ago

        How generative AI is clouding the future of Google search

        The search giant doesn’t just face new competition from ChatGPT and other upstarts. It also has to keep AI-powered SEO from damaging its results.

        More or less the same phenomenon of signal pollution:

        “Google is shifting its responsibility for maintaining the quality of results to moderators on Reddit, which is dangerous,” says Ray of Amsive. Search for “kidney stone pain” and you’ll see Quora and Reddit ranking in the top three positions alongside sites like the Mayo Clinic and the National Kidney Foundation. Quora and Reddit use community moderators to manually remove link spam. But with Reddit’s traffic growing exponentially, is a human line of defense sustainable against a generative AI bot army?

        We’ll end up using year 2022 as a threshold for reference criteria. Maybe not entirely blocked, but like a ratio… you must have 90% pre-2022 and 10% post-2022.

        Perhaps this will spur some culture shift to publish all the data, all the notes, everything - which will be great to train more AI on. Or we’ll get to some type of anti-AI or anti-crawler medium.

  • @[email protected]
    link
    fedilink
    English
    139
    6 months ago

    This article has been removed at the request of the Editors-in-Chief and the authors because informed patient consent was not obtained by the authors in accordance with journal policy prior to publication. The authors sincerely apologize for this oversight.

    In addition, the authors have used a generative AI source in the writing process of the paper without disclosure, which, although not being the reason for the article removal, is a breach of journal policy. The journal regrets that this issue was not detected during the manuscript screening and evaluation process and apologies are offered to readers of the journal.

    The journal regrets – Sure, the journal. Nobody assuming responsibility …

    • @[email protected]
      link
      fedilink
      English
      81
      6 months ago

      What, nobody read it before it was published? Whenever I’ve tried to publish anything it gets picked over with a fine toothed comb. But somehow they missed an entire paragraph of the AI equivalent of that joke from parks and rec: “I googled your symptoms and it looks like you have ‘network connectivity issues’”

      • magic_lobster_party
        link
        fedilink
        6
        6 months ago

        Nobody would read it even after it was published. No scientist has time to read others’ papers; they’re too busy writing their own. This mistake probably made it more read than 99% of all other scientific papers.

      • @[email protected]
        link
        fedilink
        English
        3
        6 months ago

        I think part of the issue is sheer volume. You submit a few papers a year; an AI can in theory submit a few per minute. Even if you filter 98% of them, mistakes will happen.

        That said, this particular error in the meme is egregious.
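
        Back-of-envelope arithmetic for that (the “few per minute” rate and the 98% filter are the figures assumed above):

        ```python
        # Assumed figures: a few (say 3) AI submissions per minute,
        # and a screening process that rejects 98% of them.
        per_day = 3 * 60 * 24          # submissions per day
        leaked = per_day * (1 - 0.98)  # the 2% that slip through

        print(per_day, round(leaked))  # 4320 submissions/day, ~86 get past the filter
        ```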

    • @[email protected]
      link
      fedilink
      English
      29
      6 months ago

      Daaaaamn they didn’t even get consent from the patient😱😱😱 that’s even worse

      • @[email protected]
        link
        fedilink
        English
        14
        6 months ago

        I mean holy shit you’re right, the lack of patient consent is a much bigger issue than getting lazy writing the discussion.

  • magnetosphere
    link
    fedilink
    123
    edit-2
    6 months ago

    To me, this is a major ethical issue. If any actual humans submitted this “paper”, they should be severely disciplined by their ethics board.

    • @[email protected]
      link
      fedilink
      English
      91
      6 months ago

      But the publisher who published it should be liable too. Wtf is their job then? Parasiting off of publicly funded research?

      • @[email protected]
        link
        fedilink
        English
        5
        6 months ago

        Research journals are often rated for the quality of the content they publish. My guess is that this “journal” is just shit. If you’re a student or researcher, you will come across shit like this and you should be smart enough to tell when something is poor quality.

  • @repungnant_canary
    link
    English
    99
    6 months ago

    Maybe, if reviewers were paid for their job they could actually focus on reading the paper and those things wouldn’t slide. But then Elsevier shareholders could only buy one yacht a year instead of two and that would be a nightmare…

    • @adenoid
      link
      English
      37
      6 months ago

      Elsevier pays its reviewers very well! In fact, in exchange for my last review, I received a free month of ScienceDirect and Scopus…

      … Which my institution already pays for. Honestly it’s almost more insulting than getting nothing.

      I try to provide thorough reviews for about twice as many articles as I publish in an effort to sort of repay the scientific community for taking the time to review my own articles, but in academia reviewing is rewarded far less than publishing. Paid reviews sound good but I’d be concerned that some would abuse this system for easy cash and review quality would decrease (not that it helped in this case). If full open access publishing is not available across the board (it should be), I would love it if I could earn open access credits for my publications in exchange for providing reviews.

      • Ragdoll X
        link
        English
        12
        edit-2
        6 months ago

        I’ve always wondered if some sort of decentralized, community-led system would be better than the current peer review process.

        That is, someone can submit their paper and it’s publicly available for all to read, then people with expertise in fields relevant to that paper could review and rate its quality.

        Now that I think about it it’s conceptually similar to Twitter’s community notes, where anyone with enough reputation can write a note and if others rate it as helpful it’s shown to everyone. Though unlike Twitter there would obviously need to be some kind of vetting process so that it’s not just random people submitting and rating papers.

          • @[email protected]OPM
            link
            fedilink
            English
            11
            edit-2
            6 months ago

            I feel like I’ve seen this model before; I know I’ve heard of it. There are better ways to do it than your suggestion, but it’s there in spirit. Science is a conversation, and it would be a really cool idea to make room for things like this. In the meantime, check out PubPeer; it has extensions for browsers. Super useful, and you have to attach your ORCID to be verified. Everyone can read it, though.

      • @bananabenana
        link
        English
        4
        6 months ago

        Open access credits are a fantastic idea. Unfortunately, it goes against the business model of these parasites. Ultimately, these businesses provide little to no actual value except siphoning taxpayer money. I really prefer eLife’s current model, though it would be great if it were cheaper. arXiv and bioRxiv provide a better service than most journals IMO.

        I also agree with taking reviewing seriously and reviewing twice as often as you publish. Many people leave academia, so reviewing more can cover for them.

      • @mynameisigglepiggle
        link
        English
        -3
        6 months ago

        Perhaps paid reviews would increase quality because unpaid reviews are more susceptible to corruption

    • Match!!
      link
      fedilink
      English
      6
      6 months ago

      Fuck that, they should pay special bounty hunters to expose LLM garbage, I’d take that job instantly

  • @Nobody
    link
    English
    63
    6 months ago

    In Elsevier’s defense, reading is hard and they have so much money to count.

  • Panda (he/him)
    link
    fedilink
    English
    63
    6 months ago

    Elsevier is such a fucking joke. Science should be free and open, anyways.

  • @breadsmasher
    link
    English
    44
    6 months ago

    the entire paragraph after the highlight is still AI too

  • @[email protected]
    link
    fedilink
    English
    40
    6 months ago

    I started a business with a friend to automatically identify things like this, fraud like what happened with Alzheimer’s research, and mistakes like missing citations. If anyone is interested, has contacts or expertise in relevant domains or just wants to talk about it, hit me up.

  • @soloner
    link
    English
    40
    6 months ago

    Guys it’s simple they just need to automate AI to read these papers for them to catch if AI language was used. They can automate the entire peer review process /s
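
    Sarcasm aside, the cheapest part of that is real: a naive screen for leftover chatbot boilerplate is only a few lines (the phrase list below is invented for illustration, and it obviously only catches the laziest cases):

    ```python
    import re

    # Hypothetical list of telltale LLM phrases left in manuscripts.
    BOILERPLATE = [
        r"as an ai language model",
        r"i cannot (?:generate|fulfill)",
        r"regenerate response",
    ]

    def flag_llm_phrases(text):
        # Return every boilerplate pattern found in the text.
        low = text.lower()
        return [p for p in BOILERPLATE if re.search(p, low)]

    print(flag_llm_phrases("In summary, as an AI language model, I ..."))
    # -> ['as an ai language model']
    ```

    Anything more subtle than verbatim boilerplate would still need a human reviewer, which is rather the point of the thread.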

  • @[email protected]
    link
    fedilink
    English
    38
    edit-2
    6 months ago

    They mistakenly sent the “final final paper.docx” file instead of “final final final paper v3.docx”. It could’ve happened to any of us.

  • @HeyThisIsntTheYMCA
    link
    English
    27
    6 months ago

    I would insert specific language into every single one of my submissions to see if my editors were doing their jobs. Only about 1/3 caught it. Short story long, I’m not just a researcher in a narrow field, I’m also an amateur marine biologist.

  • KillingTimeItself
    link
    fedilink
    English
    22
    6 months ago

    what if this was actually just a huge troll, and it wasn’t AI.

    Now that would be fucking hilarious.