Twitter imposes strict restrictions on external parties using its data for AI training, yet it freely uses data created by others for the same purpose.

  • donuts
    3 · 6 months ago

    AI is looking like the biggest bubble in tech history and stuff like this really ain’t helping.

    • @[email protected]
      12 · 6 months ago

      AI at least has a good chance to become a big thing in some areas. NFTs were the bigger bubble and just a straight up scam

    • @bassomitron
      11 · 6 months ago (edited)

      I think you’re underestimating how much AI is already used in enterprise. It’s got enormous potential and any tech company ignoring it is just shooting themselves in the foot. ChatGPT isn’t the only type of AI.

      • kpw
        6 · 6 months ago

        ChatGPT is the kind of AI that is hyped now. Other kinds of AI (formerly statistics) have been used for decades. Oh you can learn the parameters of some function from the data? Must be AI.
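        To sketch the point: "learning the parameters of some function from the data" is plain least-squares statistics, decades older than the current "AI" branding. A minimal illustration (numpy assumed; the data is synthetic and purely illustrative):

        ```python
        import numpy as np

        # Generate noisy samples of a known line y = 2x + 1.
        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 50)
        y = 2.0 * x + 1.0 + rng.normal(scale=0.1, size=x.size)

        # "Learn the parameters of some function from the data":
        # ordinary least-squares fit of a degree-1 polynomial.
        a, b = np.polyfit(x, y, deg=1)
        print(a, b)  # close to the true slope 2.0 and intercept 1.0
        ```

        The same fit could be written as a one-layer "model" trained by gradient descent and marketed as AI; the math is identical.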

      • donuts
        6 · 6 months ago (edited)

        I don’t think I am.

        The internet had a ton of legitimate uses and potential too, but that didn’t prevent the dot-com bubble from bursting.

        Not only is AI built on a shaky house of cards of stolen IP and unlicensed writing, artwork, music, and other data, but there are also way too many players in the space, and an amount of investment that, in my opinion, goes way beyond the reality of what AI can achieve.

        Whether AI is a bubble or not has more to do with the hype economy around it than the technology itself.

      • RandomStickman
        5 · 6 months ago

        The internet is important and useful, but that didn’t stop the dot-com bubble from being a thing.

      • @banneryear1868
        2 · 6 months ago (edited)

        E-discovery and market-simulation tools have basically been using these sorts of models for a long time. I think “AI” is a misnomer and more of a branding/marketing term, reserved for the latest iteration of these tools: what used to be called “AI” gets a generic term describing its use, and the new thing becomes “AI” until the next significant improvement comes along and it happens again.

        The fact that people think these new language models are going to create a “real” artificial intelligence basically confirms this; it’s almost a religious belief. The mythology around this iteration of “AI” is creating hype beyond what it’s technically capable of.