I’m rather curious to see how the EU’s privacy laws are going to handle this.

(Original article is from Fortune, but Yahoo Finance doesn’t have a paywall)

  • @Ottomateeverything
    28
    10 months ago

    > It’s more like the law is saying you must draw seven red lines, all of them strictly perpendicular, some with green ink and some with transparent ink.

    No, it’s more like the law is saying you have to draw seven red lines and you’re saying, “well I can’t do that with indigo, because indigo creates purple ink, therefore the law must change!” No, you just can’t use indigo. Find a different resource.

    > It’s not “virtually” impossible, it’s literally impossible. If the law requires that it be possible then it’s the law that must change.

    There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

    The law sometimes makes things illegal because they should be illegal. It’s not like you run around saying we need to change murder laws because you can’t kill your annoying neighbor without going to prison.

    > Otherwise it’s simply a more complicated way of banning AI entirely.

    No, it’s not; AI is way broader than this. There are tons of forms of AI besides forms that consume raw existing data. And there are ways you could harvest only data you could then “untrain”; it’s just more work.

    Some things, like user privacy, are actually worth protecting.
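
The "untrain" idea above is an active research area usually called machine unlearning. One concrete recipe (an illustration here, not something the commenter names) is SISA-style sharded training: keep each user's records in exactly one shard, train a small sub-model per shard, and answer queries by majority vote, so an erasure request only forces retraining that one shard. A minimal toy sketch, with an illustrative nearest-centroid sub-model and made-up names throughout:

```python
# Toy sketch of sharded training for "unlearning" (SISA-style).
# The nearest-centroid sub-model and all names are illustrative.
from collections import Counter, defaultdict

def train_shard(records):
    """Fit a toy sub-model: one 2-D centroid per label, from (point, label) pairs."""
    sums = defaultdict(lambda: [0.0, 0.0])
    counts = Counter()
    for (x, y), label in records:
        sums[label][0] += x
        sums[label][1] += y
        counts[label] += 1
    return {lab: (sums[lab][0] / n, sums[lab][1] / n) for lab, n in counts.items()}

def predict_shard(model, point):
    """Label of the centroid nearest to `point`."""
    px, py = point
    return min(model, key=lambda lab: (model[lab][0] - px) ** 2
                                    + (model[lab][1] - py) ** 2)

class ShardedEnsemble:
    def __init__(self, data_by_user, n_shards=2):
        # Each user's records go to exactly one shard, so deleting a
        # user touches a single sub-model instead of the whole ensemble.
        self.shards = [[] for _ in range(n_shards)]
        self.user_shard = {}
        for i, (user, records) in enumerate(sorted(data_by_user.items())):
            sid = i % n_shards
            self.user_shard[user] = sid
            self.shards[sid].extend((user, point, label) for point, label in records)
        self.models = [self._fit(s) for s in self.shards]

    def _fit(self, shard):
        return train_shard([(p, lab) for _, p, lab in shard]) if shard else None

    def forget_user(self, user):
        """Honor an erasure request by retraining only that user's shard."""
        sid = self.user_shard.pop(user)
        self.shards[sid] = [r for r in self.shards[sid] if r[0] != user]
        self.models[sid] = self._fit(self.shards[sid])

    def predict(self, point):
        votes = Counter(predict_shard(m, point) for m in self.models if m)
        return votes.most_common(1)[0][0]
```

Retraining one shard is still work, which is the commenter's point: erasure-friendly training is possible, just more expensive than one monolithic model.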

    • @[email protected]
      link
      fedilink
      English
      110 months ago

      > There’s nothing that says AI has to exist in a form created from harvesting massive user data in a way that can’t be reversed or retracted. It’s not technically impossible to do that at all, we just haven’t done it because it’s inconvenient and more work.

      What if you want to create a model that predicts, say, diseases or medical conditions? You have to train that on medical data or you can’t train it at all. There’s simply no way that such a model could be created without using private data. Are you suggesting that we simply not build models like that? What if they can save lives and massively reduce medical costs? Should we scrap a massively expensive and successful medical AI model just because one person whose data was used in training wants their data removed?

      • eltimablo
        1
        10 months ago

        I guarantee the person you’re arguing with would rather see people die than let an AI help them and be proven wrong.

        • @Ottomateeverything
          1
          edit-2
          10 months ago

          Well then you’d be wrong. What a fucking fried and delusional take. The fuck is wrong with you?

      • @Ottomateeverything
        1
        10 months ago

        This is an entirely different context. Most of the talk here is about LLMs; health data is entirely different, health regulations and legalities are entirely different, people don’t publicly post their health data to begin with, and health data isn’t obtained without consent and already has tons of red tape around it. It would be much easier to obtain “well sourced” medical data than the broad swaths of stuff LLMs are sifting through.

        But the point still stands - if you want to train a model on private data, there are different ways to do it.
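
One such way (an illustration, not something the comment specifies) is differentially private training: clip each example's gradient contribution and add calibrated noise during the update, so no single record can dominate the finished model. A bare-bones sketch of a DP-SGD-style step for a 1-D regression, with illustrative constants and no formal privacy accounting:

```python
# Sketch of one erasure-friendly alternative: DP-SGD-style training.
# Clip each example's gradient and add Gaussian noise before the
# update, bounding any single record's influence on the model.
# Constants are illustrative; real DP training also tracks a formal
# privacy budget (epsilon), which is omitted here.
import random

def dp_sgd_step(w, batch, clip=1.0, noise_std=0.1, lr=0.1):
    """One noisy, clipped gradient step for the 1-D loss (w*x - y)^2."""
    clipped = []
    for x, y in batch:
        g = 2.0 * (w * x - y) * x            # per-example gradient
        if abs(g) > clip:                    # clip to bound sensitivity
            g = g * clip / abs(g)
        clipped.append(g)
    # Noisy mean: Gaussian noise scaled to the clipping bound.
    noisy = (sum(clipped) + random.gauss(0.0, noise_std * clip)) / len(batch)
    return w - lr * noisy

random.seed(0)                               # reproducible toy run
w = 0.0
data = [(x, 3.0 * x) for x in (0.5, 1.0, 1.5, 2.0)]  # true slope is 3
for _ in range(500):
    w = dp_sgd_step(w, data)
# w should land near the true slope despite the injected noise
```

Whether a guarantee like this satisfies a given erasure request is a legal question; the sketch only shows that the technical lever exists.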