tldr- Using the robots meta element or HTTP header to say that the content of this page should not be used for machine learning, in case some actors make their search UA indistinguishable from their machine-learning efforts.
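A minimal sketch of what such an opt-out might look like. The `robots` meta element and the `X-Robots-Tag` HTTP header are real mechanisms; the `noml` token here is a hypothetical directive for illustration, not a standardised value:

```html
<!-- Hypothetical: ask crawlers not to use this page for ML training.
     "noindex" is a real, widely honoured token; "noml" is an assumed
     extension in the spirit of this proposal. -->
<meta name="robots" content="noindex, noml">

<!-- Equivalent as an HTTP response header:
     X-Robots-Tag: noml -->
```

As with `noindex`, enforcement would rest entirely on crawlers choosing to honour the token.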

  • The Hobbyist
    1 year ago

    Yes, people will continue to steal content,

    I fail to see how this will solve anything. Why would stealing for AI be done any differently from scraping for other purposes? If someone does not care about the rules for scraping, they still won’t care about them for AI. Especially as they don’t even have to disclose that the content was used for AI (see my point about OpenAI above). There is no accountability. Previous versions of GPT language models have been trained on heaps of copyrighted material. Unless some law is enacted, that is unlikely to change.

    Does the robots file carry any legal weight? I don’t think so, and unless I’m wrong, this feels more like wishful thinking. I don’t mean to say I don’t care about it being done, but it is realistically unlikely to change anything in practice.

    Perhaps if robots files had legal weight (if they don’t already), in the sense of being legally binding on crawlers and scrapers, similarly to how LinkedIn was recently forced to abide by “do not track” requests in Germany, then I’d welcome it with open arms!

    • @[email protected]OP
      1 year ago

      solve anything.

      As I said, honourable UAs will honour robots.txt and its protocol; this proposal is an extension of that.
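For what it’s worth, honouring robots.txt is already built into standard tooling. A sketch using Python’s stdlib `urllib.robotparser` (the robots.txt rules and the “MLBot” user agent here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt: block an ML crawler entirely,
# while only keeping /private/ off-limits to everyone else.
robots_txt = """\
User-agent: MLBot
Disallow: /

User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# An honourable ML crawler would check before fetching:
print(rp.can_fetch("MLBot", "https://example.com/article"))       # False
print(rp.can_fetch("SearchBot", "https://example.com/article"))   # True
print(rp.can_fetch("SearchBot", "https://example.com/private/x")) # False
```

Nothing forces a crawler to run this check, of course, which is exactly the accountability gap being discussed.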

      Google has been proposing something similar, perhaps for different reasons: https://services.google.com/fh/files/misc/public_comment_thought_starters_oct23.pdf

      There is no accountability.

      On small scales perhaps not, but as said, this has always been the case with scraping.

      Unless some law is enacted,

      The robots.txt protocol has never been law but has been honoured, so it’s worth hanging on to. It’s still the definition of ‘good bots’ vs ‘bad bots’ on one level, and that’s about as good as site owners have, versus playing whack-a-mole with UA-IP variations.