• @LifeInMultipleChoice

    Yeah, the main issue I’ve seen is that the aggregation pipeline has no way to stop itself from ingesting data it (or other models) already generated. So it takes in data, spits it back out, takes that output back in, and spits out more. If the output was wrong at any point, it keeps getting regurgitated, potentially growing more incorrect with each pass.
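    To make that feedback loop concrete, here’s a minimal toy sketch (my own illustration, not anything from a real pipeline): a crude “model” is repeatedly fit to its own samples with no filter excluding its previous outputs, so estimation error compounds instead of averaging out. The distribution, sample sizes, and seed are all arbitrary assumptions for demonstration.

    ```python
    # Toy sketch of recursive self-training: each generation is fit to
    # samples produced by the previous generation, with no provenance
    # filter. All parameters here are illustrative assumptions.
    import random
    import statistics

    random.seed(0)

    # Generation 0: "real" data (e.g. human-written sources), mean 0.
    data = [random.gauss(0.0, 1.0) for _ in range(1000)]

    for generation in range(1, 6):
        # Fit a crude model: just the mean/stdev of whatever was scraped.
        mu = statistics.fmean(data)
        sigma = statistics.stdev(data)
        # The next crawl ingests only the model's own (smaller) output,
        # so each generation's estimation error is baked into the next.
        data = [random.gauss(mu, sigma) for _ in range(100)]
        print(f"gen {generation}: mean drift={mu:+.3f}, spread={sigma:.3f}")
    ```

    Run it a few times with different seeds and you’ll see the mean wander away from zero like a random walk: nothing in the loop ever pulls it back toward the original data, which is the compounding-error problem described above.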

    The data sources get hurt, too. A site with decent information usually profits via ads. Once the data scraped from it is being regurgitated elsewhere, fewer people visit the site itself: fewer views, less revenue. Some sites will die out because the monetary incentive is gone while the costs stay the same. That means no new information gets published there, and the old data keeps being spit out with some percentage of error and no source left to check it against.