An update to Google’s privacy policy suggests that the entire public internet is fair game for its AI projects.
Isn’t crawling and scraping content what Google and every other search engine have been doing since day one?
Why is AI scraping not respecting robots.txt? It wasn’t OK in the early days of the internet, so why is it OK now? People are complaining about being overloaded by scrapers like it’s the ’90s. See the sketch below for how a compliant crawler is supposed to behave.
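For anyone unfamiliar with the mechanism being ignored here: a minimal sketch, using only Python’s standard library, of how a well-behaved crawler is supposed to consult robots.txt before fetching anything. The rules and the `ExampleAIBot` / `ExampleSearchBot` user-agent tokens are made-up placeholders, not any site’s or company’s real policy.

```python
# Sketch of robots.txt compliance using Python's stdlib parser.
# The rules and user-agent names below are hypothetical examples.
from urllib.robotparser import RobotFileParser

EXAMPLE_ROBOTS_TXT = """\
User-agent: *
Allow: /

User-agent: ExampleAIBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(EXAMPLE_ROBOTS_TXT.splitlines())

# A compliant crawler checks before every fetch and skips disallowed URLs;
# the complaint in the thread is that AI scrapers skip this check entirely.
for agent in ("ExampleSearchBot", "ExampleAIBot"):
    allowed = parser.can_fetch(agent, "https://example.com/some/page")
    print(f"{agent}: {'allowed' if allowed else 'disallowed'}")
```

The catch, of course, is that robots.txt is purely voluntary: it only works if the crawler identifies itself honestly and chooses to honor the file.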