FTC warns companies against “quietly changing” their TOS because of AI
RICHARD WEINER
Technology for Lawyers
Published: September 20, 2024
In a recent blog post titled “AI (and other) Companies: Quietly Changing Your Terms of Service Could Be Unfair or Deceptive,” the FTC warned the tech industry against quietly changing a website’s Terms of Service to accommodate AI systems’ appetite for user data.
The post starts out calling data “the new oil” that fuels innovation, and notes that parts of a TOS usually cover users’ data privacy concerns. But companies now face “a potential conflict of interest: they have powerful business incentives to turn the abundant flow of user data into more fuel for their AI products, but they also have existing commitments to protect their users’ privacy.”
The post then states plainly: “It may be unfair or deceptive for a company to adopt more permissive data practices—for example, to start sharing consumers’ data with third parties or using that data for AI training—and to only inform consumers of this change through a surreptitious, retroactive amendment to its terms of service or privacy policy.”
The FTC’s position on deceptive privacy policies dates back to at least 2004, when the agency charged Gateway Learning Corporation (the “Hooked on Phonics” folks) with “violating the FTC Act after it changed its privacy policy to allow it to share consumer data with third parties without notifying consumers or getting their consent.”
Third parties scraping data to train an AI may well put a company in violation of its own data privacy TOS if the company is aware of the scraping, or at least fails to take precautions to keep third parties from sweeping its users’ data into an LLM training database.
The blog post states that, even with all the new technology around AI, the basic principles remain the same: “A business that collects user data based on one set of privacy commitments cannot then unilaterally renege on those commitments after collecting users’ data.”
The same principle should apply to companies integrating AI into their platforms at any level: even if the data stays in-house, customers should still be told how it is being used.
In the end, says the FTC, there is “nothing intelligent about obtaining artificial consent.”