SoundCloud says it’s never trained AI using artists’ work after getting called out for terms of use change

SoundCloud Faces Backlash Over AI Training Clause in Terms of Service

Berlin, May 12, 2025 — SoundCloud, the music streaming platform known for its support of independent artists, is facing significant criticism after updating its terms of service in February 2024 to allow user-uploaded content to be used for artificial intelligence (AI) training. The change, which went largely unnoticed until flagged by tech ethicist Ed Newton-Rex, has prompted a wave of concern among artists and digital rights advocates, with some users deleting their accounts in protest.

The updated terms, effective February 7, 2024, include a clause stating that users “explicitly agree that your Content may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services as part of and for providing the services.” The clause applies to all uploaded content unless it is covered by separate agreements with third-party rights holders, such as major record labels like Universal Music or Warner Music Group. Because the terms offer no clear opt-out mechanism, critics have accused SoundCloud of prioritizing AI development over creator rights.

The backlash erupted after artists like the musical duo The Flight publicly announced they were “deleting all our songs that we uploaded to SoundCloud and now closing account” due to the policy change. Posts on X echoed the sentiment, with users like @kyrsive expressing shock and urging others to reconsider using the platform. Newton-Rex, founder of Fairly Trained, criticized the terms, noting that SoundCloud’s clarification “doesn’t actually rule out training generative AI models on their users’ music in future,” raising concerns about potential exploitation.

In response to the outcry, SoundCloud issued a statement asserting that it “has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.” The company emphasized that the February 2024 update was meant to “clarify how content may interact with AI technologies within SoundCloud’s own platform,” citing use cases like personalized recommendations, playlist generation, fraud detection, and content organization. SoundCloud also highlighted technical safeguards, including a “no AI” tag to block unauthorized use, and said that tools like Musiio power artist discovery rather than generative AI training.

However, SoundCloud’s statement has not fully quelled concerns. Critics point out that while the company says it does not currently train AI models on user content, the terms still allow it to do so in the future, and the absence of an opt-out feature undermines artist control. Marni Greenberg, SoundCloud’s SVP and Head of Communications, told The Verge that if the platform ever considered using user content for generative AI training, it would introduce “clear opt-out mechanisms in advance—at a minimum.” Newton-Rex and others argue that such policies should be opt-in, with creators receiving credit and compensation for their contributions to AI datasets.

This controversy places SoundCloud in the company of other platforms like X, LinkedIn, and YouTube, which have recently updated their terms to permit AI training on user content, often sparking similar backlash. The broader trend has raised ethical questions about consent, transparency, and the rights of creators in an era where AI increasingly relies on vast datasets. SoundCloud’s commitment to being “artist-first” and its pledge to adhere to “ethical and transparent AI practices” through initiatives like AI For Music’s Principles for Music Creation With AI are now under scrutiny as artists demand greater clarity and control.

As the debate continues, SoundCloud faces pressure to revisit its terms and provide robust protections for its community of creators. The platform’s next steps will likely shape its reputation as a haven for independent artists amid the evolving landscape of AI and intellectual property.


Following backlash over a clause quietly added to SoundCloud’s Terms of Use that says users’ content may be fed to AI, the company says it has “never used artist content to train AI models,” and insists it “has always been and will remain artist-first.” The outrage came after tech ethicist Ed Newton-Rex (via TechCrunch) spotted a change to SoundCloud’s terms, made in February 2024 seemingly without notifying users. The updated text states that by using the platform, “You explicitly agree that your Content may be used to inform, train, develop or serve as input to artificial intelligence or machine intelligence technologies or services as part of and for providing the services.”

In a statement to TechCrunch, a spokesperson said the update was only meant to “clarify how content may interact with AI technologies within SoundCloud’s own platform” and that the company “has never used artist content to train AI models, nor do we develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes.” SoundCloud’s official Reddit account posted similar statements in response to users’ outrage, and both the spokesperson and the Reddit posts noted that SoundCloud has added a “no AI” tag for artists “to explicitly prohibit unauthorized use.”

AI may be used for things like music recommendations, playlist creation and fraud detection, the company said. “Any future AI tools will be built for artists to enhance discovery, protect rights, and expand opportunities,” SoundCloud posted on Reddit. “We hear your concerns and remain committed to transparency, artist control, and fair use.”

Just a few months ago, though, SoundCloud introduced a suite of AI tools geared toward music creation, on top of three others it had announced earlier that year. The suite includes tools for generating remixes, new tracks, beats and singing voices.


