In a post on its blog this morning, Spotify announced it is taking steps to combat the glut of AI-generated music on its streaming platform. According to the company, bad actors and content farms that “push ‘slop’ into the ecosystem” are going to be dealt with. Spotify says it has already removed over 75 million such tracks in the last year, and bigger changes are coming.
Over the next few months, Spotify says it will crack down on musical impersonators, implement a new spam filtering system, and work with others in the music business to develop an industry standard for AI disclosures in music credits.
How Spotify’s AI music policy will work
According to Spotify, the availability of AI tools has allowed the easy creation of musical deepfakes—AI impersonations of existing artists, in other words. The company says it will remove any track that “impersonates another artist’s voice without their permission—whether that’s using AI voice cloning or any other method.”
The ban includes both tracks for which the person uploading is explicitly presenting themselves as another artist and tracks labeled as an “AI version” of another artist—unless the track was made with the original artist’s permission, of course.
Spotify is also targeting mass uploads, duplicates, SEO hacks, artificially short tracks, and other spammy abuses of its platform. The company’s new spam filter will roll out this fall and will identify uploaders and tracks engaging in these tactics, then “tag them and stop recommending them.” The end goal, according to Spotify, is to prevent bad actors from siphoning off royalties that could otherwise be paid out to professional artists and songwriters.
Spotify’s working on an AI music labeling standard
Spotify has also pledged to help develop an overarching industry standard for disclosing how artificial intelligence is used in the production of music. Labeling AI in music credits is a much more complex issue than Spotify’s other new initiatives: all kinds of technology are used in music production, and there’s a huge continuum between a track generated entirely from a prompt and one that merely uses auto-tune on a slightly off-pitch vocal.
Spotify says the effort requires “broad industry alignment,” so it’s working with companies like Labelcamp, NueMeta, Revelator, and SonoSuite—through DDEX, the music industry’s standards organization—to develop an industry standard for AI labeling.
There’s still more to be done, though
Spotify’s new initiatives don’t ban AI music, or require it to be labeled. The company says it wants to treat all music “equally, regardless of the tools used to make it.” That seems to leave space for Spotify to continue promoting obviously AI-generated music playlists like “Jazz for Study” and “Lo Fi Chill,” which consist mainly of “artists” like The Midtown Players, ourchase, and The Tate Jackson Trio—acts that have all the signs of being AI creations but are officially “verified” by Spotify.
To be fair to the music streaming service, I ran a similar search for AI playlists and musicians a few months ago, and such content is marginally harder to find now than it was then. But until Spotify stops filling its own playlists with AI-generated glurge, its pledge to fight “AI slop” rings hollow.
Original Source: https://lifehacker.com/tech/spotify-ai-music-policies
Disclaimer: This article is a reblogged/syndicated piece from a third-party news source. Content is provided for informational purposes only. For the most up-to-date and complete information, please visit the original source. Digital Ground Media does not claim ownership of third-party content and is not responsible for its accuracy or completeness.