AI generated content – it’s not all “slop”

On Thursday, October 6, we discussed whether content created by AI models could be managed and controlled.

Chris defined such content as something created by instructing an AI model on what to produce by describing it in words (a “prompt”). The content may be text, images, video, music, or computer code. The models are first “trained” on information “scraped” from the Internet, and the quality of their output, often derogatorily called “slop”, is improving rapidly.

Several problems arise from such content, and they call for imaginative solutions.


Deepfake Images and Videos

Deepfakes for Blackmail
One in five young people in Spain report being victims of AI deepfakes, with almost all reporting sexual violence online (Save the Children study).
“A 12-year-old girl told me that she was being threatened by a person who told her that he would publish some photos of her naked, created with Artificial Intelligence, if she did not forward to all her contacts a video with sexual content that had reached her phone. The girl assured me that she had never forwarded photos of herself with that type of content, but she felt that she had caused that situation and that it was her fault.” … “Save the Children is calling for the deployment of legal measures to strengthen the online protection of children, to provide safe digital environments for children, and to incorporate digital education for all ages on the safe and responsible use of technologies in schools, as well as training for teachers.”

Deepfakes to spread falsehood and propaganda
Here is a recent example: Fox News Falls for AI-Generated Footage of Poor People Raging About Food Stamps Being Shut Down, Runs False Story That Has to Be Updated With Huge Correction. “This seems to be AI’s new role: manufacturing life-like ‘evidence’ to justify harmful narratives. It’s a stupefying sign of things to come as these fakes become harder to distinguish from reality.”

AI-generated music

This is a complex issue involving fraud on streaming platforms and the question of what copyright means.

Your favorite band has a new single? It might be AI. This article describes cases where AI-generated songs, apparently from well-known artists, were uploaded to Spotify. Some of the artists were deceased. The problem is huge and linked to money (of course). “The platform admits it is fighting against a ceaseless torrent of AI slop. Spotify says it has removed 75 million ‘spammy’ tracks from the platform just in the past year.” Note: AI can churn out songs at a tremendous rate.

Spotify’s payment system is an incentive to create and stream multiple tracks: Artists earn approximately $0.003 to $0.005 per stream on Spotify, but this rate is not fixed and is influenced by factors like the listener’s country, their subscription type (Premium vs. free), and the artist’s deal with their label or distributor. Payments are not made per stream but from a pool of revenue shared with rights holders, with a 2024 policy change requiring songs to have at least 1,000 streams in a 12-month period to be eligible for royalties. (Gemini AI)
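Taking the figures quoted above at face value, a quick back-of-the-envelope sketch shows why flooding the platform with tracks pays. This is a toy estimate built only from the quoted per-stream range and the 1,000-stream threshold, not Spotify’s actual pool-based formula:

```python
# Toy royalty estimate from the quoted per-stream range ($0.003-$0.005)
# and the 2024 rule that a track needs 1,000 streams in 12 months to earn.
# Real payouts come from a shared revenue pool, so these are rough bounds only.
RATE_LOW, RATE_HIGH = 3.0, 5.0   # dollars per 1,000 streams

def royalty_range(streams, threshold=1_000):
    """Return a (low, high) dollar estimate for one track's yearly streams."""
    if streams < threshold:
        return (0.0, 0.0)        # below the threshold the track earns nothing
    return (streams / 1_000 * RATE_LOW, streams / 1_000 * RATE_HIGH)

print(royalty_range(999))        # (0.0, 0.0) - just under the threshold
print(royalty_range(10_000))     # (30.0, 50.0)
```

One track earns almost nothing, but on these numbers an AI farm uploading a thousand such tracks would be looking at tens of thousands of dollars, which is the incentive the article describes.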

A similar thing happens on YouTube, which is also financed partly by ads, though there payment is per view. YouTube makes money primarily through advertising and subscription services like YouTube Premium. Advertisers pay to place ads on videos, and YouTube shares a portion of this revenue with creators, while the remaining revenue from ads and subscriptions funds the platform’s operations and growth. A creator’s earnings are not fixed and can vary widely, depending on factors like ad revenue, viewer count, and audience demographics. Ad revenue can range from approximately $0.01 to $0.03 per view, meaning a video with 1,000 viewers might earn between $10 and $30 from ads alone, with creators keeping about 55% of the total ad earnings. (Gemini AI)

AI-generated fake books sold on Amazon

Amazon is still struggling to stem the flood of AI-generated fake books “While AI-generated fake books are already a widespread issue on Amazon, the sheer number of titles … shows just how much the problem has grown. The combination of tools like ChatGPT and easy self-publishing now makes it simple for scammers to flood the marketplace with AI-generated books that copy the style and branding of well-known figures.”

The question of copyright and “fair use”

The above cases are fraud, through and through. But when someone instructs an AI to generate songs “in the style” of an artist or author, is that copyright infringement or “fair use”? And what is the status of a cover version of a song made with AI?

Open-source tools

The tools used to create AI video and music have thus far been subscription services such as Sora (for video) and Suno and Udio (for audio). However, there are now open-source tools giving access to an international audience, and the pittances earned from streaming services can be significant income in poor countries. Open-source AI video generators include models like Wan 2.2, Mochi 1, and OpenSora 1.2. These models offer a range of features from browser-based simplicity to local installation and advanced customization, allowing users to generate videos from text prompts or existing images. Popular tools for running them include web-based platforms like Genmo and offline interfaces like ComfyUI. (Gemini AI)

The Fight Back

Technical tools
New tools help artists fight back by directly disrupting AI training. This article describes a tool, Nightshade, that can protect an image by disrupting the AI’s ability to recognise what it is.
“You can think of Nightshade as adding a small poison pill inside an artwork in such a way that it’s literally trying to confuse the training model on what is actually in the image,” Zhao says. AI models like DALL-E or Stable Diffusion usually identify images through the words used to describe them in the metadata. For instance, a picture of a dog pairs with the word “dog.” Zhao says Nightshade confuses this pairing by creating a mismatch between image and text. “So it will, for example, take an image of a dog, alter it in subtle ways, so that it still looks like a dog to you and I — except to the AI, it now looks like a cat,” Zhao says.
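The idea in the quote can be caricatured in a few lines of code. This is a deliberately toy sketch, nothing like Nightshade’s actual method: a made-up nearest-centroid “classifier” over invented 2-D features, showing how a change too small to matter to a human can push an image across a model’s decision boundary:

```python
import math

# Made-up 2-D "feature" centroids standing in for what a model has
# learned for each label (purely illustrative numbers).
DOG = (0.0, 0.0)
CAT = (4.0, 0.0)

def classify(features):
    """Nearest-centroid stand-in for an image-to-label model."""
    return "dog" if math.dist(features, DOG) < math.dist(features, CAT) else "cat"

image = (1.9, 0.5)                      # a "dog" image near the boundary
poisoned = (image[0] + 0.2, image[1])   # tiny nudge toward the cat centroid

print(classify(image))      # dog
print(classify(poisoned))   # cat - a ~10% feature change flips the label
```

The real system perturbs pixels, not hand-picked feature coordinates, and targets the image–caption pairing used in training, but the principle is the same: small, human-invisible changes, large effect on the model.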

Users must report spam and misleading videos
Platforms need to make it easy for users to report spam and fraud. If you are signed in to a Google account, you can report spam YouTube videos: click on the three dots at the bottom right of the video, then select the little flag icon / Report.

How can you tell if a video is AI? The number one sign you’re watching an AI video
“For the most part, AI videos are very short, even shorter than the typical videos we see on TikTok or Instagram, which are about 30 to 60 seconds. The vast majority of videos I get asked to verify are six, eight or 10 seconds long. That’s because generating AI videos is expensive.” … The bad guys also downgrade their work on purpose to hide the AI-generated artefacts: “If I’m trying to fool people, what do I do? I generate my fake video, then I reduce the resolution so you can still see it, but you can’t make out all the little details. And then I add compression that further obfuscates any possible artefacts.”

However, AI-generated videos are improving all the time. The long-term solution “is for us all to start thinking differently about what we see online. Looking for the clues AI leaves behind isn’t ‘durable’ advice because those clues keep changing. Instead, we have to abandon the idea that videos or images mean anything whatsoever out of context.”

To report a fake book on Amazon, navigate to the book’s product page, log in to your account, and click the “Report an issue with this product” link. Then, select the option for suspicious activity and follow the prompts to provide details about the issue. You can also contact customer support or, if you are the intellectual property owner, use the Report Infringement form.

The corporate fight to protect copyright
This is hotting up in a big way. The music industry recently filed lawsuits alleging that leading AI music generators trained on its artists’ work without permission: US Record Labels Sue AI Music Generators Suno and Udio for Copyright Infringement

As a consequence, Udio reached a compromise agreement with music rights holder Universal: Universal says it has struck first deal for AI music creation. “The new platform, due in 2026, will use generative AI trained only on authorised and licensed music, allowing users to customise, stream and share tracks in what Universal described as a ‘licensed and protected environment’.” The restrictions met with a huge backlash from Udio subscribers, who were suddenly unable to download their content.

The key to controlling AI-generated content is transparency
…and if they cheat, punish them!
Spain to impose massive fines for not labelling AI-generated content
The Spanish bill, which needs to be approved by the lower house, classifies non-compliance with proper labelling of AI-generated content as a “serious offence” that can lead to fines of up to 35 million euros ($38.2 million) or 7% of their global annual turnover.

It’s not all slop – AI can enable creativity!

There was a big fuss about an AI-generated singer hitting the charts recently: Xania Monet is the first AI-powered artist to debut on a Billboard airplay chart, but she likely won’t be the last.

But there is a creative human being behind this avatar: Meet the woman behind chart-topping AI artist Xania Monet: “I look at her as a real person”. Telisha “Nikki” Jones … created the persona while teaching herself AI just four months ago. The 31-year-old Mississippi native admits she’s not a singer, but says the “lyrics are 100% me,” and that they come from poems she wrote based on real-life experiences.

“Whether it was stuff I went through, a close family member, or a close friend, I wrote about it.” Jones said losing her dad at just 8 years old inspired her chart-topping song, “How Was I Supposed to Know?”

Chris had Googled “AI generated music video” and was sent to this AI singer: YURI – SURREAL Completely AI-Generated Music Video | by AI.TALK. “This is our third music video produced using AI, and we combined various tools to achieve the desired effect. In just three months, AI technology has advanced by leaps and bounds, offering us many new possibilities in camera movement, consistency, and body dynamics.”

It’s very polished with fun lyrics appropriate for an AI avatar. The creators are looking for commercial and brand collaborations. They are open about how they create their videos. Good luck to them.

AI can be used to create modern satirical “cartoons” such as this one: The Very Model of a Modern Royal Degenerate (which the Brits will appreciate!). If you watch it on YouTube, click on “More” in the panel under the video and scroll down to below the lyrics. There you will see this declaration: “How this content was made – Altered or synthetic content – Sound or visuals were significantly edited or digitally generated.”

What fun! People who can’t sing, play an instrument, or take good pictures, but who have music or a story inside them, now have an avenue to express their creativity. AI-generated content is here to stay. It’s up to us to separate the slop from the art.

Christine Betterton-Jones – Knowledge Junkie