9 min. read · AI

5 lessons from the pornographic Taylor Swift AI fakes

What happened to the singer is all too familiar to many women: sexualized digital violence. The Swift fakes show that any technology can be abused, and will be.


What happened

How Swift was defamed

What we can learn from this

1. The warnings about AI fakes are justified

2. The internet is a misogynistic place

A group of 34 synthetic NCII providers identified by Graphika received over 24 million unique visitors to their websites in September, according to data provided by web traffic analysis firm Similarweb. Additionally, the volume of referral link spam for these services has increased by more than 2,000% on platforms including Reddit and X since the beginning of 2023, and a set of 52 Telegram groups used to access NCII services contain at least 1 million users as of September this year.
But also I’m most annoyed that this is 100% a thing that was “predicted” (obvious) years in advance. The number of panels and articles where those of us who followed the development of the technology pointed out that yeah, disinformation tactics would change, but harassment and revenge porn and NCII were going to be the most significant form of abuse… Still, legislators did essentially nothing. Some states are playing catch-up. But it’s yet another example of tech moving quickly-ish while legislators simply lag.

3. Digital violence affects more than just celebrities

4. Every technology gets abused

Yes, we have to act. I think we all benefit when the online world is a safe world. And so I don’t think anyone would want an online world that is completely not safe for both content creators and content consumers. So therefore, I think it behooves us to move fast on this.
I go back to what I think's our responsibility, which is all of the guardrails that we need to place around the technology so that there's more safe content that's being produced. And there's a lot to be done and a lot being done there.
One technical solution could be watermarks. Watermarks hide an invisible signal in images that helps computers identify if they are AI generated. For example, Google has developed a system called SynthID, which uses neural networks to modify pixels in images and adds a watermark that is invisible to the human eye. That mark is designed to be detected even if the image is edited or screenshotted. In theory, these tools could help companies improve their content moderation and make them faster to spot fake content, including nonconsensual deepfakes.
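SynthID itself is proprietary and uses neural networks, but the general idea of an invisible, machine-detectable mark can be sketched with a much simpler (and far less robust) classic technique: hiding a bit pattern in the least significant bits of pixel values. The function names and the toy 8×8 image below are illustrative assumptions, not part of any real watermarking API:

```python
import numpy as np

def embed_watermark(pixels: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Hide a bit pattern in the least significant bit of the first pixels."""
    flat = pixels.flatten()  # flatten() returns a copy, original stays intact
    # Clear each pixel's lowest bit, then write one signature bit into it.
    flat[: bits.size] = (flat[: bits.size] & 0xFE) | bits
    return flat.reshape(pixels.shape)

def detect_watermark(pixels: np.ndarray, bits: np.ndarray) -> bool:
    """Check whether the expected bit pattern sits in the lowest bits."""
    return bool(np.array_equal(pixels.flatten()[: bits.size] & 1, bits))

rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)   # toy grayscale image
signature = rng.integers(0, 2, size=16, dtype=np.uint8)     # hidden 16-bit mark

marked = embed_watermark(image, signature)
print(detect_watermark(marked, signature))                          # True
print(np.abs(marked.astype(int) - image.astype(int)).max())         # at most 1
```

Each pixel changes by at most one brightness level, which is invisible to the eye. Unlike SynthID's learned mark, though, such an LSB signal does not survive screenshots, resizing, or re-encoding, which is exactly why production systems rely on trained networks instead.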
(…) a slew of new defensive tools allow people to protect their images from AI-powered exploitation by making them look warped or distorted in AI systems. One such tool, called PhotoGuard, was developed by researchers at MIT. It works like a protective shield by altering the pixels in photos in ways that are invisible to the human eye. When someone uses an AI app like the image generator Stable Diffusion to manipulate an image that has been treated with PhotoGuard, the result will look unrealistic.
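PhotoGuard's real implementation runs a gradient attack against a diffusion model's image encoder. As a toy sketch of the same idea, and only under that assumption, the code below "immunizes" an image against a stand-in linear encoder: within an imperceptible per-pixel budget it searches for a perturbation that drags the image's feature representation toward that of a flat gray target, so downstream editing works on distorted features. The names `encode` and `immunize` and the linear encoder are illustrative, not PhotoGuard's API:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in "encoder": a fixed random linear map from 64 pixels to 4 features.
# PhotoGuard attacks a real diffusion model's encoder; the principle is the same.
W = rng.normal(size=(4, 64))

def encode(x: np.ndarray) -> np.ndarray:
    return W @ x

def immunize(x, target, budget=2 / 255, steps=50, lr=0.5 / 255):
    """Signed gradient descent toward the target's features, with every
    pixel change clipped to `budget` so the edit stays invisible."""
    t = encode(target)
    delta = np.zeros_like(x)
    for _ in range(steps):
        # gradient of ||encode(x + delta) - t||^2 with respect to delta
        grad = 2 * W.T @ (encode(x + delta) - t)
        delta -= lr * np.sign(grad)
        delta = np.clip(delta, -budget, budget)  # imperceptibility constraint
    return np.clip(x + delta, 0.0, 1.0)

image = rng.random(64)       # toy image, pixels in [0, 1]
gray = np.full(64, 0.5)      # featureless target image
shielded = immunize(image, gray)

# The pixels barely move ...
print(float(np.abs(shielded - image).max()))
# ... but the features drift toward the gray target, which is what
# disrupts AI editing of the shielded image.
print(float(np.linalg.norm(encode(image) - encode(gray))))
print(float(np.linalg.norm(encode(shielded) - encode(gray))))
```

The per-pixel budget of 2/255 keeps the shield invisible, while the feature distance to the gray target shrinks; an AI editor then "sees" something closer to a blank image and produces unrealistic output.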
Technical fixes go only so far. The only thing that will lead to lasting change is stricter regulation.

5. Current regulation is patchy at best

Of course Congress should take legislative action. That’s how you deal with some of these issues. We know that lax enforcement disproportionately impacts women and also girls, sadly, who are the overwhelming targets of online harassment and also abuse.
The good news is that the fact that this happened to you means politicians in the US are listening. You have a rare opportunity, and momentum, to push through real, actionable change.

I know you fight for what is right and aren’t afraid to speak up when you see injustice. There will be intense lobbying against any rules that would affect tech companies. But you have a platform and the power to convince lawmakers across the board that rules to combat these sorts of deepfakes are a necessity. Tech companies and politicians need to know that the days of dithering are over. The people creating these deepfakes need to be held accountable.

Social Media & Politics


Follow the money


Next (AR, VR, AI, Metaverse)



Tips for the weekend


New features on the platforms

TikTok

YouTube

X