Warning: This article discusses non-consensual sexually explicit content and Child Sexual Abuse Material (CSAM).
Clothoff, one of the most notorious apps for non-consensual deepfake pornographic material, claims that it is donating funds to "help those affected by AI", highlighting its collaboration with an organisation named ASU Label that says it aims to "protect your rights in the age of AI".

But it is unclear who exactly is behind ASU Label and why – given its stated aims – it would choose to work with an organisation like Clothoff, which runs a "free undress AI and clothes remover" app.

There is no information on its website detailing the individuals or other organisations involved with ASU Label.
Bellingcat first noticed a reference to this organisation in December 2024, when the following lines were added to some of Clothoff's sites: "We are working with Asulable and donating funds to help those affected by AI. If you have experienced problems related to AI, please visit asulable.com or contact them at team@asulabel.com."
This paragraph appeared on several of Clothoff's network of websites, including their main site, Clothoff.io, which went offline in December 2024. They still operate several websites with similar domains in their network.
The website asulable.com could not be found. However, asulabel.com – the domain of the contact email address mentioned – leads to a site for an organisation that calls itself AisafeUse Label (ASU) or ASU Label. This "ASU Label" website also features a logo in the top left-hand corner that matches a logo on Clothoff's website, further indicating that this is the organisation Clothoff was referring to.

According to DomainTools, a tool that displays the domain registration information of websites, ASU Label's domain was registered on Oct. 15, 2024. The Internet Archive's Wayback Machine, a popular web archive, captured ASU Label's site for the first time on Nov. 13, and Clothoff's first mention of ASU Label was archived the following month, in December.
ASU Label said its mission was "to support individuals who have suffered due to the unsafe use of neural networks".

Nowhere on its website does ASU Label state how exactly it helps those affected by any form of AI, or link to resources for victims. When asked to provide specifics on what it does, ASU Label said it provides "direct help to victims" and "support individuals" but did not specify what this help or support looked like.
ASU Label told Bellingcat that it was registered as a non-profit organisation, but did not say where it was registered. Bellingcat's searches on several international databases of non-profits and non-governmental organisations for "ASU", "AisafeUse Label" and "ASU Label" did not return any relevant matches, although such databases are not equally comprehensive or up to date in every country.

When asked for any evidence that it is a registered charity or non-profit, to clarify what country it operates from, or for any proof at all that it is a legitimate organisation, ASU Label said its team had made a "collective decision not to disclose our legal documents" as "in recent times, we have encountered numerous adversaries whose sole intent is to hinder us from fulfilling our mission".

It did not specify who these adversaries were or how these encounters came about.
ASU Label did not answer questions about the harms of deepfake pornography, which Clothoff's platform creates. Nor did it address questions about who is behind ASU Label, beyond saying it was founded by "a group of professionals from the fields of AI, law, and public advocacy", or reveal any other organisations it works with to achieve its aims.

The organisation said it was not owned or controlled by anyone, including Clothoff. "We are a team of like-minded individuals focused on charity," it said.
Clothoff's website lists a contact email for ASU Label, which is how Bellingcat reached them, but this email address does not appear on ASU Label's own site and does not come up anywhere else in a Google search. ASU Label's website does not include any way to contact them except a pop-up contact form for those wanting to become a member of the organisation or for those affected by AI.

Since Bellingcat could not find any link or reference to ASU Label beyond its own website and the mention on Clothoff's sites, we asked both organisations about their affiliation with each other.
Clothoff said that it collaborates with ASU Label "occasionally". "Usually, they approach us with requests for direct assistance to individuals or proposals for joint research initiatives. Whenever possible, we support their efforts, assist those affected, or provide analytical insights," it said.

Clothoff did not respond to Bellingcat's request for any evidence of donations to ASU Label.
ASU Label also confirmed that it was collaborating with Clothoff: "In addition to donations, this organisation regularly participates in our research activities, providing analytical insights on improving legal frameworks in different regions to uphold human rights. For instance, we have recently been conducting a joint study on the spread of deepfakes in Japan."

Bellingcat could not find any record of ASU Label in Japan's national non-profit database, but it is unclear if it is registered as a non-profit elsewhere.
'Attempts to Ban This Progress Are Futile'
In response to Bellingcat's questions, Clothoff said it was "an adult-oriented platform designed for safe, consensual exploration of intimate desires" and "strictly prohibits illegal use".

This description contradicts the reality of the platform. Clothoff, like other "nudifying" apps, allows users to "undress" images of anyone using AI without their consent. Women are more likely to be victims of deepfake porn, and victims have testified to harms including extreme psychological distress, in-person stalking and harassment, and reputational damage. There have also been cases in the US and globally of minors having non-consensual images of themselves created and shared by classmates using Clothoff.
The Clothoff press team told Bellingcat that it believed "there are more serious problems in the world than pictures on the internet".

"In time, society may even approach them with humour – playful April Fools' jokes, for instance – turning potential stress into lighthearted interaction."
However, in places like the UK, the US and a growing list of countries, the content that Clothoff produces could be illegal. For example, in the US the Take It Down Act, which criminalises non-consensual intimate images, recently passed in the Senate. A new law to be introduced in the UK – the first of its kind in the world – will also make it "illegal to possess, create or distribute AI tools designed to create child sexual abuse material (CSAM), with a punishment of up to five years in prison".

Clothoff told Bellingcat that "AI evolution is inevitable" and "attempts to ban this progress are futile".
The app is secretive about its ownership, and none of its several sites contain any indication of the people who own or run them. During the investigation for this story, we reached out to a software development company whose name and address were listed in the footer of Clothoff's websites without any other explanation, which typically implies that the company owns, manages or is otherwise closely affiliated with those websites.

Bellingcat had a video call with the CEO of that company, who appeared genuinely surprised to hear that they were listed on the websites and said they had no relationship or any prior communication with Clothoff. The CEO said he subsequently contacted Clothoff to have their name removed from its sites and shared a screenshot of the response: "Hello! We have removed your company's address from the site. The confirmation is attached."

What followed was a screenshot of the page with yet another company name and address – this time an AI-focused investment company – listed in the same place. That is at least the fourth company Bellingcat has seen in the footer of Clothoff's websites since 2023. We have chosen not to name these companies as there is no evidence to indicate they own or operate Clothoff.
When asked about the string of companies listed on its websites, Clothoff stated that "our holding company oversees several businesses", but did not confirm or deny any official relationship with the business entities listed on its sites, despite being pressed several times for a response.

Clothoff said its holding company was owned by "a group of engineer-enthusiasts" but that it could not disclose their identities "due to non-disclosure agreements".

A previous Bellingcat investigation linked several companies to Clothoff, while a Guardian investigation revealed other names tied to the deepfake porn app, including a brother and sister in Belarus.
AI-Generated Help for AI Victims?
ASU Label's website lists several types of harm from AI, including the spread of misinformation, job displacement, bias in decision making, and unsafe advice. In an article describing what deepfakes are, it also mentions as an example "celebrity deepfakes, where people's faces were superimposed onto adult content, leading to reputational damage".

But its claim that it is actively collaborating on research with Clothoff, an organisation known for non-consensual deepfake pornography, appears to directly contradict its stated goals of helping victims of AI harm and safeguarding the rights of individuals.
Interestingly, several AI detection tools indicate that ASU Label's text may itself be AI-generated. We ran the front page text through three such tools – GPTZero, Quillbot, and ZeroGPT – which all returned a 90 to 100 per cent likelihood that the text was AI-generated. Subsequent pages we checked, such as ASU Label's articles on AI harms and on spotting deepfakes, ranged in AI-text likelihood between 75 and 100 per cent across these three tools.

When asked about this, ASU Label did not deny using AI and said it saw "no issue in utilising such tools for structuring our website".
But in an article in December, the organisation warned about "AI-based deception".

"AI-generated visuals, deepfakes, and even AI-written articles can spread false information or create misleading narratives," it said in the article. "These manipulations are often indistinguishable from real content."
Main image: Merel Zoet/Bellingcat

If you have been affected by image-based sexual abuse, you can find an international list of resources for survivors and victims here. Established organisations you can reach out to for help include the Cyber Civil Rights Initiative in the US and the Revenge Porn Helpline in the UK.
Bellingcat is a non-profit and the ability to carry out our work is dependent on the kind support of individual donors. If you would like to support our work, you can do so here. You can also subscribe to our Patreon channel here. Subscribe to our Newsletter and follow us on Bluesky here and Mastodon here.