Blog

  • Switzerland will open secret files on Auschwitz’s ‘Angel of Death’ Mengele

    Even though Mengele, who was held responsible for the murder of 1 million Jews in Nazi concentration camps during World War II, fled Europe after the war, rumors that he visited Switzerland never went away.

    Go to Original Source

  • OpenAI launches Codex on mobile

    OpenAI has made the programming assistant Codex available on mobile devices. The company says some software development tasks can be significantly accelerated thanks to the service, which is accessible through ChatGPT’s iOS and Android applications. The newly added mobile access also reduces inference costs in the process.

    The ChatGPT mobile application acts as an intermediary between you and the environment you set up for your coding projects. OpenAI explains how the new feature works:

    Codex uses a secure transport layer in the background that keeps trusted machines accessible between devices without directly exposing them to the public internet. This transport layer also keeps active session state and context in sync wherever you log into ChatGPT.

    To elaborate a little further: GPT-5.5, the large language model that powers Codex, can work through programming tasks that last hours. In such workflows, developers sometimes need to guide the AI. For example, if Codex identifies two different ways to rewrite an old piece of code, it might ask the user to specify which method should be implemented. Likewise, Codex needs the developer’s input when permission is required to make a risky change to the project.

    Until now, there was no other way to guide Codex for users who started a long-running task and stepped away from their computers. For this reason, a task could be kept waiting for hours. The new mobile support eliminates the need for Codex to wait until the user can access the desktop again. In this way, unnecessary project delays are prevented.

    However, there are situations where Codex does not need user guidance to proceed with a task, but can still benefit from technical tips. For example, developers may want to stop the service if Codex begins to implement a software module in a suboptimal way.

    Now that Codex can be used on mobile devices, users can stop it even when they do not have access to their desktop computers. This prevents the unnecessary token usage associated with poorly executed programming workflows.

    Alongside mobile support, OpenAI is also releasing two other new Codex features, Hooks and Remote SSH.
    Hooks allows customers to customize the programming assistant with scripts. Scripts created with this tool can act not only on user prompts but also on the programming assistant’s responses. Using Hooks, developers will be able to tailor Codex outputs for each project. The Remote SSH feature, introduced alongside Hooks, allows Codex to connect to remote development environments over an encrypted network connection.
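    The article doesn't show what a Hooks script actually looks like, and OpenAI's exact interface isn't described here. As a purely hypothetical sketch of the idea (the JSON event shape, the "type" and "text" field names, and the decision format are all invented for illustration), a hook that inspects both prompts and assistant responses might look like this:

```python
# Hypothetical sketch of a Hooks-style script. The article says such scripts
# can act on both user prompts and assistant responses; the event schema
# below is an assumption, not OpenAI's documented interface.
import json
import sys

# Example per-project policy: strings the project never wants in a response.
BLOCKED_MARKERS = ["rm -rf /", "DROP TABLE"]

def handle_event(event: dict) -> dict:
    """Inspect one prompt or response event and decide whether to allow it."""
    text = event.get("text", "")
    if event.get("type") == "response" and any(m in text for m in BLOCKED_MARKERS):
        return {"action": "block", "reason": "response matched project deny-list"}
    return {"action": "allow"}

def main() -> None:
    # In the assumed wiring, the assistant pipes one JSON event into the
    # script and reads the decision back from stdout.
    event = json.load(sys.stdin)
    print(json.dumps(handle_event(event)))

# Quick demonstration of the decision logic:
print(handle_event({"type": "response", "text": "git push origin main"}))
# → {'action': 'allow'}
```

    Here the deny-list stands in for the per-project customization the article describes; a real hook would follow whatever event schema and invocation contract OpenAI documents.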

    In addition, as part of the update released to users today, software teams will now be able to create programmatic access tokens, that is, access credentials that enable third-party developer tools to use Codex. Additionally, OpenAI has added HIPAA compliance support to the standalone Codex client and service versions that developers can embed into their coding tools.

    Until now, Codex has been available in standalone versions for both Mac and Windows. The launch of Codex on mobile devices comes approximately eight months after Anthropic made Claude Code available on mobile devices. Now all ChatGPT users, including OpenAI’s Go and Free tiers, will be able to use Codex through the ChatGPT app on Android and iOS. To try the new integration, you need to update the ChatGPT app on your phone and the Codex app on your Mac. OpenAI states that support for connecting to the Windows application will also be available to users soon. 

    Go to Original Source

  • The shock of seeing your body used in deepfake porn 

    When Jennifer got a job doing research for a nonprofit in 2023, she ran her new professional headshot through a facial recognition program. She wanted to see if the tech would pull up the porn videos she’d made more than 10 years before, when she was in her early 20s. It did in fact return some of that content, and also something alarming that she’d never seen before: one of her old videos, but with someone else’s face on her body.

    “At first, I thought it was just a different person,” says Jennifer, who is being identified by a pseudonym to protect her privacy. 

    But then she recognized a distinctly garish background from a video she’d shot around 2013, and she realized: “Somebody used me in a deepfake.”

    Eerily, the facial recognition tech had identified her because the image still contained some of Jennifer’s features—her cheekbones, her brow, the shape of her chin. “It’s like I’m wearing somebody else’s face like a mask,” she says. 

    Conversations about sexualized deepfakes—which fall under the umbrella of nonconsensual intimate imagery, or NCII—most often center on the people whose faces are featured doing something they didn’t really do or on bodies that aren’t really theirs. These are often popular celebrities, though over the past few years more people (mostly women and sometimes youths) have been targeted, sparking alarm, fear, and even legislation. But these discussions and societal responses usually are not concerned with the bodies the faces are attached to in these images and videos.

    As Jennifer, now 37 and a psychotherapist working in New York City, says: “There’s never any discussion about Whose body is this?” 

    For years, the answer has generally been adult content creators. Deepfakes in fact earned their name back in November 2017, when someone with the Reddit username “deepfakes” uploaded videos showing faces of stars like Scarlett Johansson and Gal Gadot pasted onto porn actors’ bodies. The nonconsensual use of their bodies “happens all the time” in deepfakes, says Corey Silverstein, an attorney specializing in the adult industry. 

    But more recently, as generative AI has improved, and as “nudify” apps have begun to proliferate, the issue has grown far more complicated—and, arguably, more dangerous for creators’ futures. 

    Porn actors’ bodies aren’t necessarily being taken directly from sexual images and videos anymore, or at least not in an identifiable way. Instead, they are inevitably being used as training data to inform how new AI-generated bodies look, move, and perform. This threatens the livelihood and rights of porn actors as their work is used to train AI nudes that in turn could take away their business. And that’s not all: Advancements in AI have also made it possible for people to wholly re-create these performers’ likenesses without their consent, and the AI copycats may do things the performers wouldn’t do in real life. This could mean their digital doubles are participating in certain sex acts that they haven’t agreed to do, or even perpetrating scams against fans. 

    Adult content creators are already marginalized by a society that largely fails to protect their safety and rights, and these developments put them in an even more vulnerable position. After Jennifer found the deepfake featuring her body, she posted on social media about the psychological effects: “I’ve never seen anyone ask whether that might be traumatic for the person whose body was used without consent too. IT IS!” Several other creators I spoke with shared the mental toll that comes with knowing their bodies have been used nonconsensually, as well as the fear that they’ll suffer financially as other people pirate their work. Silverstein says he hears from adult actors every day who “are concerned that their content is being exploited via AI, and they’re trying to figure out how to protect it.” 

    One law professor and expert in violence against women calls these creators the “forgotten victims” of NCII deepfakes. And several of the people I spoke with worry that as the US develops a legal framework to combat nonconsensual sexual content online, adult actors are only at risk of further injury; instead of helping them, the crackdown on deepfakes may provide a loophole through which their content and careers could be stripped from the internet altogether.

    How deepfakes cause “embodied harms”

    During his preteen years in the 1970s, Spike Irons, now a porn actor and president of the adult content platform XChatFans, was “in love” with Farrah Fawcett. Though Fawcett did not pose nude, Irons managed to get his hands on what looked like pictures of her naked. “People were cutting out faces and pasting them on bodies,” Irons says. “Deepfakes, before AI, had been going around for quite a while. They just weren’t as prolific.”

    The early public internet was rife with websites capitalizing on the idea that you could use technology to “see” celebrities naked. “People would just use Microsoft Paint,” says Silverstein, the attorney. It was a simple way to mash up celebrities’ faces with porn. 

    People later used software like Adobe After Effects or FakeApp, which was designed to swap two individuals’ faces in images or videos. None of these programs required serious expertise to alter content, so there was a low barrier to entry. That, plus the wealth of porn performers’ videos online, helped make face-swap deepfakes that used real bodies prevalent by the 2010s. When, later in the decade, deepfakes of Gal Gadot and Emma Watson caused something of a broader panic, their faces were allegedly swapped onto the bodies of the porn actors Pepper XO and Mary Moody, respectively.

    But it wasn’t just high-profile actors like them whose bodies were being used. Jennifer was “a very minor performer,” she says. “If it happened to me, I feel like it could happen to anybody who’s shot porn.” Since he started his practice in 2006, Silverstein says, “numerous clients” have reached out to report “This is my body on so-and-so.” 

    Both people whose faces appear in NCII deepfakes and those whose bodies are used this way can feel serious distress. Experts call this type of damage “embodied harms,” says Anne Craanen, who researches gender-based violence at the UK’s Institute for Strategic Dialogue, an organization that analyzes extremist content, disinformation, and online threats. 

    The term reflects the fact that even though the content exists in the virtual realm, it can cause physiological effects, including body dysmorphia. The face-swapped entity occupies the uncanny valley, distorting self-perception. After discovering their faces in sexual deepfakes, many people feel silenced, experts told me; they may “self-censor,” as Craanen puts it, and step back from public-facing life. Allison Mahoney, an attorney who works with abuse survivors, says that people whose faces appear in NCII can experience depression, anxiety, and suicidal ideation: “I’ve had multiple clients tell me that they don’t sleep at night, that they’re losing their hair.” 

    Though the impact on people whose bodies are used hasn’t been discussed or studied as often, Jennifer says that “it’s just a really terrible feeling, knowing that you are part of somebody else’s abuse.” She sees it as akin to “a new form of sexual violence.”

    The uncertainty that comes with not being aware of what your body is doing online can be highly unsettling. Like Jennifer, many adult actors don’t really know what’s out there. But some devoted followers know the actors’ bodies well—often recognizing tattoos, scars, or birthmarks—and “very quickly they bring [deepfakes] to the adult performer’s attention,” says Silverstein. Or performers will stumble upon the content by chance; some 20 years ago, for instance, the first such client to tell Silverstein her body was being used in a deepfake happened to be searching Nicole Kidman online when she found that one of the results showed Kidman’s face on her porn. “She was devastated, obviously, because they took her body,” he says, “and they were monetizing it.” 

    Otherwise, this imagery may be found by an organization like Takedown Piracy, one of several copyright enforcement companies serving adult content creators. US copyright violations can be challenging to prove if someone’s body lacks distinguishing features, says Reba Rocket, Takedown Piracy’s chief operating and marketing officer. But Rocket says her team has added digital fingerprinting technology to clients’ material to help flag and remove problematic videos, often finding them before clients realize they’re online. 

    By capturing “tens of thousands of tiny little visual data points” from videos, digital fingerprinting creates unique corresponding files that can be used to identify them, Rocket says, kind of like an invisible watermark. The prints remain even if pirates alter the videos or replace performers’ faces. Takedown Piracy has digitally fingerprinted more than half a billion videos and has gotten 130 million copyrighted videos taken down from Google alone (though Rocket hasn’t tracked how many of those specifically include someone else’s face on a performer’s body).
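    Takedown Piracy hasn't disclosed how its fingerprinting works, but the description (many tiny visual data points condensed into an identifier that survives face swaps and re-encoding) matches the general family of perceptual hashes. A minimal illustrative sketch of one such scheme, a difference hash over a block-averaged frame, with synthetic frame data and no claim that this is the company's actual method:

```python
# Generic perceptual-hash ("dHash") sketch: reduce a frame to a coarse
# grayscale grid, then record whether each cell is brighter than its
# right-hand neighbour. Localized edits (a retouched birthmark, a swapped
# face in one region) flip few bits, so near-duplicates stay within a
# small Hamming distance. Illustration only, not Takedown Piracy's method.

def dhash(frame, size=8):
    """frame: 2D list of grayscale values; returns an integer fingerprint."""
    h, w = len(frame), len(frame[0])
    # Sum pixels into a (size x size+1) grid of equal-sized blocks.
    grid = [[sum(frame[y][x]
                 for y in range(r * h // size, (r + 1) * h // size)
                 for x in range(c * w // (size + 1), (c + 1) * w // (size + 1)))
             for c in range(size + 1)]
            for r in range(size)]
    bits = 0
    for row in grid:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (left > right)  # one bit per adjacent pair
    return bits

def hamming(a, b):
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

# Synthetic 64x72 "frame" and a lightly edited copy of it.
frame = [[(x * 31 + y * 17) % 251 for x in range(72)] for y in range(64)]
edited = [row[:] for row in frame]
edited[0][0] += 5  # tiny local change confined to one block

assert hamming(dhash(frame), dhash(edited)) <= 2  # still a near-match
```

    The Hamming distance between fingerprints then acts as the match score: lightly edited or face-swapped copies land within a few bits of the original, while unrelated footage diverges.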

    Besides copyright, a range of legal tools can be used to try to combat NCII, says Eric Goldman, a law professor at Santa Clara University. For example, victims can claim invasion of privacy. But using these tools isn’t particularly straightforward, and they may not even apply when it comes to someone’s body. If there aren’t, for instance, unique markers indicating that a body in a deepfake belongs to the person who says it does, US law “doesn’t really treat [this content] as invasion of privacy,” Goldman says, “because we don’t know who to attribute it to.”

    In a 2018 study that reviewed “judicial resolution” of cases involving NCII, Goldman found that one successful way plaintiffs were able to win cases was to assert “intentional infliction of emotional distress.” But again, that hinges on the ability to clearly identify the person in the content. Relevant statutes, he adds, might also require “intent to harm the individual,” which may be hard to show for people whose bodies alone are featured.

    “AI girls will do whatever you want”

    In the last few years, Silverstein says, it’s become less and less common to see the bodies of real adult content creators in deepfakes, at least in a way that makes them clearly identifiable. 

    Sometimes the bodies have been manipulated using AI or simpler editing tools. This can be as basic as erasing a birthmark or changing the size of a body part, minor edits that make it impossible to identify someone’s image beyond a reasonable doubt, so even porn actors who can tell that an altered image used their body as a base won’t get very far in the legal realm. “A lot of people are like, That looks like my body,” says Silverstein, but when he asks them how, they’ll reply, It just does.

    At the same time, other users are now creating NCII with wholly AI-generated bodies. In “nudify” apps, anyone with a minimal grasp of technology can upload a photo of someone’s clothed body and have it replaced with a fake naked one. “So [much] of this content being created is just someone’s face on an AI body,” Silverstein says.

    Such apps have drawn a ton of attention recently, in incidents from Grok’s “nudifying” minors to Meta’s running ads for—and then suing—the nudify app Crushmate. But there’s been relatively little attention paid to the content being used to train them. They almost certainly draw on the more than 10,000 terabytes of online porn, and performers have virtually zero recourse. 

    One reason is that creators aren’t able to demonstrate with any certainty that their content is being used to train AI models like those used by nudify apps. “These things are all a black box,” says Hany Farid, a professor at the University of California, Berkeley, who specializes in digital forensics. But “given the ubiquity” of adult content, he adds, it’s a “reasonable assumption” that online porn is being used in AI training. 

    “It’s just not at all difficult to come up with pornographic data sets on the internet,” says Stephen Casper, a computer science PhD student at MIT who researches deepfakes. What’s more, he says, plenty of shadowy online communities provide “user guides” on how to use this data to train AI, and in particular programs that generate nudes. 

    It’s not certain whether this activity falls within the US legal definition of “fair use”—an issue that’s currently being litigated in several lawsuits from other types of content creators—but Casper argues that even if it does, it’s ethically murky for porn created by consenting adults 10 years ago to wind up in those training data sets. When people “have their stuff used in a way that doesn’t respect or reflect reasonable expectations that they had at that time about what they were creating and how it would be used,” he says, there’s “a legitimate sense in which it’s kind of … nonconsensual.” 

    Adult performers who started working years ago couldn’t possibly have consented to AI anything; Jennifer calls AI-related risks “retroactively placed.” Contracts that porn actors signed before AI, adds Silverstein, might provide that “the publisher could do anything with the content using technology that now exists or hereafter will be discovered.” That felt more innocuous when producers were talking about the shift from VHS to DVD, because that didn’t change the content itself, just the way it was conveyed. It’s a far different prospect for someone to use your content to train a program to create new content … content that could replace your work altogether.

    Of course, this all affects creators’ bottom line—not unlike the way Google’s AI overviews affect revenue for online publishers who’ve stopped getting clicks when people are content with just reading AI-generated summaries. Performers’ “concern is … it’s another way to pirate [their] content,” says Rocket. 

    After all, independent creators aren’t just “having sex on camera,” as the adult content creator Allie Eve Knox puts it. They’re paying for filming equipment and location rentals, and then spending hours editing and marketing. For someone to then rip off and distort that content “for their own entertainment or financial gain,” she says, “fucking sucks.” 

    Tanya Tate, a longtime adult content creator, tells me about another highly unsettling AI-created situation: She was recently chatting with a fan on Mynx, a sexting app, when he asked her if she knew him. She told him no, and “his eyes just started watering,” Tate says. He was upset because he thought she did know him. Turns out he’d sent $20,000 to a scammer who’d used an AI-generated deepfake of Tate to seduce him. 

    Several men, Tate subsequently learned, had been scammed by an AI version of her, and some of them began blaming her for their losses and posting false statements about her online. When she reported one particularly aggressive harasser to the police, they told her he was exercising his “freedom of speech,” she says. Rocket, too, is familiar with situations where AI is used to take advantage of fans. “The actual content creator will get nasty emails from these people who’ve been scammed,” she says.

    Other porn actors say they fear that their likenesses have been used without consent to do other things they wouldn’t do. One, Octavia Red, tells me she doesn’t do anal scenes, “but I’m sure there’s tons of deepfake anal videos of me that I didn’t consent to.” That could cost her, she fears, if viewers choose to watch those videos instead of subscribing to her websites. And it could cause fans to develop false expectations about what kind of porn she’ll create.

    “I saw one AI creator saying, ‘Well, AI girls will do whatever you want. They don’t say no,’” says Rocket. “That horrifies me … especially if they’re training those AI models on real people. I don’t think they understand the damage to mental health or reputation that that can create. And once it’s on the internet, it’s there forever.” 

    Efforts to “scrub adult content from the internet”

    As AI technology improves, it’s increasingly difficult for people to discern any type of real video from the best AI-generated ones on their own. In one 2025 study, UC Berkeley’s Farid found that participants correctly identified AI-generated voices about 60% of the time (not much better than random chance), while advances like false heartbeats make AI-generated humans tougher than ever to spot.

    Nevertheless, most lawyers and legal experts I spoke with said copyright laws are still adult performers’ best bet in the US legal system, at least for getting their face-swapped content taken down. For his clients, Silverstein says, he tries to figure out the content’s origins and then issue takedown requests under the Digital Millennium Copyright Act, a 1998 law that adapted copyright law for the internet era. “Even recently, I had a performer who has an insanely well-known tattoo,” he says, and with a DMCA subpoena he managed to identify the poster of the content, who voluntarily removed it. 

    But this way of working is becoming increasingly rare.

    These days it’s nearly “impossible,” Silverstein says, to determine who produced a deepfake, because many platforms that host pirated content operate facelessly. They’re also often based in places that “don’t really care about US law when it comes to copyrights,” says Rocket—places like Russia, the Seychelles, and the Netherlands. 

    While governments in the EU, the UK, and Australia have said they will ban or restrict access to nudify apps, it’s not an easily executed proposition. As Craanen notes, when app stores remove these services, they often simply reappear under different names, providing the same services. And social platforms where people share NCII deepfakes, argues Rocket, are slacking in getting them removed. “It’s endless, and it’s ridiculous, because places like Twitter and Facebook have the same technology we do,” Rocket says. “They can identify something as an infringement instantly, but they choose not to.”
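    Perceptual-hash matching is one widely used family of techniques behind the kind of instant infringement detection Rocket describes: a known image or video frame is reduced to a short fingerprint, and re-uploads are flagged when their fingerprints differ by only a few bits. The sketch below is a minimal, illustrative difference hash (dHash) in plain Python; the toy 8×9 grayscale grid and the single-pixel alteration are assumptions for demonstration only, not any platform’s actual pipeline.

```python
def dhash(pixels):
    """Hash a grid of 8 rows x 9 grayscale values: each bit records
    whether a pixel is brighter than its right-hand neighbor."""
    bits = 0
    for row in pixels:
        for left, right in zip(row, row[1:]):
            bits = (bits << 1) | (1 if left > right else 0)
    return bits

def hamming(a, b):
    """Count differing bits; a small distance means a near-duplicate."""
    return bin(a ^ b).count("1")

# A toy "image" downscaled to 8 rows of 9 brightness values each.
original = [[(r * 9 + c) % 251 for c in range(9)] for r in range(8)]

# A lightly altered re-upload: one pixel brightened, as re-encoding might do.
reupload = [row[:] for row in original]
reupload[3][4] += 10

h1, h2 = dhash(original), dhash(reupload)
print(hamming(h1, h2))  # prints 1 -> near-duplicate, would be flagged
```

    Production systems work at far larger scale with more robust fingerprints over databases of known content, but the core idea is the same: compare compact hashes by Hamming distance rather than comparing the media itself.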

    (An Apple spokesperson, Adam Dema, said in an email that “’nudification’ apps are against our guidelines” in the app store, and it has “proactively rejected many of these apps and removed many others,” flagging a reporting portal for users. A Google spokesperson emailed, “Google Play does not allow apps that contain sexual content,” noting that the company takes “proactive steps to detect and remove apps with harmful content” and has suspended hundreds of apps for violating its policy. A Meta spokesperson shared a blog post about actions that company has taken against nudify apps but did not respond to follow-up questions about copyrighted material. X did not respond to a request for comment.)

    As porn performers are forced to navigate AI-related threats, the only current federal law to address deepfakes may not help them much—and could even make matters worse. The Take It Down Act, which became US law last year, criminalizes publishing NCII and requires websites to remove it within 48 hours. But, as Farid notes, people could weaponize the measure by reporting porn that was made legally and with consent and claiming that it’s NCII. This could result in the content’s removal, which would hurt the performers who made it. Santa Clara’s Goldman points to Project 2025, the Heritage Foundation’s policy blueprint for the second Trump administration, which aims to wipe porn from the web. The Take It Down Act, he argues, “allows for the coordinated effort to scrub adult content from the internet.” 

    US lawmakers have a history of hurting sex workers in their attempts to regulate explicit content online. State-level age verification laws are an example; visitors can pretty easily get around these measures, but they can still result in reduced revenue for adult performers (because of lower traffic to those sites and the high price of age-checking services they have to purchase). 

    “They’re always doing something to fuck with the porn industry, but not in a way that actually helps sex workers,” says Jennifer. “If they do something, they’re taking away your income again—as opposed to something like giving you more rights to your image, [which] would be tremendously helpful.” 

    But as generative AI plays an increasingly large role in NCII deepfakes, the types of images to which adult performers have rights move deeper into a gray area. Can actors lay claim to AI images likely trained on their bodies? How about AI-generated videos that impersonate them, like the one that tricked Tanya Tate’s fan?

    The biggest challenge will be creating “legitimate, effective laws that will absolutely protect content creators from abusing their likeness to train and create AI,” Rocket says. “Absent that, we’re just going to have to keep pulling content down from the internet that’s fake.”

    In the meantime, a few porn actors tell me, they’re trying to take advantage of copyright laws that weren’t really made for them; they’ve signed with platforms that host their AI-generated duplicates, with whom fans pay to chat, in part so they’ll have contracts that protect ownership of their AI likenesses. When I spoke with the actor Kiki Daire in September 2025 for a story on adult creators’ “AI twins,” she said she “own[ed] her AI” because she’d signed a contract with Spicey AI, a site that hosted AI duplicates of adult performers. If another company or person created her AI-generated likeness, she added, “I have a leg to stand on, as far as being able to shut that down.”  

    Even this, though, is not a sure thing; Spicey AI, for instance, shut down several months after I spoke with Daire, so it’s unlikely that her contract would hold. And when I spoke in October with Rachael Cavalli, another adult actor who had signed with an AI duplicate site in hopes it’d help protect her AI image, she admitted, “I don’t have time to sit around and look for companies that have used my image or turned something into a video that I didn’t actually do … it’s a lot of work.” In other words, having rights to your AI image on paper doesn’t make it easier to track down all the potentially infinite breaches of those rights online.

    If she’d known what she knows about technology today, Jennifer says, she doesn’t think she would have done porn. The risks have increased too much, and too unpredictably. She now does in-person sex work; it’s “not necessarily safer,” she says, “but it’s a different risk profile that I feel more equipped to manage.” 

    Plus, she figures AI is unlikely to replace in-person sex workers the way it could porn actors: “I don’t think there’s going to be stripper robots.” 

    Jessica Klein is a Philadelphia-based freelance journalist covering intimate partner violence, cryptocurrency, and other topics.

    Go to Original Source



  • Multiverse received $70 million investment at a $2.1 billion valuation

    We previously reported (https://webrazzi.com/2022/06/09/220-milyon-dolar-yatirim-alan-multiverse-un-degerleme-1-7-milyar-dolara-ulasti/) that Multiverse’s valuation reached $1.7 billion in June 2022, after a $220 million investment. The company is now in the news again with fresh funding. 

    According to the Financial Times, Multiverse has received a $70 million investment. The London-based company’s round included Schroders Capital, Index Ventures, General Catalyst, Lightspeed Venture Partners, and D1 Capital. Founded in 2016, Multiverse has seen its valuation rise to $2.1 billion.

    The company announced that it will use the new investment to expand artificial intelligence-focused workforce training and accelerate growth in Europe.

    Multiverse was founded in 2016 by Euan Blair, son of former British Prime Minister Tony Blair, and Sophie Adelman. The company describes itself as an education technology initiative that builds an apprenticeship-based education system within companies, offering young people and employees an alternative to university.

    Unlike traditional university education, it matches individuals directly with companies and engages them in a work-based learning model, aiming to help them gain skills through real business projects supported by both coaching and mentoring.

    Multiverse focuses on reshaping the workforce, especially in artificial intelligence, data, and digital skills. The company has trained roughly 30,000 people to date through collaborations with large institutions such as Microsoft, John Lewis, and M&S. Multiverse generated revenue of £79.6 million in the 2025 financial year while reporting a loss of £62.4 million.

    Go to Original Source