The recent circulation of an AI-generated deepfake video featuring Bobbi Althoff stirred considerable attention online, and Althoff swiftly denounced the video as fake.
TMZ obtained insight into the creation process from Paul Dawes, CEO of More.AI. Dawes explained that the technology analyzes extensive 2D footage of a person to identify patterns, enabling it to generate new, seemingly authentic videos. Given the large volume of footage Althoff has published, the AI had ample material from which to fabricate the deceptive video.
This technology poses the same threat to celebrities, political figures, and corporate brands, all of whom are susceptible to AI-generated misrepresentation. While current AI still struggles to craft convincing 3D deepfakes, the pace of advancement suggests it is only a matter of time before that hurdle falls.
Read: Bobbi Althoff’s Leaked Video Controversy: The Viral Sensation Explained
The availability of open-source AI software compounds the problem. Even if a developer ceases operations, the code remains publicly accessible and can still be misused.
Dawes underscored the critical role social media platforms play in controlling the spread of such content. In practice, however, these platforms often struggle to police the flood of inappropriate material because their moderation capacity is limited.
The unauthorized use of AI to create content that violates individuals' privacy, as in Althoff's case, underscores a pressing digital-ethics concern in today's internet landscape.