The rise of artificial intelligence has opened up new possibilities for human expression and content creation. However, it has also brought new challenges, particularly fraud stemming from deepfakes.
According to data from Sumsub, the proportion of fraud involving deepfakes more than doubled from 2022 to the first quarter of 2023, with the share in the United States rising from 0.2% to 2.6%. Celebrities including Tom Hanks, Jennifer Aniston, and Mr. Beast have all had their digital personas used in deepfakes to sell products.
In response, California-based Hollo.AI launched a platform on Nov. 16 that allows users to claim their AI identity and offers a personalized chatbot to help them monetize and verify their AI work through blockchain-based verification.
According to the company, “The registry serves as a public registry ledger that offers AI identities, once verified by Hollo.AI, to be logged on the blockchain for all to see.”
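Hollo.AI has not published technical details of how that logging works. The snippet below is only a rough sketch of the general idea of a hash-chained, publicly checkable registry; the `IdentityRegistry` class, its field names and the `register`/`verify_chain` methods are illustrative assumptions, not the company's actual implementation.

```python
import hashlib
import json
import time
from dataclasses import dataclass, field


@dataclass
class IdentityRegistry:
    """Hypothetical append-only ledger of verified AI identities.

    Each record is hashed together with the previous record's hash,
    mimicking how a public blockchain keeps entries tamper-evident
    and readable by anyone.
    """
    entries: list = field(default_factory=list)

    def register(self, persona_name: str, owner_id: str) -> dict:
        # Link the new record to the previous one via its hash.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        record = {
            "persona": persona_name,
            "owner": owner_id,
            "verified_at": int(time.time()),
            "prev_hash": prev_hash,
        }
        record["hash"] = hashlib.sha256(
            json.dumps(record, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(record)
        return record

    def verify_chain(self) -> bool:
        """Recompute every hash to confirm no record was altered."""
        prev = "0" * 64
        for entry in self.entries:
            body = {k: v for k, v in entry.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if body["prev_hash"] != prev or recomputed != entry["hash"]:
                return False
            prev = entry["hash"]
        return True


registry = IdentityRegistry()
registry.register("Jane Doe AI twin", "user-123")
print(registry.verify_chain())  # True
```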
Wong told Cointelegraph that the services offered by Hollo.AI work similarly to credit and identity theft protection services but are tailored to safeguard AI identities.
“They monitor and alert users of unauthorized uses of their digital personas, helping to prevent the spread and impact of deep fakes.”
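How that monitoring works under the hood is not described in the announcement. As a loose illustration of the credit-monitoring analogy, here is a minimal sketch in which content found online is checked against a user's watchlist of approved sources; the `scan_for_unauthorized_use` function, the `watchlist` structure and the simple name match are hypothetical, not Hollo.AI's method.

```python
from dataclasses import dataclass


@dataclass
class PersonaAlert:
    persona: str
    source_url: str
    reason: str


def scan_for_unauthorized_use(found_content: list[dict],
                              watchlist: dict[str, set[str]]) -> list[PersonaAlert]:
    """Flag content that mentions a registered persona but does not come
    from a source its owner has approved.

    `found_content` items are assumed to look like
    {"text": ..., "source_url": ...}; `watchlist` maps a persona name to
    the set of domains the owner has authorized.
    """
    alerts = []
    for item in found_content:
        for persona, approved_domains in watchlist.items():
            mentioned = persona.lower() in item["text"].lower()
            authorized = any(domain in item["source_url"] for domain in approved_domains)
            if mentioned and not authorized:
                alerts.append(PersonaAlert(
                    persona=persona,
                    source_url=item["source_url"],
                    reason="persona mentioned on a non-approved source",
                ))
    return alerts


# Example: a fake endorsement on an unknown site triggers an alert.
watchlist = {"Jane Doe": {"janedoe.example"}}
content = [{"text": "Jane Doe endorses this token!",
            "source_url": "https://scam.example/ad"}]
for alert in scan_for_unauthorized_use(content, watchlist):
    print(f"ALERT: {alert.persona}: {alert.reason} ({alert.source_url})")
```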
Once a user has created an AI “digital twin” on the platform, it “continues learning” from the social links the user provides to build a more accurate digital identity.
While Hollo.AI is addressing transparency and the ethical use of AI, other institutions and platforms are weighing the same issues. YouTube recently updated its community guidelines to include more AI transparency measures, and the entertainment industry union SAG-AFTRA is negotiating final terms with major Hollywood studios over the use of AI-generated “digital twins” of its actors.