The Tony Robbins Case

A recent lawsuit filed by Tony Robbins’ companies against YesChat.ai is making waves at the intersection of artificial intelligence and law, and for good reason. The case introduces several pressing legal questions surrounding the use of AI-generated personas and may very well set new standards for how public figures, and potentially all individuals, are protected in the age of generative AI.

According to court documents, the defendants developed no fewer than eleven AI-powered chatbots designed to replicate Robbins’ persona, coaching style, and language. These bots were marketed under names such as “Talk to Tony Robbins” and “Tony Robbins GPT” and monetized through subscription models. What makes this particularly significant is that Robbins already offers an official AI product, meaning these bots aren't merely fan-made experiments but are instead competing directly with his commercial offerings.

The plaintiffs are seeking over $10 million in damages, calling this a "digital heist." And indeed, the legal implications go beyond just one individual. This case raises questions fundamental to the future of AI development:

  • Can a person’s coaching style be protected under intellectual property laws?
  • Is it a violation of rights to train AI on someone's content without permission, even if no direct text is copied?
  • How do existing right of publicity laws apply when the "likeness" is not a face or voice, but an interactive, conversational AI?

This is not simply a case about deepfakes or cloned voices, which already pose their own legal challenges. What we are seeing here is the emergence of commercial AI personalities that can simulate real human interaction, trained on proprietary content without consent.

A Global Legal Shift

Globally, lawmakers are beginning to take notice and respond decisively. Across multiple jurisdictions, we are witnessing a surge in regulatory efforts aimed at protecting digital identity:

  • Denmark has proposed allowing citizens to copyright their own face, voice, and physical features, with the country’s culture minister remarking that people shouldn't be “run through the digital copy machine” without consent.
  • France has updated its criminal code, making it a punishable offence to share AI-generated content of a person without their permission. The penalties increase if the content is disseminated online.
  • The EU AI Act now mandates the labelling of AI-generated deepfakes, with potential penalties of up to €35 million or 7% of global turnover for non-compliance.

Denmark’s model is especially notable. By proposing that individuals, celebrity or not, hold copyright over their personal likeness, it grants ordinary users the same takedown powers that, until now, were largely reserved for public figures through mechanisms like the DMCA.

Toward Accountability in AI Development

What happens in the Robbins case could be pivotal. It may determine whether AI developers need explicit permission before training or launching bots modelled on real individuals. The current “build first, ask forgiveness later” approach is increasingly running afoul of emerging laws and expectations.

As countries move swiftly to guard against the unauthorized use of likeness and personal content, the message to the tech industry is clear: the Wild West phase of AI is nearing its end. A more accountable era, grounded in legal safeguards and ethical use, is on the horizon.

For any information or assistance, please contact us at info@gtg.com.mt

Author: Dr Ian Gauci

Disclaimer: This article is not intended to impart legal advice and readers are asked to seek verification of statements made before acting on them.