
Rewriting Hollywood: Protecting Stories in the Age of AI

Why the Film Industry Needs AI Governance

A professional video camera mounted on a stabilizing rig captures a performer on a lit stage, with colorful lighting effects and lens flares suggesting a live production or commercial shoot.

Image Source: Provided by contributing writer Patrick McAndrew

Disclaimer: The views and opinions expressed in this article are expressed solely by Patrick McAndrew and do not necessarily reflect those of any organizations or affiliations he is associated with.


Earlier this year I was visiting some friends in Minneapolis, and on the flight back home to New York City I decided to watch the film Here. I’m a fan of Robert Zemeckis films, including such classics as Back to the Future and Forrest Gump, so I was eager to see what this film was all about. Not to mention, it was a reunion of sorts for Tom Hanks and Robin Wright and featured a handful of incredible actors, including Paul Bettany, Michelle Dockery, and others. But perhaps the biggest star of all among these Hollywood legends was an AI company called Metaphysic.

I had been familiar with Metaphysic for about a year before they hit the Hollywood scene with Here. While their initial claim to fame was making viral deepfake videos of Tom Cruise, they have now helped create a handful of films with their generative AI technology. For Here specifically, this involved aging and de-aging actors over the course of the story.

Generative AI is Revolutionizing Filmmaking

While Here felt more like experimental filmmaking than a once-in-a-lifetime story, there is no denying that the use of generative AI was fascinating. There were subtle moments where you could tell that the actor wasn’t completely human, but it’s evident that this technology has progressed rapidly in the last five to ten years (light-years beyond Princess Leia in 2016’s Rogue One, for example). The partnership between Zemeckis, Miramax, and Metaphysic was certainly historic in this regard.

Generative AI has become more and more prevalent in filmmaking in recent years. From a youthful-looking Mark Hamill in The Mandalorian to the flashback scene in Indiana Jones and the Dial of Destiny, artificial intelligence is revolutionizing the ways we create and tell stories. And it’s not just about de-aging or recreating an actor’s likeness. AI is being used in visual effects through virtual production technologies like The Volume. It’s being used in preproduction, with story generators and software like Cinelytic, which forecasts box office performance. It has advanced dramatically in postproduction, with tools like Runway, Adobe Sensei, and iZotope changing the ways we edit and assemble a finished film. And just recently it was announced that films made with AI can still contend for the most prestigious filmmaking awards: the Oscars. It’s no secret that these AI tools open up enormous opportunities for the creatives who use them. That said, they do not come without risks, and many creatives are pushing back on these tools being created in the first place.

The Risks of AI in Entertainment

While generative AI applications have brought a fair amount of possibility to the entertainment industry, they don’t come without baggage. There are several serious risks that come with implementing this technology, especially without guardrails or responsible frameworks in place. In all honesty, the list of potential risks is long, but here are some of the most prevalent being discussed today.

  1. Identity Theft and Risks to Intellectual Property from Deepfakes and Voice Cloning

This particular risk has received a lot of attention, and for good reason. We’ve reached a point where AI can clone voices, faces, and performances without the permission of the actor, performer, or any other human being portrayed. Because this threatens performers’ control over their likeness and voice, it raises a multitude of issues around consent, royalties, and ownership of one’s own identity. There is also a reputational risk if a voice or persona is used in a manner the performer did not agree to, such as propaganda, pornography, or association with particular brands.

Scarlett Johansson has been the victim of such practices. OpenAI developed an AI voice assistant, “Sky,” which sounds strikingly like Johansson. After OpenAI claimed that it had used a different actor, Johansson hired legal counsel and raised concerns about AI-generated content and intellectual property rights. While many actors have expressed enthusiasm for this technology, as we saw with the film Here, it’s important to note that, as long as these tools exist, there will always be bad actors who misuse them at the expense of reputable artists.

Thankfully, big steps are being made to help mitigate these issues. The NO FAKES Act, currently being pushed for passage, aims to protect artists from unauthorized AI-generated replicas of their likenesses. This bipartisan effort is a significant milestone toward safeguarding creators’ rights in the digital age. Support from organizations like the Recording Industry Association of America (RIAA), The Recording Academy, and SAG-AFTRA underscores the importance of this legislation for the future of the entertainment industry.

  2. Copyright Risks

In addition to an actor’s likeness being used without permission, copyright risks are prevalent across a variety of sectors within the entertainment industry. While several AI companies are popping up that create scripts and music from prompts alone, it’s important to consider that these AI models are being trained on data of some kind. That data often comes from the creative works of screenwriters, composers, musicians, and other artists who never gave these AI companies permission to use their copyrighted works. The models learn patterns from this data: you can ask one, for example, to create a song that sounds like it was written by The Beatles, and it will draw on their songs to create this new, “original” song for you. Sadly, at this time, there aren’t many legal protections in place that stop AI companies from doing such things.

As the technology advances, the creation of hyperrealistic AI content becomes an even greater threat to copyright and authenticity. In criminal cases, for example, where videos and photos are generally treated as authentic forms of evidence, the mere possibility that such material is fake could be enough to establish reasonable doubt and lead to the acquittal of guilty defendants. Bad actors recognize no limits when exploiting the copyrighted material of actors, writers, musicians, and other high-profile artists.

Last year, some of the largest music companies in the world, including Universal Music Group, Sony Music, and Warner Records, sued two AI startups over alleged copyright infringement. While the AI firms argued that the music they create is a legitimate exercise of the fair use doctrine, the record labels contended that the AI firms are making money from having copied the songs, plain and simple. Unless protections are put in place soon, it’s likely that we will see more groundbreaking actions like these. We need more rigorous authentication processes now, before AI technology becomes so advanced that we can’t tell the difference between original and synthetic content.

  3. Job Displacement

AI tools can make creatives’ jobs easier and, potentially, more productive. However, a major concern is that these tools may replace the creatives altogether. Automation in editing, writing, camerawork, and even acting threatens the livelihoods of humans who have dedicated years of study and work to perfecting their crafts.

We are at a crucial point where we must ask ourselves, “What is the true art behind filmmaking?” Sure, one can argue that once movies began incorporating CGI and green screens, the “art” of filmmaking was lost, but even with these elements it still takes brilliant human designers and visual effects artists to bring stories to life, along with the actors, writers, and directors. Humans (at least for now) understand the essence of a film: the raw, emotional, hilarious, and devastating experiences that only humans can feel to this degree. It’s why we love films so much, and it’s why we can usually recognize when a film hasn’t been created by humans: it lacks substance. Once AI tools become capable enough, however, it will be tempting for film studios to cut costs by using them instead of hiring a human who needs a salary, healthcare, and time off. While most of these tools haven’t quite reached the point of functioning independently, their impact on jobs in the creative fields should not be taken lightly.

Actors and writers won a fair amount of protections in the 2023 strikes, but the fight is far from over. Studios still hold the keys to the kingdom when it comes to digital replicas, synthetic performers, and the data that AI models train on. We are witnessing in real time whether film studios will protect the rights of their creatives or whether the profit potential of AI will win out over time.

  4. Lack of Industry-Wide Standards

In a more general sense, there is a real lack of industry-wide standards, not only across the film industry but across the entertainment industry as a whole. There are no universal rules or frameworks that film studios can follow when implementing AI in their productions. From bias and lack of representation to audience trust and misinformation, studios and production companies face an uphill battle in how they will use AI. Can they use AI in a way that is trustworthy, responsible, and protective of their artists’ rights, all while reaping the benefits of the technology? Ethical considerations must be taken into account to ensure that informed consent, fair compensation, AI transparency, and AI safety are implemented going forward.

Why the Film Industry Needs AI Governance

To balance these (and many other) risks against the innovations AI promises, the film industry must build out comprehensive AI governance. Putting responsible AI governance frameworks in place will help ensure that AI is developed, deployed, and used in a manner that supports humanity. AI governance encompasses many facets, all of which apply to the film industry directly.

  1. Accountability

Who is responsible if an AI system causes harm? Is it the film studio? The production company? The distributor? Who will take full ownership of the AI system, regardless of its successes or failures?

  2. Transparency

If AI is going to be used in film, its users should have a foundational understanding of its applications. How is it being implemented? What is it learning from, and has permission been secured from the correct parties? Do the users of the AI model know how to operate it in a way that won’t cause harm, directly or indirectly, to other artists?

  3. Bias and Fairness

Does the AI model reinforce stereotypes against certain groups? Is it biased against certain communities or organizations? The film industry has made strides in representation in recent years, but still has a ways to go. It will be important that the AI models being used help support the work of representation rather than tear it down.

  4. Privacy and Security

Is artist information protected or compromised? How can studios ensure that their workers’ confidential information is secure? And how can they confirm that the IP of actors, writers, directors, and others is not compromised? Having protections in place for personal data is more crucial now than ever before.

These are just a few of the areas that can be explored at length when AI is used for a specific production. In a more practical sense, these AI governance components can serve as a framework for assessing:

a) how AI-generated content is labeled or watermarked

b) how creators’ rights and IP are protected when an AI model is trained on their work

c) how bias is monitored in content generation, or in how a project is cast

d) how consent is established when using an actor’s voice, likeness, and other attributes

e) what guardrails govern how writers, editors, designers, and technicians use AI in preproduction, production, and postproduction

f) how trust is built between studios, tech platforms, and creatives
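To make the checklist above concrete, here is a minimal sketch of how a production might track these governance questions per asset. Everything here is hypothetical: the field names, the `AIContentLabel` record, and the `governance_issues` check are illustrative assumptions, not an existing industry standard or tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

@dataclass
class AIContentLabel:
    """Hypothetical provenance record for one AI-assisted shot or clip."""
    asset_id: str
    used_generative_ai: bool
    model_name: Optional[str] = None       # which AI tool produced the content
    likeness_consent: bool = False         # performer consented to a digital replica
    training_data_cleared: bool = False    # rights to training material were secured
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def governance_issues(label: AIContentLabel) -> list[str]:
    """Return the unresolved governance questions for this asset."""
    issues = []
    if label.used_generative_ai:
        if not label.model_name:
            issues.append("AI use is unlabeled: no model or tool recorded")
        if not label.likeness_consent:
            issues.append("no documented consent for use of performer likeness")
        if not label.training_data_cleared:
            issues.append("training-data rights not confirmed")
    return issues
```

A review step could run `governance_issues` over every labeled asset before delivery and block release until the list comes back empty; real productions would likely anchor something like this to an established provenance standard rather than an ad hoc record.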

Keeping Humanity Central

There are many creatives calling for the complete elimination of AI technology. In a lot of ways, I can’t blame them. What draws me to the entertainment industry, be that as an actor, writer, or audience member, is the humanity within it. Stories are powerful mediums to connect us as human beings, so it’s natural to want to keep humanity central to the art of storytelling. In a world that can often feel very divided, stories help us bridge gaps, educate us on lives that are different from our own, and inspire us to step outside our comfort zones and learn more about our neighbor, a global issue, or the successes and missteps of the past. Stories have the ability to change the world. You may disagree with a family member on everything under the sun, but many families and friends use stories, and movies more specifically, as a middle ground, a common interest, and a place to find peace with each other. So long as we keep humanity in focus and don’t get distracted by the shiny object that is AI, we will be okay.

AI has the ability to help us tell new stories and continue to stretch our imaginations as movies have done for more than a hundred years. If we do implement artificial intelligence into the fabric of filmmaking, as we’ve done with other tools in the past, it is imperative that we keep creatives at the table. We must be the masters of the tool, not the other way around.

AI helped make films like Zemeckis’s Here groundbreaking. The technology will continue to advance, and it’s up to us to set the stage for how we want to use it. AI doesn’t use itself, at least not yet. Humans are still in the driver’s seat, and we can decide how we want to use these emerging technologies. We must not get distracted by profit margins and other ulterior motives that could threaten the art of filmmaking. So long as we keep humanity front and center, I’m confident that we can use these new AI tools to our advantage and ensure that we continue to thrive as creative storytellers rather than the alternative.

About The Author:

Patrick McAndrew is a responsible AI strategist, writer, and actor based in New York City. His work focuses on the benefits of responsible AI with expertise in entertainment and media. He currently works on the responsible AI team at HCLTech and has worked for the Responsible AI Institute and the Entertainment Community Fund.