The Effects of AI on Developers, Analysts, and Engineers

Frankly speaking, Artificial Intelligence (AI) is doing interesting, albeit sometimes spooky, things. From prizes won by AI-generated art to online games created entirely by AI, we’re beginning to explore what society is ready to accept. Are these advancements making people uncomfortable, or are we ready for the future to arrive?

AI Generated Games are Disrupting the Gaming Industry

According to AI Business, since the start of 2022 the gaming industry has earned more revenue than home video, streaming, cinema, music, books, and every other form of entertainment combined, hitting the $152.1 billion mark. By 2023, game revenues are predicted to pass $200 billion as the number of gamers grows worldwide. The industry has always been quick to incorporate the latest technology, so it’s no surprise that AI is being given a larger role.

Take the recent Steam game This Girl Does Not Exist. While it uses simple puzzle gameplay, its implications for the gaming industry are not simple at all. The developer claims that everything, from art to story to music, was generated by AI. While its creators were excited for the debut, the Steam community has mixed feelings about a completely AI-generated game.

Some companies, like Ubisoft, are also utilizing AI-based game development. By feeding models with game footage, they hope to “teach” AI to design new games. For some creators, AI-generated content is concerning, but Ubisoft hopes to make AI an essential part of the development process. To make sure that happens, it has created La Forge, a group of students, professors, and employees working together to find ways that AI can improve the creative process.

AI Generated Music and its Impact on Music Artists

Unlike popular AI image generators, most AI-generated music platforms are limited in what they can produce, generating only seconds-long clips. However, that’s quickly changing. In late September, Harmonai released Dance Diffusion, an algorithm and set of tools that can generate much longer clips of music by training on hundreds of hours of existing songs. This raises the question: how will AI-generated music impact artists?

Most AI-generated music platforms aren’t fooling anyone. They’re only able to “style transfer” songs, essentially creating a cover of one artist’s song in another’s style.

This is also changing: a new generation of music generators, led by Mubert AI as well as Google’s AudioLM, may be on the verge of something much bigger (and scarier) for music artists. They open up the possibility of replacing musicians themselves by giving content creators the ability to produce their own high-quality, royalty-free music.

Deepfake Technology 

As AI content generators advance, deepfake technology continues to rise alongside them. Experts estimate that as much as 90% of online content may be synthetically generated by 2026. Synthetic media, also known as AI-generated media or content, is most often created for gaming, to improve services, or for quality-of-life improvements. However, the rise in AI-generated content has also enabled harmful applications: disinformation and deepfakes that convincingly show people saying or doing things they never did.

Among the biggest challenges posed by AI-generated content and deepfakes are policing them and a public that seems relatively uninformed about their dangers. Despite their prevalence, a 2019 UK survey found that almost 72% of respondents were unaware of deepfakes and their impact.

So, how are the big tech companies dealing with this? Meta now employs an AI tool that detects deepfakes by reverse-engineering a single AI-generated image to track its origin. Google has released a large dataset of visual deepfakes that has been incorporated into the FaceForensics benchmark. And Microsoft has launched the Microsoft Video Authenticator, which analyzes a still photo or video to provide the percentage chance that the media has been artificially manipulated.
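To make the idea of a manipulation confidence score more concrete, here is a minimal sketch, in Python, of how such a check might look. It assumes a hypothetical pretrained binary classifier saved locally as deepfake_classifier.pt; the model file, preprocessing choices, and function names are illustrative assumptions and do not reflect how Meta’s, Google’s, or Microsoft’s actual tools work.

    # Minimal sketch: scoring a single image with a (hypothetical) pretrained
    # binary "real vs. manipulated" classifier. Purely illustrative; not the
    # internals of any vendor's detection tool.
    import torch
    from PIL import Image
    from torchvision import transforms

    # Standard ImageNet-style preprocessing; a real tool's pipeline may differ.
    preprocess = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])

    def manipulation_score(image_path: str, model: torch.nn.Module) -> float:
        """Return the model's estimated probability that the image is manipulated."""
        image = Image.open(image_path).convert("RGB")
        batch = preprocess(image).unsqueeze(0)   # shape: (1, 3, 224, 224)
        with torch.no_grad():
            logit = model(batch)                 # assumes one output logit per image
        return torch.sigmoid(logit).item()       # map the logit to a 0-1 probability

    # Usage (assumes a locally trained model file; names are placeholders):
    # model = torch.load("deepfake_classifier.pt", map_location="cpu")
    # model.eval()
    # print(f"Estimated chance of manipulation: {manipulation_score('frame.jpg', model):.1%}")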

Potential Legal Challenges with AI Generated Content

Questions abound about the training data used to build AI music, image, and other generation systems. Often, that data is taken from the web without the creators’ permission. Critics have also questioned whether training AI models on copyrighted music constitutes fair use. Some companies, like OpenAI, open source their AI music generators under a non-commercial license, prohibiting users from selling any music created with the system. Eric Sunray, a legal intern with the Music Publishers Association, argues that AI music generators violate music copyright by creating “tapestries of coherent audio from the works they ingest in training, thereby infringing the United States Copyright Act’s reproduction right.”

Mat Dryhurst and Holly Herndon recently founded Spawning AI, which provides a set of AI tools built for artists by artists. According to Herndon, copyright law is not structured to adequately regulate AI artmaking. One of their projects, Have I Been Trained, allows users to search for their artwork to see whether it has been incorporated into an AI training set without consent. This lets creators opt in to or opt out of their content being used to train an AI.
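Conceptually, a search tool like this can be approximated by comparing an embedding of the queried artwork against embeddings of the images in a training set. The sketch below uses a public CLIP checkpoint through the sentence-transformers library and brute-force cosine similarity; it illustrates the general idea only, and the directory layout, file paths, and similarity threshold are assumptions rather than anything from Spawning’s actual system.

    # Illustrative sketch: check whether an artwork closely matches anything in
    # a small local image collection by comparing CLIP embeddings. A conceptual
    # stand-in for training-set search, not the Have I Been Trained implementation.
    from pathlib import Path

    from PIL import Image
    from sentence_transformers import SentenceTransformer, util

    model = SentenceTransformer("clip-ViT-B-32")  # public CLIP checkpoint

    def find_near_duplicates(query_path: str, corpus_dir: str, threshold: float = 0.9):
        """Return corpus images whose cosine similarity to the query exceeds the threshold."""
        corpus_paths = sorted(Path(corpus_dir).glob("*.jpg"))
        corpus_embeddings = model.encode(
            [Image.open(p) for p in corpus_paths], convert_to_tensor=True
        )
        query_embedding = model.encode(Image.open(query_path), convert_to_tensor=True)
        scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
        return [
            (str(path), float(score))
            for path, score in zip(corpus_paths, scores)
            if float(score) >= threshold
        ]

    # Usage (paths are placeholders):
    # for path, score in find_near_duplicates("my_artwork.jpg", "dataset_images/"):
    #     print(f"{path}: similarity {score:.2f}")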

As AI content generation continues to grow and improve, one question remains: how much is too much?