Weighing the risks and benefits of online anonymity

by Nancy Shohet West ’84

Mike Pappas ’10 is CEO and co-founder of Modulate, a startup that creates “voice skins” that alter people’s voices for online use. Founded in 2017 and based in Cambridge, Mass., Modulate raised $2 million in seed financing earlier this year.

At MIT you majored in applied math and physics. What was your trajectory into software design?

From a very young age I wanted to be a physicist because I wanted to understand the universe. But once I began learning programming in my physics classes, I recognized that as a programmer, when you have an idea, no matter how wacky, you can put it into action and see what happens. My friends and I could make games and scheduling tools as quickly as we could dream them up. In contrast, my first real project in physics — helping design a dark matter detector — was part of an experiment that took about eight years to complete. I still love physics, but I just couldn’t see myself spending so much time without being able to see my work make an impact on the world.

What problem was Modulate designed to solve?

Today’s video games give players the ability to customize their characters’ appearances. Players spend enormous amounts of time and money designing their characters but then chat with other players in their regular voices. Our voice skins allow players to create voices that match their characters, making the experience more fun and immersive, and also safer. A lot of online harassment is based on age, gender, or other demographic factors, and until now the player’s voice was often the giveaway as to who he or she really was. By choosing the vocal cords you want for your character, you can become who you want your character to be.

What’s the downside?

Voice skins are a hugely powerful tool, but a crucial part of our mission is to ensure that the freedom to design one’s voice results in more creativity and diversity, not less. As one example, we’ve had women gamers ask us for male voice skins to avoid the harassment they are unfortunately often subjected to. In helping with this, we want to make sure that we’re not masking the real problem — that harassment exists — and that the end result isn’t that everyone sounds like the same generic man. We’re deploying this technology carefully, giving players the fun and safety of disguising their voices while taking care not to set the stage for a dystopian path.

How do you keep this capability from being misused, say for celebrity impersonation or even to create “deep fake” news?

One of the reasons we started our technology in the gaming world is that when you enter a video game, you implicitly consent to some amount of make-believe. As a player, you know you are talking to avatars, not real people. If a character in a video game starts speaking to you in the voice of President Obama, you know you’re not actually talking to Obama. Even so, we want to be careful not to misuse people’s identities. For instance, if a video game developer asks us for a celebrity’s voice, we won’t create anything unless the client can provide permission from that celebrity to use their voice. We’ve also built an audio watermark so that, if needed, any audio we’ve generated can be identified as synthetic. Understanding the ethics of this technology has been a major focus throughout the building of our company. New technologies are always emerging, so what we need to do is figure out how to incorporate an ethical approach from the ground up.
