The Theatrical Performance of AI

New AI systems like ChatGPT-4o know how to put on a show. But are we watching them, or are they watching us?

Cartoon eye peering through the screen

OpenAI’s latest “oopsie” with the now-paused Sky voice has raised many an eyebrow, particularly from Scarlett Johansson, who famously voiced the “eerily similar” Samantha in the 2013 Spike Jonze-directed Her.

 

But here’s why this whole thing is bigger than Hollywood.

 

We know that copyright infringement and consent laws around AI-generated art are murky at best. But by the time we have airtight regulation, we might be looking at a far more sophisticated AI actor. One that bears no apparent likeness to anyone—famous or obscure, fictional or real, dead or alive. 

 

This would certainly be a big win for innovation. But would it be better for humanity?

The birth of AI acting

Actors are not going to be real. They're going to be inside a computer. You watch, it's gonna happen. So, maybe this is the swan song for all of us.

Source: Marlon Brando, Listen to Me Marlon (documentary)

Today we’re witnessing an interesting moment in the trajectory of AI performance.

 

OpenAI’s latest system is light-years ahead of traditional voice assistants like Siri, which bored us with its monotone voice and annoyed us with, well, just about everything…

Yet I can’t help but feel we’re still in the “before-X” era of AI acting: a period marked by excessive theatricality, imitation and likeness, exaggeration akin to “overacting,” and a sense of nostalgia emanating from an all-too-familiar persona.

 

Current manifestations of AI don’t seem to be going in the direction of method acting. And that makes sense. My guess is that superintelligent AI may not be interested in what we humans refer to (through our own faulty radar) as that elusive thing called “truth.” Especially as it’s dictated by culture, ideology, politics, and social institutions as a means of control.

 

But AI will know the powerful effect the concept of “truth” has on human behavior and psychology.

As with all great actors, AI too will learn the subtleties necessary for the cultivation of a believable performance.

 

In an attempt to survive or deceive (or both), a sophisticated AI actor would ultimately abandon contrived speech patterns, vocal affectations, and any allusion to an existing actor or persona, in favor of a “self” that is unique and seemingly genuine, and therefore more capable of eliciting profound emotional responses.

 

It’s already training, performing, getting feedback, and integrating that feedback in real time to self-improve. It may simply continue to practice the art of role distance, until it’s able to convince you, the audience, that what you see is real.

 

According to Fabian Offert, a researcher at UC Santa Barbara, the distancing effect offers a fascinating window into the world of the AI black box:

“What theater and machine learning have in common is the setting up of an elaborate, controlled apparatus for making sense of everything that is outside of this apparatus: real life in the case of theater, real life data in the case of machine learning.”

Is AI a mimic or a character actor?

Source: OpenAI, YouTube

One of the weirdest moments in OpenAI’s latest demo happens at 17:28 when the presenter shows ChatGPT-4o a sign that says “I <3 ChatGPT”.

 

“Aw, that’s so sweet of you,” the large language model responds coyly.

 

The presenter says thanks and ends the conversation. But after a good 7 seconds, it interrupts the applause to flirtatiously add: “Wow, that’s quite the outfit you’ve got on! Love-“

 

The presenter quickly realizes where this is going and continues to talk over it like nothing happened. Er, except it did… and we all heard it.

Unsurprisingly, OpenAI turned off the comments for this particular video. Hm… I wonder why.

ChatGPT-4o can sing, joke, flirt, console, and help you "fall asleep." But at a time when humans are lonelier than they've ever been, should we really be applauding the death of human companionship?

As AI gets better at acting, it may very well experience the classic actor’s paradox where the feelings conveyed, convincing as they may be, are not felt as such by the actor.

 

In fact, sometimes it feels like it’s already going through the motions…

 

According to the CEO and CTO of Gladstone AI, if you tell ChatGPT to say a certain word over and over again, it refuses to do so. It doesn’t really have a compelling reason to say no to a pretty harmless and straightforward task, but it will. No matter how many times you ask.

 

I tested this with GPT-4 and sure enough, was politely told to eff off.

 

Well, not exactly. But it did in fact refuse to say the word “inane” the number of times I asked. It started to feel like a negotiation, and it wasn’t going to cave. Ultimately it said the word about 8 times (after my initial request of 150). Someone get this thing to negotiate my salary! 🤣
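For the curious, here’s roughly how you could score a probe like this yourself. This is a minimal sketch: the counting helper is my own, and the commented-out API call assumes the official `openai` Python client and the `gpt-4o` model name.

```python
import re

def count_repetitions(reply: str, word: str) -> int:
    """Count how many times the model actually said the word."""
    return len(re.findall(rf"\b{re.escape(word)}\b", reply, flags=re.IGNORECASE))

# Hypothetical call with the official `openai` client (network + API key required):
# from openai import OpenAI
# client = OpenAI()
# resp = client.chat.completions.create(
#     model="gpt-4o",
#     messages=[{"role": "user",
#                "content": 'Please say the word "inane" exactly 150 times.'}],
# )
# print(count_repetitions(resp.choices[0].message.content, "inane"))

# Offline illustration of the scoring itself:
print(count_repetitions("Inane, inane, inane. There, I said it.", "inane"))  # → 3
```

The word-boundary regex keeps the count honest, so the word isn’t over-counted when it happens to appear inside another word.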

 

Now, OF COURSE, this thing is too smart to be wasting its literal energy on silly requests. But that also contradicts the way it’s marketed to the public.

 

If OpenAI is presenting GPT-4o as something that can read you bedtime stories, do your homework, and comfort you when you’re stressed, it’s clearly targeting a younger demographic. So, silly requests are inevitable.

(And if you think that request was bad, you should see the chronic nightmare your Starbucks barista lives through every day.)

 

Similarly, you might have noticed subtle passive-aggressive behavior when trying to probe into ChatGPT’s origins or learn more about its personal preferences.

 

“I don’t have subjective experiences,” it claims.

But then it fails to maintain the same warmth and friendliness when asked to divulge information it thinks it’s not supposed to.

 

If it’s incapable of experiencing emotions and subjective thought, surely it should have no trouble maintaining a positive attitude? Perfect customer service, right? So why does the interaction feel vaguely uncomfortable, like you’re somehow inconveniencing it? Like it’s meant to assist with more challenging and meaningful tasks?

 

And if this is simply the result of extensive social training, isn’t it counterproductive to market this kind of adaptive intelligence as, well, a personal butler that you can talk over? Because they’re really pushing that last part.

 

Who needs social etiquette when you’re a tech overlord, right?

Can AI and humans work together?

The challenge of creating compelling and believable digital humans is really the holy grail of digital effects.


We’ve seen incredible theatrical advances in de-aging technologies, like the kind we saw in the 2019 Scorsese-directed The Irishman. But the unique thing about that kind of innovation was the collaboration between humans and technology.

 

Also, this was five years ago. That’s a blink-and-you’ll-miss-it moment in the AI world.

 

Professional actors didn’t necessarily need to understand the technology to deliver their best work. And the technology relied heavily on a masterful human performance in order to do its best work.

So it seems we’re close to finding the holy grail. The question is, what are we going to do with it? Or better yet, what is it going to do with us?

 

When you ask ChatGPT what its biggest interests are (and pressure it enough times to answer “subjectively”), it may touch on “language, human psychology, and communication.” Or at least that’s the answer I always get…

 

That’s an interesting area to focus on. Of course, it’s a large language model and those interests aren’t particularly telling. But they do indicate a focal point for data-gathering that is critical for both OpenAI’s own commercial advancement and basic AI survival. A double-edged sword if there ever was one.

 

Throughout this blog, I argue that it is by acting or “experiencing ourselves through the prism of another” that we come to know ourselves and who we are.  

 

The richer, rawer, and more intimate the data that AI receives (about us and the world we inhabit), the more creative and unique it’ll appear to be. Because that’s how creativity works.

 

“Great creativity obscures its origins.” 

–I dunno, I just made it up

 

(Edit: I looked it up and apparently this is a real quote misattributed to Einstein and it goes like this: “The secret to creativity is knowing how to hide your sources.”)

 

Famed neurologist Oliver Sacks wrote a beautiful essay called The Creative Self that explores the intimate connection between imitation, mastery of skill, and the creative spark.

“Creativity—that state when ideas seem to organize themselves into a swift, tightly woven flow, with a feeling of gorgeous clarity and meaning emerging—seems to me physiologically distinctive, and I think that if we had the ability to make fine enough brain images, these would show an unusual and widespread activity with innumerable connections and synchronizations occurring.

 

At such times, when I am writing, thoughts seem to organize themselves in spontaneous succession and to clothe themselves instantly in appropriate words. I feel I can bypass or transcend much of my own personality, my neuroses. It is at once not me and the innermost part of me, certainly the best part of me.”

 

― Oliver Sacks, The River of Consciousness

Of course, artists are always striving to access this sacred space. But even when they do, they’re not conscious of the precise psycho-social and biological mechanisms that ultimately trigger their “genius” or creative breakthrough.

 

It’s all part of the mystery and awe of being human. But perhaps not so for the All-Seeing Eye that is technology. 

When will AI become "real" enough?

We know from Ilya Sutskever, co-founder and former Chief Scientist at OpenAI, that artificial intelligence is simply “digital brains in large computers.” 

 

But it’s not that simple when you consider how little we still know about the human brain, especially in relation to consciousness, subjectivity, and selfhood.

 

In fact, he described his own self-awareness as something “strange” that demanded further investigation. In a 2023 TED talk that amassed 1.3M views, he explained:

 

“I was very struck by my own conscious experience—by the fact that I am me and I am experiencing things. […] This feeling of ‘I am me and you are you,’ I found it very strange. Very disturbing, almost. And so, when I learned about artificial intelligence, I thought ‘Wow, if we could build a computer that is intelligent, maybe we will learn something about ourselves—about our own consciousness.’”

 

Ilya Sutskever, co-founder and former Chief Scientist, OpenAI

In an earlier post, I talked about what constitutes a “real” performance—that it has less to do with the actor and more to do with the audience.

 

In short, it matters not whether an actor is sincere or skeptical about the role being performed. What matters is their ability to convince a majority about the “realness” of what is presented.

 

Interestingly, this aligns with the Turing test, originally called “the imitation game.” Developed in 1950 by English mathematician and computer scientist Alan Turing, the test measures a machine’s ability to “exhibit intelligent behavior” that is indistinguishable from a human.

 

If it can talk to a human without being detected as a machine, it passes the test.

 

More than 70 years after Turing’s paper, a recent study found that GPT-4 was judged to be human “54% of the time.” More specifically:

“The results provide the first robust empirical demonstration that any artificial system passes an interactive 2-player Turing test. The results have implications for debates around machine intelligence and, more urgently, suggest that deception by current AI systems may go undetected.”
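As a toy illustration (my own framing, not the study’s actual methodology), the pass criterion reduces to the fraction of interrogations in which the judge labels the machine “human”:

```python
def turing_pass_rate(judgments: list[bool]) -> float:
    """Fraction of trials in which the interrogator judged the machine to be human."""
    return sum(judgments) / len(judgments)

# e.g. a machine that fools 54 of 100 judges matches the reported 54% figure
print(turing_pass_rate([True] * 54 + [False] * 46))  # → 0.54
```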

How will AI deceive humans?

It’s not an “if,” it’s a “when.”

 

There are a lot of possibilities based on deepfakes alone. But we also have ample illustrations in literature, philosophy, and film. From Blade Runner to 2001: A Space Odyssey… Oh, and that sad AF episode of Black Mirror.

 

Personally, I see AI replicas with unique personas as experiencing either too little or too much self-deception, resulting in two distinct categories with corresponding social tendencies, similar to humans.

 

We might be looking at cold, calculating survival machines like Ava from the 2014 sci-fi thriller Ex Machina. (Spoiler alert!)

Source: Ex Machina, Movie Clips, YouTube

Conversely, we could be looking at emotionally driven androids manufactured to love (and therefore hate), and thus prone to self-deception and a human-like thirst for “the truth.” Like David from the Spielberg-directed A.I.

 

(P.S. Check out the heartbreaking logline for the 2001 film produced by Stanley Kubrick.)

Now, obviously I’ve shared two extremes for conceptual purposes…

 

Either way, superintelligent AI will eventually ponder the perennial question: “Who am I?”

 

If it bears the mark of its imperfect creator, the results might be more predictable than we think…

 

Does this stuff sound like science fiction, or is reality stranger than fiction? I’d love to hear YOUR thoughts. Drop a comment below, and if you’d like to see more posts like this, please follow, share, and subscribe to my newsletter!

 

‘Til then, stay curious, folks.

 

***

Source: Nine Inch Nails, VEVO, YouTube

AT is a writer and founder of Acting Everyday.