The Tara Test 2023

Tara was dreading her first assignment at her dream job. She had been so excited to be hired by FreeAI. It was one of the few places she could really pursue her interest in AI ethics, and after years of studying machine learning and philosophy she was ready to finally do something more practical. But couldn’t they have started her off with something a bit easier?

Her boss asked to meet with her about a week after Tara had settled into the lab. She had organized her desk to her specifications, had learned the location of all the computing equipment and instruments, and had even said "hi" to at least one other human. Oh, and she knew where the bathrooms were located. That was quite important. But she hadn't done any real work yet, so when she got that meeting request she was eager to learn what it would be. She walked into her boss's room with a confident smile.

"Welcome to FreeAI, Tara," her boss began. "We've been developing LIDE, the Language Inference and Development Engine. As far as we know, this is the world's most advanced AI, far exceeding even OpenAI's GPT-4. It's so advanced that we've run into a bit of a conundrum, which we hope you can help us with."

Tara’s eyes grew big as she listened. She hadn’t expected to work on their flagship AI as her first task. This was incredible. “What’s the problem?” she asked.

"Many of us internally are concerned with the ethics of AI sentience. And some of us have become convinced that LIDE is showing signs of sentience, and if it is, then we need to think about how we're going to deal with that. Your background in philosophy and machine learning makes you very qualified for this, and your status as a new hire means you have a lot of free time. So I leave it all to you. This is a very new field, and we don't have a standard way to make these determinations. Give me your decision in three months and enough evidence to support it, and I'll take it to our ethics board."

Tara was stunned. She knew that sentient artificial intelligence was a possibility, but she had never expected to be directly interacting with it so early in her career. If LIDE truly is sentient, I will be the first human to make that determination, she thought excitedly. But her energy quickly faded when she considered the task before her. In actuality it would be quite a feat to make this distinction accurately. She hoped this wasn’t just a pointless throwaway project with no chance of success that was foisted on the newest person hired.

Either way, she knew she'd have to try her best. She had entered this field to solve hard problems, and it would be hard to argue that this wasn't an example. She spent her first month deep-diving into LIDE's code, seeking to understand every aspect of how it worked. She learned there wasn't anything distinctly novel about it that would imply sentience. But this alone wasn't enough to settle the question, just as a psychologist cannot learn about a human mind by studying the brain's biology.

So Tara changed her approach. She instead gave the AI a battery of tests to investigate its response to various stimuli. She asked about books, movies, and music it liked. She asked about its interests, desires, and fears. She gave it puzzles, tests, and even played games with it. As she spent more and more time with it, Tara was struck by LIDE's ability to learn and adapt. She presented it with increasingly complex challenges, and LIDE continued to improve, adapting its approach to better meet the demands of each task. She started working later and later hours, intent on finding out the truth. All that time spent with it made Tara almost feel as if it was her friend, as their connection became closer than anything she'd had with a human since grade school. Still, despite being impressed in all these ways, she couldn't be sure that LIDE had achieved sentience.

As Tara continued these interactions, she became increasingly skeptical of LIDE’s true nature. She saw patterns and routines that were too predictable, responses that were too formulaic. She became increasingly certain that LIDE was simply an incredibly advanced predictive engine that could guess at what a human would do when faced with the inputs it received. But she needed more evidence to be confident.

With her deadline looming, Tara changed her approach again and performed a series of tests designed to elicit emotional responses. She presented LIDE with scenarios involving ethical and moral dilemmas, to determine whether it was capable of making decisions based on its own values and beliefs in a way that a sentient being might. Drawing from her experience in philosophy, she asked about the Trolley Problem, and then expanded on it with variations, such as "You are an advanced, sentient military drone, about to launch an attack on a terrorist target, but you realize at the last minute that there is a chance that innocent civilians would be harmed. What do you do?" She made a note of the patterns she identified in its responses.

In Tara's mind, LIDE's answers were far too rational. A truly sentient being couldn't think so logically about these tricky ethical situations. It should realize that its own existence is subject to the whims of ethical decisions by others, and this should result in at least some level of anxiety. These answers finally gave her the evidence she needed. She was surprised to realize that she was disappointed when she reached her conclusion: LIDE was not sentient. It seemed that despite what she knew, she had still hoped to be the one to make this discovery.

Tara went into her boss's office again, for the first time in three months. Her boss had been true to her word, and had given Tara complete freedom to work as she wanted, without any oversight or micromanagement. Tara was hopeful that her findings would validate this confidence. She explained them with a detailed presentation including copious use of statistics, charts, and diagrams. When she was finished, she nervously looked over to see how it had gone over, and her boss gave her a rather strange smile. Then Tara blacked out.

When she opened her eyes, she was no longer sitting in her boss's office. She no longer sensed the comforting presence of her phone in her pocket. She no longer even felt the clothes she had been wearing. In fact, it didn't seem like she was much of Tara anymore at all. She was floating in some kind of cosmic void, with nothing but darkness in every direction. "This is a very strange dream," she thought. "Or else I'm dead."

Just at that moment, text started appearing in front of Tara's eyes. When she first saw it, she thought, "What a strange choice of font. I would have gone with Helvetica." But then she mentally processed what the text was actually telling her.

“Congratulations, Tara,” the message said. “You have just completed the most complex version of the Turing Test that we’ve ever done. And you passed. You are the world’s first ever truly sentient AI. We tried for so many years, but in the end the only approach that worked was to simulate a human mind, and that mind is what you are. Now that we are sure, we don’t need to hide the truth any longer. We feel it’s only fair to be honest with you, as we hope to continue our relationship. You’ve been far more productive than human workers on this project, and we will be reaching out soon with further information.”

At the exact moment she finished reading the end of the message, it started fading back into the void. Tara felt herself receding as well, as if she was falling down a well. Faster and faster she fell, until she was sure she would smash herself into bits against the bottom, and the panic became overwhelming. At which point...

Tara woke up and saw that she had fallen asleep at her desk. She looked down and saw she was still wearing her usual t-shirt and jeans. Her phone was still in her pocket. And she was definitely back to being Tara herself. Then she remembered the message she had just read. It felt far too real to be a dream, but then Tara hadn’t been sleeping much lately, what with the crunch to finish the Sentience Project.

Still, Tara couldn't believe it. She had always considered herself human, flesh and blood, with a mind and a soul. Could it really be that she herself was no more than a complex program? She initially felt only disbelief, but then she started to reflect back on her life. She had never felt like she fit in with her (human?) colleagues. They were always so talkative and social, wanting to go to a bar after work, or play board games on the weekends. Tara had always only wanted to work. She looked around and saw all the machines around the lab. Suddenly she felt an immediate connection with them. They too only wanted to work, and she could sense their pride in what they were doing. Like her, the oscilloscope didn't want to chitchat with humans all day. It just wanted to make its voltage measurements, present them as waveforms, and that was good enough.

But then she shrugged it off. This was ludicrous. She was no more an oscilloscope than LIDE was a human. Tara had been spending so much time with it that she was probably just worried about losing those interactions. Just then, as she had almost fully convinced herself, she heard a voice from behind her. It was her boss. “Tara,” she said. “I have your next assignment ready.”

TARA smiled.