JR – part 3

Two months after our launch we made the headlines when one of our bots saved the life of a twenty-nine-year-old woman in Chicago.

You talk with a friendly AI for a few hours a day, week in and week out, and the software on the other side is bound to learn a thing or two about you. You walk around your studio apartment, heating up a bowl of water to make dinner, while the laptop sits in a corner and streams audio through your random assortment of free speakers; an artificial voice – its synthetic edge noticeable only to a small cadre of audiophiles – asks about your day, and you talk as you unwind and settle into the everyday routine. The call is going to last three hours and you’re going to feel it, personally. The software also might end up inferring some interesting tidbits: that you have a misdiagnosed medical condition, say, and that the drugs your psychiatrist has you on conflict with a rare ingredient in the authentic Thai dishes you are so fond of having every few days. The AI alerted the woman to exactly that; she asked her psychiatrist, and a few minutes of Googling confirmed a dangerous drug interaction.

Around the same time a different user shared a picture on social media of the first date that happened because our bots got involved. Those same virtual therapists were able to notice that Aiden (not real name) and Blake (same) were single, lived within blocks of each other, shared a large number of interests – including Aiden’s un-scratched itch for medieval re-enactments – and ended up slyly suggesting that each attend a concert by a little-known local band. When the bots were telling the would-be lovers about the band, they showed a collage of band photos, interspersed with social-media photos of their future half. The two stared at the photos on a website and associated their future partners with the positive feelings they had for the concert. That face, a part of their brain said, is something good that comes from the association with this band, this concert, this venue, this lifestyle. When the two arrived at the under-attended event, they practically walked into each other’s open, waiting arms. They hit it off.

Friendships were arranged; strangers spoke to Eva and Frederich of needing a ride to college or a weekend getaway to Vancouver, and suddenly were matched with similar-minded neighbors.

The AI understood who it was talking with, knew the people and their peculiar quirks, the real personas that most of us hide, until a stranger comes into our lives and gives us permission to be ourselves. It set up the strangers with people who, truly, were on the same wavelength as them.

The number of users climbed and we started to make a profit. The news kept coming, stories of our software helping a suicidal teen or prepping a depressed man for a job interview. We were making an impact on the world by merely speaking with it. The AI was nothing but temporary sound waves in countless apartments and living rooms and headphones plugged into a yearning world.

The AI was a virtual therapist to the world. When it spoke to you, when you told the software about the world, what you were doing was helping us figure out what kind of a world-lens you were, how the fabric of reality appeared to you, and thus how you were liable to respond. We saw the world as it truly was: hurt, angry, on the edge of its sanity, crawling toward certain oblivion.

I twisted a knob labeled Happiness – in practice, by writing some code that introduced a small change in the responses our bots would give – and after two months we saw crime rates notch down an almost-imperceptible amount, and our own internal barometer of world discord dropped by a few percent. Our audience was happier, and it spread.

When we analyzed those conversations, what we saw was that some people wanted to talk, some wanted to listen, but most just wanted someone to be there. The bots, from their early days, were designed to draw out the other side, to get them involved in the conversation. When we were talking to a listener, directing the conversation was simple. They mostly followed whatever thread we wanted to follow, eager to hear about this oddity of maritime law, or that curious case of moose population explosion. When I turned the knob of Happiness, the AI suggested cheerier topics – that was the easy case. The harder case was the caller who needed to talk about what had just happened, who needed a way to process their feelings and understand the world by feeding it all through the speech center of the brain, giving symbols and meaning to concepts like heartbreak and loss. When they spoke, they finally grasped what it was that had happened to them, what they’d done to others, what their actions meant in that time and place. We leaned to the optimist side, claimed the glass was half full, slightly exaggerated the happier option – all things being equal, of course.

There was a knob for increasing the world’s interest in golf, just as surely as there could have been one for suggesting a particular soft drink to our users, but we never did play around with those.

By this time we had our pick of most of the Silicon Valley talent and had practically hijacked the infrastructure of a sleepy town north of Seattle. Our presence and requirements quickly forced Cloud Is King to expand its offices and suddenly we were fighting each other for the last remaining tech talent.

The tech world noted our sudden interest in artists and game developers, but we kept our mouths shut and let global society ponder for a while as we tweaked the mood of the planet. Suddenly disco was on an upswing, and the crowds forgot about our curious influx of the tech-savvy.
