Making AI Human

My final driver of six, making AI human, is closely related to some of the other drivers but will, in my opinion, have the most profound impact on our mental state regarding our relationship with technology. Remarkably, even though we fully understand that an AI system is just that, a system, we seem to develop relationships with it quite naturally. The same can be said for other types of technology we operate daily. The metal boxes and screens we interact with seem to take on personalities, and when one is lost, we feel that loss in our hearts. This uncanny ability to form relationships with technology is pushing the limits of what AI is and should be, sparking debate and fear but also firing the imagination and generating new ideas to tackle some of today’s most difficult problems.

Recently, OpenAI researchers demonstrated ChatGPT’s upgraded assistant, including new vision and voice capabilities. The assistant talked a researcher through solving a math equation on a sheet of paper. The most remarkable part of the demonstration was how quickly the system responded while interacting with the researcher. The voice was quite human, and the delay was so subtle the system could easily be mistaken for a real person. But this upgraded version of ChatGPT is nowhere near the type of artificial intelligence the future may hold. In February of 2023, OpenAI published an article, Planning for AGI and Beyond, in which they claimed, “The first AGI will be just a point along the continuum of intelligence. We think it’s likely that progress will continue from there, possibly sustaining the rate of progress we’ve seen over the past decade for a long period of time. If this is true, the world could become extremely different from how it is today, and the risks could be extraordinary.” Yes, the speed of change can be concerning, and the implications of superintelligent bots could be very dark. But what is more concerning, or more exciting, depending on how you want to look at it, is the idea that these superintelligent systems could very easily be mistaken for actual humans.

The first question that comes to mind is why we want machines to be human-like in the first place. Is it just a lack of imagination? In The Jetsons, the maid robot looks like a robot but acts like a human. In Star Wars, there is a bit more creativity: some droids have their own language, while others speak the language of the humans or other beings they interact with. So maybe it isn’t a lack of imagination. In the manufacturing world, robots are being built to function as humans do because their environment is uniquely human. In theory, it is easier to build a bot that works within a human world than to build a whole new world for that bot. That isn’t always the case, however. Some distribution centers have been built entirely for robots, with layouts that don’t at all resemble the warehouses humans typically work in, and those uniquely robotic DCs have been quite successful. So that doesn’t seem to be the answer either. Maybe it is our way of bringing to life the machines we have already given personalities to? Think Transformers. Most of us have at least one “appliance” we talk to as if it were human. So why not give it the ability to respond to us the same way we talk to it? We’ve already decided it needs to be human, because we’ve already built a relationship with it, and that relationship is human.

The concept isn’t confined to physical machines, either. Replika, a chatbot company that lets you personalize a character that you can then chat with endlessly, sells itself on having the ability to create the most human-like AI chatbot. We also just had the first-ever Miss AI Beauty Pageant creating quite the controversy. The world is obsessed with creating artificially intelligent human-like things.

There are many different topics to explore surrounding the idea of making AI human, but the focus of this article is how it will impact the future of our relationships with technology. On one hand, having a computer, a robot, or even a car that can respond to you exactly the way another human would respond would make things remarkably easy. It makes total sense. If we want a digital twin for work, we want it to think just like us, which means it would have to be as human as possible. If AI were more human, the adoption of technology would be much smoother. It would be as if we hired another person onto the team; the only differences are that they don’t get paid, maybe, and they don’t have a commute. Our driverless taxis would chat with us the same way a driver does now, and the digital barista would still remember your favorite order and say hi when you walk in the door. Sounds great.

But what about the long view? If we start to incorporate very human-like AI into our lives, are there implications we need to consider thirty, forty, or fifty years from now? The connections we create could end up causing more harm than good. We could get dystopian and imagine a decline in marriages, which means fewer babies and ultimately a population shrinking beyond the point of recovery. Or maybe it is the opposite: maybe we segregate ourselves and interact with AI only when absolutely necessary, creating a very strange divide in society. These are extremes, but they are also plausible consequences. By creating human-like AI, we are encouraging a connection that should be innately human to be made with a piece of technology. We are setting ourselves up for unprecedented social change.

As a human resources professional, I think a lot about the day I will need to sit down and mediate a conversation between a robot and a human. I think about what it might be over, how it will feel, and how I will treat the bot versus the human. Will I think of the bot differently, or will it be as human to me as the actual person sitting on the other side of the table? Would making AI and other technologies less human solve these potential problems? There are so many questions to ask, and there is not a single definitive answer. We each have our own opinion, and no one knows what the right answer is. Regardless, we should be thinking about this now. How might creating human-like technology impact our society in the years to come, and what could the consequences be? As an HR professional, especially one early in your career, you need to be thinking about how you might navigate these situations. As far away as it may seem now, think about how quickly we all adopted cell phones into our lives. Consider how much your life has changed from your youth to now. The same amount of change, if not more, could happen in the next 20 years. Our job as HR professionals will be navigating these very different new realities as they emerge. We must pay close attention to how the personalities and functions of AI systems will impact our workplaces.

Citations:

Altman, Sam. “Planning for AGI and Beyond.” OpenAI, 24 Feb. 2023, openai.com/index/planning-for-agi-and-beyond/. Accessed 15 July 2024.

Claypool, Rick. “Chatbots Are Not People.” Public Citizen, 26 Sept. 2023, www.citizen.org/article/chatbots-are-not-people-dangerous-human-like-anthropomorphic-ai-report/. Accessed 1 July 2024.

Bushwick, Sophie, and Kelso Harper. “AI Chatbots and the Humans Who Love Them.” Scientific American, 24 Apr. 2023, www.scientificamerican.com/podcast/episode/ai-chatbots-and-the-humans-who-love-them/. Accessed 15 July 2024.

Heydari, Anis. “As AI Becomes More Human-Like, Experts Warn Users Must Think More Critically about Its Responses.” CBC, 15 May 2024, www.cbc.ca/news/business/google-openai-search-1.7204014. Accessed 15 July 2024.

Houshalter. “The AI That Pretends to Be Human.” LessWrong, 2 Feb. 2016, www.lesswrong.com/posts/o5PojLYFEWtyeo4ct/the-ai-that-pretends-to-be-human. Accessed 15 July 2024.

Mucci, Tim. “Getting Ready for Artificial General Intelligence with Examples.” IBM Blog, 18 Apr. 2024, www.ibm.com/blog/artificial-general-intelligence-examples/. Accessed 15 July 2024.

O’Brien, Matt. “Tech Companies Want to Build Artificial General Intelligence. But Who Decides When AGI Is Attained?” AP News, 4 Apr. 2024, apnews.com/article/agi-artificial-general-intelligence-existential-risk-meta-openai-deepmind-science-ff5662a056d3cf3c5889a73e929e5a34. Accessed 15 July 2024.
