The idea that AI can develop human-like emotions and behavior is edging closer to reality, as recent research from Stanford and Google shows.
Did you play any version of The Sims in your childhood? Even if you have not, here is a brief introduction to the world-famous game that underpins this article. The Sims is a life-simulation game that lets you customize a virtual world to your preferences: you decide what kind of personality your virtual character has and what relationships they build with the non-playable characters (NPCs).
Now remove any human intervention from the game and give the characters full autonomous control. That is what researchers at Stanford University did in collaboration with Google: they created an AI-powered village of 25 virtual agents who live fully independent lives. The research was conducted to understand how well AI can mimic human behavior and emotions.
The experiment ran under controlled conditions, but the AI characters were given complete autonomy to think, analyze, create tasks, meet their basic daily needs, and make decisions. The results so far have been surprising, and remarkably close to human behavior.
This entire AI village is built on the same technology that powers ChatGPT: large language models. The characters sleep, wake up, and go about their everyday lives independently. They interact with one another and have lived in harmony so far; they have even been discussing who should be mayor in an upcoming election.
One of the characters in the simulation, Isabella, was prompted to plan a Valentine's Day celebration. She took up the task: sending out invites, finalizing the date and venue, and even decorating. Given the prompt, you would expect an ideal AI character to do all of that. The shocking part was that the invitees, in turn, invited others to accompany them. Nobody taught or commanded this, which makes it eerily human-like behavior.
The first reason these characters can build unique bonds, or form opinions about someone else's actions, is their capability to store memories. Unlike ChatGPT, which does not retain a persistent record of everything that happens in its conversations, these agents keep a memory of past actions that shapes their future decisions. That capability makes a huge difference for the AI characters of this virtual village, letting them all work in sync to keep the town running.
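The memory mechanism can be pictured as a simple store-and-retrieve loop: each agent records timestamped observations, and when a new decision comes up, the most recent and most relevant memories are pulled back to inform it. The sketch below is an illustrative assumption of how such a "memory stream" might work, with made-up names and a toy scoring rule (keyword overlap plus recency), not the researchers' actual implementation.

```python
class MemoryStream:
    """Toy memory store: timestamped observations, scored retrieval."""

    def __init__(self):
        self.memories = []  # list of (timestamp, text) pairs

    def record(self, text, timestamp):
        # Every observation the agent makes gets stored with its time.
        self.memories.append((timestamp, text))

    def retrieve(self, query, now, top_k=2):
        # Score each memory by keyword overlap with the query,
        # plus a small bonus for recency, then return the top matches.
        query_words = set(query.lower().split())

        def score(memory):
            timestamp, text = memory
            recency = 1.0 / (1.0 + (now - timestamp))
            overlap = len(query_words & set(text.lower().split()))
            return overlap + recency

        ranked = sorted(self.memories, key=score, reverse=True)
        return [text for _, text in ranked[:top_k]]


# Usage: a past observation shapes a later decision, like the
# gift-picking example below (characters and details are hypothetical).
isabella = MemoryStream()
isabella.record("Klaus mentioned he loves poetry books", timestamp=1)
isabella.record("The cafe ran out of coffee beans", timestamp=2)
relevant = isabella.retrieve("what gift should I buy for Klaus", now=10)
```

In a full agent, the retrieved memories would be inserted into the language-model prompt before the next decision, which is how past details end up influencing future behavior.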
In one striking autonomous action, an AI character picked an appropriate gift by recalling specific details about another AI character. This is still an early experiment, and conclusive results are a long way off, but that very possibility makes it a risky line of investigation: if AI successfully develops human-like emotions and behavior, that power combined with its own judgment could wreak havoc in the real world. Fortunately, this experiment was completely isolated and conducted under tightly controlled conditions.
With a mixture of literature, cinema, and photography, Manish is mostly traveling. When he is not, he is probably writing another tech news story for you!