More human-like behaviors emerged in a series of simulations of 30 agents. Although all the agents started with the same personality and the same general goal (to create an effective village and protect the community against attacks by other in-game creatures), they spontaneously developed specialized roles within the community, without any incentive to do so. They branched out into roles such as builder, defender, trader, and explorer. Once an agent had begun to specialize, its in-game actions started to reflect its new role. For example, an artist spent more time picking flowers, farmers gathered seeds, and guards built more fences.
“We were surprised to see that if you install the right kind of brain, they can have really emergent behavior,” says Yang. “That’s what we expect from humans, but don’t expect from machines.”
Yang’s team also tested whether agents could follow community-wide rules. They introduced a world with basic tax laws and allowed agents to vote on changes to the game’s tax system. Agents with incentives to be pro- or anti-tax were able to influence the behavior of the agents around them, enough that those agents then voted for a tax reduction or increase depending on whom they had interacted with.
The team scaled up, pushing the number of agents in each simulation to the maximum the Minecraft server could comfortably handle, up to 1,000 at a time in some cases. In one of Altera’s simulations involving 500 agents, they observed how agents spontaneously invented and then spread cultural memes (such as a penchant for pranks or an interest in environmental issues) among their fellow agents. The team also tasked a small group of agents with trying to spread the parody religion of Pastafarianism across the towns and rural areas that made up the game world, and watched as these Pastafarian priests converted many of the agents they interacted with. The converts then spread Pastafarianism (the religion of the Church of the Flying Spaghetti Monster) to neighboring towns in the game world.
The way the agents acted may seem eerily realistic, but their behavior combines patterns learned by LLMs from human-created data with Altera’s system, which translates those patterns into context-appropriate actions, like picking up a tool or interacting with another agent. “The takeaway is that LLMs have a sophisticated enough model of human social dynamics to reflect these human behaviors,” says Andrew Ahn, cofounder of Altera.
In other words, the data makes them excellent imitators of human behavior, but they are in no way “alive.”
But Yang has bigger plans. Altera plans to expand to Roblox next, but Yang hopes to eventually move beyond game worlds altogether. Ultimately, his goal is a world in which humans don’t just play alongside AI characters but also interact with them in their everyday lives. His dream is to create large numbers of “digital humans” who actually care for us and will work with us to help us solve problems, as well as keep us entertained. “We want to build agents that can really love humans (like dogs love humans, for example),” he says.
This view – that AI might love us – is quite controversial in the field, with many experts arguing that it is not possible to recreate emotions in machines with current techniques. AI veteran Julian Togelius, for example, who runs game testing company Modl.ai, says he likes Altera’s work, in part because it allows us to study human behavior in simulation.