Expert Voice: Icosystem’s Eric Bonabeau on Agent-Based Modeling

By Jeffrey Rothfeder

How closely can agent-based modeling represent business systems? Icosystem’s Eric Bonabeau says it depends on what you’re looking for.

Eric Bonabeau, chairman and chief scientific officer of Icosystem Corp. in Cambridge, Mass., became interested in complexity theory and adaptive problem solving by watching insects. More than a decade ago, when he was a France Telecom research engineer, Bonabeau spent time in Santa Fe, N.M., at the foot of the Rocky Mountains, studying insect colonies. He learned that individual insects, though relatively small and weak on their own, are collectively capable of finding food, building sophisticated shelters, dividing up labor and defending their territories. Complex systems such as computer networks, supply chains and equity markets, Bonabeau was convinced, are not much different: They rely on their individual parts for their ability to perform extremely complicated activities. In the late 1990s, agent-based modeling, a way to simulate complex systems, came into wider use as a management tool, and Bonabeau quickly became a convert. “Agent-based modeling is a mind-set as much as a technology,” he says. “It’s a perfect way to view things and understand them by the behavior of their smallest components.” After a stint at BiosGroup Inc., a Santa Fe complexity theory consultancy, Bonabeau in 2000 founded Icosystem, which designs agent-based models for corporations. The company has been profitable, Bonabeau says, since September 2001. CIO Insight Contributing Editor Jeffrey Rothfeder caught up with Bonabeau recently to discuss the present status and future potential of agent-based modeling.

CIO Insight: What is agent-based modeling?
Bonabeau: Agent-based modeling is the main tool in the complexity science toolbox. People have been thinking in terms of agent-based modeling for many years but just didn’t have the computing power to actually make it useful until recently. With agent-based modeling, you describe a system from the bottom up, from the point of view of its constituent units, as opposed to a top-down description, where you look at properties at the aggregate level without worrying about the system’s constituent elements. The novelty in agent-based modeling compared to what physicists would call micro-simulation is that we’re talking about the possibility of modeling human systems, where the agents are human beings with complex behavior.

Give us an example of how an agent-based model would work.
Think about a traffic jam. It’s very hard to capture the properties of a traffic jam at the aggregate level without describing what individual drivers do. These drivers are the agents in an agent-based model. Each of these agents/drivers is different, and the characteristics of their driving behavior become the rules in the model. Just a few variables, five to ten, can describe how aggressive they are, how they react to a slowdown, how often they change lanes, whether they like to pass on the right. When you run this model you can reproduce a traffic jam, but this time you can closely watch the individual behavior of the drivers, and you can inject different events (a forest fire near the highway, for instance) to see how these events would affect the emergent properties, the visible properties, of the traffic jam.

These emergent properties, we find, are the result of not only the behavior of individual drivers but the interactions between them as well. What I do on the road depends on what others do. Some of these emergent properties are counterintuitive. One example that I think is very interesting involves the beltway in London. They changed the speed limit from a uniform 50 miles an hour to 35 miles an hour in some portions, and then they varied speeds depending on traffic flow. They discovered that by reducing the speed limit, you actually increase the average speed of the cars. So that’s an example of a counterintuitive phenomenon that you could only predict with an agent-based model. It could not be explained without looking at how the parts behave and interact to make the whole.
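
To make the mechanics concrete, here is a minimal sketch, in Python, of the kind of driver-agent model Bonabeau describes: each agent carries a handful of behavioral variables, and the aggregate traffic pattern emerges from local interactions on a circular road. The sketch loosely follows the well-known Nagel-Schreckenberg cellular-automaton style rather than any Icosystem model; all parameters are invented, and whether the counterintuitive speed-limit effect actually appears depends on the traffic density and behavior rules you choose.

import random

class Driver:
    """One agent: a few variables capture an individual driving style."""
    def __init__(self, position, acceleration, dawdle_prob):
        self.position = position          # cell index on a circular road
        self.speed = 0                    # cells advanced per time step
        self.acceleration = acceleration  # how aggressively the driver speeds up
        self.dawdle_prob = dawdle_prob    # chance of hesitating for no reason

def average_speed(num_cells=200, num_cars=60, speed_limit=5, steps=500, seed=0):
    """Run the ring-road model and return the mean speed once traffic settles."""
    rng = random.Random(seed)
    cells = rng.sample(range(num_cells), num_cars)
    drivers = [Driver(c, rng.randint(1, 2), rng.uniform(0.1, 0.4)) for c in cells]
    total, samples = 0.0, 0
    for step in range(steps):
        occupied = {d.position for d in drivers}
        new_speeds = []
        for d in drivers:
            gap = 0                                         # empty cells to the car ahead
            while (d.position + gap + 1) % num_cells not in occupied:
                gap += 1
            v = min(d.speed + d.acceleration, speed_limit)  # accelerate toward the limit
            v = min(v, gap)                                 # never hit the car ahead
            if v > 0 and rng.random() < d.dawdle_prob:      # random hesitation
                v -= 1
            new_speeds.append(v)
        for d, v in zip(drivers, new_speeds):               # everyone moves at once
            d.speed = v
            d.position = (d.position + v) % num_cells
        if step >= steps // 2:                              # measure after a warm-up period
            total += sum(d.speed for d in drivers)
            samples += num_cars
    return total / samples

if __name__ == "__main__":
    # The experiment pattern from the interview: same drivers, different speed limit.
    for limit in (5, 3):
        print(f"speed limit {limit} cells/step -> average speed {average_speed(speed_limit=limit):.2f}")

Injecting an event such as a lane closure is then just a matter of changing the local rules or removing cells and watching how the aggregate pattern responds.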

How do companies that run agent-based simulations of their operations react to counterintuitive results?
We typically work with clients who already know that the world around them is so complex that there are things they don’t understand, that they won’t be able to grasp, and sometimes the solution that we propose is not understandable. But this is a very, very small fraction of all business executives: 1 percent of top executives at Fortune 500 corporations. These executives understand the idea of modeling well enough that they usually take what we offer on faith. There are millions of interactions and pathways through which events propagate, and each of them is a tiny fraction of the full explanation. You can’t reduce the explanation to two or three or five simple sentences. It’s often too complex for that.

For example?
We’ve been working for a software company, a leader in the storage field, that was interested in moving from a centralized storage system to a decentralized network storage system. The company wanted to implement rules for data management locally, in the various nodes of its storage network. They came to us with three sets of rules and asked us to test them, to see whether they were the right ones and which was best. But none of them was particularly good, because they came out of a centralized mind-set, which is all the company knew. What happens is that when you implement these rules locally at a node in the storage network, then for some configurations of traffic and document distribution over the network, an action taken at one node can have ripple effects throughout the entire network, creating congestion or outrageous latency delays for something as simple as a document request. Because centralized thinking was behind these rules, the effect of one node on another was quite strong. Without modeling it, though, it would be very hard for a human brain to predict this pathological behavior of the system, because there are so many pathways and so many influences in the network. By definition, what creates congestion is not a single message. It’s the fact that many, many packets and messages are traveling all over the network, and they are the result of many, many different things happening all over the network. You cannot reduce this to, oh, it is because A influences B but not C.
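
A toy version of the kind of decentralized storage model he describes might look like the sketch below, assuming a random peer-to-peer topology and a deliberately naive local rule (if you don’t hold the document, forward the request to a random neighbor). The topology, the rule and the numbers are hypothetical, not the client’s system; the point is that congestion and latency show up only as emergent, network-level measurements, never inside any single node’s rule.

import random
from collections import defaultdict

def build_network(num_nodes=30, degree=3, seed=1):
    """Toy storage network: each node is wired to a few random peers."""
    rng = random.Random(seed)
    neighbors = defaultdict(set)
    for n in range(num_nodes):
        while len(neighbors[n]) < degree:
            m = rng.randrange(num_nodes)
            if m != n:
                neighbors[n].add(m)
                neighbors[m].add(n)
    return neighbors

def run_requests(neighbors, num_docs=100, num_requests=2000, ttl=20, seed=2):
    """Every node applies the same purely local rule; hop counts and per-node
    load are emergent properties of many such local decisions interacting."""
    rng = random.Random(seed)
    nodes = list(neighbors)
    placement = {doc: rng.choice(nodes) for doc in range(num_docs)}
    load = defaultdict(int)                        # messages handled per node
    hops = []
    for _ in range(num_requests):
        doc, here = rng.randrange(num_docs), rng.choice(nodes)
        for hop in range(ttl):
            load[here] += 1
            if placement[doc] == here:             # found the document
                hops.append(hop)
                break
            here = rng.choice(list(neighbors[here]))   # the local forwarding rule
        else:
            hops.append(ttl)                       # request expired without finding it
    return hops, load

if __name__ == "__main__":
    net = build_network()
    hops, load = run_requests(net)
    print("average hops per request:", round(sum(hops) / len(hops), 1))
    print("busiest node handled", max(load.values()), "messages;",
          "median node handled", sorted(load.values())[len(load) // 2])

Testing competing rule sets, as the client asked Icosystem to do, then amounts to swapping out the forwarding line and comparing the emergent load and latency.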

But if it’s so complex and so difficult for humans to understand, how do you know that your simulation is an accurate representation of what you’re modeling?
Well, first of all, the very notion of an accurate representation is a slippery concept. A model is always a simplified description of the real world. And there is no such thing as an accurate model without reference to the question it’s trying to address. You have to know the question, the issue that you’re addressing with the model. You have to have very, very specific objectives. And once you’ve got that, you have to decide what level of description you’re going to use, what variables are going to be in your model, and so on. That’s where the art resides. And then you’re driven in your model-building process by what kind of data is available to validate the simulation. You’re not going to build an agent-based model, or any kind of model for that matter, without taking into account how you’re going to calibrate it and validate it against what you’re trying to model. And once you’re happy with the model’s accuracy, the question becomes: Can you trust how the model responds to things it has never seen? This is where human judgment plays a role. You test an intervention, which is a new business condition, maybe a new pricing strategy or a change in regulatory policy, and see how the model reacts. Then you have to ask an expert: Does it make sense to you? You don’t have real data to back it up. You’re using a model of something that exists to respond to something that has never existed before.
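
The calibration step he mentions can be as simple as sweeping candidate behavioral parameters and keeping the setting whose aggregate output best matches real-world data. A minimal sketch, with an invented stand-in model and an invented observed value, purely for illustration:

def calibrate(model, observed, candidate_params):
    """Keep the parameter setting whose simulated aggregate output is closest
    to the real-world measurement used for validation."""
    best, best_err = None, float("inf")
    for params in candidate_params:
        err = abs(model(**params) - observed)
        if err < best_err:
            best, best_err = params, err
    return best, best_err

if __name__ == "__main__":
    # Stand-in model: average speed falls as the dawdle probability rises.
    toy_model = lambda dawdle: 4.0 * (1.0 - dawdle)
    candidates = [{"dawdle": d / 10} for d in range(1, 10)]
    best, err = calibrate(toy_model, observed=2.1, candidate_params=candidates)
    print("best-fitting parameters:", best, "residual error:", round(err, 2))

The harder question in the interview, whether to trust the calibrated model on conditions it has never seen, has no such mechanical answer; that judgment stays with the domain expert.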

What kinds of business systems does agent-based modeling work best with?
It works best when the systems are composed of many constituent units that interact and where the behavior of the units can be described in simple terms. So it’s a situation where the complexity of the whole system emerges out of relatively simple behavior at the lowest level. If you have a system in which the behavior is already very complex, and all these behaviors put together produce something that’s even more complex, you might lose the power of the approach.

One of the most fascinating things about agent-based modeling is that often we’re simulating human behavior. Human behavior is extremely complex, but depending on the issue that you’re trying to address, you might actually be able to describe human behavior in very simple terms. Take human beings in a supermarket: they have a shopping basket, there’s a finite number of things they can do in the store, they’re very constrained. You might be able to characterize their behavior with 10 or 15 variables.
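
As a sketch of what “10 or 15 variables” might mean in practice, here is a hypothetical shopper agent. The specific attributes and value ranges are invented for illustration, not taken from any Icosystem model.

from dataclasses import dataclass, field
import random

@dataclass
class Shopper:
    """A supermarket shopper reduced to roughly a dozen behavioral variables."""
    budget: float               # money available for this trip
    basket_capacity: int        # items the basket can hold
    shopping_list_size: int     # planned purchases
    brand_loyalty: float        # 0 = always switches brands, 1 = never switches
    price_sensitivity: float    # how strongly discounts attract this agent
    impulse_buy_prob: float     # chance of grabbing an unplanned item per aisle
    walking_speed: float        # aisles visited per minute
    patience: float             # minutes willing to queue before abandoning
    aisle_coverage: float       # fraction of the store actually walked
    promotion_awareness: float  # probability of noticing a given promotion
    time_available: float       # minutes before the shopper must leave
    basket: list = field(default_factory=list)

def random_shopper(rng):
    """Draw one agent; a population of these, plus store layout and interaction
    rules (queues, stock-outs), is what an agent-based model actually runs."""
    return Shopper(
        budget=rng.uniform(20, 150),
        basket_capacity=rng.randint(5, 40),
        shopping_list_size=rng.randint(1, 25),
        brand_loyalty=rng.random(),
        price_sensitivity=rng.random(),
        impulse_buy_prob=rng.uniform(0.0, 0.3),
        walking_speed=rng.uniform(0.5, 2.0),
        patience=rng.uniform(2, 15),
        aisle_coverage=rng.uniform(0.2, 1.0),
        promotion_awareness=rng.random(),
        time_available=rng.uniform(10, 60),
    )

if __name__ == "__main__":
    rng = random.Random(0)
    population = [random_shopper(rng) for _ in range(1000)]
    print("one sample agent:", population[0])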