Dartmouth Engineer - The Magazine of Thayer School of Engineering

Complex Systems

Tackling surprises in multi-component systems, from human behavior to robotic smarts

By Lee Michaelides and Karen Endicott
Cover art by Michael Austin

Don’t worry if you’re not sure what a complex system is. Even the people who study multi-component systems, such as the internet, communication networks, industrial processes, and interacting teams of robots, define complex systems in various ways.

Some see complex systems as having so many components that they are difficult or impossible to model. Others emphasize that interacting components produce unexpected emergent properties that make the overall system tough to model. Still others see complex systems as intricate interfaces between humans, nature, and technologies. One of Thayer’s three research focus areas (the other two are engineering in medicine and energy), complex systems provides room for creative new ways of thinking about the world around us. Here we look at some of the complex system challenges that Thayer professors are trying to understand and solve.

A COMPLEX GROUP: These Thayer professors don't shy away from complicated problems. Left to right, front row: Professors Minh Phan, Eugene Santos, Laura Ray, Reza Olfati-Saber; back row: Robert Graves, Mark Borsuk, George Cybenko. Photograph by John Sherman.

FLOCKS AND CARS

Professor Reza Olfati-Saber

Six little robots that resemble football-sized beetles scurry around the floor of an empty room in Thayer’s basement. They’re small, but Reza Olfati-Saber’s robots have a big job: simulating a transportation system that could change the way you drive — and maybe save your life.

Olfati-Saber likens the robots to a flock of birds. “A flock of birds is still capable of flocking, even if hunters shoot down some of the birds — nothing really happens,” he says. “We’re developing networked sensing and decision-making systems that are capable of working even if you start taking some of their parts away or wrecking some of their components.” Moreover, he says, “We are trying to benefit from mechanisms that birds use to avoid collisions to create safer transportation systems.”

The road to safer highways began with a new idea about math. In the mid-1990s, physicists started analyzing computer networks, insect swarms, and bird migrations and found a common element — networks. “The existence of this element of networks,” says Olfati-Saber, was a “new way of looking at nature.”

Olfati-Saber’s work with complex systems, which he prefers to call “networked systems,” dates back a decade, to his postdoctoral work at Caltech. “At the time, the Air Force was interested in creating unmanned vehicles that could coordinate with each other and collaborate for surveillance, reconnaissance, and combat operations,” he says. One of the early problems Olfati-Saber sought to solve was how a network of agents — the technical term for birds, people, robots, ants, or anything else that is part of the network — could reach a consensus when there is a difference of opinion. How is it that migrating ducks all go the same way and don’t crash in mid-air? The paper he wrote “turned out to be the basis for a new theory called ‘consensus theory,’ a remarkably simple framework for understanding networked systems, their behavior and properties,” he says. “It was applicable to social networks, biological systems, robotics, and engineered systems.”
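
The core of consensus theory fits in a few lines of code. In the basic protocol, each agent repeatedly nudges its own value (a heading, a speed, an opinion) toward the values of its network neighbors, and the group settles on a common answer. The following sketch, with an invented four-agent ring network, illustrates the idea rather than reproducing anything from Olfati-Saber’s papers:

```python
import numpy as np

# Basic consensus protocol: each agent moves toward its neighbors' values.
# With a connected graph and a small enough step size, every agent
# converges to the average of the initial values.

adjacency = np.array([        # four agents in a ring: who talks to whom
    [0, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [1, 0, 1, 0],
], dtype=float)

x = np.array([10.0, 0.0, 4.0, 2.0])   # initial "opinions" (e.g., headings)
epsilon = 0.2                          # step size, below 1/(max degree)

for _ in range(100):
    # each agent sums the differences with its neighbors and steps toward them
    x = x + epsilon * (adjacency @ x - adjacency.sum(axis=1) * x)

print(x)   # all four values approach the initial average, 4.0
```

Because each agent relies only on whoever happens to be its neighbor, losing a few agents leaves the protocol working as long as the network stays connected, which is the fault tolerance behind Olfati-Saber’s flock-of-birds analogy.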

In other words, whether you are studying a flock of birds, a swarm of ants, or a nation of Deadheads, “the math is the same for all of them — even for bacterial swarming. It is the same for traffic flows, for cars, and for pedestrians. One of the main features of systems science is to take a group of seemingly different applications or phenomena and create a unifying framework that explains all of them. That is something we emphasize at Thayer,” says Olfati-Saber. “I’m trying to use all these theories, including the flocking theory, to create the next generation of transportation system.”

Olfati-Saber says that his idea for safer highways differs from the safety systems now being pioneered by BMW and Mercedes that use sensors to warn drivers about potential collisions. “Our goal,” he says, “is to communicate between vehicles.”

How would it work? “Cars could use on-board sensors and car-to-car communication to detect an impending collision, and the system could take over and react before the driver does,” he says. The system would also help avoid traffic jams. One of its capabilities is estimating congestion on a given road. A central server, aware of all the cars in an area, could then tell your car about a faster alternative route.
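
The core computation is easy to sketch. In this toy example, which is not the lab’s actual algorithm, two cars exchange position and velocity, and each predicts their closest approach under constant velocity to decide whether to intervene:

```python
import numpy as np

# Toy car-to-car collision anticipation: given exchanged positions and
# velocities, predict the moment of closest approach and how close it is.

def time_of_closest_approach(p1, v1, p2, v2):
    """When are two constant-velocity cars closest, and how close?"""
    dp, dv = p2 - p1, v2 - v1
    speed_sq = dv @ dv
    t = 0.0 if speed_sq == 0 else max(0.0, -(dp @ dv) / speed_sq)
    distance = np.linalg.norm(dp + dv * t)
    return t, distance

# Car A heading east; car B approaching the same intersection from the south
t, d = time_of_closest_approach(
    np.array([0.0, 0.0]),    np.array([15.0, 0.0]),   # position (m), velocity (m/s)
    np.array([60.0, -60.0]), np.array([0.0, 15.0]),
)
if d < 2.0:   # closer than a car width: intervene before the driver can
    print(f"brake: predicted near-collision in {t:.1f} s")
```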

Olfati-Saber and his team are now using those little robots to work out the system details. “The main computing challenge is to keep track of all the other moving objects around a car,” he observes. The task will become more complex soon: his six robots will be joined by 15 more. The professor is trying to add more grad students or post-docs to his team as well. “Many of our innovative team coordination methods are inspired by nature,” he says. “However, it takes the combination of a sophisticated set of tools from control theory, communications, physics, and computer science to make this inspiration a reality.”

SMARTER ROBOTS

Professor Laura Ray

Laura Ray is trying to model a complex system we take for granted: how the human brain works. Then she wants to use that model to create robots that can make decisions on their own, recognize changing environments, and work in teams.

Autonomous robots are still a work in progress. “Robots used in the military and in first response are operated by a joystick or some keypad. They’re not smart at all,” says Ray. “We want to move away from that one robot, one operator paradigm.” But before autonomous robots could substitute for humans in war zones or other dangerous situations, researchers like Ray need to figure out what competencies the robots need — and how to supply them. “Say a team of robots is on patrol. Maybe one robot has to patrol on water and another is patrolling on a road, and maybe they can only communicate two or three times a day,” says Ray. “What kind of information should they communicate that makes them more effective? How do they model behavior of another robot when they don’t see that robot over time?”

For answers, Ray turned to neuroscience.

Having spent a sabbatical auditing neuroscience courses at Dartmouth, Ray now collaborates with a multidisciplinary team, including Professor Richard Granger of the Department of Psychological and Brain Sciences (he’s also an adjunct professor at Thayer). Building on his work on how the brain represents information, Ray says, “We’re trying to model decision processes the way the brain would model them, which is by creating some kind of hierarchy. The thalamo-cortical circuits in the brain take the inputs and cluster and sequence them. You see a flower and think flower, then red flower, then rose. This kind of representation makes it easy to remember things. We’re trying to apply this concept to problem solving with robots, and so far it’s been pretty effective.”
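
A toy version of that coarse-to-fine idea can be written as a nested lookup: classify the observation at the top level first, then refine within the chosen branch. The prototype vectors below are invented purely for illustration:

```python
import numpy as np

# Toy coarse-to-fine recognition in the spirit of the hierarchy Ray
# describes: decide the coarse category first, then refine within it
# ("flower" -> "red flower" -> "rose"). Each entry pairs a prototype
# feature vector with its sub-hierarchy.

prototypes = {
    "flower": (np.array([1.0, 0.5, 0.5]), {
        "red flower": (np.array([1.0, 0.9, 0.1]), {
            "rose":  (np.array([1.0, 0.9, 0.2]), {}),
            "tulip": (np.array([1.0, 0.8, 0.0]), {}),
        }),
        "yellow flower": (np.array([1.0, 0.1, 0.9]), {}),
    }),
    "rock": (np.array([0.0, 0.2, 0.2]), {}),
}

def classify(x, level):
    """Descend the hierarchy, picking the nearest prototype at each step."""
    labels = []
    while level:
        name = min(level, key=lambda k: np.linalg.norm(x - level[k][0]))
        labels.append(name)
        level = level[name][1]     # refine within the chosen branch
    return labels

print(classify(np.array([1.0, 0.85, 0.15]), prototypes))
# -> ['flower', 'red flower', 'rose']
```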

Ray is also drawing on studies of social cognition in other primates. “Groups of mammals can do some pretty spectacular problem solving in teams,” she says. “One example is chimps that hunt monkeys in forest canopies. The chimps are faster on the ground, the monkeys are faster in the canopy. So the chimps have a number of roles. There’s a chaser who climbs a tree under the monkeys and starts chasing them. The ambusher on the ground runs forward and climbs a tree in the direction of the chase, and blockers try to funnel the action. With just four chimps they’re able to hunt.

“What social cognitive skills can we draw from this? The chaser is usually the youngest and least experienced, the blockers are more experienced, and the ambusher is usually older and has the most experience. The teams remain together, even when assembling into larger teams. The dominance hierarchies among teams and between teams are something we’re trying to use in our robots. Instead of every robot acting as an individual in a large area, they form sub-teams to divide and conquer.”

But robots are still a long way from being able to do many things that are easy for humans. “We can go out to the Green in winter, when it’s covered in snow, or summer, when it’s green, and know where we are. For a robot that’s a really hard task to know this is the same place but a different season,” says Ray. “We’re currently trying to program competencies, so a robot with a camera can actually see some scene and be able to recognize that it’s similar to some other scene that it has experienced.”

Professor Laura Ray is trying to get autonomous robots to recognize where they are. Photograph by Kathryn LoConte.

With modeling of cognition still in its infancy, Ray would like to see more interaction between engineers and neuroscientists, maybe even a joint major at Dartmouth. “If you don’t speak the language it’s hard to work across the disciplines,” she says.

Ultimately, making robots think more like humans — processing numerous inputs simultaneously, making inferences — will necessitate new methods of computing. “The von Neumann computer with its CPU and memory components is too compartmentalized,” Ray says. She has her eye on “field programmable analog arrays that are completely different computing tools to replicate dynamic systems. I think they will close the gap between understanding how dynamic systems work and how we model them in electronics. It will be more like having a circuit that behaves identically to the dynamic processes. Then you can start stringing these circuits together and see what comes out of it.”

ROBOTS THAT LEARN

Professor Minh Phan

On a computer screen in his office, Minh Phan watches dots representing a group of interacting robots that assemble themselves into an orderly V-formation. As they communicate their positions to one another, the robots are controlled by an algorithm that tells them to split into three small groups and reassemble afterwards. Next Phan watches robots that have a harder job: they have to figure out what they’re supposed to do. “In the beginning they are swirling around because they’re learning the algorithms themselves,” he says. Guided by strategies that only set broad parameters, the robots have to fine-tune their own control algorithms. “When an obstacle shows up, they have to use the algorithms they learned to evade the obstacle and regroup afterward,” he says.

Phan works on the theory side of robotic swarm control. “I’m showing that model predictive control, which is well understood for a single system, can be expanded to handle a complex system with many interacting components,” he says. Future applications that depend on such control include space exploration and surveillance work. “Rather than using one expensive robot, you could use a large number of less expensive robots,” he says. “If some of the robots don’t make it you can still accomplish the task.”
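
In spirit, the extension works like the sketch below, in which each robot tests a handful of candidate moves over a short prediction horizon and keeps whichever best trades progress toward its formation slot against staying clear of teammates. This is a bare-bones illustration of receding-horizon control, not Phan’s actual formulation:

```python
import numpy as np
from itertools import product

# Candidate moves per step: unit steps in each direction, or stay put
CANDIDATES = [np.array(v) for v in product([-1.0, 0.0, 1.0], repeat=2)]

def mpc_step(positions, targets, horizon=3, safe_dist=1.0):
    """One receding-horizon step: pick each robot's best candidate move."""
    moves = []
    for i, (p, t) in enumerate(zip(positions, targets)):
        def cost(v):
            c, q = 0.0, p.copy()
            for _ in range(horizon):
                q = q + v                                # predict forward
                c += np.linalg.norm(q - t)               # distance to slot
                for j, other in enumerate(positions):    # crowding penalty
                    if j != i and np.linalg.norm(q - other) < safe_dist:
                        c += 100.0
            return c
        moves.append(min(CANDIDATES, key=cost))
    return [p + v for p, v in zip(positions, moves)]

# three robots converging on invented V-formation slots
positions = [np.array([0.0, 0.0]), np.array([5.0, 5.0]), np.array([-4.0, 3.0])]
targets   = [np.array([0.0, 0.0]), np.array([2.0, -2.0]), np.array([-2.0, -2.0])]
for _ in range(10):
    positions = mpc_step(positions, targets)
print([p.round(1) for p in positions])   # each robot ends near its slot
```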

SOUND AND THE CITY

Professor Minh Phan

Minh Phan is working on a noisy complex-systems challenge: modeling how sound waves travel through a complicated environment, such as a city core. “Sound can propagate through or around buildings and bounce off other buildings. It’s a complicated path,” he says. Modeling this multi-path propagation is computationally expensive. “Simulating just 1.6 seconds of sound propagating through a dense city block with high-rise buildings takes more than 11 hours on a high-performance 256-CPU computer. If you have to do this hundreds of times with different source data, it’s too much,” he says.

He uses system identification to simplify the process. “We run the supercomputer simulation once and collect data from that simulation, then process the data to arrive at a mathematical model that captures the physics of that specific environment. The model can be used to simulate the propagation of another sound source from that location. It is a simple model of a complicated model,” he says. “My research group was able to develop techniques that take data from one run of the supercomputer and quickly arrive at a high-fidelity reduced-order model that can run on a laptop in minutes.”
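
One classical route to such a reduced-order model is the Eigensystem Realization Algorithm, which factors a Hankel matrix of recorded response samples into a small state-space model. The sketch below, with a made-up one-mode response standing in for supercomputer data, shows the mechanics; Phan’s actual techniques are more sophisticated:

```python
import numpy as np

# Fit a small linear model x[k+1] = A x[k] + B u[k], y[k] = C x[k]
# from impulse-response samples of one expensive simulation run.

def era(markov, order):
    """Eigensystem Realization Algorithm from impulse-response samples."""
    n = len(markov) // 2
    H0 = np.array([[markov[i + j]     for j in range(n)] for i in range(n)])
    H1 = np.array([[markov[i + j + 1] for j in range(n)] for i in range(n)])
    U, s, Vt = np.linalg.svd(H0)
    U, s, Vt = U[:, :order], np.sqrt(s[:order]), Vt[:order]
    A = np.diag(1 / s) @ U.T @ H1 @ Vt.T @ np.diag(1 / s)
    B = (np.diag(s) @ Vt)[:, :1]
    C = (U @ np.diag(s))[:1, :]
    return A, B, C

# Pretend this decaying response came from the supercomputer simulation:
true_pole = 0.9
markov = [true_pole**k for k in range(20)]

A, B, C = era(markov, order=1)
print(C @ np.linalg.matrix_power(A, 5) @ B, true_pole**5)   # model vs. data
```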

The sound propagation model has military, security, and surveillance applications, such as tracing the source of gunshots or bomb blasts. “If we know the dynamics of the forward model, we can produce an inverse model to recover the source signal and its location,” says Phan. “My interest,” he adds, “is in the creative side of research: inventing unconventional methods to handle unconventional problems.”
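
The inverse step can be illustrated with simple frequency-domain deconvolution: if the forward impulse response is known, dividing it out of the recorded signal (with a damping term for stability) recovers the source waveform. Real urban acoustics would demand far more care with noise and model error; the numbers here are synthetic:

```python
import numpy as np

# Toy inverse model: the microphone hears the source convolved with the
# environment's impulse response; deconvolve to recover the source.

rng = np.random.default_rng(0)
source = rng.standard_normal(256)                # unknown gunshot-like signal
impulse = np.array([1.0, 0.0, 0.5, 0.0, 0.25])   # direct path plus echoes
recorded = np.convolve(source, impulse)          # what the microphone hears

# invert in the frequency domain; the small damping term keeps it stable
n = len(recorded)
H = np.fft.rfft(impulse, n)
Y = np.fft.rfft(recorded)
S = Y * np.conj(H) / (np.abs(H) ** 2 + 1e-6)
recovered = np.fft.irfft(S, n)[: len(source)]

print(np.max(np.abs(recovered - source)))        # small reconstruction error
```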

WHAT TO DO ABOUT CLIMATE CHANGE

Professor Mark Borsuk

Here’s what keeps Mark Borsuk awake at night: Scientists cannot rule out the possibility that climate change could become so severe that human life is no longer sustainable. Traditional economic theory for making decisions under uncertainty recommends doing everything possible to avoid the risk of catastrophic outcomes. How much would people have to change their lifestyles to avert the collapse of humanity? If the threat doesn’t seem imminent or likely, what would induce them to change to avoid risking calamity for future generations?

Borsuk is developing integrative assessment models to get a handle on these kinds of uncertainties. “The models represent how the Earth is likely to respond to climate change — global warming, rising sea levels, animal and plant extinctions — and what the economic consequences are likely to be,” he says. “These models allow us to look at peoples’ attitudes toward risk, how uncertainty is represented, how scientists think about climate damage — and what all that can tell us about what we should do.”

One theoretical exploration takes a top-down approach. “If there were a global benevolent dictator,” he says, “what should that person do to achieve the right balance between cost and benefits with regard to climate change? Obviously, this scenario is unrealistic, but in theory, if society as a whole could do something that was economically optimal, what would that thing be? It gives you perhaps an idealized target for where international negotiations should be headed.”

Borsuk also takes a bottom-up approach. “It’s an agent-based model: rather than a global dictator, we’re looking at the various stakeholders who are emitting global warming gasses, how they currently make decisions that affect the climate, and what kind of incentives could be put in place to drive them toward behavior that is sustainable and socially optimal,” he says. “Our basic question is: What kinds of institutional structures at the international and domestic levels can help promote a process of negotiation and feedback that is likely to lead in a direction that’s globally optimal?”

His model “will have five or six agents representing the interests of the West, developing nations, former Soviet nations, and other regions of the world to see how different negotiation scenarios might play out,” he explains.
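
A stripped-down caricature of such an agent-based model shows the mechanics: each regional agent picks the abatement level that maximizes its own payoff, and an incentive payment is what shifts everyone’s choice. All cost and benefit numbers below are invented, not drawn from Borsuk’s model:

```python
# Each region's payoff: (benefit + incentive) * abatement - cost * abatement^2.
# With linear benefits, each agent's best choice ignores what others do --
# the free-rider problem -- so an incentive is what raises abatement.

regions = {"West": 1.0, "Developing": 0.6, "Former Soviet": 0.4}  # cost factors
benefit = 0.5   # each region's share of avoided global damage, per unit abated

def chosen_abatement(cost_factor, incentive):
    # maximize (benefit + incentive) * a - cost_factor * a**2 over a
    return (benefit + incentive) / (2 * cost_factor)

for incentive in (0.0, 0.8):   # no policy vs. a carbon-price-like payment
    choices = {r: round(chosen_abatement(c, incentive), 2)
               for r, c in regions.items()}
    print(incentive, choices)  # low-cost regions abate more; incentives raise all
```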

Then the real challenge begins. “Once governments decide to do something about climate change, how do they get people to change their attitudes and habits?” he says. “We think that at least in the United States, where people have a large degree of freedom in their behavior and choices, getting to any sort of target with respect to global-warming gas emissions is going to entail incentives that allow people to draw upon things they already find important.” His model will try to identify such incentives. “Will it require advertising or some sort of scientific breakthrough?” he asks. For answers, stay tuned.

AGENT-BASED FULFILLMENT SYSTEMS

Professor Robert Graves

Anyone who has ever checked luggage at an airport knows how frustrating it is when your suitcase goes to Miami but you’re in New York. Or when you mail-order a medium-size green T-shirt and get a small pink one instead. These are the kinds of widely used complex systems that Robert Graves tries to improve.

Warehouses and distribution systems are a maze of complexity. Photograph courtesy of Professor Robert Graves.

In a recent project, Graves, co-director of Thayer’s Master of Engineering Management program, worked with Vanderlande Industries, a Dutch company that makes automatic storage and order retrieval systems. Getting the right item from a vast, dense warehouse and ensuring that it reaches the right customer in the right timeframe is no easy task, especially when thousands of such fulfillments are on a daily docket. A disturbance in one part of the system — perhaps a malfunctioning conveyor or a worker on a break — may slow the entire system or shut it down altogether.

“Users of the systems complained about stoppages that necessitated rebooting the whole system. That’s lost production time,” says Graves. “The challenge for us was: Could we change the control philosophy and system drivers to meet the goal of being more fault-tolerant? Software controls the machines — the cranes in the warehouse, the conveyors, etc. How do you make everything operate as a team?”

Model for a smart fulfillment system. Image courtesy of Professor Robert Graves.

The approach Graves took was to replace the control system with intelligent, interacting agent-based controls at all parts of the system, including work stations, totes, and conveyor links. “The agents will accomplish individual, cell-wide, and system-wide goals,” says Graves. “You give a set of rules to an agent, and the agent decides which to use, based on conditions at that moment. For example, a tote arrives at the dispatch station section and announces, ‘Here I am,’ asking work stations to bid on where it should go. If the conveyor queue to a given work station is full, then that work station puts in a low bid. If the work station is empty, it puts in a high bid to attract the work. It’s a very flexible and responsive system.”
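
A minimal sketch of that bidding logic, with invented names and capacities rather than Vanderlande’s actual protocol, might look like this:

```python
# Contract-net-style dispatch: a tote announces itself, workstations bid
# based on how full their inbound queues are, and the highest bidder wins.

class WorkStation:
    def __init__(self, name, queue_capacity):
        self.name = name
        self.capacity = queue_capacity
        self.queue = []

    def bid(self):
        # an empty station bids high to attract work; a full one bids low
        return self.capacity - len(self.queue)

    def accept(self, tote):
        self.queue.append(tote)

def dispatch(tote, stations):
    winner = max(stations, key=lambda s: s.bid())
    winner.accept(tote)
    return winner.name

stations = [WorkStation("WS1", 5), WorkStation("WS2", 5)]
for tote in ["tote-%02d" % i for i in range(6)]:
    print(tote, "->", dispatch(tote, stations))   # work spreads across stations
```

Because each agent bids on current conditions, a stalled station simply stops winning work instead of halting the line, which is where the fault tolerance comes from.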

A demonstration of the agent-based system showed its efficiencies. Whereas the old system took six hours to fulfill 1,000 orders, the agent-based system completed the same task in four hours. In another run of the same 1,000 orders, operators were taken out of the system for 30 minutes — long enough to upset it — and the agent-based system experienced less disturbance, continued to perform, and recovered faster than the old system. “These improvements were achieved with modifications to the least expensive part of the system: the software,” says Graves.

HUMAN INTELLIGENCE

Professor Eugene Santos

Eugene Santos wants to understand the nature of human intelligence. To do so, he is trying to unravel the complex system of human behaviors. “I look at human behavior as: How do people make their decisions and take action? I want to explain the basis for why people do what they do.”

Santos examines a wide range of factors that influence behavior, including beliefs and experiences. “I want to tie those influences together,” he says. Behavior isn’t just a matter of one influence per action. “It’s more complex than that,” he says. “It’s a whole hierarchy or lattice of interactions. I want to figure out how to build such a lattice.”

Santos uses the theory of probability to assess, quantify, and rank degrees of influence. “Our influences aren’t deterministic,” he says. “Just because I have a cultural experience, you can’t say this cultural experience will always produce a particular outcome. But influence can make an outcome more or less likely, so I try to capture those elements of what’s more likely and what’s less likely. That gives me a baseline. Then once I see an action, I can go back through the influence structure, including what they’ve told me about their beliefs, their demographics, their personal history, to see how they got from their background to their final action.”
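
At its simplest, that backward step is an application of Bayes’ rule: start with prior probabilities over possible influences, weight each by how likely it makes the observed action, and renormalize. The sketch below uses invented numbers to show the mechanics:

```python
# Toy Bayesian influence model: priors over possible influences, a
# likelihood for how strongly each influence promotes the observed
# action, and a posterior that ranks the explanations.

priors = {"economic concern": 0.5, "cultural identity": 0.3, "peer pressure": 0.2}
likelihood = {           # P(observed action | dominant influence)
    "economic concern": 0.7,
    "cultural identity": 0.4,
    "peer pressure": 0.2,
}

# Bayes' rule: P(influence | action) is proportional to
# P(action | influence) * P(influence)
joint = {k: priors[k] * likelihood[k] for k in priors}
total = sum(joint.values())
posterior = {k: round(v / total, 2) for k, v in joint.items()}
print(posterior)   # economic concern emerges as the likeliest explanation
```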

Sounds like reverse engineering — because it is. “At this point the only way to understand a complex system is to reverse engineer it. To understand the system is to dissect it,” says Santos.

To test his way of quantifying and ranking influences, he assessed voters’ shifts from Hillary Clinton to Barack Obama in the 2008 Democratic primary. “We looked at race, economics, whether they’re conservative, et cetera,” Santos explains. “We had these pieces of influences and key moments from the campaign, as when the black ministers endorsed Hillary Clinton. We also took information from the political pundits at the time. Our model was able to match the polling trends. Then we looked at precise points to see why Hillary was trending up here and down there, Obama was trending up here and down there. With quantitative models you can go in and say which values, perceptions, or events contributed to change. When we compared our results to post-election analyses, we were able to match up with them.”

Santos’ search for ways to model thought and behavior is leading him to rethink logic and rationality. “As I explore reasoning, I have to go beyond logical reasoning. It doesn’t explain everything,” he says. “People used to say people are irrational, and that’s why you can’t model it. No. They’re misusing the term irrational. Rational to me means there is some basis, some axioms that all your behaviors follow from. Then irrational can be some sort of randomness or unpredictability. I’m trying to figure out how to capture these factors.”

Santos points out that sometimes the seemingly irrational makes sense in its own cultural context. Citing the work of a colleague in Connecticut who studies the Palestinian-Israeli conflict, Santos looks at what drives suicide bombers. “The usual explanation is that they’re attacking the West, the great Satan. But in at least one faction, people want to make the point that Hamas and the PLO have too much power. The reason the faction conducted suicide attacks is to show that there’s competition for leadership. All of a sudden we have reasons that have nothing to do with the West. It’s rational. My goal would be to take something like that, which doesn’t seem to fit, and realize my model is missing something, which means I can now try to fill in those pieces.”

His quest is hard, but Santos likes it that way. “It’s the challenge,” he says, “that keeps me going.”

DIGITAL MODELS OF HUMAN BEHAVIOR

Professor George Cybenko

To most people, the internet is a mind-boggling resource for finding information, shopping, paying bills, staying in touch with friends, and finding entertainment. To George Cybenko, the internet is a vast storehouse of data about human behavior that can reveal everything from buying habits to hacker and other malicious activities — if you have the right kind of analytical tools.

Cybenko’s development of such tools has evolved along with the continued growth of computational power and scope. Long involved in cyber security, he and colleagues recently developed a new approach — Process Query Systems (PQS) — to assess huge amounts of online information and data collected by acoustical, video, seismic, and other monitoring means. “We now have a tremendous amount of data coming at us; the question remains what to do with it,” he and Vincent Berk, who is now an assistant professor at Thayer, wrote in a 2007 IEEE paper. Calling PQS “a new algorithmic and software paradigm,” Cybenko and Berk say that it “models the dynamic processes that exist within a social network, not merely the static structural artifacts — such as who knows whom — of such a network. PQS can use the temporal nature of communications and transactions to extract processes.” The method provides a way to watch for deviations from normal interaction or activities, which may indicate malevolent and other kinds of anomalous organizational and individual behavior.
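
The flavor of process detection can be conveyed with a toy model: learn which event-to-event transitions are normal from past activity, then score new sequences by how probable the model finds their transitions. This illustrates the general idea only, not the actual PQS algorithms:

```python
from collections import defaultdict

# Learn normal event-transition probabilities, then score new sequences;
# improbable transitions flag potentially anomalous behavior.

def learn_transitions(sequences):
    counts = defaultdict(lambda: defaultdict(int))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1
    return {a: {b: n / sum(nxt.values()) for b, n in nxt.items()}
            for a, nxt in counts.items()}

normal = learn_transitions([
    ["login", "read_mail", "browse", "logout"],
    ["login", "browse", "read_mail", "logout"],
])

def likelihood(seq, model, floor=1e-3):
    # product of transition probabilities; unseen transitions get a tiny floor
    p = 1.0
    for a, b in zip(seq, seq[1:]):
        p *= model.get(a, {}).get(b, floor)
    return p

print(likelihood(["login", "read_mail", "logout"], normal))    # plausible
print(likelihood(["login", "download_db", "logout"], normal))  # anomalous: tiny score
```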

Cybenko has also been working with Professor Eugene Santos on a growing area of social modeling called human terrain or human geography. Cybenko compares it to the kind of data available on Google Earth. “Google Earth shows you a lot of things about the imagery of the surface and the roads and the geographic features but it doesn’t show you anything about the human layer,” he says. “What kind of people live there? What do they do? This is sort of human geography digitalized.”

The goal, Cybenko explains, is to build quantitative digital models of human behavior. Applications include computer security, insider threat detection, sensor network analysis, finance, social network analysis, marketing, political campaigning, and the military. “In areas of strife,” says Cybenko, “you need to know what the population composition is, what the tribal relationships are if you are in a tribal area, and what the objectives and sentiments of different communities and groups are.”

Cybenko is working with a Thayer team on a crucial step in human terrain research: developing a new computer language, Human Behavioral Modeling Language (HBML), to describe behaviors and processes. “Our current research suggests that a common framework for the systematic analysis of behaviors of people, networks, and engineered systems is both possible and much needed,” team members wrote in a 2008 paper. HBML is a promising beginning, they add, for supporting “large-scale computational systems’ behavioral modeling and analysis across a variety of domains.”

According to Cybenko’s team, Human Behavioral Modeling Language captures relationships and behaviors from dense data streams. Image courtesy of Professor George Cybenko.

—Lee Michaelides is a contributing editor and Karen Endicott is the editor of Dartmouth Engineer.
