Soccer Robots Are Getting Smarter At RoboCup

Robot soccer players from Carnegie Mellon University competing in this month's RoboCup 2010 world championship in Singapore should be able to out-dribble their opponents, thanks to a new algorithm that helps them to predict the ball's behavior based on physics principles.
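
The article doesn't spell out how the prediction works, but the core idea of anticipating a rolling ball from physics can be sketched with a simple constant-deceleration model. The function and friction value below are illustrative assumptions, not Carnegie Mellon's actual code:

```python
import math

def predict_rolling_ball(pos, vel, t, friction_decel=0.5):
    """Predict a rolling ball's position and velocity t seconds from now,
    assuming it slows at a constant rate due to rolling friction.

    pos and vel are (x, y) tuples in meters and meters per second;
    friction_decel (m/s^2) is an illustrative value, not a measured one.
    """
    speed = math.hypot(vel[0], vel[1])
    if speed == 0.0:
        return pos, (0.0, 0.0)
    # The ball stops after speed / friction_decel seconds; don't integrate past that.
    t_eff = min(t, speed / friction_decel)
    # Distance covered under constant deceleration: s = v*t - 0.5*a*t^2.
    dist = speed * t_eff - 0.5 * friction_decel * t_eff ** 2
    ux, uy = vel[0] / speed, vel[1] / speed
    new_speed = speed - friction_decel * t_eff
    return (pos[0] + ux * dist, pos[1] + uy * dist), (ux * new_speed, uy * new_speed)
```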

That means that the CMDragons, the Carnegie Mellon team that competes in RoboCup's fast-paced Small-Size League, likely will be able to out-maneuver their opponents and find creative solutions to game situations that could even surprise their programmers. The physics-based planning algorithm might even enable the players to invent some new kicks. "Over the years, we have developed many successful teams of robot soccer players, but we believe that the physics-based planning algorithm is a particularly noteworthy accomplishment," said Manuela Veloso, professor of computer science and leader of Carnegie Mellon's two robot soccer teams.

"Past teams have drawn from a repertoire of pre-programmed behaviors to play their matches, planning mostly to avoid obstacles and acting with reactive strategies. To reach RoboCup's goal of creating robot teams that can compete with human teams, we need robots that can plan a strategy using models of their capabilities as well as the capabilities of others, and accurate predictions of the state of a constantly changing game," said Veloso, who is president of the International RoboCup Federation.

In addition to the Small-Size League team, which uses wheeled robots less than six inches high, Carnegie Mellon fields a Standard Platform League team that uses 22-inch-tall humanoid robots as players. Both teams will join more than 500 other teams with about 3,000 participants when they converge on Singapore June 19-25 for RoboCup 2010, the world's largest robotics and artificial intelligence event.

RoboCup includes five different robot soccer leagues, as well as competitions for search-and-rescue robots, for assistive robots and for students up to age 19. The CMDragons have been strong competitors at RoboCup, winning in 2006 and 2007 and finishing second in 2008. Last year, the team lost in the quarterfinals because of a programming glitch, but had dominated teams up to that point with the help of a preliminary version of the physics-based planning algorithm.

"Physics-based planning gives us an advantage when a robot is dribbling the ball and needs to make a tight turn, or any other instance that requires an awareness of the dynamics of the ball," said Stefan Zickler, a newly minted Ph.D. in computer science who developed the algorithm for his thesis. "Will the ball stick with me when I turn? How fast can I turn? These are questions that the robots previously could never answer."

The algorithm could enable the robots to concoct some new kicks, including bank shots, Zickler said. But the computational requirements for kick planning are greater than for dribbling, so limited computational power and time will keep this use to a minimum.
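
A bank shot bounces the ball off a surface such as a wall, and the usual geometric trick, assuming an idealized lossless bounce, is to reflect the target across the wall and aim where the line to that mirror image crosses it. The sketch below shows only that geometry, not Zickler's kick planner:

```python
def bank_shot_aim_point(ball, target, wall_y):
    """Aim point for a bank shot off a wall lying along the line y = wall_y.

    Reflect the target across the wall, then intersect the straight line from the
    ball to that mirror image with the wall; an ideal bounce (angle in equals
    angle out, no energy loss) then sends the ball on to the real target.
    """
    bx, by = ball
    tx, ty = target
    ty_mirror = 2 * wall_y - ty
    if ty_mirror == by:
        return None  # degenerate case: the line never crosses the wall
    s = (wall_y - by) / (ty_mirror - by)
    return (bx + s * (tx - bx), wall_y)
```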

Each Small-Size League team consists of five robots. The CMDragon robots include two kicking mechanisms -- one for flat kicks and another for chip shots. They also are equipped with a dribble bar that exerts backspin on the ball. Each team builds its own players; Michael Licitra, an engineer at Carnegie Mellon's National Robotics Engineering Center, built the CMDragons' highly capable robots. Like many robots in the league, the CMDragons have omni-directional wheels for tight, quick turns. In addition to physics-based planning, the CMDragons are preparing to use a more aggressive strategy than in previous years.
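
Omni-directional drive works by projecting the robot's desired motion onto each wheel's rolling direction. A rough sketch of that mapping follows; the mounting angles and radius are placeholder values, not the CMDragons' real geometry:

```python
import math

def omni_wheel_speeds(vx, vy, omega, wheel_angles_deg=(45, 135, 225, 315), robot_radius=0.09):
    """Map a desired body velocity (vx, vy in m/s, omega in rad/s) to the linear
    speed of each omni wheel, for wheels mounted tangentially at the given angles
    around a robot of the given radius.

    Each wheel's required speed is the projection of the body velocity onto its
    rolling direction plus the rotational term omega * robot_radius.
    """
    speeds = []
    for ang_deg in wheel_angles_deg:
        a = math.radians(ang_deg)
        speeds.append(-vx * math.sin(a) + vy * math.cos(a) + omega * robot_radius)
    return speeds
```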

"We've noticed that in our last few matches against strong teams, the ball has been on our side of the field way too much," Zickler said. "We need to be more opportunistic. When no better option is available, we may just take a shot at the goal even if we don't have a clear view of it."

"Figuring out how to get robots to coordinate with each other and to do so in environments with high uncertainty is one of the grand challenges facing artificial intelligence," Veloso said. "RoboCup is focusing the energies of many smart young minds on solving this problem, which ultimately will enable using distributed intelligence technology in the general physical world."

Source: Carnegie Mellon University

Soccer Robots Getting Smarter At RoboCup

Anyone who has ever bravely volunteered to coach a youth soccer team is familiar with the blank stares that ensue when trying to explain the offsides rule. The logic that combines moving players, the position of the ball and the timing of a pass is always a challenge for 10-year-old brains to grasp (let alone 40-year-old brains). Imagine trying to teach this rule to an inanimate, soccer-playing robot, along with all of the other rules, movements and strategies of the game.

Now researchers have developed an automated method of training robots by observing and copying human behavior.

Why are scientists teaching robots to play soccer? The short-term motivation is to win the annual RoboCup competition, the "World Cup" of robotic development. International teams build real robots that go head to head with no human control during the game. This year's competition is in Graz, Austria in June.

Here's the final match from the 2008 RoboCup:

[Video: RoboCup 2008 final match]

The long-term goal is to develop the underlying technologies to build more practical robots, including an offshoot called RoboCup Rescue that develops disaster search and rescue robotics.

In a study released in the March 2009 online edition of Expert Systems with Applications, titled "Programming Robosoccer agents by modeling human behavior," a team from Carlos III University of Madrid used a technique known as machine learning to teach a software agent several basic, low-level reactions to visual stimuli. "The objective of this research is to program a player, currently a virtual one, by observing the actions of a person playing in the simulated RoboCup league," said Ricardo Aler, lead author of the study.

In addition to actual robots, RoboCup also has a simulation software league that is more like a video game. In the study, human players were presented with simple game situations and were given a limited set of actions they could take. Their responses were recorded and used to program a "clone" agent with many if-then scenarios based on the human's behavior. By automating this learning process, the agent can build its own knowledge collection by observing many different game scenarios.
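
As a toy illustration of the observe-then-imitate idea (the study induces explicit if-then rules, whereas this sketch simply recalls the most similar recorded situation, and the state features and action names here are made up):

```python
import math

class CloneAgent:
    """Toy behavior-cloning sketch: record (state, action) pairs from a human
    demonstrator, then act by recalling the action whose recorded state is most
    similar to the current one. States are simple feature tuples,
    e.g. (ball_distance, ball_angle, goal_distance).
    """
    def __init__(self):
        self.demonstrations = []  # list of (state, action) pairs

    def observe(self, state, action):
        """Store one human decision: in this state, the human took this action."""
        self.demonstrations.append((state, action))

    def act(self, state):
        """Return the action recorded for the most similar observed state."""
        if not self.demonstrations:
            return "do_nothing"
        _, best_action = min(self.demonstrations,
                             key=lambda sa: math.dist(sa[0], state))
        return best_action

# Example: two demonstrated situations, then a new one the agent must handle.
agent = CloneAgent()
agent.observe((0.5, 10.0, 8.0), "dash_to_ball")
agent.observe((0.2, 2.0, 3.0), "shoot")
print(agent.act((0.25, 1.5, 3.5)))  # -> "shoot"
```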

The team has seen early success at learning rudimentary actions like moving towards the ball and choosing when to shoot, but the goal is to advance to higher-level cognition, including the dreaded offsides rule. Implanting the physical robots with this knowledge set will give them a richer set of actions to choose from when they are exposed to visual stimuli from the playing field.
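
To give a flavor of what such a higher-level rule looks like in code, here is a deliberately simplified offside test. It ignores several subtleties of the real law (being level with the defender, active involvement, set-piece exceptions) and is an illustration, not the project's implementation:

```python
def is_offside(receiver_x, ball_x, defender_xs, halfway_x=0.0):
    """Simplified offside test at the moment a pass is played, with the attack
    moving in the +x direction. The receiver is offside if they are in the
    opponents' half, ahead of the ball, and ahead of the second-to-last defender
    (usually the last outfield player, since the goalkeeper tends to be deepest).
    """
    if receiver_x <= halfway_x:
        return False  # no offside in your own half
    if receiver_x <= ball_x:
        return False  # level with or behind the ball is onside
    if len(defender_xs) < 2:
        return True   # fewer than two defenders back: treat as offside in this toy model
    second_last_defender_x = sorted(defender_xs)[-2]
    return receiver_x > second_last_defender_x
```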

Previous attempts at machine learning relied on the robot or software agent learning rules and reactions entirely on its own, as with neural networks. Aler's team hopes to jump-start the process by seeding the knowledge base with human players' choices. While current video soccer games like FIFA 2009 already use a detailed simulation engine, transferring that capability to the physical world of robots is the key challenge for future research.

RoboCup organizers are not shy about their ultimate tournament in the year 2050. According to their website, "By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rules of the FIFA, against the winner of the most recent World Cup."

That's right; they plan on the robots beating the current, human World Cup champions. "It's like what happened with the Deep Blue computer when it managed to beat Kasparov at chess in 1997," says Aler.

Maybe they can also build a robot linesman who can always get the offsides call correct!