Artificial Intelligence Gets A Kick From Soccer Androids

The world's best players may soon face a new challenge from football-playing robots, which their creators claim will one day be able to play against, and beat, a human team. According to new research in WIREs Cognitive Science, building robots to play football is driving the development of artificial intelligence and robotic technology that can be used in roles including search and rescue and home help.

The author, Claude Sammut, from the ARC Centre of Excellence for Autonomous Systems in Sydney, reviewed the technology demonstrated at the RoboCup international robot soccer competition, which this year took place in Singapore. Competitions have become a popular way of motivating innovation in robotics and provide teams of scientists with a means of comparing and testing new methods of programming artificial intelligence (AI).

"Football is a useful task for scientists developing robotic artificial intelligence because it requires the robot to perceive its environment, to use its sensors to build a model of that environment and then use that data to reason and take appropriate actions," said Sammut. "On a football pitch that environment is rapidly changing and unpredictable requiring a robot to swiftly perceive, reason, act and interact accordingly."

As with human players, football also demands communication and cooperation between robotic players and, crucially, requires the ability to learn, as teams adjust their tactics to better take on their opponents.

Aside from football, the competition also includes leagues for urban search and rescue and for robotic home helpers, which take place in arenas simulating collapsed buildings and residential homes, revealing the multiple uses of this technology.

While a football pitch layout is structured and known in advance, a search and rescue environment is highly unstructured, so the competition's rescue arena presents developers with a new set of challenges. On the football pitch the robots are able to localize and orientate themselves by recognising landmarks such as the goal posts, yet in a rescue situation such localization is extremely difficult, meaning that the robot has to map its environment while simultaneously reacting and interacting with its surroundings.
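
As an illustration of the landmark-based localization mentioned above (a sketch only, not any team's actual code), a robot's position and heading on the pitch can be recovered from two landmarks with known field coordinates, such as the two goal posts. The function and coordinates below are invented for the example.

```python
# Sketch: recover a robot's pose from two landmarks with known field positions.
import numpy as np

def localize_from_two_landmarks(world_a, world_b, robot_a, robot_b):
    """world_*: landmark (x, y) on the field; robot_*: the same landmarks
    as measured by the robot's sensors, expressed in the robot's own frame."""
    world_a, world_b = np.asarray(world_a, float), np.asarray(world_b, float)
    robot_a, robot_b = np.asarray(robot_a, float), np.asarray(robot_b, float)

    # The segment between the landmarks is the same physical segment seen in
    # two frames; the angle between the two versions is the robot's heading.
    dw, dr = world_b - world_a, robot_b - robot_a
    heading = np.arctan2(dw[1], dw[0]) - np.arctan2(dr[1], dr[0])

    # Rotate the measured landmark back into world coordinates and subtract.
    rot = np.array([[np.cos(heading), -np.sin(heading)],
                    [np.sin(heading),  np.cos(heading)]])
    position = world_a - rot @ robot_a
    return position, heading

# Goal posts at (4.5, 0.7) and (4.5, -0.7); the robot sees them at (2.0, 0.7)
# and (2.0, -0.7) in its own frame, so it sits at (2.5, 0.0) facing down the field.
pos, theta = localize_from_two_landmarks((4.5, 0.7), (4.5, -0.7),
                                         (2.0, 0.7), (2.0, -0.7))
print(pos, np.degrees(theta))
```

In the rescue arena no such pre-surveyed landmarks exist, which is why the robot must build its map and estimate its own pose within it at the same time, the simultaneous localization and mapping (SLAM) problem.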

In the home-help competitions the robots are programmed to recognise appliances and landmarks that are common in most homes, but in addition to orientating themselves they must also react to and interact with humans.

As the robotic technology continues to develop, the rules of the competitions are altered and made harder to encourage innovation. The organisers' aim is that this will drive the technology to a level where the football-playing robots could challenge a human team.

"In 1968 John McCarthy and Donald Michie made a bet with chess champion David Levy that within 10 years a computer program could beat him," concluded Sammut. "It took a bit longer but eventually such programs came into being. It is in that same spirit of a great challenge that RoboCup aims, by the year 2050, to develop a team of fully autonomous robots that can win against the human world soccer champion team."

So while, for the moment, football players can focus on beating each other to lift silverware, tomorrow they may be facing a very different challenge.

Source: Claude Sammut. Robot soccer. Wiley Interdisciplinary Reviews: Cognitive Science, 2010; DOI: 10.1002/wcs.86. Via Wiley-Blackwell.


Soccer Robots Are Getting Smarter At RoboCup

Robot soccer players from Carnegie Mellon University competing in this month's RoboCup 2010 world championship in Singapore should be able to out-dribble their opponents, thanks to a new algorithm that helps them to predict the ball's behavior based on physics principles.
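
As a rough illustration of what predicting the ball's behaviour from physics principles can mean (a sketch under simple assumptions, not the CMDragons' actual planner), the snippet below extrapolates a rolling ball's position under constant rolling-friction deceleration; the friction coefficient is an invented example value.

```python
# Sketch: predict where a free-rolling ball will be after t seconds, assuming
# it decelerates uniformly due to rolling friction (illustrative values only).
import math

MU_ROLL = 0.3    # assumed rolling-friction coefficient (example value)
G = 9.81         # gravitational acceleration, m/s^2

def predict_ball(pos, vel, t):
    """pos, vel: (x, y) in metres and metres/second; position after t seconds."""
    speed = math.hypot(*vel)
    if speed == 0.0:
        return pos
    decel = MU_ROLL * G
    t = min(t, speed / decel)                # the ball stops once friction wins
    dist = speed * t - 0.5 * decel * t * t   # distance travelled along its line of motion
    ux, uy = vel[0] / speed, vel[1] / speed
    return (pos[0] + ux * dist, pos[1] + uy * dist)

# A ball rolling at 2 m/s stops after roughly 0.68 s, about 0.68 m downfield.
print(predict_ball((0.0, 0.0), (2.0, 0.0), 5.0))
```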

That means that the CMDragons, the Carnegie Mellon team that competes in RoboCup's fast-paced Small-Size League, likely will be able to out-maneuver their opponents and find creative solutions to game situations that could even surprise their programmers. It's possible that the physics-based planning algorithm also might enable the players to invent some new kicks. "Over the years, we have developed many successful teams of robot soccer players, but we believe that the physics-based planning algorithm is a particularly noteworthy accomplishment," said Manuela Veloso, professor of computer science and leader of Carnegie Mellon's two robot soccer teams.

"Past teams have drawn from a repertoire of pre-programmed behaviors to play their matches, planning mostly to avoid obstacles and acting with reactive strategies. To reach RoboCup's goal of creating robot teams that can compete with human teams, we need robots that can plan a strategy using models of their capabilities as well as the capabilities of others, and accurate predictions of the state of a constantly changing game," said Veloso, who is president of the International RoboCup Federation.

In addition to the Small-Size League team, which uses wheeled robots less than six inches high, Carnegie Mellon fields a Standard Platform League team that uses 22-inch-tall humanoid robots as players. Both teams will join more than 500 other teams with about 3,000 participants when they converge on Singapore June 19-25 for RoboCup 2010, the world's largest robotics and artificial intelligence event.

RoboCup includes five different robot soccer leagues, as well as competitions for search-and-rescue robots, for assistive robots and for students up to age 19. The CMDragons have been strong competitors at RoboCup, winning in 2006 and 2007 and finishing second in 2008. Last year, the team lost in the quarterfinals because of a programming glitch, but had dominated teams up to that point with the help of a preliminary version of the physics-based planning algorithm.

"Physics-based planning gives us an advantage when a robot is dribbling the ball and needs to make a tight turn, or any other instance that requires an awareness of the dynamics of the ball," said Stefan Zickler, a newly minted Ph.D. in computer science who developed the algorithm for his thesis. "Will the ball stick with me when I turn? How fast can I turn? These are questions that the robots previously could never answer."

The algorithm could enable the robots to concoct some new kicks, including bank shots, Zickler said. But the computational requirements for kick planning are greater than for dribbling, so limited computational power and time will keep this use to a minimum.
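
To give a sense of the geometry involved, a bank shot off a straight wall can be aimed by reflecting the target across the wall, assuming an ideal bounce in which the incoming and outgoing angles are equal. The snippet below is only an illustrative sketch of that classic construction, not the team's kick planner.

```python
# Sketch: aim a bank shot off a wall by reflecting the target across the wall.
def bank_shot_aim_point(ball, target, wall_y):
    """ball, target: (x, y) positions; the wall runs along the line y = wall_y.
    Returns the point on the wall the ball should be kicked towards."""
    bx, by = ball
    tx, ty = target
    mirrored_ty = 2 * wall_y - ty              # reflect the target across the wall
    t = (wall_y - by) / (mirrored_ty - by)     # where the ball-to-mirror line meets the wall
    return (bx + t * (tx - bx), wall_y)

# Ball at (0, 0), target at (2, 0), wall along y = 1: aim at (1.0, 1.0).
print(bank_shot_aim_point((0.0, 0.0), (2.0, 0.0), 1.0))
```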

Each Small-Size League team consists of five robots. The CMDragon robots include two kicking mechanisms -- one for flat kicks and another for chip shots. They are also equipped with a dribble bar that exerts backspin on the ball. Each team builds its own players; Michael Licitra, an engineer at Carnegie Mellon's National Robotics Engineering Center, built the CMDragons' highly capable robots. Like many robots in the league, the CMDragons have omni-directional wheels for tight, quick turns. In addition to physics-based planning, the CMDragons are preparing to use a more aggressive strategy than in previous years.

"We've noticed that in our last few matches against strong teams, the ball has been on our side of the field way too much," Zickler said. "We need to be more opportunistic. When no better option is available, we may just take a shot at the goal even if we don't have a clear view of it."

"Figuring out how to get robots to coordinate with each other and to do so in environments with high uncertainty is one of the grand challenges facing artificial intelligence," Veloso said. "RoboCup is focusing the energies of many smart young minds on solving this problem, which ultimately will enable using distributed intelligence technology in the general physical world."

Source: Carnegie Mellon University

Soccer Robots Getting Smarter At RoboCup

Anyone who has ever bravely volunteered to coach a youth soccer team is familiar with the blank stares that ensue when trying to explain the offsides rule. The logic that combines moving players, the position of the ball and the timing of a pass is always a challenge for 10-year-old brains to grasp (let alone 40-year-old brains). Imagine trying to teach this rule to an inanimate, soccer-playing robot, along with all of the other rules, movements and strategies of the game.

Now researchers have developed an automated method of robot training by observing and copying human behavior.

Why are scientists teaching robots to play soccer? The short-term motivation is to win the annual RoboCup competition, the "World Cup" of robotic development. International teams build real robots that go head to head with no human control during the game. This year's competition takes place in Graz, Austria, in June.

The long-term goal is to develop the underlying technologies to build more practical robots, including an offshoot called RoboCup Rescue that develops disaster search and rescue robotics.

In a study released in the March 2009 online edition of Expert Systems with Applications, titled "Programming Robosoccer agents by modeling human behavior", a team from Carlos III University of Madrid used a technique known as machine learning to teach a software agent several low-level, basic reactions to visual stimuli. "The objective of this research is to program a player, currently a virtual one, by observing the actions of a person playing in the simulated RoboCup league," said Ricardo Aler, lead author of the study.

In addition to actual robots, RoboCup also has a simulation software league that is more like a video game. In the study, human players were presented with simple game situations and were given a limited set of actions they could take. Their responses were recorded and used to program a "clone" agent with many if-then scenarios based on the human's behavior. By automating this learning process, the agent can build its own knowledge collection by observing many different game scenarios.
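
The "clone" described above is an instance of what is often called behavioural cloning: learning a mapping from observed game states to the action the human chose in them. A decision tree is a natural fit because its branches are literally if-then rules. The sketch below shows the general technique; the feature names, the toy data and the use of scikit-learn are illustrative assumptions, not the Madrid group's actual setup.

```python
# Sketch of behavioural cloning: fit a decision tree to (game state, human action)
# pairs so that its learned if-then rules can drive a "clone" agent.
from sklearn.tree import DecisionTreeClassifier

# Hypothetical state features: [distance_to_ball, angle_to_ball, distance_to_goal]
states = [
    [0.3,  0.1, 2.0],   # close to the ball, far from the goal
    [0.3,  0.0, 0.8],   # close to the ball, near the goal
    [2.5,  0.9, 3.0],   # far from the ball
    [1.5, -0.4, 2.5],   # far from the ball
]
actions = ["dribble", "shoot", "move_to_ball", "move_to_ball"]  # what the human did

clone = DecisionTreeClassifier(max_depth=3, random_state=0).fit(states, actions)

# The clone now maps an unseen game situation to the action a human would likely take.
print(clone.predict([[0.4, 0.05, 0.9]]))   # close to ball and goal -> probably "shoot"
```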

The team has seen early success at learning rudimentary actions like moving towards the ball and choosing when to shoot, but the goal is to advance to higher-level cognition, including the dreaded offsides rule. Equipping the physical robots with this knowledge set will give them a richer set of actions to choose from when they are exposed to visual stimuli from the playing field.

Previous attempts at machine learning relied on the robot or software learning rules and reactions entirely on its own, for example with neural networks. Aler's team hopes to jump-start the process by seeding the knowledge base with human players' choices. While current video soccer games like FIFA 2009 already use a detailed simulation engine, transferring such simulation to the physical world of robots is the key challenge for future research.

RoboCup organizers are not shy about their ultimate tournament in the year 2050. According to their website, "By mid-21st century, a team of fully autonomous humanoid robot soccer players shall win the soccer game, comply with the official rules of the FIFA, against the winner of the most recent World Cup."

That's right; they plan on the robots beating the current, human World Cup champions. "It's like what happened with the Deep Blue computer when it managed to beat Kasparov at chess in 1997," says Aler.

Maybe they can also build a robot linesman who can always get the offsides call correct!