Friday, February 22, 2008


RoboCup has become fairly well-known over the past few years. It's a soccer championship for robots. It started out using Sony AIBO robots, with the teams developing the AI for identical robots to play against one another. Many leagues have since been introduced to concentrate on different aspects of the game. Recently they have moved into the area of humanoid robots, which adds a whole new level of complexity, as the robots now have to learn to walk and recover from falls.

Towards the end of last year UCT was invited to participate, working together with a German and an Austrian institute. This has resulted in the merger of our Agents and Robotics labs to work towards this one big goal. The next championship takes place in Suzhou, China in July, and our aim is to be ready to participate by then.

Going on the assumption that I will get into either UIUC or Waterloo, I have six months before I would start -- six months I didn't want to spend sitting around doing nothing. (I'm back in Cape Town now, by the way; my internship at NVIDIA is over.) So I went to visit Anet Potgieter, the Computer Science lecturer supervising the local RoboCup effort, to see what was still open for the taking. They are participating in two leagues, but the one that interested me most was the humanoid league. They will soon be receiving four of Aldebaran Robotics' Nao humanoid robots to compete with. Here's a video clip of one walking:

The component that caught my interest was the computer vision. The robots each have a small webcam that records at 30 FPS. The stream needs to be processed in real time, and when you consider that these things have only a 500MHz processor, that's a rather daunting task. I know from my segmentation research last year that any sort of image processing takes time, and being responsive in this game is crucial. So I envisage doing a lot of optimisation and dropping as much redundant data as possible while still getting decent results. It's going to be all about trade-offs between accuracy and efficiency.
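To give a flavour of the kind of trade-off I mean (this is just an illustrative sketch I cooked up, not anything the team actually uses): instead of classifying every pixel of every frame, you can visit only every Nth pixel, cutting the work by a factor of N² at the cost of possibly missing small objects.

```python
# Illustrative only: subsampled colour classification. Visiting every
# `step`-th pixel trades detection accuracy for a step*step speedup,
# which matters when you have a 30 FPS stream and a slow CPU.

def classify_subsampled(frame, step=4, threshold=200):
    """Return (row, col) positions of 'bright' pixels, examining only
    every `step`-th pixel in each dimension. `frame` is a list of rows
    of greyscale intensities (0-255)."""
    hits = []
    for y in range(0, len(frame), step):
        row = frame[y]
        for x in range(0, len(row), step):
            if row[x] >= threshold:
                hits.append((y, x))
    return hits

# Toy 8x8 frame with a bright 4x4 "ball" in the top-left corner.
frame = [[255 if y < 4 and x < 4 else 0 for x in range(8)]
         for y in range(8)]

print(classify_subsampled(frame, step=4))  # examines only 4 of 64 pixels
```

With step=4 only four pixels are examined, so the ball is found from a single hit; drop step down to 1 and you get every bright pixel back, at sixteen times the cost. A real pipeline would obviously work in colour and cluster the hits into blobs, but the principle is the same.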

Most of the work is done with a simulator, which is apparently very realistic. Since we haven't received the robots yet, we're pretty much stuck with the simulator for now. I foresee the vision work being impacted the most by external factors such as lighting, so I'm very anxious to get the real bots. The simulator (screenshot below) has ridiculous recommended system requirements: a quad-core CPU and an 8800GT to run effectively. It literally crawls with even a single robot on my dual-core 2GHz laptop!

It definitely sounds like a lot of fun, and not being too restricted (I'm doing this more as a filler than anything else) should make it even more so. I'm really looking forward to digging in deep. I'll keep this blog updated with progress as things get moving!
