Wednesday, October 31, 2007
The rugby world cup final is history now, but I found a copy of the front page of The Independent from the morning of that match. It's just classic! Makes you wonder why there's so much pressure to apply racial quotas in our sport.
Yes, England has its fair share of many of the things we, and more importantly the rest of the world, make such a fuss about. I find this article so great coming from the English themselves! I don't think there's much more I can add -- it's all in that article.
Monday, October 29, 2007
Boy, I hate seeing everyone else getting these wonderful new toys to play with while we in South Africa have to sit back and be patient. Google, just open that SA office already so we can get to experience more of your wonderful products once and for all!
I doubt many people have heard of Google Transit, but that's because it was in Labs. This month all that changed as Transit graduated! It's now fully incorporated into Google Maps. So, what does it do? In the past, when querying directions on Maps you got driving (or swimming!) directions such as these. If you go to that page you'll notice a new option to "Take Public Transit". Click it and you should get the view shown below.
If you view the full page you will see that it gives you step-by-step directions, with departure and arrival times as well as alternate routes. It even gives you the fare of the journey and compares it to the estimated driving cost.
Coverage is still very limited, but remember this is a new product and requires a lot of data. I see, though, that Google have made available a Google Transit Feed Specification. Now I know Google are known for their open APIs as well as for getting others to do their work for them, but I find this amazing, and it would be excellent if it works! They're basically telling agencies that run public transportation that if they want their schedules included in the Transit system, they have to transform their data into the format specified by Google. You gotta love these guys for their courage!
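To give a feel for what the agencies have to produce: a feed is essentially a zip of CSV text files. Here's a minimal sketch of a stops.txt and a parser for it -- the field names follow the published spec, but the stops themselves are invented for illustration:

```python
import csv
import io

# A minimal GTFS-style stops.txt. Field names are per the public
# Transit Feed Specification; the stop values are made up.
stops_txt = """stop_id,stop_name,stop_lat,stop_lon
S1,Main Station,-33.9249,18.4241
S2,Harbour Road,-33.9180,18.4320
"""

def parse_stops(text):
    """Parse a stops.txt file into a list of dicts with float coordinates."""
    reader = csv.DictReader(io.StringIO(text))
    return [
        {
            "stop_id": row["stop_id"],
            "stop_name": row["stop_name"],
            "lat": float(row["stop_lat"]),
            "lon": float(row["stop_lon"]),
        }
        for row in reader
    ]

stops = parse_stops(stops_txt)
print(stops[0]["stop_name"])  # Main Station
```

The clever bit of the spec is exactly this plainness: any agency with a spreadsheet export can produce it.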
One thing they could really improve on is their walking directions. Currently they just plonk down straight lines; however, the actual route could be much longer if the straight line crosses a river, for example. They even mention this in one of their help answers, so they might end up fixing it eventually. I imagine most map data defines pedestrian roads, so I don't foresee it being too difficult to add.
I love how they fail to update their FAQ! Just look at the following answer from here:
6. Why isn't this product a part of Google Maps?
Google Transit Trip Planner is currently a Google Labs product. We haven't integrated it with Google Maps because we wanted to develop the product further by learning, through user feedback, how people really use public transportation data, and thus how Google Transit can be improved to be as useful as possible.
Saturday, October 27, 2007
This is part 3 in my review of the CS honours modules at UCT. Read it after reading part 1 and part 2.
Distributed Computing (DC):
There's confusion over the name of this module. It is often referred to as Distributed Components -- the notes say Components, but the official name is Computing. It's lectured by Ken and, well, his lectures are not the most thrilling you will attend. I attended the lectures but never took the module for credit, so I skipped the assignment and exam. A huge amount of theory is covered in this module, all thrown at you in big chunks that are not easy to digest. Most of the class take DC, but I'd be hesitant to recommend it. It's too much RPC material such as CORBA and DCOM, followed by stuff on COM+, .NET and EJBs.
The assignment was on EJBs and I remember everyone scrambling to figure out how to get JBoss and EJBs to work! For those of you doing it next year, Sean posted some HOWTOs which seemed to be very useful. Ken, as in the past, kept pushing the deadline back and the demos were only done last month (for a first quarter module)!! Now this may sound cool, but it's not when you consider that it ends up interfering with the other assignments. Last year he never marked it -- everyone who handed in got 100%. The exam was apparently rather hand-wavy. So, up to you, but I don't recommend taking it.
Interaction Design (ID):
What is CS at UCT without lectures by Gary? He's simply the best lecturer in the department, no doubt about it. I would usually be completely deterred from an HCI course as I find the field mostly fluff, but I could never miss a Gary lecture. He goes into all sorts of things to consider when designing an interface and shows cool video clips that keep you interested and add to the lectures.
This is another course I only took the lectures for and nothing else, and I think it was a good decision. Most of the class take this module, but unless you're good with design (and I'm not!) the assignment and exam are finicky. The assignment was about building a paper prototype for an RFID tag reader for a cell phone. You then had to implement a prototype that just simulated the interface without requiring functionality and write a report on it. The exam was apparently fairly decent and it's usually based on case studies, which means lots of writing.
Effective Virtual Environments (EVE):
EVE was an interesting module and I'm disappointed I had to drop it after two weeks. All the really good modules were run concurrently, which was a pity. EVE lectures are more like open discussions than lectures -- the content covered depends very much on what the class finds interesting. It's taken by Edwin, who does a good job of the course. I was very surprised at the low numbers taking this module, as I'd recommend it above other modules such as DC and NIS.
The course changes drastically year-on-year. Edwin tries to remove the need for an exam by pumping up the practical workload, although if the class prefers an exam he is willing to sway. This year there was one big assignment split into three parts, plus an essay on some papers. The assignment was about using Irrlicht, a 3D games engine, to design a putt-putt course. The first part was a design document, which was fairly lengthy. Then you had to code it up using Irrlicht, and you were marked on how closely your end product tied in with your design document. The final part was to test a hypothesis such as "Do textures increase presence?" You're told to get six test subjects, although you will never get a significant result with such a small number, so please get more people next year. It's not that difficult!
Databases (DB):
DB is lectured by Sonia and it is the only module for which I never attended a single lecture. That should sum up my interest in databases at this level. The content is rather boring, although it's easy marks, mostly covering object-relational databases such as Oracle. The class is usually fairly large, although I can't stand taking a module just for easy marks. Half the lectures were paper reviews presented in groups for marks. The exam is apparently fairly easy.
Can't say much more here, best speak to someone who did the module.
Computability and Complexity (CC):
Computer Science courses are very practical in nature, and that's not going to change. However, I feel it's necessary to break away a little. That's where the Mathematics of Computer Science modules come in. They're highly theoretical and have little practical application. You need a strong Mathematics background to get through them: not only is the content pretty tough, the pace is insane. Ever had a Maths lecture with slides? With CC, every lecture is based on slides - even the nasty proofs!
The module is lectured by Vasco Brattka, the best Maths lecturer I have ever had! He has a list of the course contents available here. He also has the course slides and tutorials there, but they are password protected. If you're considering the module then I can give you the password, although I'll have to do so privately. All the Maths modules are four credits, so they cover a lot of work. You get one period a week dedicated to discussing the tutorial questions before you have to attempt them. They can be tricky, although the help in lectures is decent.
The class is a mix of Computer Science and Mathematics honours students. This year only three of us CS students took the Maths modules, along with two Maths students. The computability material can be scary for a Computer Scientist, especially when the Maths guys catch on fairly quickly. The module is packed with proofs, including some that stood as open problems for decades. The proofs in the complexity section are more algorithmic, so there it switches around to the CS students understanding better. The module culminates in showing why the P-NP problem is so hard: under certain assumptions it is provably impossible to settle either way!
The exam is slightly easier than the tutorials, although it is closed book and therefore rather challenging. However, if your Maths background is strong it's not so bad. I would definitely recommend the module to anyone that majored in Maths. Since it's a four credit course it gives you more choice with what CS modules to pick from and lets you get away without doing the crappy assignments. I still attended many of the lectures for the other CS modules, although it's nice having the comfort of dropping out of the ugly ones.
Cryptography and Complexity (Crypto):
Vasco used to offer a Quantum Computing (QC) module, which expanded on the work from CC. However, he went on sabbatical for the second half of this year and so it was replaced by a new course. If QC returns next year, and it most likely will, it might replace this course. Although I never got to do QC, I've heard from previous years that it is very good and if CC is anything to go by I can believe it.
The new Crypto module was split into two parts covered by different lecturers. Christine Swart took the first half, covering topics in public key cryptography. There's a lot of number theory involved, so don't expect it to be like a CS module. It's fairly practical compared to the rest of the Mathematics of Computer Science modules. You learn all about algorithms for factorising very large numbers, and it is shown that recovering the RSA private key is equivalent to factorising such numbers. The Diffie-Hellman key exchange is also covered, and its security is related to the hardness of the discrete logarithm problem.
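The Diffie-Hellman exchange itself is simple enough to sketch in a few lines. This is a toy illustration of the mechanics, with tiny made-up parameters, not anything from the course:

```python
import random

# Toy Diffie-Hellman key exchange. The prime and base here are tiny,
# invented parameters; real deployments use primes thousands of bits
# long, which is what makes the discrete logarithm hard.
p = 2087  # a small prime modulus (illustrative only)
g = 5     # a public base

random.seed(0)
a = random.randrange(2, p - 1)  # Alice's secret exponent
b = random.randrange(2, p - 1)  # Bob's secret exponent

A = pow(g, a, p)  # Alice publishes g^a mod p
B = pow(g, b, p)  # Bob publishes g^b mod p

# Each side raises the other's public value to its own secret,
# arriving at the same shared key g^(ab) mod p. An eavesdropper sees
# only p, g, A and B, and must solve a discrete-log-style problem.
shared_alice = pow(B, a, p)
shared_bob = pow(A, b, p)
assert shared_alice == shared_bob
```

With these 11-bit numbers you could brute-force the exponent in milliseconds, which is exactly the gap between the mechanics and the number theory the course spends its time on.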
Holger Spakowski takes the second half, covering the complexity side of cryptography. This section picks up where Vasco left off in CC. It's all about probabilistic algorithms that can be proved to produce the correct answer with a given probability. From this the concept of a probabilistic Turing machine is developed, which opens up a whole new area of complexity classes, and then leads into proving various classes equivalent to one another along with a number of inclusion relations. It gets highly theoretical, although it's a great module for those interested in Mathematics and Statistics.
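As a taste of the formalism (my summary, not the course notes): a language L is in the class BPP if some polynomial-time probabilistic Turing machine M decides it with bounded error,

```latex
x \in L \;\Rightarrow\; \Pr[M(x)\ \text{accepts}] \ge \tfrac{2}{3},
\qquad
x \notin L \;\Rightarrow\; \Pr[M(x)\ \text{accepts}] \le \tfrac{1}{3}
```

The constant 2/3 is arbitrary: repeating the machine and taking a majority vote drives the error down exponentially.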
Christine was very bad with the tutorials and in the end we only had one that was marked. Holger gave us three, which were fairly challenging for a Computer Scientist. The exam is next month and since it's a new course I can't comment on it. Overall I would recommend the course. The cryptography section should appeal to more CS students than CC does. The major problem with this course is that the second half runs into the project time. So while the rest of the CS class is enjoying having no lectures and working on their projects, you have to worry about tutorial hand-ins and so on.
So I've summarised my experiences of the honours modules and given my comments. Make sure you've also read part 1 and part 2. Remember that these are just my thoughts on the modules and that my interests differ from many people's. If you're going into honours, take what I've said with some common sense.
Take a look at the Graduate Handbook, especially the section on the modules, as it provides module descriptions. Bear in mind that the modules on offer are likely to change next year, so don't go out of your way to settle on a selection. If you want to decide early, try to find out from Anet who will be the honours course convener next year and ask that person what the module list is looking like. You may also find that looking through my exam archive helps ease the decision.
Something that no-one mentioned to us last year and that I found critical was that your choice depends largely on the timetable. You can get our timetable here (with ITE added in at the same time as Agents, GA and II). As you'll see, many of the content-heavy courses such as DC, DB and ID are all grouped together at the same time, which makes it difficult to skip them all. I think the basic timetable remains the same each year, but make sure by asking the course convener.
The way I personally tackled the decision was just to attend all the lectures. For each module I attended at least a quarter of the lectures, by which time we had usually covered enough to understand what the assignment involved. If the assignment looked icky, I dropped the course; otherwise I continued with it. It might sound like a lot of extra effort, but you're there for some lectures anyway, so you might as well check the other maybes out. I found it worked really well as I got to choose based on first-hand experience of the lectures. After four lectures you generally get a gut feeling. You just have to make a rough selection at the beginning of the year, which can easily be changed any time by notifying your course convener.
So that's that! If anyone wants to chat further, I'm only around until mid-November. I'd be happy to discuss it though, as it's a crucial decision. Very little word goes around about what the modules are like since most people ask so late. I might add another post on the honours project, but I need to know who would be interested. If you want to hear my thoughts on the project or anything else, please let me know.
So, next month I head on over to Santa Clara to work for the big green machine. I'm there for three months for an internship. I start on 19 November and work there through their winter until 15 February when I'll hopefully be sad to leave.
There have been so many of us going there from UCT over the past few years, ever since Shaun Nirenstein started out there. Bruce, Carl, Nick, Chris, Ashley...and plenty others have all had their stint. From what they've told me it's a lot of fun, although I'll soon be able to compare to my experiences at Google. It's only nine miles to Google HQ in Mountain View, so I'll finally get a chance to check them out.
I've started my search for accommodation and the like a tad late, as usual. So I'm without a roommate at the moment. I might have to settle for the exorbitant rates of a single-room apartment if I don't find someone soon, so take this as a call for help if you know anyone out there. ;-)
Friday, October 26, 2007
This past Saturday was an amazing day for more reasons than one. Not only did the Bokke claim their second set of hands on the William Webb Ellis trophy, but UCT simply extended their domination of the South African ACM ICPC regional contest.
For many of us coaches it was our last year running the contest and so we wanted to leave with a bang. So we went in search of some sponsors that would help us do something special, something we couldn't achieve on our own. That sponsor quickly turned out to be the one and only Google! Our intention was to use their involvement in the contest to make the single improvement that has the greatest impact - more contestants!
Last year we had ten teams participate from UCT. There was an obvious split between a few teams that were very successful and the rest that were in it just for fun. This year we wanted more top teams while filling in that middle gap. The gathering of contestants turned out to be very successful, with Google proving a crucial factor. Numbers grew to the point where we became concerned that we would have insufficient space in the competition lab. After some discussion we settled on capping UCT at 22 teams, with 5 teams from each of Stellenbosch and UWC.
For the past three years we had been running training for the teams, fine-tuning it each year based on the previous year's experience. However, a wide diversity of teams attended the training. The strongest teams would typically get together for their own training, while the teams with lots of potential but little enthusiasm would just pitch up for the contest without practice.
This year we made some serious changes to the structure of our training. First of all, as silly as it may sound, providing free pizza during the training has a huge impact on turnout. Thanks to our success in swiping all the R325,000 prize money from the Standard Bank IT Challenge we were able to do this, although in future it would be interesting to see what can be done about obtaining sponsorship for this as well. We then ran full mock contests, using full problem sets from past contests and the Abacus contest system used in our regional contest.
Providing full rankings as in the actual contest seemed to attract the stronger teams. They couldn't resist showing off and seeing who was better, although in the one session Harry and I participated in for fun, we beat the lot. One thing this contest requires for success is exceptional teamwork, which is one reason I find that the teams that stay together for several contests are the ones that win. Without practising together as a team, you cannot achieve the level of teamwork required to win this contest. This is where the new form of training adopted this year proved very successful, attracting many more people than ever before. The improved training also appeared to have some effect on the turnout for the contest itself.
With more than a week to go before the official registration closed, we hit our limit of 22 teams from UCT. We were forced to close down registration early, turning several teams away. It was unfortunate that we had to do this and next year we will try our best to accommodate the larger numbers. It was, however, already cramped with some rows of four PCs allocated two teams.
Saturday 20 October was the big day, the day we had been building up to for so long. The day before, Bruce and Carl did the final preparations. They pre-allocated all the t-shirts, name badges and all sorts of other goodies into paper bags for each team. With a total of 30 teams (22 UCT, 3 Stellenbosch and 5 UWC) arriving in the morning, we had to prepare ourselves for a speedy registration process. There were a host of other preparations too, all to make the contest day run as smoothly as we possibly could.
We told everyone to start arriving at 07:30, so we arrived at 07:00 for final preparations. With the sponsorship from Google, we were able to use the budget from Pretoria to provide breakfast before and snacks during the contest. Unfortunately we underestimated the consumption rate, so we understocked. Before anyone got a chance to invade the breakfast supplies they had to register and pick up their team bags full of goodies. There was a steady flow of people, which is always better than a massive flood.
As people were still arriving at 08:15, we hung around the registration area a little longer, eventually moving down at 08:30. Everyone found their seats quickly as we had maps outside the lab. By the time everyone was seated and logged into the PCs, the practice contest had started. It was a poor start: there were several errors in the problem statements, some of which cost teams a lot of time that should have been spent testing the environment rather than the problems. I hate laying blame, but Pretoria ran the contest and they didn't get things off to a good start in this regard.
We were scheduled for a 09:00 start, however, I don't think we have ever started on time. There were four locations across South Africa where teams were participating from. Each site had to give the ready signal and any last minute technical difficulties needed to be sorted out. To make matters worse, they had a brief power failure in Pretoria! We finally got going at 09:45.
We were very anxious to see the problems. Since we don't run the contest, the first time we see the selected problems is when the contest starts. We submitted five problems to Pretoria, far more than our typical one or two. This was partly due to the increased preparation we were doing for the contest this year and partly because there were four of us interested in submitting problems. With our experience in running the SACO, we were used to working together on checking each other's problems thoroughly.
This is where we got a little disappointed. Of the five problems we submitted, only one was accepted. Happily for me, it was one of mine, but it was still a disappointment, especially considering the reduced difficulty of this year's problem set compared to previous years. While this might have been done purposefully, I tend to doubt it, for reasons that should become apparent later in this post. I'd be interested to discover where the selected problems usually come from.
I will probably provide solutions to the problems in a later post, but here is a quick analysis for now. The problems should be made available here shortly. Problems A (blue) and F (yellow) were both trivial - there was nothing algorithmically difficult about them. Problem E (red) was also algorithmically trivial, although the coding was slightly more involved. Problem C (pink) wasn't very tricky either, but the Maths could prove problematic for some. Problem B (green) was tough on constraints, requiring a linear solution. Lots of teams timed out with a quadratic solution, or, even with the correct algorithm, got an "abnormal termination" due to excessive memory usage. Problem D (purple) was the evil one, and to make you all hate me, let me be the one to inform you that I set it. :-P
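I can't reproduce problem B here, but the quadratic-versus-linear distinction that caught teams out is the classic one. A generic illustration (the range-sum task below is my own invention, not the contest problem):

```python
# Illustration of the quadratic-vs-linear trap. The task (sum of
# a[i..j] for many queries) is an invented stand-in, not problem B.

def range_sums_quadratic(a, queries):
    """O(n) work per query: times out when n and the query count grow."""
    return [sum(a[i:j + 1]) for (i, j) in queries]

def range_sums_linear(a, queries):
    """O(n) preprocessing, then O(1) per query, via prefix sums."""
    prefix = [0]
    for x in a:
        prefix.append(prefix[-1] + x)
    return [prefix[j + 1] - prefix[i] for (i, j) in queries]

a = [3, 1, 4, 1, 5, 9, 2, 6]
queries = [(0, 3), (2, 5), (0, 7)]
assert range_sums_quadratic(a, queries) == range_sums_linear(a, queries)
```

Both give identical answers on small inputs, which is exactly why the naive version sails through sample data and then dies on the judge's constraints.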
The gang got off to a quick start with UCT team ʇ|nɐɟƃ3s claiming the first problem in 16 minutes, followed closely by UCT teams teh_solverers and The G-dS-B Conjecture, Wits team Blackhole Constants and then UCT team Custard Frog, in that order. All of them solved problem A first. This resulted in a quick spurt of blue balloons to dish out, so we quickly switched to preparing balloons before a team solved a problem.
As more and more teams came in with the blue balloon, teh_solverers claimed the top spot by solving problem F and earning the first yellow balloon. They never lost the top spot! Once the scoreboard settled down a bit, the top three teams were all from UCT - teh_solverers, The G-dS-B Conjecture and ʇ|nɐɟƃ3s. That was how I expected the scoreboard to look at the end, but it wasn't to be.
teh_solverers pulled ahead, claiming the red problem after about 90 minutes. Shortly after this something disastrous unfolded. Several people were attempting the green problem, but getting timeouts. Since the time limit was two minutes, we thought that the slight backlog of pending submissions was due to this. However, the backlog grew larger and larger to the point where it just came to a grinding halt! Once we realised what was unfolding we quickly contacted Pretoria to find out more about the situation. It turned out that the main Abacus server was churning away so hard that it was out of memory, so they were building Abacus on a new server with more memory.
They told us it would be back up in about 15 minutes, so we made an announcement to the teams; even then they were disgruntled by the situation. And this was before things got a lot worse! The problem seemed to be caused by the 30MB test data of some of the problems, which was being polled by the marker servers every time they started evaluating a new submission. Add to this the numerous last-minute commits to the Abacus software just before the contest, and I still wonder why we use it over PC2.
After an hour the backlog of submissions had grown ridiculously long and still things weren't getting any better. Pretoria were rebuilding Abacus on a new server with more memory, but this was taking some time. When they eventually got it linked up, they attached only two markers in order to prevent the server from crashing again. This wasn't working though, as submissions were coming in faster than they were being processed.
About 90 minutes into the saga, they finally attached 10 markers to the server and it started processing the submissions much more quickly. The only problem was that, for some reason, the system was designed such that a new server starts evaluating submissions right from the beginning of the contest. So it sat there churning through all the old submissions that had already been evaluated! I still don't understand this design decision, but at least things were moving. We had to wait another 15 minutes or so before the server got to the new submissions.
The next 10 minutes were balloon chaos!! Think of all the problems solved during that span of nearly two hours. Picture us having to blow up and dish out all those balloons as fast as we could. That's what those next 10 minutes were like - FUN! This was also the period during which any doubts that the problem set was easier than in previous years faded away. So many teams hit three balloons! I'd guess we distributed about 40-50 balloons in this short period. Determine the colour of the next balloon, blow up, tie, attach a piece of string, run!
While the markers were borked, the standings were obviously idle. The spurt of balloons after they came back online shuffled things up a bit. teh_solverers further secured their lead with the pink balloon and the green balloon soon after the spurt. They therefore had two hours to work on the final, purple, balloon and were well on their way to defending their title. My recollection of the order further down is a bit fuzzy due to the chaotic balloon rush, but Bacon & Eggs climbed up into second place with the pink balloon, just ahead of ʇ|nɐɟƃ3s who also got pink. The G-dS-B Conjecture were slipping down a bit, while Team LOL was climbing.
At the start of the last hour, when the scoreboard was frozen to keep some suspense, three teams had solved everything but the purple problem: teh_solverers, Bacon & Eggs and ʇ|nɐɟƃ3s (all UCT teams!). The time gap between second and third was a paltry 7 minutes! Blackhole Constants (Richard Starfield and co.) from Wits was the top-ranked non-UCT team on four problems. They needed both remaining problems to clinch it, as they were far behind on time. With 40 minutes remaining, Bit Chow Down grabbed fourth place, making the only four teams on five problems all UCT teams! And any of those four could clinch it in the remaining 40 minutes by solving my purple problem.
The balloons trickled in slowly during the last push. The De Bruyn Altimeter (Nicholas Pilkington and co.) from Rhodes joined the UCT teams on five problems right near the end, but other than that the results were pretty much unchanged.
The top four teams remained UCT teams, in the following order:
- teh_solverers: Timothy Stranex, Migael Strydom, Tamara von Glehn
- Bacon & Eggs: Bertus Labuschagne, Ian Tunbridge, Ingrid von Glehn
- ʇ|nɐɟƃ3s: Charles Bradshaw, Max Rabkin, Keegan Carruthers-Smith
- Bit Chow Down: Christian Geist, Dave Jacka, Ashley Reid
The full standings are available at:
It was all over. All that hard work had paid off. This was an awesome result for us and shows that our new training methods are working. It's often said that having many teams ranked highly reflects coaching ability more than having one team do exceptionally well. We've definitely succeeded on the many-highly-ranked-teams front. A lot of this is due to the community that has been evolving at UCT, a community of people interested in competitive programming. We've grown to a point where we work together and learn from one another.
After the contest we went to Da Vinci's in Kloof Street for lunch. We took up most of the restaurant, there were so many of us. They served some excellent pizza, though they were a tad slow on the delivery. After the food we had a little prize-giving. The top three teams each got goodies supplied by Google, as well as something small from Pretoria.
1st place: teh_solverers (left to right: Migael, Tamara, Timothy)
2nd place: Bacon & Eggs (left to right: Ian, Ingrid, Bertus)
There has always been a shortage of females in Computer Science, and it's always evident in the numbers at these contests. This year we had three females out of 66 contestants at UCT, which is less than 5%! So we were glad to see two of them fighting it out at the top, eventually taking the top two spots. On top of it, they are twins.
Left to right: Ingrid, Tamara
What would the contest be without us wonderful coaches? No training, no motivation to participate and worst of all no contest at all! For three of the coaches, this was likely the last ever contest they will be running. For me, this was likely my last ICPC regional at UCT (hopefully I can find some new minions to train when I move on).
Left to right: Carl, Donald, Bruce, me, Shen
So now I will be taking teh_solverers back to the World Finals next year, this time in lonely Alberta, Canada. This year they came 26th; can they make the medals next year?
Thursday, October 25, 2007
So, my honours project is complete. Yesterday my partner, Keri Woods, and I demoed it to our supervisor and second reader. Following the demo, our project went on display along with all the others in the department for our annual Computer Science Open Day.
I have a full project website, which you can check out here:
The project was about using a genetic algorithm to solve the problem of colour image segmentation. There has been an abundance of research into image segmentation, but there is still no one true algorithm that solves the problem once and for all. Our aim was to see how close we could get to this one true algorithm by using a genetic algorithm to handle the uncertainty in the input.
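This isn't our project code, but the skeleton of any genetic algorithm is the same select-crossover-mutate loop. Here it evolves a single intensity threshold on a toy 1-D "image" with two intensity clusters -- a stand-in for our far richer segmentation encoding:

```python
import random

# Minimal GA sketch: evolve an intensity threshold that separates a
# toy bimodal "image". Fitness is negated weighted within-class
# variance, so cleaner separations score higher. All parameters
# (population size, mutation sigma, etc.) are illustrative choices.

random.seed(42)
pixels = [random.gauss(60, 8) for _ in range(100)] + \
         [random.gauss(180, 8) for _ in range(100)]

def fitness(threshold):
    low = [p for p in pixels if p < threshold]
    high = [p for p in pixels if p >= threshold]
    if not low or not high:
        return float("-inf")  # degenerate segmentation
    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    return -(len(low) * var(low) + len(high) * var(high)) / len(pixels)

def evolve(generations=40, pop_size=20, mutation=10.0):
    pop = [random.uniform(0, 255) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a + b) / 2                  # crossover: average
            child += random.gauss(0, mutation)   # mutation: jitter
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
# With clusters around 60 and 180, the evolved threshold should land
# somewhere in the gap between them.
```

The interesting work in the actual project was in the encoding and fitness function, not this loop; once those handle real colour images, the loop itself barely changes.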
A secondary aim was to develop a genetic algorithm model for the Grid. There has largely been a lack of research in this area and we attempted to spark some interest. Lots of research, especially testing on larger scales, needs to go into this before it is usable.
Check out the website if any of that catches your interest. I certainly enjoyed the project a lot and it's disappointing that it has come to an end. I will probably still try to make some time to put further work into it.
Sunday, October 21, 2007
If the good old saying "sweet victory" were anything to go by, this weekend has been an exceptionally sweet one. Any victory is well-taken, but none more so than a championship victory. I have experienced, in different ways, three of these so-called victories this weekend alone.
It all started early yesterday morning with the South African regional of the ACM International Collegiate Programming Contest. Out of 66 teams nation-wide, UCT claimed the top spot for the sixth year running. And this is destined to continue in the years to come: even though this was the winning team's last contest, we have many more teams coming through, eagerly vying for that taste of glory.
To put it briefly for you, UCT swiped all four top places from the rest of the country! And if that weren't bad enough, due in part to our excellent record in the past we managed to gather a record 22 teams from UCT to participate. This formed a third of the national teams! The full results are available here, with the top four teams being:
- Timothy Stranex
- Migael Strydom
- Tamara von Glehn
- Bertus Labuschagne
- Ian Tunbridge
- Ingrid von Glehn
- Charles Bradshaw
- Max Rabkin
- Keegan Carruthers-Smith
- Christian Geist
- Dave Jacka
- Ashley Reid
The sight of victory:
The second victory happened over in Paris, where we repeated what we last did on that memorable day back in 1995. While this victory will never be quite as glorious as the first, it will always be remembered as a huge success. I put a lot of the success on the shoulders of Jake White, which is why I feel his departure is such an upset, one due mostly to the poor management of this country's sport.
If you don't watch rugby at all you will be a bit lost. The South African rugby team took the William Webb Ellis trophy, along with the title of rugby world cup champions, away from the defending champions in the final, beating them 15-6 in fairly convincing style. The streets were wild apparently, although I was sane enough to remain indoors.
The sight of victory:
These two above were fairly expected results. UCT has dominated the ACM ICPC for the past six years and South Africa had already creamed The Pommies 36-0 in a group match just five weeks ago. This last one, however, required half a miracle to achieve. The odds were so low that I personally had written it off.
Before the final race of the Formula 1 season this evening, Kimi Raikkonen was seven points behind championship leader Lewis Hamilton, with a maximum of ten points up for grabs. Hamilton's teammate, Fernando Alonso, was even closer with just a four-point margin. The odds of Kimi winning the championship in the final race were awfully low.
However, a combination of mechanical faults, driver error and poor choice of strategy pushed Hamilton all the way back into 7th, giving him only two points. With him out of the way, Kimi and Massa ran away with an awesome 1-2 Ferrari victory, leaving Alonso trailing 57 seconds behind in 3rd. So it couldn't have ended any tighter. Kimi ends the season on 110 points, with Hamilton and Alonso tying on 109 points each! Simply amazing!
The sight of victory:
Sunday, October 14, 2007
Syabas have finally put themselves into the hardware game, after being in the firmware development scene for many years. What they've come up with is an impressively compact media player at 270mm x 132mm x 32mm:
The unit is called the Popcorn Hour A-100. As with most of these bleeding edge products, we'll have to see how they live up to what they offer after release (scheduled for 30 October). It's based on the Sigma SMP8635 chip and so supports H.264 on top of the standard DivX, XviD and WMV9 codecs. It streams all media content over Ethernet! A nice feature they offer is BitTorrent downloading, which is slowly starting to become a requirement for a top spot in this field.
Currently my favourite unit is the Ziova CS505, but the Popcorn Hour appears to have a lot over the Ziova in not only price ($179) and size, but also features. I would love to try this baby out, but my collection is already large enough; at that price it's tempting though. It's good to see them dropping the DVD drive, as they always seem to prove a nuisance on the firmware side and hence a hindrance to the bleeding edge side of things.
MPC Club Article
Update: MPC has reserved 100 units for those in Europe at a total price of at most €199. Read more.
Saturday, October 13, 2007
Last week I attended the International Conference on Functional Programming (ICFP Conference). I attended as what some functional programmers would call their enemy, since like most people, I stick to the widely used imperative languages. There's one difference though - I do have an interest in functional programming (see below)!
Putting my own experiences aside for now, the focus of this post is functional programming in industry. As an outsider with an interest in the community I often wonder why it is so small and how the situation can be improved. Everyone keeps going on about how beneficial functional programming is over imperative programming: faster development, shorter code, less bug-prone and easier to test, to name just a few.
So with all this power, where's the massive rush? Most of the successful languages of late have been successful mainly due to their success in industry. Java is a prime example. So a good way to become successful is for functional languages to attract the attention of industry. What's holding industry back though?
The problem I fear is a vicious circle that's difficult to slingshot out of. The number of FP programmers is very low. Because of this, functional programmers are more likely to work in very small groups or even on their own on major projects. This trend seems to continue in industry, especially when there is a very small number of functional programmers in a company vying for the use of FP on some project within the company.
So we have come to the conclusion that FP groups in industry are very small or even solo. But how is this a problem? While the team works on the project all appears to be going very well. What happens when someone leaves though? Someone has to take over his section of the project. This is where the problems are hit! Finding a replacement is a mission in its own right. This is mainly caused by the lack of functional programmers, but also due to the next problem - programming style.
Why is programming style a problem? Well, this is an issue with imperative languages as well. Everyone has their own style of programming, and when you have a large mass of code it can be very difficult for someone to take over without decent documentation. This is why many software development companies have very strict style guidelines, so that all programmers follow and become quickly familiar with the same, well-documented style.
Where are the style guidelines for functional languages?? This is the crux of an interesting discussion I had with someone at the ICFP Conference last week. All the above leads to this sad conclusion. Style guidelines are seriously underused by functional programmers. And this is one serious problem, because it takes so long for someone to take over a mass of code if it doesn't follow such guidelines. This might sound finicky, but it's true! In the conversation he gave a couple of examples such as one where it took a year to find a replacement Lisp programmer and a year for that replacement to understand the code enough to start maintaining it properly!
So I did a quick search for style guides for some common FP languages and this is what I dug up:
- Haskell: This style guide comes with an automatic style checker. It appears to be rather rigorous, which is great!
- OCaml: This covers some aspects, but does not appear to be as thorough as the Haskell guide.
- Lisp: This is a fairly detailed style guide. It is still, however, in draft phase.
- Erlang: Appears to be lacking!!
So, back to the beginning of this post. Why aren't people rushing to FP? There are these small success stories in industry, but not very many that have lasted over the long term. I believe that this vicious circle of functional programmers not wanting to let go of their favoured styles and change to a style guide is holding many people back from joining this elite community.
Is there anything that can be done to improve the situation? I believe the community is so full of high-quality programmers that the jump is just too far for most people, and those within reach tend to be exactly the type of people that far prefer following their own style. Gosh, even I prefer following my own style!
My functional programming experiences:
While I still cannot claim to be able to program in a functional language, the idea has fascinated me for some time. Ever since I learned Python I have made use of several functional aspects it offers. And I love it! Earlier this year I started perusing some OCaml tutorials, however the syntax really put me off. I soon after started looking at Haskell. After sifting through some mediocre tutorials I finally settled on YAHT. I reached chapter 8 before I got sucked into other things that prevented me from progressing any further for some time. So I know a fair amount of the theoretical side of Haskell, although I've had next to zero practical experience.
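For the curious, the functional aspects I mean are things like first-class functions, closures, map/filter and folds. A few small examples:

```python
# First-class functions, closures, map/filter and folds in Python.
from functools import reduce

def compose(f, g):
    """Function composition: compose(f, g)(x) == f(g(x))."""
    return lambda x: f(g(x))

nums = [1, 2, 3, 4, 5]

# map/filter with lambdas: squares of the even numbers only
squares_of_evens = list(map(lambda x: x * x,
                            filter(lambda x: x % 2 == 0, nums)))

# a fold (reduce) with an explicit initial accumulator
total = reduce(lambda acc, x: acc + x, nums, 0)

# composing two small functions into a new one
inc_then_double = compose(lambda x: 2 * x, lambda x: x + 1)

print(squares_of_evens)    # [4, 16]
print(total)               # 15
print(inc_then_double(3))  # 8
```

Nothing earth-shattering, but once you start thinking this way it's hard to go back to writing everything as explicit loops.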
Thursday, October 11, 2007
Ok, I've been a bit silent for the past week here. And a couple of people have asked why. Well, that mostly has to do with the last push in my honours project. There's still a lot of work that needs to go into it, but that's what happens when you choose an ambitious project (and certain things don't quite go the way you expected, but that's a touchy subject).
It was also a bit unfortunate that the ICFP Conference had to fall right in the middle of things. I really need to cut down on my travelling. I've been on five international trips this year and I have another long one in the form of Nvidia just around the corner. Just have a look here and, while there will be those who say they've travelled more, 74 flights I believe is plenty. Unfortunately it's looking like next year's trips will mostly be to the American continent - I far prefer the rich cultural history of Europe.
Next Saturday the 20th is a big one for me. For one it's the day after our project reports are due, so it will be a moment for rest and relaxation. More importantly though, it's the South African ACM ICPC Regional Contest. If that website looks a bit bland, then try the international site. This is our first year being sponsored by Google. It was a nice catch, as we have an amazing record 22 teams from UCT alone! Compare this to last year's 10. We even had to close registration early due to space constraints! While there are other factors that affect the large growth, I'm confident a fair share is due to Google's involvement.
We're hoping for major UCT domination this year, although we'd really love to see some competition coming from the rest of the country. It's the competition that makes it fun and when all the competition is concentrated at one source it takes away some of the fun. My team can't compete (as was the case last year) since I've already attended two world finals, so we'll be watching the scoreboard, being the hopeless onlookers unable to help the falling, cheering on the rising.
After the contest we're planning on heading off to Da Vinci's on Kloof Street for some good pizza. It looks like we'll be taking up the whole place, so it should be interesting to see how that turns out.
Then there's the moment I hope we all get to enjoy as a country. There's no reason why we should lose to Argentina. If we do, we get the chokers label along with the other Tri Nations teams for losing to the lessers. If you're lost already, it's the Rugby World Cup I'm talking about. The final is in the evening at 21:00, on this very same day as the ICPC. If things pan out as they should, South Africa should be there. This is going to be one almighty match. Assuming we do make it, we play either France or England. This is our best opportunity to regain our '95 success. They've been playing so well and the main competitors have foolishly knocked themselves out. However, I'd rather not jinx the boys so I will stop right there.
It's going to be a fabulous day and let's hope we're not let down in any regard. Go UCT! Go Bokke!
Wednesday, October 3, 2007
I've been posting a lot about the ICFP Contest and how we did this year. Just recently the results were announced, placing us in second position behind last year's winners Team Smartass. Up until then we kept most of how we did to ourselves to prevent the public from guessing where we placed. However, that's all public knowledge now, and so what follows is a continuation of this post describing how we tried to save Endo.
The tools formed the most important part of our success. I briefly touched on our DNA->RNA and RNA->image simulators in the previous post. Extras in our DNA->RNA simulator included translating the DNA into a more readable form, as well as several further hacks which helped us better understand the DNA sequence. Extras in our RNA->image converter included hacking the default image to white to reveal hidden black-on-black messages, post-processing to make the images printer-friendly (we're el-cheapo in SA :-P) and some OpenGL code to render the image as it's being drawn.
Then we had plenty of debugging tools to reverse engineer the DNA. Each function spews out a unique RNA marker upon entering and CFPICFP upon returning (undocumented RNA). With this we were able to create a stack trace. We had tools to extract the code from a function and to extract a page from the Repair Guide. There was a tool to generate the prefix to call a function. It's worth noting here that we had a table mapping from function name to offset and size, so we could call by name, making things go a lot faster. We even had a tool to investigate the DNA between consecutive functions, which found odd things such as overlapping functions and a fairly constant gap, with some really large gaps.
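As a rough illustration of how the stack tracing worked, here's a sketch in Python (the marker table and function names below are hypothetical, made up for illustration; only the CFPICFP return marker is real):

```python
RETURN_MARKER = "CFPICFP"  # undocumented RNA emitted on function return

def stack_trace(rna_stream, marker_to_name):
    """Replay a stream of 7-base RNA commands, using each function's unique
    entry marker and the common return marker to recover the sequence of
    calls with nesting depth. `marker_to_name` maps entry markers to
    function names (in practice built up by hand while reverse engineering)."""
    stack, trace = [], []
    for rna in rna_stream:
        if rna in marker_to_name:
            name = marker_to_name[rna]
            trace.append("  " * len(stack) + name)  # indent by call depth
            stack.append(name)
        elif rna == RETURN_MARKER and stack:
            stack.pop()
    return trace

# Hypothetical markers and stream: main calls drawDuck twice, then returns.
markers = {"IIIIIII": "main", "CCCCCCC": "drawDuck"}
stream = ["IIIIIII", "CCCCCCC", "CFPICFP", "CCCCCCC", "CFPICFP", "CFPICFP"]
print(stack_trace(stream, markers))
```

Unknown RNA commands (the actual drawing instructions) simply pass through, so the same replay works on the full output of the simulator.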
We wrote a common library that covered many things including quoting, unquoting, text and integer encoding, and integer decoding. It also includes higher-level functions to process the symbol table, interpret symbolic addresses and generate DNA to call the adapter, copy pieces of DNA around, make patterns and templates and so on.
We had tools to generate prefixes. We had one that would call a function using the adapter, encoding parameters in several formats. Another was used to patch the DNA by overwriting selected portions. Then we had a tool that would either overwrite a function with another one of the same size or replace it with a stub to call another function. We used this extensively to nuke functions by replacing them with no-op code (e.g. the init function), allowing us to easily erase objects such as Endo! Related to that, we had another tool that would search a section of DNA and replace all calls to one function with calls to another. This was useful in cases where we wanted to redirect from within certain functions only, as well as a debugging tool to poke at function parameters. We had tools to encrypt and decrypt, but since we didn't figure out the encryption in time they weren't useful. There was a tool that output DNA to reverse a piece of DNA, and a colour mixer to generate a required colour.
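To give an idea of what these prefix generators did, here's an after-the-fact sketch in Python of a patching prefix, based on the published task specification as far as I recall it (nats are encoded least-significant bit first with I/F as 0, C as 1 and a P terminator; quoting maps I->C, C->F, F->P, P->IC). Our actual Perl tools were of course more involved:

```python
def encode_nat(n):
    """Natural number as DNA: bits least-significant first,
    I = 0, C = 1, terminated by P."""
    bits = []
    while n > 0:
        bits.append("C" if n & 1 else "I")
        n >>= 1
    return "".join(bits) + "P"

QUOTE = {"I": "C", "C": "F", "F": "P", "P": "IC"}

def quote(dna):
    """Protect DNA one level: I->C, C->F, F->P, P->IC."""
    return "".join(QUOTE[b] for b in dna)

def skip(n):
    """Pattern item !n: skip n bases."""
    return "IP" + encode_nat(n)

def group(body):
    """Pattern item ( ... ): capture whatever the body matches."""
    return "IIP" + body + "IIC"

def patch_prefix(offset, length, new_code):
    """Prefix that overwrites `length` bases at `offset` with `new_code`.
    The pattern (!offset)(!length) captures the untouched head as group 0
    and the old code as group 1; the template re-emits group 0 followed by
    the quoted new code, and the unmatched tail of the DNA follows as-is."""
    pattern = group(skip(offset)) + group(skip(length)) + "IIC"  # final IIC ends the pattern
    ref0 = "IP" + "P" + "P"  # reference to group 0 at protection level 0 (both nats are zero)
    template = ref0 + quote(new_code) + "IIC"  # IIC ends the template
    return pattern + template

print(encode_nat(5))  # CICP
print(quote("ICFP"))  # CFPIC
print(patch_prefix(2, 1, "P"))
```

Overwriting a function with no-ops or with a stub that jumps elsewhere is just this, with `offset` and `length` looked up in our name-to-offset table.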
The DNA->RNA and RNA->image converters were written in C++. All the tools mentioned above were written mostly in Perl with a little Python as well. Since the tools written in Perl were the most fundamental in our success, that was the language we chose to nominate.
All these tools are great, but how did we morph Endo? Since we had a mash of Perl and Python scripts generating bits of the prefix, we chose middle ground in Bash as the glue. We wrote simple Bash functions as wrappers around the Perl and Python tools to make the script very easy to work with. There was a lot of crap that went into it so the more concise the better.
We started off with nuking Endo and several other objects that weren't in the target image. We then figured out how to do translations to get the different objects into the correct positions. It was very tedious, but that's where having a large team of ten was a big advantage. Bertus spent most of the time placing the objects, with help from others at times of course. We did the ducks first, then the cow, the caravan and so on. The function that drew the middle spot on the cow was broken and needed an error correcting code (I can't remember where this came from though!). One problem we had was that there was an anti-compressant function which drew hatching lines that was getting called before our add-ons were being drawn. This was losing us lots of points so we scurried to figure it out. Due to a couple of silly misunderstandings this took us unnecessarily long to work out, but we eventually got it.
We rotated the windmill by calling setGlobalPolarRotation. We changed the colour of the cargo box by hacking the colours. To draw the whale we first nuked the ufo-with-smoke function and poked in the ufo-cup function to get it to spew out RNA codes to rotate it by 180 degrees. This was placed on top of a layer of water which was clipped with the correct shape. The scenario function was patched at the offset where the whale was located to translate the whale to the correct location. We nuked the crater in which the dead whale was lying. The motherduck-with-chicks was buggy and the anti-compressant wouldn't work on it or anything drawn after it. We couldn't fix this so we drew it last. The lone duckling is drawn by enabling ducksShown, but to get it to the correct location required hacking of the DNA. The goldFishLeft and goldFishRight functions were swapped, which got most of the fish correct. The colour functions for the flowers were swapped to get the correct colours, while the cow's flower required some nasty hacks to get working.
The sun required a little hacking to get working. After some investigation we noticed that the sunflower function had the same length as the sun function. When we noticed this we decided to XOR the sunflower and flower functions, replacing the sun function with the resulting data. Setting the weather value to 3 and calling sky-day-bodies caused the sun to be correctly rendered. To get the correct colour we had to nuke the colour bucket in the sun function and replace it with our own. We only found the spirograph help screen after the contest so we couldn't finish the sun completely.
Getting the clouds to render correctly was also a pain. With the palindromes hint, we noticed that the function name duolc was actually the reverse of cloud. With a little investigation we noticed the contents were similar, but in reverse. We then dumped the reverse of the data in duolc into the cloud function and that fixed the cloud function! Using some tracing data we noticed the function took a parameter, which we discovered was a scaling factor.
Setting the hillsEnabled value to true drew the hills; however, it was infected with a virus causing the rest of the image to be buggered. When running it through our trace tool we discovered that it output an invalid RNA command. We tried patching the RNA output to replace this with various drawing RNA, and found that emptyBucket worked. We then found the bogus RNA in the raw DNA, triple-quoted, and patched it. The shape of the hills was slightly off. Fixing this was simply a matter of finding the correct parameters to draw the correct polynomials; however, we never got around to doing this in time.
There were still a few things we never quite got by the end of the 72 hours. Some of the things we solved after the contest, but those don't count. The spirograph in the sun we solved afterwards. The bit of the left cloud drawn over the windmill we solved afterwards. The shape of the hills we know how to solve, but haven't attempted to fix them. The weeds are solved by setting the rng seed to 8128, the fourth perfect number, which we found out from the contest report. We haven't figured out how to fix the ducks from breaking the anti-compressant. An explanation of how to move the grass is in the contest report, but it involves quite a lot of effort. The whale spout is displayed when viewing an invalid gene table page.
The fish are moved into place by modifying the structure defined in the goldenFish_adaptation data. The speech bubble was apparently the only thing that cannot be drawn by embedded code and had to be approximated by a polygon. The mu inside the bubble was one of the characters in the character set, but needed some repairing using a hint drawn behind the previous contest pages. The cow's tail was infected by a virus; it's possible it could be fixed in a similar manner to the hills, however we never got to trying this out. There were another couple of small things such as the "Endo has morphed!" text and the whale's face, but these made little difference.
This was our final submission:
And this is a diff between our final image and the target:
Our history of submissions:
| Risk | Survival Chance | Prefix length | Message | Submit time |
| --- | --- | --- | --- | --- |
| Error | Error | Error | Zip corrupted or no Zip. | 2007-07-22 00:16:36.408618 |
The results of the Tenth Interstellar Contest on Fuun Programming were announced this afternoon. If you've been wondering where I've been the last couple of days, I'm in Freiburg attending the ICFP Conference. Yes, we were invited after being told we had "won a prize" the week after the contest.
We came in a very close 2nd with 84.92%, beaten only by Team Smartass who took top honours yet again (they won last year) with 90.22%. Team Smartass is a team of four Googlers from the Mountain View office, with an ex-South African (Daniel Wright). And surprisingly, the only member of their team to come to the conference was him! So three of the four prize winners that are here are South African. A quote from the contest report sums it all up:
"Incidentally, Africa was the continent with the highest percentage of winning teams." Go Africa!! There was also another South African team, and I'm curious to find out who they were. There's a graph in the report showing the scores of the top six teams over the 72 hours, and we were in the lead until the very last hour, when Smartass made a big jump leaping just past us.
The organisers were concerned about a brute-force approach of drawing the scene manually doing too well. Well, they were correct to be concerned, as third place and the judges' prize went to Celestial Dire Badger (Jed Davis on his own), who got 75.59% using this strategy.
The top 15 scores:
| Place | Score | Survival chance | Team name |
| --- | --- | --- | --- |
| 2 | 224623 | 84.92% | United Coding Team |
| 3 | 293898 | 75.59% | Celestial Dire Badger |
So the judges have declared that:
"Perl is a fine tool for many applications." Keep your eyes open for a post on our tools and how we morphed Endo! There is just one thing we'd like to ask all the keen Endo hackers out there that is not answered by the contest report. Anyone have a blinking clue as to what the "crucial hint to solving the contest sneakily hidden in the Major Imp stories" is?