Wednesday, 7 December 2011

Euro 2012: How deadly is the group of death?

Much (well, some) has been made of the Euro 2012 draw last Friday, where, thanks in part to co-hosts Poland and Ukraine being the top two seeds, a rather nasty-looking 'group of death' formed. Group B sees the Netherlands, Germany, Portugal and Denmark pitted against one another, with at least three of the four capable of winning the tournament. But how deadly are we talking?

It gets interesting if you compare the latest UEFA rankings with the latest FIFA ones, which are very similar (about 95% correlation) but not identical. Highlights include Portugal, who are ranked 11th by UEFA but are the 5th best European team according to FIFA. The table below summarises the Euro 2012 groups with each team's UEFA and FIFA ranking (where I've filtered out all the non-UEFA teams). (Click for a full-resolution version.)

Based on FIFA rankings, group B is indeed the deadliest, with an average country ranking of 4.5 (and, interestingly, every team in it ranked above every team in group A). According to UEFA, however, it's not group B but group C - home to the Republic of Ireland - which is the hardest, although there is very little to choose between the two.
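For the record, the 'deadliness' calculation is nothing fancier than an average. Here's a minimal sketch in Python - the Group B ranks below are illustrative Europe-only FIFA positions, chosen to be consistent with the 4.5 average quoted above rather than taken from the full table:

```python
# Europe-only FIFA ranks for each team in a group (illustrative values
# consistent with the 4.5 average for Group B quoted in the post).
groups = {
    "B": {"Netherlands": 2, "Germany": 3, "Portugal": 5, "Denmark": 8},
}

def average_rank(group):
    """Mean Europe-only FIFA rank of the teams in a group (lower = deadlier)."""
    return sum(group.values()) / len(group)

deadliness = {name: average_rank(teams) for name, teams in groups.items()}
print(deadliness)  # Group B averages 4.5 - the 'group of death'
```

Swap in the other groups' rankings and the same one-liner ranks all four groups by deadliness.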

Sunday, 25 September 2011

How do you solve a problem like Sebastian?

A while ago over on Significance I looked at what happened if you compared different Formula One scoring systems with that year's drivers' championship. Yesterday, Sebastian Vettel came within 1 point of clinching the title with a whopping five races to go. In 14 races he's won nine times, come second four times and fourth once, amassing a ridiculous 309 points out of a possible 350.

As it stands, Vettel is 124 points clear of his nearest rival Jenson Button, who would have to win every single remaining race and hope that Vettel scores nothing if he is to win the drivers' championship. In short, the season is as good as over, but is the scoring system to blame? A couple of years ago the FIA tweaked the scoring system to try and encourage second-placed drivers to 'race to win'. Previously you got eight points for second and ten for first, which (it was perceived) didn't offer enough incentive to try and push on for first place. Now you get 25 points for winning and 18 for second, a greater incentive that - in theory - will encourage more aggressive racing.

So what happens if we run this season's results (so far) under the older system? If you're happy to assume that the scoring system doesn't significantly affect how a driver races (quite a big assumption, I admit, but this is Just For Fun) you can enjoy this table:
As my previous dabbles with comparing scoring systems suggest, it doesn't make much difference. Admittedly, Vettel would have already won the championship by now, but only just (a 51 point lead with 50 points available), and no-one in their right mind thinks he won't win this year anyway.
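If you want to play along at home, here's a quick sketch of the comparison - the finishing positions are reconstructed from the 309-point total above (nine wins, four seconds, a fourth), and the two points tables are the current system and its 2003-2009 predecessor:

```python
# Points for positions 1-10 under the current (2010-onwards) system,
# and positions 1-8 under the 2003-2009 system.
NEW = {1: 25, 2: 18, 3: 15, 4: 12, 5: 10, 6: 8, 7: 6, 8: 4, 9: 2, 10: 1}
OLD = {1: 10, 2: 8, 3: 6, 4: 5, 5: 4, 6: 3, 7: 2, 8: 1}

def season_points(finishes, system):
    """Total points for a list of finishing positions; no points outside the table."""
    return sum(system.get(pos, 0) for pos in finishes)

vettel = [1] * 9 + [2] * 4 + [4]
print(season_points(vettel, NEW))  # 309
print(season_points(vettel, OLD))  # 127
```

Run every driver's finishes through both tables and you can rebuild the whole alternative championship in a couple of lines.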

If you ask me, we need something more radical than a tweaked scoring system to make things exciting. My proposal: do away with qualifying and have the cars line up in reverse finishing order from the previous race. It's simple, it would certainly increase overtaking, and no-one likes qualifying anyway.

Addendum: have a bonus table, including the results under the old(er) system of ten points for first, six for second. Perhaps unsurprisingly, Vettel is even further ahead on this one.

Tuesday, 6 September 2011

Under the weather

I recently got over a cold. Like most people (I hope), I don't like colds, and they often seem to go on forever. If only there were an easy way to tell when I was over the hump and the worst was behind me. Sure, I could pay attention to whether I 'feel' better, but that's not very scientific. What's scientific is graphs. And what could be more scientific than a graph plotting the number of tissues I use over the course of a cold?

So, for science, I kept count of the number of tissues I used during my cold. It's not glamorous work, but such sacrifices have always been necessary in the pursuit of human knowledge. The main features seem to be a very sharp increase early on (I had a couple of days of mostly a sore throat before the sniffles set in), a peak at around the five-day mark, and then a more gradual decline as I got it out of my system. I hope many of you will appreciate the choice of colour here; the "communication theory" module I took as part of my statistics MSc was surely not wasted.

Thursday, 30 June 2011

Cereal Killer

I don't know about the rest of you, but I spend far too much of my time in supermarkets reading the backs of cereal packets. I really like breakfast cereal, and seem to think that as a consequence it is a worthwhile use of my time to obsess over exactly how bad for me it is. Unfortunately, there are quite a few different parameters at play when it comes to assessing how nutritious (or otherwise) a particular cereal is, and so I thought I'd devise a simple metric to help me decide quite how guilty I should feel about my bowl of sugar. To keep it straightforward, I decided to focus on the three factors I think are most important: calories, sugar and fibre.

Now, in my rather simplistic school of nutrition, calories and sugar are Bad, and fibre is Good. What's more, calories and sugar are equally as Bad as fibre is Good, and so to calculate how Good overall a cereal is I simply take its fibre content per 100g (as a % of one's guideline daily amount (GDA)), and subtract from this the equivalent number of calories and amount of sugar. It's pretty crude (most notably for not taking into account what type of sugar we're talking about), but who has the time to worry about such things? I ultimately standardise these numbers so the best cereal scores 100 and the worst scores 0, with numbers in between reflecting how far along this scale a particular product is.
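In code, the whole metric is just a subtraction and a rescale. The cereal names and %GDA figures below are entirely made up, purely to show the mechanics:

```python
def raw_score(fibre_gda, calories_gda, sugar_gda):
    """Fibre is Good; calories and sugar are equally Bad.
    All inputs are per-100g figures as a % of guideline daily amount."""
    return fibre_gda - calories_gda - sugar_gda

def standardise(scores):
    """Rescale so the best cereal scores 100 and the worst scores 0."""
    lo, hi = min(scores.values()), max(scores.values())
    return {name: 100 * (s - lo) / (hi - lo) for name, s in scores.items()}

# Hypothetical cereals with hypothetical %GDA figures:
cereals = {
    "Branny Os":   raw_score(fibre_gda=60, calories_gda=17, sugar_gda=20),
    "Sugar Puffs": raw_score(fibre_gda=5,  calories_gda=19, sugar_gda=40),
    "Wheaty Bits": raw_score(fibre_gda=40, calories_gda=17, sugar_gda=5),
}
print(standardise(cereals))
```

Note that because of the min-max rescaling, the 100 and 0 are relative to whichever cereals happen to be in the comparison, not absolute judgements.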

Crunching (y'know, like Crunchy Nut) the numbers on some of the major cereals (mostly Kellogg's, but with a couple of Weetabix ones thrown in for good measure), I can exclusively reveal the first ever Statscream Cereal Assessment Ranking:

  1. All Bran (100.0)
  2. Bran Flakes (56)
  3. Weetabix (55)
  4. Raisin Wheats (45)
  5. Frosted Wheats (41)
  6. Fruit 'n' Fibre (34)
  7. Corn Flakes (28)
  8. Just Right (21)
  9. Rice Krispies (20)
  10. Special K (19)
  11. Honey Loops (19)
  12. Krave (11)
  13. Crunchy Nut (2)
  14. Coco Pops (1)
  15. Frosties (0)
There are perhaps few surprises at either end of the scale, although All Bran's utter dominance is a little disheartening. Of the also-rans, I reckon Frosted Wheats offer the best tastiness-to-healthiness ratio, although clearly this is something that needs to be taken into account in a future model. There are also plenty of cereals that could be included, but time is finite after all. If you'd like to know how your particular (non-Nestlé) cereal fares, let me know and I'll run the numbers (if, somehow, you don't quite have the motivation to do it yourself).

Tuesday, 31 May 2011

Eurovision 2011 post-mortem

I noticed the other day that the split jury/televote results of this year's Eurovision Song Contest had appeared, meaning it's time to dig into them to see what (if anything) we can find. For the uninformed, the contest has, in an attempt to curb 'political' voting, used a 50% jury, 50% public vote system since 2009. In theory, the jury will be nice and objective, reining in any political tendencies amongst the hoi polloi. What's interesting, then, is to compare how the entrants fared with the jury and the public. Here are some of the highlights:

  • Biggest winner under the televote this year was Russia, who finished a whopping 18 places higher with the public than with the jury (where they came rock bottom).
  • Also faring well with the public, perhaps surprisingly, was the UK, finishing 17 places higher. Whilst we may try and take an "everybody hates us, we don't care" attitude to Eurovision, it seems that if we send a 'famous' boy band we can at least win over the public. Now if only they'd had a good song...
  • At the other end, Austria, Slovenia and Denmark were the biggest losers amongst the public. They finished 19, 18 and 15 places lower on the televote than jury vote respectively.
  • This is the first year when the public and the jury have disagreed over who should win. The public went for eventual winners Azerbaijan, whereas the jury preferred Italy (whose 11th place in the televote meant they could only manage second overall).
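The 'biggest winners and losers' above come from a simple place-difference calculation. Here's a sketch - the jury/televote placings below are illustrative, chosen to match the differences quoted in the bullets (there were 25 countries in the 2011 final):

```python
# (jury place, televote place) for a few finalists; illustrative numbers
# consistent with the place differences quoted above.
placings = {
    "Russia":  (25, 7),   # rock bottom with the jury
    "UK":      (22, 5),
    "Austria": (6, 25),
}

def public_boost(jury, tele):
    """Places gained with the public relative to the jury (negative = places lost)."""
    return jury - tele

for country, (jury, tele) in placings.items():
    print(country, public_boost(jury, tele))
```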
Finally, as with last year's results, I've produced a map summarising the differences between jury and televote. I haven't bothered including the semis this time round, and whilst there seems to be a bit of an Eastern bias, it's not overly convincing. (Click for big.)

Thursday, 26 May 2011

The English Premier League Under Pseudo-AV: Bonus Material

I recently put something up on Significance where I took this year's English Premier League table and did a sort-of alternative vote analysis of it. This involved taking the bottom-placed team out, removing all the results involving them, recomputing the points, redoing the table, removing whoever was now last, and so on (and who said AV was complicated?). You can see what happens if you read the article, but as a bit of bonus fun, I thought I'd see what happens if you do it in the other direction, and here are the results:
Liverpool and Everton fall spectacularly, with Stoke and my beloved Blackburn doing the opposite. As with the 'proper' AV I initially did, this reflects (to an extent) where these teams were getting their points from. For instance, Liverpool did well against the top teams, but badly against the lower ones, whilst the opposite can be said of Stoke. Neat, eh?
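In case anyone fancies reproducing this, here's a minimal sketch of the elimination procedure in Python. It's a toy version: real Premier League tie-breaks go beyond goal difference, and the three-team 'season' at the bottom is made up purely to show it running:

```python
from collections import defaultdict

def table(results, teams):
    """Order teams by points (3 for a win, 1 for a draw) then goal
    difference, best first. Results are (home, away, home_goals, away_goals)."""
    pts, gd = defaultdict(int), defaultdict(int)
    for home, away, hg, ag in results:
        gd[home] += hg - ag
        gd[away] += ag - hg
        if hg > ag:
            pts[home] += 3
        elif ag > hg:
            pts[away] += 3
        else:
            pts[home] += 1
            pts[away] += 1
    return sorted(teams, key=lambda t: (pts[t], gd[t]), reverse=True)

def pseudo_av(results):
    """Repeatedly strike out the bottom team and all its results;
    the last team standing is the pseudo-AV champion."""
    remaining = {t for r in results for t in r[:2]}
    order = []
    while remaining:
        loser = table(results, remaining)[-1]
        order.append(loser)
        remaining.discard(loser)
        results = [r for r in results if loser not in (r[0], r[1])]
    return order[::-1]  # best first

# Toy 'season': A beats everyone, B beats C.
toy = [("A", "B", 2, 0), ("B", "C", 1, 0), ("A", "C", 3, 1)]
print(pseudo_av(toy))  # ['A', 'B', 'C']
```

The 'other direction' version is the same loop with `[0]` in place of `[-1]`, striking out the top team each round instead.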

Sunday, 15 May 2011

Eurovision Blues

Eurovision came and went again. I wrote about the impact of automatically qualifying for the final for Significance, (as well as rehashing an old Statscream post whilst I was at it). Azerbaijan won, which was nice, but mostly because it earnt me some £££. I would have liked somewhere closer to home so I could go next year, but I don't think I'm quite ready for Baku yet.

Surprisingly, Italy (returning to the contest for the first time since 1997) came second, despite having a song that struck me as being not very Eurovision-y at all. Nevertheless, it gives us an excuse to compare where the votes for a Western European country come from with somewhere rather more Eastern. To that end - maps! First up, Azerbaijan's points - did they all come from those mysterious Eastern countries which are surprisingly difficult to find on a map?

Hmm, pretty much. How about Italy? Were they equivalently well supported by their Western allies?

It seems so. Strong evidence of the Eurovision politics we all know and love? Maybe. This is of course an entirely non-rigorous look at the question of bloc voting (the BBC did a good article about this a few years ago if you fancy something more thorough), but is quite a nice visual illustration of how this year's top two fared.

"What about Blue?" I hear you say? Well they had reasonably pan-European support, although with a definite Eastern leaning to it. If we'd won over a bit more of the west we might have done slightly better than a mere 11th, but at least we didn't come last. Again.

Wednesday, 2 March 2011

Mostly Harmless

So a fun game to play is where you (and a friend, if you have any who would also find this fun) try to make successive Google searches, each time trying to get fewer hits. You 'lose' if you end up getting more hits than on the previous go, or if you get zero hits. Obviously this doesn't work without some rules to stop it being super easy, and I had a go the other day with the phrase "no BLANK were harmed in the making of". Here are the results of a few games.

Friday, 28 January 2011

Newspaper Probability 101

The Sun had a recent article about a 'remarkable' couple whose third child was born at 7:43am. Naturally, this time isn't special in itself; what is special is that their previous two children were born at 7:43 as well. Crikey, that's pretty impressive, isn't it? Three children born at the same time? Well, not quite: one was born at 7:43pm rather than am, but still, what are the chances?

The Sun reckons the couple had "defied odds estimated at 300million to one", which means it's time to play that fun tabloid game: How Do They Work That One Out?

Let's look at the situation. All three babies were born in the same minute, but which minute it was wasn't specified in advance. As such, the first baby could have been born at any time, and then what's remarkable is that the subsequent two were both born in that same minute as well.

The probability that a child is born in a given minute (on a 12-hour clock system) is just 1 in 720 - that's how many minutes there are in 12 hours. Next, if we assume that children's birth times are completely independent of one another, then the probability that the next two children are born in that same minute is 1 in 720 x 720 = 518,400. This is way short of the 300 million the article claims, so what have they done? It's a classic mistake: they've overlooked the fact that the first baby could have been born whenever, and so they've done 720 x 720 x 720, which is 373,248,000 - much more like the probabilistic claim being made.
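The arithmetic, for the sceptical, fits in a few lines:

```python
minutes = 12 * 60  # minutes on a 12-hour clock

# Correct version: the first baby sets the time for free,
# and the next two must each match it.
print(minutes ** 2)  # 518400

# The Sun's (mistaken) version also charges the first baby
# for landing on its own birth minute.
print(minutes ** 3)  # 373248000
```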

So the Sun claimed 300 million, our calculations put it at a much less remarkable 518,400, and that's assuming birth times are independent (something for which I can't find data one way or the other right now). That's still a fairly long shot, but there is also the lottery ticket factor - it's fairly common for the lottery to be won by someone, even though the odds for any one person are 14 million to 1. That's because sufficiently many people buy a ticket that it becomes fairly likely someone will win. In this case we need to look at how many families could experience three children born at the same time.

According to the ONS, in 2004 there were 17 million families in the UK, of which 16% had 3 or more children. That's 2.72 million families with a ticket to a 500,000 to 1 lottery. Unsurprisingly, it's not so surprising after all.
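A quick back-of-the-envelope check of how likely it is that *some* family holds a winning ticket, using the figures above:

```python
families = 2_720_000   # UK families with 3+ children (17 million x 16%)
p_match = 1 / 518_400  # chance any one such family's births all line up

# Probability that at least one family somewhere experiences the coincidence,
# assuming families are independent of one another.
p_somewhere = 1 - (1 - p_match) ** families
print(round(p_somewhere, 3))  # roughly 0.995 - all but certain
```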

Monday, 10 January 2011

Rough Stats: More on University Admissions

I've been playing around with UCAS admissions data, looking at the success rates of applicants by ethnicity. This was mostly inspired by David Lammy's investigations and the articles that led to a previous entry on here about the low success rate of black applicants to Oxbridge.

Oxford hit back over the accusations, giving a fairly good account of itself as it identified various reasons why black applicants might be few in number in the first place, as well as why they may have comparatively lower success rates in their applications. I've poked through the UCAS data for the last 6 years and looked at the success rate of different ethnic groups, comparing each to the 'average' success rate for that year. Here's a quick and dirty graph:

Some real food for thought there. Black applicants seem to consistently underperform, having a 10% lower chance of succeeding in their university application than the 'average' student. Those identifying themselves as white, Asian and 'mixed' all seem to hover around the average (although with applicants being around three-quarters white this group has a big sway in determining the average to begin with). 'Others' don't do too well either, maintaining a fairly steady -5%.
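To be concrete about what 'comparing to the average' means here, this is the calculation, with entirely hypothetical round numbers in place of the real UCAS figures:

```python
def relative_success(applicants, accepted):
    """Each group's success rate minus the overall rate for the year
    (0 = average, -0.10 = ten percentage points below average, etc.)."""
    overall = sum(accepted.values()) / sum(applicants.values())
    return {g: accepted[g] / applicants[g] - overall for g in applicants}

# Hypothetical numbers, purely to show the mechanics:
apps = {"white": 300_000, "black": 30_000, "unknown": 20_000}
accs = {"white": 210_000, "black": 18_000, "unknown": 15_700}
print(relative_success(apps, accs))
```

Note how the white group, being around three-quarters of applicants, sits almost exactly on the average by construction - the sway mentioned above.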

What's most peculiar, though, is the 'unknowns', who suddenly shoot up to overachieving as much as black applicants underachieve from 2006 onwards. In fact, they pretty much perfectly mirror the black applicants for those 4 years, including the jump from 2008 to 2009. Coincidence? No idea.

One could easily point at these data as proof of prejudice in university admissions, but to do so would be missing some pretty glaring questions. In Oxford's rebuttal of Lammy's accusations they point out that their black applicants tended to apply for their more competitive courses, which went at least some way to explaining their poorer success rate. Is there any reason to think this pattern isn't repeated on a national scale? Another big question is the unknowns - what data are hiding there? Is it reasonable to assume that those who choose not to disclose their ethnicity are representative of the entire applying population? I'm going to go with "probably not" (and at some point get around to flicking through the literature for a better answer).