‘More than 40 per cent of Australians do not know how long it takes the Earth to travel around the sun, according to a new survey.’
This was how ABC News opened its story last month on the Australian Academy of Science’s findings about science literacy in Australia – the results of an online survey of 1515 people. Other news outlets ran the story too, giving column space to how people failed to identify how much of the Earth’s surface was ocean (only 39% got it right), or whether humans coexisted with dinosaurs (73% passed). Only Gizmodo framed their headline to mirror exactly what most of us were thinking: ‘The Five Dumbest Science “Facts” Believed by Australians’.
‘How can people be so stupid?’ we asked each other over coffee or on Twitter the next day. ‘I blame the parents!’ commented one person on the ABC site; another wrote that ‘I would put it down to the dilution of education … Excursions for films, way to much sport and so on’. ‘It’s the dumbest people having the most children,’ wrote another. The Academy’s Professor Les Field himself pointed the finger at Google and the school curriculum. ‘I would hope that a survey like ours is a wake-up call that says there is an issue, an underlying issue that we need to address,’ he said.
The problem is, this survey actually tells us very little. Its questions ask the wrong things, its method is severely flawed, and what data it does provide is used to leap to invalid conclusions, not just by the media and the public, but by the academics at the Academy themselves. In fact, when compared to proper scientific studies on science education outcomes, this study paints a completely wrong picture of the state of Australia’s science literacy – a reprehensible outcome for a prestigious organisation that’s influencing science policy.
In fact, the whole project uses such bad science from start to finish that it’s a great teaching tool about science – but only by demonstrating what not to do. Let’s take a look at where it goes wrong, why it’s giving the wrong message to policymakers and a detrimental message to the public, and how things could be done better next time. Because science literacy is vital enough that it’s worth getting right.
Take the test
There were 7 questions in the survey, which was carried out by Auspoll for the Australian Academy of Science. You can access the Auspoll report here.
In reproducing the questions for you here, I run into the first problem: nowhere in the report can I find a clear statement of whether the questions were open-ended or multiple-choice. This is a serious omission, because as we’ll see later, knowing how the questions were asked will be important in judging the results. However, the graphs in the report break the responses down according to answer groups (e.g. 10-15% water, 16-20% water etc). Others have assumed these were the multiple-choice brackets offered to the respondents; I will too.
The questions:
- How long does it take for the Earth to go around the sun? (a) One day, (b) one week, (c) one month, (d) one year, (e) not sure.
- Is the following statement true or false? The earliest humans lived at the same time as dinosaurs.
- What percentage of the Earth’s surface is water? (a) 0-25% (b) 26-50% (c) 51-60% (d) 61-69% (e) 70% (f) 71-80% (g) 81-100% (h) not sure.
- What percentage of the Earth’s water is fresh? (a) 0% (b) 1% (c) 2% (d) 3% (e) 4-10% (f) 11-25% (g) 26-50% (h) 51-60% (i) 61-70% (j) 71-80% (k) 81-100%
- Do you think that evolution is occurring? (a) Yes, I think evolution is currently occurring. (b) No, I do not think evolution is currently occurring. (c) No, I do not believe in evolution. (d) Not sure.
- Do you think that humans are influencing the evolution of other species? (a) Yes (b) No (c) No, I do not believe in evolution (d) Not sure.
- In your opinion, how important is science education to the Australian economy? (a) absolutely essential (b) very important (c) somewhat important (d) not at all important (e) not sure.
Answers are discussed in the rest of this article, but are given in a list at the very end, if you want to duck down and check how you went.
Ask a silly question…
It’s essential for any scientific investigation to have a clearly-stated aim. In this case, the report gives it as ‘to determine [Australians’] level of science literacy and how it has changed over the past 3 years.’ But in his comments to the media, Professor Field draws conclusions about much more than just that: specific things, like the quality of the school curriculum, the effect the internet is having on our knowledge, and our ability to participate in civic debate. So let’s look at these issues and see whether the questions are suited to discovering anything about them at all.
Education quality
The main problem with using this survey to assess education quality is that the people surveyed are from a range of age groups. 90% of the people surveyed are 25 and over, meaning their results reflect different curricula going back more than 50 years. On top of that, there are a few extra decades in there for forgetting what was taught, or picking up ‘noise’ from pop culture and other sources.
In fact, of the 1515 respondents, only 147 were in the youngest age group of 18-24 years old – and some of those will have been out of school for nearly ten years. Any scientist will tell you that the margin of error of a survey grows as the number of participants goes down. At the full sample size of 1515, the uncertainty is already given as 5 percentage points (at a 95% confidence level), meaning a statement like ‘40% of all respondents got it right’ really means ‘We’re 95% sure that the actual result is between 37.5% and 42.5%.’ For the much smaller subset of 147, this uncertainty will be much higher. We aren’t actually told what it is – so no real claims can be made about individual sub-groups in this study.
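To see where that ‘37.5 to 42.5%’ band comes from, here’s a minimal sketch using the textbook margin-of-error formula for a proportion. This is an assumption on my part – the report doesn’t say how Auspoll calculated its uncertainty, and their weighting would shift the numbers a little – but it reproduces the figures quoted above:

```python
import math

def margin_of_error_95(p, n):
    """Approximate 95% margin of error for a proportion p observed in a sample
    of n people, assuming simple random sampling (weighting, as Auspoll applied,
    would typically widen this slightly)."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# Full sample: '40% of all respondents got it right', n = 1515
moe = margin_of_error_95(0.40, 1515)
print(f"±{moe:.1%}")  # ~±2.5%, i.e. the true figure is probably between 37.5% and 42.5%
```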
For example, the Academy makes this claim: ‘The proportion of 18-24 year-olds who correctly answered that it takes one year for the Earth to orbit the sun fell to 62%, from 74%.’ Translated into real numbers, this means about 91 people got it right this year, compared to roughly 109 in 2010 (assuming a similarly sized sub-group that year). That’s such a small number of people that a few ‘donkey voters’, or just the natural differences between the groups sampled, could quite plausibly account for much of the perceived change. If they published the confidence intervals we’d see whether the drop was statistically significant or not – but they haven’t, so Prof. Field should know better than to cite these numbers, because without the ‘error bars’ we just don’t know whether they mean anything at all.
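For illustration only – the report doesn’t tell us how many 18-24 year-olds were surveyed in 2010, so I’m assuming a sub-group of about 147 in both years – the same textbook formula puts error bars of roughly ±7-8 percentage points on each year’s figure. This is exactly the kind of detail the Academy should have published alongside the headline drop:

```python
import math

def margin_of_error_95(p, n):
    """Approximate 95% margin of error, simple-random-sampling assumption."""
    return 1.96 * math.sqrt(p * (1 - p) / n)

# n = 147 is reported only for the current survey; re-using it for 2010 is an assumption
for label, p in [("this year", 0.62), ("2010", 0.74)]:
    moe = margin_of_error_95(p, 147)
    print(f"{label}: {p:.0%} ± {moe:.1%} -> {p - moe:.1%} to {p + moe:.1%}")

# this year: 62% ± 7.8% -> 54.2% to 69.8%
# 2010: 74% ± 7.1% -> 66.9% to 81.1%
```

Whether the 12-point drop clears those error bars depends on details – the real 2010 sub-sample size, the sampling design, the weighting – that the report simply doesn’t give us.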
Let’s suppose for a moment that the survey results are reliable, and only 62% of young adults know it takes a year for the earth to go around the sun. Does this represent a crisis in education standards? To answer that, we need some context. I don’t follow the research on education quality, but some basic research skills helped me track down at least one, much better, study that contradicts the Academy’s conclusions.
Yes, while the Academy is chasing headlines with this small survey (some have called it out as ‘concern trolling’), other organisations are producing serious results using proper scientific methods. The Programme for International Student Assessment (PISA) is an OECD undertaking that aims to compare education outcomes across countries. Its 2006 survey covered 57 countries and about 400,000 students – 14,170 in Australia alone, nearly a hundred times more than the youngest age bracket in the Academy’s survey. All the students surveyed were 15 years old, which meant apples were being compared with apples.
The OECD findings? Far from being dunces, Australian kids are at the top of the world: Australia was outperformed in scientific literacy by only three countries – Finland, Hong Kong and Canada. In addition, in Australia there was no significant gender difference on the overall science literacy scale – contradicting the Academy’s (statistically unreliable, as we have seen) claims that men outperform women. But the larger samples of the OECD study do give us enough detail to show some real causes for concern: students performed far worse if they were in regional schools, or were Indigenous Australians. In addition, there were fewer female high-achievers in recent years than usual.
In other words, the Academy is pointing taxpayer dollars at the wrong targets. Our education standards are not poor; they are excellent, but they need work in specific areas. Instead of spending millions to overhaul the whole curriculum, maybe that money should go into regional schools, Indigenous education, and programmes to reverse the slippage of girls at the top.
Google-itis
‘Google has a lot to answer for,’ Professor Field was quoted as saying in the Telegraph. ‘I do think younger generations rely much more on technology rather than fundamental understanding of the basics.’
But did the survey questions really test fundamental knowledge? In my opinion, not really – and where they did, people generally did much better than the Academy made out.
Let’s take the two water questions. To get them right, you needed to answer that exactly 70% of the Earth is covered by water, and that 3% (not 2% or 4%) of the Earth’s water is fresh. Does that test fundamental underlying knowledge? I don’t think so. Here’s why.
In the fresh water question, I’d argue the ‘fundamental knowledge’ is that there is vastly more salt water than fresh – say, knowing that more than 90% is salty would count as a pass. By this metric, 55% of people who gave a numerical answer got it right. This is very different from the picture the Academy paints: ‘Only 9% correctly gave the answer.’ Even so, 55% seems low. Is this cause for concern?
Out of interest, I asked a few highly educated friends of mine the same question. A CSIRO scientist with a PhD guessed 1% fresh water; a surgeon guessed 5%; an engineer said 4% (that would have been my guess too). I would argue that being out by a few percent doesn’t make them bad at their science-reliant jobs. In fact, that’s exactly what Google and Wikipedia are for – checking the fine details. The important thing then becomes having the training in critical thinking to interpret the sources you find – and that, despite the Academy’s gesturing, is not addressed in any of their survey questions, even though it’s arguably a much more important aspect of science literacy.
Similarly with the oceans question. The Academy reports their findings as ‘39% know that 70% of the Earth’s surface is underwater.’ But you could also report it like this: ‘97% of all Australians accurately identify that ocean covers more than half the earth’s surface.’
There’s also a big problem in the way they asked the questions that probably distorts all the results. Anyone who’s been through high school has probably done hundreds of multiple-choice tests, and you get pretty good at gaming the system. Good teachers know that if you ask a question like ‘What colour was the widget? (a) yellow-blue (b) yellow-green (c) yellow-orange (d) pink,’ the answer isn’t going to be pink – why else would the examiner bother putting in all the fine distinctions between yellows? But the Academy makes its answer choices suggestive of the ‘right’ answer. In the fresh water question, you can choose between 1%, 2%, 3%, 4-10%, or progressively wider ranges above that; the fine divisions at the low end signal where the answer must lie, so a student knowing nothing about water immediately knows to pick something very low. With the ocean question, it’s even worse: every option is a range except the ‘correct’ answer. It’s like there’s a big arrow pointing to it saying ‘pick me!’
One last point about Google: I did take the time to google the ocean question after I got it wrong (I said 71%, not 70%), and I found this site, from the US government’s National Oceanic and Atmospheric Administration (NOAA). The very first sentence? ‘The ocean covers 71 percent of the Earth’s surface.’ Another, from Australia’s national science organisation, CSIRO: ‘71% of the global surface by area’.
That’s right – ocean science experts would have marked the Academy wrong on one of their very own questions.
Informed civic engagement
Probably the most important reason to gauge Australians’ science literacy is that there are so many science-related issues that involve us as citizens. As members of the public we need to make decisions about how to vote on energy policy, how much to pressure our politicians about climate change, whether to choose the non-GM foods on the supermarket shelves, and whether it’s worth paying a bit more for a fuel-efficient car. These all require basic science knowledge.
Did the survey ask questions that gauge this? Maybe. Certainly, understanding that things evolve, and that humans influence the process, is important – for example, in understanding antibiotic resistance or the impact of ecosystem change. But if the Academy were really interested in this topic, I think they could have chosen even better questions. I’m not sure yet which ones I’d choose, but they’d test basic concepts integral to current issues or everyday decisions – like electricity production and usage, food ingredients and health, genetic modification, climate change, and ecosystem management. I’m sure you can think of others.
And as I mentioned before, an even better bit of science to have in your citizen’s toolkit than specific facts is the ability to think critically and compare evidence. My own version of the survey would also ask about concepts like: if there are two studies that show different things, how do you choose which to believe?
As mentioned before, the Academy study doesn’t ask any questions about critical thinking or evaluation of evidence. They do ask for people’s opinion on whether they think science is important – but the Academy can hardly use the 79% ‘yes’ response to lobby government now they’ve shown we’re all dunces, can they?
Bad science
I’ve talked about the bad methodology (asking leading multiple-choice questions), the bad statistics (drawing conclusions from sub-groups with small sample sizes and unspecified uncertainty), and the bad conclusions (overreaching the limits of what the study was designed to do, and framing results in misleading ways). But there’s more.
Any good scientific endeavour aims to make itself reproducible. That is, it gives enough detail about the method used that researchers in a different organisation can try to replicate the study. In this case, the Academy fails. They don’t even state clearly whether the respondents were asked open-ended or multiple-choice questions (I have inferred it was multiple choice). Were the respondents asked to do the survey at a particular time of day? Would they have been distracted, say, by a TV while doing it? How was the survey website set up? Did its design look professional enough to convey that the survey should be taken seriously? Scientists know that these things can have big impacts on the quality of responses – but we’re left to guess.
A good bit of scientific enquiry also cites other researchers’ work. It’ll tell you what other researchers have already done, how this study adds to the knowledge, and how its results compare to others. If the Academy had done this, they might have pointed out the OECD results I mentioned earlier.
Scientists should also always try to point out the flaws in their own work. Professor Field does concede that this survey is ‘just a snapshot’ of science literacy. But he continues to make unjustified claims about its findings.
These headlines will reverberate in the media and the public mind for months, if not years. Even if the survey is just a ‘snapshot’ and not a peer-reviewed report, it’s negligent for the Academy to put out bad science and then chastise the rest of us for being scientifically illiterate.
Sending a bad message
If we do want to improve people’s understanding of science, the last thing we want is to shame them.
We all have misconceptions – even scientists within their own fields. As Professor Field says, some of them we get from movies like Jurassic Park (and some of them we get from the Academy’s own graphics for this very survey, one of which shows the Earth way out of proportion to the sun). But the only way to overcome them is to be able to see that we have them, and for that we need to be less defensive and more open to exploring our own deficiencies.
Distinguished biology educator Professor Joel Mintzes once wrote, ‘One of the points I have tried to make over the past 25 years is that there should be no shame associated with conceptual errors, either within or outside of one’s field of ‘expertise’.’ (Quoted in Heavenly Errors, a book about astronomy misconceptions by Neil F. Comins. Incidentally, Comins found that 90% of his uni students had a misconception about the relative sizes of the earth and sun – ring any bells?)
Instead of pointing the finger at our ignorance, the Academy could have handled things differently. They could have made it a friendly reminder that we all have misconceptions – and that it’s nothing to be ashamed of, as long as we’re open to improving ourselves.
In summary
The Academy says their survey shows education standards are slipping, there’s a significant gender difference, people are losing deep understanding in favour of Google, and our failings show we can’t engage in civic debate.
In fact, their survey shows none of this. Any claim involving a sub-group (e.g. 18-24 year-olds, men versus women, etc.) rests on such a small sample that we don’t know whether the differences are statistically significant. Furthermore, their questions are not targeted at the conclusions they are trying to draw. Both their data and their conclusions are contradicted by bigger, better surveys. And while they point the finger at Gen X and Y for googling things, Google could have shown them that one of their own answers disagrees with information from CSIRO and NOAA.
The Australian Academy of Science’s motto is ‘Promoting excellence in Australian science’. Unfortunately this survey doesn’t live up to their ideals. By aiming for headlines instead of genuine information, they could cause taxpayer money to be spent trying to fix the wrong problems. And when science literacy is such a vital issue, we all need them to do better.
———————————
Survey question answers supplied by the Academy: 1: one year. 2: false. 3: e (70%). 4: d (3%). 5: a (yes, evolution is occurring). 6: a (yes, we are influencing evolution). 7: [No answer supplied, but we know it’s a or b, of course.]
Excellent analysis. Ironically they may be right that our science literacy and standards are slipping if this is what our own Academy of Science is putting out there. The questions are terrible, as you say. Inconsistent and leading the person filling out the questionnaire. Did they also survey religiosity or beliefs for the survey? Asking the question as ‘do you think evolution is occurring’ is likely to show a religious influence rather than scientific understanding, especially since it is phrased as an opinion, which it is not. I also couldn’t agree more with your line of reasoning on real science literacy being critical thinking and analysis, not whether you know the fresh water make-up to +/- 1%. Ridiculous. (FYI, I chose 4-10% for fresh water and 71-80% for coverage. Guess I better give up on academia and throw my PhD in the bin…)
Hey Jess, great point about the religious aspects of the questions. The Academy’s report doesn’t say anything about whether respondents were asked their religion, but it does imply that results weren’t weighted for religion (as they were for gender, age and location) to reflect national demographics. So you’re right, the religious make-up of those quizzed could skew these questions a long way from a true national snapshot. And in any case the results don’t necessarily reflect science literacy, since people could well know what scientists think but choose to reject it for other reasons.