Example 3.1 Section
In January 2015, a report by the Pew Research Center, commissioned by the American Association for the Advancement of Science (AAAS), compared the views of the American public with those of scientists belonging to the AAAS. Here is a link to a summary of the report, "Public and Scientists’ Views on Science and Society." (Note that the complete report is available as a link in the upper right-hand corner of that web page, and another link provides the exact questions asked.)
Example 3.2 Section
In January 2015, a report on ABC News discussed a study regarding the relationship between the ingredients in certain medicines and the onset of dementia in the elderly. Here is a link to the ABC report "Common Over-the-Counter Medicines Linked to Dementia in New Study".
Example 3.3 Section
In January 2015 a report on ScienceDaily discussed a study regarding whether memory might be associated with whether you have your eyes open or closed when you try to recall recent events. Here is a link to the report "Closing your eyes boosts memory recall, new study finds".
Hundreds of research studies are published each week, and many of them become the basis of reports in the popular press or on websites devoted to providing science news to the public. In Lessons 1 and 2 we saw how important it is to critically evaluate the process used to gather a study's data when we judge the veracity of the claims made.
In this lesson, we review this material and provide step-by-step guidelines for evaluating research studies like Examples 3.1, 3.2, and 3.3.
Evaluating Research Studies
Step One: Determine the type of study conducted (e.g., sample survey, observational study, randomized experiment). Example 3.1 is based on data from a sample survey; Example 3.2 is based on data from a prospective observational study; and Example 3.3 is based on data from a randomized experiment.
Step Two: Determine the critical components of the research (see pages 20-23 in the text). Here you must seek answers to questions like: Who funded the research? Who were the subjects, and how did they come to take part? What exact measurements were made or questions asked? What was the research setting? What differences between comparison groups might exist besides the factor of interest? How big an effect was seen, and did the researchers claim it was statistically significant (unlikely to have happened by chance)?
Step two lies at the heart of the material in Lessons 2 and 3. Have a look at the three examples above and judge for yourself whether understanding the critical components suggests potential sources of bias.
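As a concrete illustration of the "statistically significant" question in Step Two, here is a rough sketch of a two-proportion z-test, the kind of calculation used to compare a percentage in one survey group with the same percentage in another. The counts below are made up for illustration, not figures from the Pew report:

```python
from math import erfc, sqrt

def two_prop_z_test(x1, n1, x2, n2):
    """Two-sided z-test comparing proportions x1/n1 and x2/n2."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)                    # pooled proportion
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    p_value = erfc(abs(z) / sqrt(2))                  # two-sided normal tail area
    return z, p_value

# Hypothetical counts: 1,600 of 2,000 scientists agree vs. 740 of 2,000 public
z, p = two_prop_z_test(1600, 2000, 740, 2000)
print(p < 0.05)  # True: a gap this large is very unlikely to happen by chance
```

A tiny p-value is what researchers mean when they call a difference "statistically significant."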
Step Three: Check the "Difficulties and Disasters" that we saw in earlier lessons.
1) For sample surveys
- check if a probability method was used in generating the sample
- check if the sampling frame was close to the population of interest
- check if there were difficulties in reaching certain parts of the sampling frame that could be related to what's being studied
- check the response rate and think about how non-respondents might differ from respondents
In Example 3.1, there were two surveys, one of the public and one of scientists. The "public" sample involved 2,002 people chosen randomly, using a stratified sample, from a sampling frame that included American adults with access to a landline or cell phone. Interviews were conducted in either English or Spanish, depending on the preference of the respondent. Because different demographic groups have very different response rates, the Pew Center weights the responses to bring them into alignment with census data on the true proportions of the population for factors like age, gender, race, and education level.

The exact response rates are not provided for this survey, but Pew reports them in great detail for many studies. Typically, Pew finds that there is no answer for about 1/3 of the working phone numbers dialed, and that among the people contacted, about 60% refuse to participate and another 10% are ineligible (e.g., a child's phone). That leaves a response rate of about 20% of the working numbers called. This may seem low, but it is actually on the high end for the polling industry; that is why survey organizations put so much effort into weighting methods that adjust for ways the respondents might differ from the non-respondents.

The poll of scientists involved 3,748 scientists chosen randomly from the membership list of the AAAS, which includes about 150,000 scientists. About 19,000 members were sent e-mails introducing the study, so the response rate was around 20%. Pew again weighted the results to align the sampled scientists with the sampling frame according to membership status (graduate student member, active faculty, or retired faculty) and fellowship status (fellows of the society versus non-fellows).

Two areas might be of concern here. First, people in the general public survey were contacted by phone, while the scientists were initially contacted by e-mail.
Second, the membership of the AAAS (the sampling frame) may differ from the population of scientists in general (e.g., some fields are more heavily represented in the AAAS than others).
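The response-rate arithmetic above, and the idea behind the weighting step, can be sketched in a few lines. The no-answer, refusal, and ineligibility fractions are the typical figures quoted above; the demographic shares in the weighting step are invented for illustration, not Pew's actual figures:

```python
# Typical phone-survey arithmetic, as described above:
no_answer = 1 / 3      # working numbers where no one picks up
refuse = 0.60          # of people reached, fraction who refuse
ineligible = 0.10      # of people reached, fraction ineligible

response_rate = (1 - no_answer) * (1 - refuse - ineligible)
print(round(response_rate, 2))  # 0.2

# Post-stratification weight for one demographic group
# (hypothetical shares, not Pew's actual figures):
census_share = 0.13    # group's share of the adult population
sample_share = 0.08    # group's share among survey respondents
weight = census_share / sample_share
print(round(weight, 3))  # 1.625 -- each response from this group counts ~1.6 times
```

Weighting each group so its effective share matches the census share is how pollsters compensate for uneven response rates across groups.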
2) For comparative observational studies
- check for possible confounding variables and whether claims of causation were made (ask: Is there an important factor related to what brought the subjects into the groups being compared?)
- check whether claims about generalizing the results to a larger population are appropriate
- check whether a retrospective or prospective design was used (if retrospective - is there a reliance on the accuracy of a subject's memory?)
In Example 3.2, the data were gathered prospectively using records of prescription drug reimbursements from a large HMO in the Seattle, Washington area (note that the study was about prescription drugs even though the headlines mention over-the-counter drugs!). The prospective design and the avoidance of patient self-reports about drug use were strengths of this study (the accuracy of self-report data is always a concern). Generalizing the results to the public at large might be a concern if the elderly membership of this HMO differed significantly from the general elderly population in ways that might affect dementia. For example, having health insurance in the first place indicates an economic and educational status that might be associated with the onset of dementia.
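To see how a confounding variable can manufacture an association in an observational study, here is a small simulation. It is a hypothetical sketch, not the HMO data: age drives both drug use and dementia, the drug itself has no causal effect, and yet the drug users still show a higher dementia rate.

```python
import random

random.seed(1)
n = 100_000
exposed = exposed_cases = unexposed = unexposed_cases = 0

for _ in range(n):
    old = random.random() < 0.5                            # the confounder
    uses_drug = random.random() < (0.6 if old else 0.2)    # age -> drug use
    dementia = random.random() < (0.10 if old else 0.01)   # age -> dementia
    if uses_drug:
        exposed += 1
        exposed_cases += dementia
    else:
        unexposed += 1
        unexposed_cases += dementia

# The drug has no causal effect in this simulation, yet users look worse
# simply because they tend to be older:
print(exposed_cases / exposed > unexposed_cases / unexposed)  # True
```

This is exactly the question Step Three asks about comparison groups: is there an important factor related to what brought the subjects into the groups being compared?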
3) For randomized experimental studies
- check for possible confounding variables when small samples are used (with large samples the randomization will help with this)
- check for interacting variables (ask: Do the results stay the same for different sub-populations?)
- check for placebo, Hawthorne, and experimenter effects (ask: Was the experiment double-blind? Would subjects behave differently because they are being studied?)
In Example 3.3, the subjects could not be blinded as to whether their eyes were closed, but the researchers evaluating how well they remembered the events they had seen were blinded. The task in this experiment involved watching a video showing an electrician stealing items from a job site. An important concern here might be the artificial nature of the setting in which the experiment was done (watching a video of a robbery compared to seeing one in person, for example). Would the results carry over to recalling important events in real life?
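A quick sketch of why randomization "helps with this" when samples are large: randomly splitting subjects into two arms tends to balance a background trait across the arms, and the balance improves as the sample grows. The 30% "good memory" trait below is hypothetical, chosen only for illustration:

```python
import random

random.seed(2)

def arm_rates(n):
    """Randomly split n subjects into two arms; return the share of a
    hypothetical background trait (30% 'good memory') in each arm."""
    memory = [random.random() < 0.30 for _ in range(n)]
    closed_ids = set(random.sample(range(n), n // 2))   # eyes-closed arm
    closed = [memory[i] for i in closed_ids]
    opened = [memory[i] for i in range(n) if i not in closed_ids]
    return sum(closed) / len(closed), sum(opened) / len(opened)

print(arm_rates(20))      # small n: the two shares can differ a lot
print(arm_rates(20_000))  # large n: the shares are nearly equal
```

With only 20 subjects per experiment, chance alone can leave one arm with noticeably more good rememberers, which is why confounding remains a worry in small randomized studies.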
Step Four: If the information needed to critique is not provided in the report, try to find the original source and see if the missing information is provided there.
Each of the three examples here provides a direct link to the full original report, although for Example 3.3 the original scientific publication must be purchased. Example 3.2 is a good illustration of how much difference there can often be between the information provided in the original scientific report and the information in a popular-press report, especially in the headline used to promote the article.
Step Five: Do the results make sense? Are they based on a sound scientific footing?
For example, the link between long-term heavy cell phone use and brain cancer is very controversial. The association has been seen in a number of retrospective case-control studies but no laboratory experiments have been able to provide a biological mechanism for how a causal link might happen. Perhaps the retrospective studies are flawed because the memories of patients with brain tumors about their cell phone use years earlier differ in accuracy from the memories of the control groups studied.
Step Six: Ask whether there is an alternative explanation for the results.
In Example 3.1, some of the issues on which public opinion is seen to be quite different from the opinion of scientists involve public policy controversies. Since opinions on such topics are strongly associated with political beliefs, it is possible that political affiliation is a confounding variable that explains some part of the differences seen.
Step Seven: Decide if the results are strong enough to encourage you to change your behavior or beliefs.
Does Example 3.1 suggest to you that there is a greater need for better science education in the United States? Would Example 3.2 lead you to examine the ingredients of prescription drugs you use and seek alternatives if you spotted the ingredient in the study? Would Example 3.3 encourage you to ask someone to close their eyes when recalling a memory you are asking them about?