2.6 - Identifying outliers: IQR Method


The IQR Method

Some observations within a set of data may fall well outside the range of the other observations. Such observations are called outliers. Here, you will learn a more objective method for identifying outliers.

We can use the IQR method of identifying outliers to set up a “fence” outside of Q1 and Q3. Any values that fall outside of this fence are considered outliers. To build this fence we take 1.5 times the IQR and then subtract this value from Q1 and add this value to Q3. This gives us the minimum and maximum fence posts that we compare each observation to. Any observations that are more than 1.5 IQR below Q1 or more than 1.5 IQR above Q3 are considered outliers. This is the method that Minitab Express uses to identify outliers by default.
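
As an illustration, here is a minimal Python sketch of the fence calculation. The heart-rate values below are invented for illustration, and NumPy's default quartile interpolation may differ slightly from the quartile method Minitab Express uses, so the flagged points could differ in borderline cases.

```python
import numpy as np

# Hypothetical sample (values invented for illustration)
data = np.array([62, 65, 66, 68, 70, 71, 73, 74, 75, 110])

q1, q3 = np.percentile(data, [25, 75])   # first and third quartiles
iqr = q3 - q1                            # interquartile range

lower_fence = q1 - 1.5 * iqr             # lower "fence post"
upper_fence = q3 + 1.5 * iqr             # upper "fence post"

# Any observation below the lower fence or above the upper fence is an outlier
outliers = data[(data < lower_fence) | (data > upper_fence)]
print(f"Q1 = {q1}, Q3 = {q3}, IQR = {iqr}")
print(f"Fence: [{lower_fence}, {upper_fence}], outliers: {outliers}")
```

Here the value 110 falls above the upper fence and would be flagged as an outlier.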

Case Study: Heart Attack Risk

Now that we have learned the basics of variability, along with different strategies for representing it, let's return to Susie's article and the heart attack example. If everyone has a heart attack, there is zero variability in the number (or percentage) of people having heart attacks, so there is nothing to explain about why some people do and others do not have heart attacks! If not everyone has a heart attack, then there is variability, and another way to think about that variability is as error. In other words, because we do NOT know precisely who will have a heart attack, in the absence of any additional information we will automatically make a certain amount of error when predicting the occurrence of a heart attack, compared to a world in which we KNOW everyone will have one. In this way, the variability is the error!

This may sound confusing at first, but another context, the bathroom scale, may make the idea clearer. Suppose you have a broken-down old bathroom scale and you step on it first thing in the morning. You see your weight, but you know it is an old scale, so you step off, step back on, and voila, your weight is different. Now you have two different values for your weight, so you try a third time and, voila, you have a third value (yes, this is a really old scale). Looking at the three weights from the old scale, you might want to say your scale has a lot of error, but this is also the variability of the measurement of your weight.

Now that the ideas of error and variability are on your radar, we can begin to differentiate two kinds of variability: within-group and between-group. We will not do much with between-group variability yet; we will cover it later in the course. However, what we have been discussing so far actually IS within-group variability. Our three scale measurements represent the "within-group" variability of the measurement of our weight. In our original example, the fact that not everyone will get a heart attack is within-group variability (to be more technical, you could say that the chance, or probability, of getting a heart attack differs from person to person). Now, you might be reading this and saying, but almost everyone knows there are certain lifestyle and genetic factors that affect the chance of a heart attack! That is exactly the point of statistics and research! Over the years, researchers have worked to EXPLAIN the variability of heart attacks (by showing the relationship between lifestyle and genetic factors and heart attacks). If you understand this, then you understand why the idea of within-group variability is fundamental to statistics!

Susie’s article pointed out some of the more recent developments around taking low-dose aspirin. While taking low-dose aspirin has long been accepted as a sound strategy for preventing heart attacks, mixed findings are now being reported in the scholarly literature and even in the media. One reason, hinted at in the example above, is that the effect of the aspirin is actually quite small.

Effect Size

How do researchers know this all of a sudden? Weren’t the original findings based on science and statistics? Yes, the original findings were based on science and statistics, but with far more research now available on low-dose aspirin and heart attacks, researchers can calculate an effect size to determine the actual impact of the aspirin. What is an effect size? The effect size arose from social science research, where researchers wanted to be able to draw conclusions across studies. Effect sizes measure the impact of a “treatment” on an outcome. We will learn later in the course that this is also what many of our statistical tools do; the advantage of effect sizes is that they separate the size of the effect from the size of the sample. If this does not make complete sense at this point, that is okay; the sample size issue will become clearer as we progress.

Adapted from Coe, R. (2002). It’s the effect size, stupid: What effect size is and why it is important. Paper presented at the Annual Conference of the British Educational Research Association, Exeter, England, September 12–14, 2002.

| Effect size | Probability that you could guess which group a person was in from knowledge of their ‘score’ | Equivalent correlation, r |
| --- | --- | --- |
| 0.0 | 0.50 | 0.00 |
| 0.1 | 0.52 | 0.05 |
| 0.2 | 0.54 | 0.10 |
| 0.3 | 0.56 | 0.15 |
| 0.4 | 0.58 | 0.20 |
| 0.5 | 0.60 | 0.24 |
| 0.6 | 0.62 | 0.29 |
| 0.7 | 0.64 | 0.33 |
| 0.8 | 0.66 | 0.37 |
| 0.9 | 0.67 | 0.41 |
| 1.0 | 0.69 | 0.45 |
| 1.2 | 0.73 | 0.51 |
| 1.4 | 0.76 | 0.57 |
| 1.6 | 0.79 | 0.62 |
| 1.8 | 0.82 | 0.67 |
| 2.0 | 0.84 | 0.71 |
| 2.5 | 0.89 | 0.78 |
| 3.0 | 0.93 | 0.83 |

The effect size calculation is similar to the z-score calculation; the difference is in the interpretation. Instead of relying on probabilities, the effect size is interpreted against accepted benchmarks (as summarized in the table) for the probability of belonging to a “treatment” group compared to a “control” group (in our example, the group receiving low-dose aspirin is the treatment group and those not receiving it are the control group). An effect size of 0 indicates that receiving the treatment (low-dose aspirin) does not change the chance of a heart attack beyond what chance alone would produce. In contrast, an effect size of 2 indicates a strong effect, meaning the outcome is affected by the treatment, and group membership can be predicted from the outcome with high probability (with an effect size of 2, you can predict group membership with 84% probability).
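
To make this concrete, here is a minimal Python sketch. It assumes the effect size in question is Cohen's d (the standardized difference between the treatment and control means), and that the table's probability column corresponds to the normal-curve probability Φ(d/2) and its correlation column to d/√(d² + 4), as in Coe (2002); the group scores below are invented for illustration.

```python
import numpy as np
from scipy.stats import norm

# Hypothetical scores for a treatment and a control group (values invented for illustration)
treatment = np.array([5.1, 6.2, 5.8, 6.5, 5.9, 6.1])
control   = np.array([4.8, 5.0, 5.3, 4.9, 5.5, 5.2])

# Effect size (Cohen's d): difference in means divided by the pooled standard deviation
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
d = (treatment.mean() - control.mean()) / pooled_sd

# Conversions that reproduce the table's columns
prob_guess_group = norm.cdf(d / 2)        # probability of guessing the group from the score
equivalent_r = d / np.sqrt(d**2 + 4)      # equivalent correlation, r

print(f"Cohen's d = {d:.2f}")
print(f"P(correctly guess group) = {prob_guess_group:.2f}")
print(f"Equivalent r = {equivalent_r:.2f}")
```

For instance, an effect size of d = 2 gives norm.cdf(1.0) ≈ 0.84 and 2/√8 ≈ 0.71, matching the row for 2.0 in the table above.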

In recent years, prestigious social science publications have begun to require the inclusion of effect sizes to distinguish findings that are impactful beyond being statistically significant. Reporting effect sizes also allows other researchers to aggregate results across studies, a technique called meta-analysis, which we will not cover in this course but which may be of interest to students in the social sciences.

