# 10 Multivariate Data

## Overview

So far we’ve worked with two variables at a time, but often we have more, sometimes many more. Here we’ll introduce several useful tools for working with multivariate data, using Chapter 10 of Essential R Course Notes. Note that this is a very brief overview, and that many multivariate tools, such as ordination and others, are beyond its scope.

## Objectives

Upon completion of this lesson, you should be able to:

- make three-way frequency tables,
- make pairs plots using the function `pairs()`,
- use lattice graphics to make plots conditioned on a third variable,
- carry out Principal Components Analysis (PCA), and
- carry out hierarchical clustering and k-means clustering.

## Data and R Code Files

The R code file and data files for this lesson can be found on the Essential R - Notes on learning R page.

## 10.1 Multiple Variables

In this video we’ll demonstrate the use of `table()` for three-way frequency tables and the use of `pairs()` to create correlation plot matrices, which are very useful in exploratory analysis.
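The two calls can be sketched as follows; this is a minimal example using the built-in `mtcars` and `iris` datasets as stand-ins for the lesson's data files.

```r
# Three-way frequency table: table() crossed on three factors.
# mtcars is a built-in dataset used here as a stand-in.
tab <- table(cyl = mtcars$cyl, gear = mtcars$gear, am = mtcars$am)
ftable(tab)  # flatten the 3-D table into a readable 2-D layout

# Scatterplot matrix of the four numeric iris variables,
# with points colored by species
pairs(iris[, 1:4], col = iris$Species)
```

`ftable()` is optional, but printing a three-way table directly produces one 2-D slice per level of the third factor, which is harder to scan.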

## 10.2 Lattice Graphics

Here we’ll introduce some functions from the package ‘lattice’ which allow making groups of plots, with groups defined by a variable.
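A conditioned plot in lattice uses the formula syntax `y ~ x | g`, which draws one panel of `y` against `x` for each level of the grouping variable `g`. A minimal sketch, again using `iris` as a stand-in for the lesson's data:

```r
library(lattice)  # lattice ships with R, so no installation is needed

# One panel of Sepal.Length vs Petal.Length per species
p <- xyplot(Sepal.Length ~ Petal.Length | Species, data = iris,
            layout = c(3, 1))  # arrange the three panels in one row
print(p)  # lattice plots must be printed explicitly inside scripts
```

Interactively, `xyplot(...)` alone displays the plot; the explicit `print()` is only needed in scripts and functions.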

## 10.3 An Example with Data Import, `pairs()`, and `by()`

Here I work through a brief example, beginning with importing data, then checking for correlation with `pairs()`, and finally demonstrating how to use `by()` to extract some group means. Note that I didn’t exclude one of the factor variables here, and you can see how it is displayed in the last row and column of the scatterplot matrix.
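The `by()` step can be sketched like this, substituting the built-in `iris` data for the imported file; `by()` splits a data frame by a factor and applies a function to each piece.

```r
# Column means of the numeric variables within each species.
# iris[, 1:4] drops the factor column (Species) before averaging.
grp_means <- by(iris[, 1:4], iris$Species, colMeans)
grp_means  # one named vector of means per species
```

`aggregate(. ~ Species, data = iris, FUN = mean)` gives the same summaries as a single data frame, which is often easier to use downstream.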

## 10.4 Principal Components Analysis

Here we will demonstrate Principal Components Analysis, or PCA, which can be a useful way to get some idea of which variables are contributing the most variability to a data set. Note that the biplot may be a bit small to easily see in the “plot” pane. If you are following along in R, click the “zoom” button above the plot pane to see a larger version.
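A minimal PCA sketch with base R's `prcomp()`, using `iris` as a stand-in for the lesson's data:

```r
# PCA on the four numeric iris variables. scale. = TRUE standardizes
# each variable so that none dominates simply through its units.
pca <- prcomp(iris[, 1:4], scale. = TRUE)
summary(pca)  # proportion of variance explained by each component
biplot(pca)   # observations plus variable loadings on PC1 vs PC2
```

The `summary()` output shows how much of the total variability each principal component captures, which is the usual first thing to check before interpreting the biplot.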

## 10.5 Hierarchical Clustering and Dendrograms

Hierarchical clustering groups observations by finding those that are “nearest” each other and then defining clusters. While there are different ways to define “nearest” and different ways to define clusters, the idea is the same. Here we work with the root anatomy data, and it seems like the sample locations (L1 vs L2) are a bit more clustered than the genotypes.
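The standard base-R workflow is `dist()` then `hclust()` then `plot()`. Since the root anatomy data belongs to the course files, this sketch uses `iris` instead:

```r
# Pairwise Euclidean distances on scaled data, so all variables
# contribute on a comparable scale
d  <- dist(scale(iris[, 1:4]))

# "complete" linkage defines cluster distance as the farthest pair;
# "average" and "ward.D2" are common alternatives
hc <- hclust(d, method = "complete")

plot(hc, labels = FALSE, main = "Dendrogram of iris observations")
```

Changing the `method` argument is how you try the "different ways to define clusters" mentioned above; `cutree(hc, k = 3)` extracts cluster assignments at a chosen number of groups.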

## 10.6 K-means Clustering

k-means clustering looks for k clusters in the data, meaning we must tell it how many groups to look for. Nonetheless it can still be very useful. Here we ask how well the three species of iris in the `iris` dataset can be separated based on their morphology (as captured by the four quantitative variables in the dataset).
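A minimal sketch of that question with base R's `kmeans()`:

```r
set.seed(1)  # k-means starts from random centers, so fix the seed

# Ask for k = 3 clusters on the scaled numeric variables;
# nstart = 25 reruns from 25 random starts and keeps the best fit
km <- kmeans(scale(iris[, 1:4]), centers = 3, nstart = 25)

# Cross-tabulate recovered clusters against the true species
table(cluster = km$cluster, species = iris$Species)
```

A mostly diagonal cross-table means the clusters recover the species well; in practice setosa separates cleanly while versicolor and virginica overlap somewhat.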