Learning Objectives
- Describe the purpose of the `dplyr` and `tidyr` packages.
- Select certain columns in a data frame with the `dplyr` function `select`.
- Select certain rows in a data frame according to filtering conditions with the `dplyr` function `filter`.
- Link the output of one `dplyr` function to the input of another function with the ‘pipe’ operator `%>%`.
- Add new columns to a data frame that are functions of existing columns with `mutate`.
- Use the split-apply-combine concept for data analysis.
- Use `summarize`, `group_by`, and `count` to split a data frame into groups of observations, apply summary statistics for each group, and then combine the results.
- Describe the concept of a wide and a long table format and for which purpose those formats are useful.
- Reshape a data frame from long to wide format and back with the `pivot_wider` and `pivot_longer` commands from the `tidyr` package.
- Export a data frame to a .csv file.
dplyr and tidyr
How much time do computational biologists spend on “data wrangling”? Answer: A Lot. The tidyverse is an amazing set of tools for making these tasks easier and more streamlined.
Bracket subsetting is handy, but it can be cumbersome and difficult to read, especially for complicated operations. Enter `dplyr`. `dplyr` is a package for making tabular data manipulation easier. It pairs nicely with `tidyr`, which enables you to swiftly convert between different data formats for plotting and analysis.
Packages in R are basically sets of additional functions that let you do more stuff. The functions we’ve been using so far, like `str()` or `data.frame()`, come built into R; packages give you access to more of them. Before you use a package for the first time you need to install it on your machine, and then you should import it in every subsequent R session when you need it. You should already have installed the `tidyverse` package. This is an “umbrella-package” that installs several packages useful for data analysis which work together well, such as `tidyr`, `dplyr`, `ggplot2`, `tibble`, etc.
The `tidyverse` package tries to address 3 common issues that arise when doing data analysis with some of the functions that come with R:
1. The results from a base R function sometimes depend on the type of data.
2. R expressions are used in a non-standard way, which can be confusing for new learners.
3. There are hidden arguments with default operations that new learners are not aware of.
We have seen in our previous lesson that when building or importing a data frame, the columns that contain characters (i.e., text) are coerced (=converted) into the `factor` data type. We had to set `stringsAsFactors` to `FALSE` to keep this hidden argument from converting our data type.
This time we will use the `tidyverse` package to read the data and avoid having to set `stringsAsFactors` to `FALSE`.
If we haven’t already done so, we can type `install.packages("tidyverse")` straight into the console. In fact, it’s better to write this in the console than in our script for any package, as there’s no need to re-install packages every time we run the script.
Then, to load the package type:
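library(tidyverse)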
What are dplyr and tidyr?
The package `dplyr` provides easy tools for the most common data manipulation tasks. It is built to work directly with data frames, with many common tasks optimized by being written in a compiled language (C++). An additional feature is the ability to work directly with data stored in an external database.
This addresses a common problem with R in that all operations are conducted in-memory and thus the amount of data you can work with is limited by available memory. The database connections essentially remove that limitation in that you can connect to a database of many hundreds of GB, conduct queries on it directly, and pull back into R only what you need for analysis.
The package `tidyr` addresses the common problem of wanting to reshape your data for plotting and use by different R functions. Sometimes we want data sets where we have one row per measurement. Sometimes we want a data frame where each measurement type has its own column, and rows are instead more aggregated groups - like plots or aquaria. Moving back and forth between these formats is non-trivial, and `tidyr` gives you tools for this and more sophisticated data manipulation.
To learn more about `dplyr` and `tidyr` after the workshop, you may want to check out this handy data transformation with `dplyr` cheatsheet and this one about `tidyr`.
We’ll read in our data using the `read_delim()` function, from the tidyverse package `readr`, instead of `read.csv()`.
metadata <- read_delim("metadata.tsv", delim = "\t")
QIIME 2 sample metadata files can contain extra lines beginning with # (such as the #q2:types row), so we tell `read_delim()` to skip them with the `comment` argument:
metadata <- read_delim("metadata.tsv", delim = "\t", comment = "#")
You will see the message `Parsed with column specification`, followed by each column name and its data type. When you execute `read_delim` on a data file, it looks through the first 1000 rows of each column and guesses the data type for each column as it reads it into R. For example, in this dataset, `read_delim` reads `weight` as `col_double` (a numeric data type), and `species` as `col_character`. You have the option to specify the data type for a column manually by using the `col_types` argument in `read_delim`.
Notice that the class of the data is now `tbl_df`. This is referred to as a “tibble”. Tibbles tweak some of the behaviors of the data frame objects we introduced in the previous episode. The data structure is very similar to a data frame. For our purposes the only differences are that:
- In addition to displaying the data type of each column under its name, a tibble only prints the first few rows of data and only as many columns as fit on one screen.
- Columns of class `character` are never converted into factors.

We’re going to learn some of the most common `dplyr` functions:
- `select()`: subset columns
- `filter()`: subset rows on conditions
- `mutate()`: create new columns by using information from other columns
- `group_by()` and `summarize()`: create summary statistics on grouped data
- `arrange()`: sort results
- `count()`: count discrete values
- `[]_join()`: merge two data frames

To select columns of a data frame, use `select()`. The first argument to this function is the data frame (`metadata`), and the subsequent arguments are the columns to keep.
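For example, keeping the mouse ID, treatment, and time columns used later in this lesson:

select(metadata, MouseID, Treatment, Time_days)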
To select all columns except certain ones, put a “-” in front of the variable to exclude it.
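For instance, to keep everything except the mouse ID:

select(metadata, -MouseID)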
There are also fancier ways to select columns, including the function `starts_with()`:
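A sketch, assuming we want every column whose name starts with "Time" (here that would match Time_days):

select(metadata, starts_with("Time"))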
To choose rows based on a specific criterion, use `filter()`:
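For example, to keep only rows where weight is less than 5 (the same condition used in the pipe examples below):

filter(metadata, weight < 5)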
What if you want to select and filter at the same time? There are three ways to do this: use intermediate steps, nested functions, or pipes.
With intermediate steps, you create a temporary data frame and use that as input to the next function, like this:
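A sketch of this approach (the object names metadata2 and metadata_sml are just illustrative):

metadata2 <- filter(metadata, weight < 5)
metadata_sml <- select(metadata2, species_id, sex, weight)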
This is readable, but can clutter up your workspace with lots of objects that you have to name individually. With multiple steps, that can be hard to keep track of.
You can also nest functions (i.e. one function inside of another), like this:
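The same result with nesting:

metadata_sml <- select(filter(metadata, weight < 5), species_id, sex, weight)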
This is handy, but can be difficult to read if too many functions are nested, as R evaluates the expression from the inside out (in this case, filtering, then selecting).
The last option, pipes, is a more recent addition to R. Pipes let you take the output of one function and send it directly to the next, which is useful when you need to do many things to the same dataset. Pipes in R look like `%>%` and are made available via the `magrittr` package, installed automatically with `dplyr`. If you use RStudio, you can type the pipe with Ctrl + Shift + M if you have a PC or Cmd + Shift + M if you have a Mac.
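metadata %>%
  filter(weight < 5) %>%
  select(species_id, sex, weight)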
In the above code, we use the pipe to send the `metadata` dataset first through `filter()` to keep rows where `weight` is less than 5, then through `select()` to keep only the `species_id`, `sex`, and `weight` columns. Since `%>%` takes the object on its left and passes it as the first argument to the function on its right, we don’t need to explicitly include the data frame as an argument to the `filter()` and `select()` functions any more.
Some may find it helpful to read the pipe like the word “then”. For instance, in the above example, we took the data frame `metadata`, then we `filter`ed for rows with `weight < 5`, then we `select`ed columns `species_id`, `sex`, and `weight`. The `dplyr` functions by themselves are somewhat simple, but by combining them into linear workflows with the pipe, we can accomplish more complex manipulations of data frames.
If we want to create a new object with this smaller version of the data, we can assign it a new name:
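For example (the name metadata_sml is just illustrative):

metadata_sml <- metadata %>%
  filter(weight < 5) %>%
  select(species_id, sex, weight)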
Note that the final data frame is the leftmost part of this expression.
Challenge
Using pipes, subset the `metadata` data to include samples before diet only, and retain only the columns `MouseID` and `Time_days`.
Frequently you’ll want to create new columns based on the values in existing columns, for example to do unit conversions, or to find the ratio of values in two columns. For this we’ll use `mutate()`.
To create a new column with Time and Treatment combined:
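metadata %>%
  mutate(Group = paste(Treatment, Time_days, sep = "_"))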
You can also create a second new column based on the first new column within the same call of `mutate()`:
metadata %>%
mutate(Group = paste(Treatment, Time_days, sep = "_"),
GroupShort = gsub("Diet_", "", Group))
If this runs off your screen and you just want to see the first few rows, you can use a pipe to view the `head()` of the data. (Pipes work with non-`dplyr` functions, too, as long as the `dplyr` or `magrittr` package is loaded.)
metadata %>%
mutate(Group = paste(Treatment, Time_days, sep = "_"),
GroupShort = gsub("Diet_", "", Group)) %>% head()
We can also combine `mutate` with `filter`, `select`, and any other similar function.
metadata %>%
filter(!is.na(Time_days)) %>%
mutate(Group = paste(Treatment, Time_days, sep = "_")) %>%
head()
`is.na()` is a function that determines whether something is an `NA`. The `!` symbol negates the result, so we’re asking for every row where `Time_days` is not an `NA`.
Let’s read in the feature table to test out these tools. To do so, we will use the `read_qza` function in the `qiime2R` package.
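A minimal sketch, assuming the feature table artifact is saved as table.qza (the file name here is just a placeholder):

library(qiime2R)
feature_table <- read_qza("table.qza")$data  # $data holds the feature-by-sample matrix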
The feature table is imported as a matrix. We’ll convert it to a tibble so that we can manipulate it more easily. We’ll also convert the row names (sequence IDs) into a column of the tibble, named FeatureID.
feature_table <- feature_table %>%
  data.frame() %>%
  rownames_to_column(var = "FeatureID") %>%
  as_tibble()
class(feature_table)
names(feature_table)
str(feature_table)
Challenge
Select the FeatureID column and one other sample column from the feature table, filter to rows where that sample’s value is greater than 100 reads, and make a new column representing the relative abundance of the remaining reads.
pivot_longer() and pivot_wider()
You can imagine it would be annoying to repeat the same commands if we want to calculate relative abundances for all samples individually. Our current feature table is set up to compare how each feature ID (rows) varies across all the different samples. Instead, we want to reorganize the feature table so that each observation has its own row, which will allow us to compare observations across Samples, Features, and other variables. We can do this transformation, and its reverse, using the `tidyr` package. Previously, the recommended functions to do this were called “spread” and “gather”, which you can still use if you’ve used them before, but more intuitive functions to do the same thing have recently been introduced: `pivot_longer()` and `pivot_wider()`.
Here, we want to make our feature table longer (more rows, fewer columns).
`pivot_longer()` takes four main arguments:
- the data
- `cols`: the columns to pivot into the longer format (here, all the sample columns)
- `names_to`: the name of the new column that will hold the old column names (here, the sample IDs)
- `values_to`: the name of the new column that will hold the cell values (here, the read counts)
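A sketch putting these together to create the features_long table used in the rest of this lesson (the column names SampleID and Reads match the later code):

features_long <- feature_table %>%
  pivot_longer(cols = -FeatureID,      # pivot every column except FeatureID
               names_to = "SampleID",
               values_to = "Reads")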
`pivot_wider()` performs the reverse operation: it takes a `names_from` column, whose values become the new column names, and a `values_from` column, whose values fill those new columns.
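For example, to spread the samples back out into their own columns:

features_long %>%
  pivot_wider(names_from = SampleID, values_from = Reads)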
Split-apply-combine data analysis and the summarize() function
Many data analysis tasks can be approached using the split-apply-combine paradigm: split the data into groups, apply some analysis to each group, and then combine the results. `dplyr` makes this very easy through the use of the `group_by()` function.
The summarize() function
`group_by()` is often used together with `summarize()`, which collapses each group into a single-row summary of that group. `group_by()` takes as arguments the column names that contain the categorical variables for which you want to calculate the summary statistics. Let’s try out a few options to summarize the average number of reads for each feature and see those with the most and least reads:
features_long %>%
group_by(FeatureID) %>%
summarize(AvgReads = mean(Reads, na.rm = TRUE))
features_long %>%
group_by(FeatureID) %>%
summarize(AvgReads = mean(Reads, na.rm = TRUE)) %>%
arrange(AvgReads) %>%
print(n = 15)
features_long %>%
group_by(FeatureID) %>%
summarize(AvgReads = mean(Reads, na.rm = TRUE)) %>%
arrange(AvgReads) %>%
filter(AvgReads < 1)
features_long %>%
group_by(FeatureID) %>%
summarize(AvgReads = mean(Reads, na.rm = TRUE)) %>%
arrange(desc(AvgReads)) %>%
print(n = 15)
Why is it important to calculate these kinds of exploratory statistics? What else might we want to check?
Let’s add another column for relative abundance so this is more informative.
features_long %>%
group_by(SampleID) %>%
mutate(RelAbund = Reads/sum(Reads))
To keep this new column, we assign the result back to features_long, and then ungroup() so the grouping doesn’t carry over into later operations:
features_long <- features_long %>%
  group_by(SampleID) %>%
  mutate(RelAbund = Reads/sum(Reads)) %>%
  ungroup()
What if we want to summarize feature abundances across a metadata variable like Treatment group or day? We need to join the two tables together. There is a set of `dplyr` functions to do this: `inner_join()`, `full_join()`, `left_join()`, and `right_join()`. The type of join that you want to do depends on whether you want to keep just the shared observations between your two tables, or all observations from one or both tables. You can read more about join functions by typing `?join`.
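A sketch of one option, a left join that attaches the metadata columns to every row of the long feature table; it assumes both tables share a SampleID column (check the actual column names in your data):

features_long <- features_long %>%
  left_join(metadata, by = "SampleID")  # "SampleID" as the shared key is an assumption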
Some other convenient summary functions:
The `count()` function is shorthand for something we’ve already seen: grouping by a variable, and summarizing it by counting the number of observations in that group. In other words, `metadata %>% count(Treatment)` is equivalent to:
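metadata %>%
  group_by(Treatment) %>%
  summarize(n = n())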
For convenience, `count()` provides the `sort` argument:
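metadata %>%
  count(Treatment, sort = TRUE)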
Try calculating some other summary statistics:
Now that you have learned how to use `dplyr` to extract information from or summarize your raw data, you may want to export these new data sets to share them with your collaborators or for archival.
Similar to the `read_delim()` function used for reading data files into R, there is a `write_csv()` function that generates CSV files from data frames. Let’s make a reduced table of only ASVs with more than 1% relative abundance in at least one sample, and save it to a file.
# features with >1% relative abundance, with n = the number of samples where that is true
features_keep <- features_long %>%
  filter(RelAbund > 0.01) %>%
  count(FeatureID)

# keep every row for those features, then pick the columns to export
features_keep <- inner_join(features_long, features_keep, by = "FeatureID") %>%
  select(FeatureID, SampleID, MouseID, Treatment, Time_days, Reads, RelAbund)

write_csv(features_keep, "FeatureTableFiltered.csv")
Challenge
Import the taxonomy table, and join it to the ASV feature table. You will need to first read it in as a qza object, convert it to a tibble, and then join it. Hint: there is a `parse_taxonomy()` function in qiime2R that should be helpful.