Week 2: Reproducibility and Data Transformations

Emorie D Beck

Overview

Today’s class outline:
  • Welcome back, questions on homework (10-15 minutes)

  • Reproducibility and Your Personal Values (10 minutes)

  • Building a Reproducible Workflow Using Projects (45 minutes)

  • Data Transformations using dplyr (45 minutes)

Reproducibility and Your Personal Values

Why reproducibility AND values?

  • The definition of reproducibility is somewhat debated
    • “‘Reproducibility’ refers to instances in which the original researcher’s data and computer codes are used to regenerate the results”
    • “‘Reproducibility’ refers to independent researchers arriving at the same results using their own data and methods”
  • But regardless of what definition you choose, reproducibility starts with a commitment in research to be:
    • clear
    • transparent
    • honest
    • thorough

Why reproducibility AND values?

  • Reproducibility is ethical.

  • When I post a project, I pore over my code for hours, adding comments, rendering to multiple formats, flagging the locations of online materials in the manuscript, etc.

  • I am trying to prevent errors, but I am also trying to make sure that other people know what I did, especially if I did make errors

Why reproducibility AND values?

  • Reproducible research is also equitable.

  • A reproducible research workflow can be downloaded by another person as a starting point, providing tools to other researchers who may not have the same access to education and resources as you

Where should we be reproducible?

  • Planning
    • Study planning and design
    • Lab Protocols
    • Codebooks
    • etc.
  • Analyses
    • Scripting
    • Communication
    • etc.

Aspects of Reproducibility

  • Data within files should be ‘tidy’ (next week – tidyr)
  • Project based approach (today)
  • Consistency: naming, space, style (today)
  • Documentation: commenting and README (today)
  • Literate programming e.g. Rmarkdown (every day!)

Building a Reproducible Workflow Using Projects

Reproducible Workflow

A reproducible workflow is organized. What does it mean to be organized? At least:

  • Use a project based approach, e.g., RStudio project or similar
  • Have a hierarchical folder structure
  • Have a consistent and informative naming system that ‘plays nice’
  • Document code with comments and analyses with README

More advanced (later in the class)

  • Generalize with functions and packages
  • version control

What is a project?

  • A project is a discrete piece of work which has a number of files associated with it, such as the data and scripts for an analysis and the production of reports.

  • Using a project-oriented workflow means having a hierarchical folder structure with everything needed to reproduce an analysis.

One research project might have several organizational projects associated with it, for example:

  • data files and metadata (which may be made into a package)
  • preregistration
  • analysis and reporting
  • a package developed for the analysis
  • an app for allowing data to be explored by others

Example

Good Workflows are:

  • structured
  • systematic
  • repeatable

Naming

  • human and machine readable
    • no spaces
    • use snake/kebab case
    • ordering: numbers (zero left padded), dates
    • file extensions
-- ipcs_data_2019
   |__ipcs_data_2019.Rproj
   |__data
      |__raw_data
         |__2019-03-21_ema_raw.csv
         |__2019-03-21_baseline_raw.csv
      |__clean_data
         |__2019-06-21_ema_long.csv
         |__2019-06-21_ema_long.RData
         |__2019-06-21_baseline_wide.csv
         |__2019-06-21_baseline_wide.RData
   |__results
      |__01_models
         |__E_mortality.RData
         |__A_mortality.RData
      |__02_summaries
         |__E_mortality.RData
         |__A_mortality.RData
      |__03_figures
         |__mortality.png
         |__mortality.pdf
      |__04_tables
         |__zero_order_cors.RData
         |__descriptives.RData
         |__key_terms.RData
         |__all_model_terms.RData
   |__README.md
   |__refs
      |__r_refs.bib
      |__proj_refs.bib
   |__analyses
      |__01_background.Rmd
      |__02_data_cleaning.Rmd
      |__03_models.Rmd
      |__04_summary.Rmd

What is a path (Hierarchical File Structure)?

A path gives the address - or location - of a filesystem object, such as a file or directory.

  • Paths appear in the address bar of your browser or file explorer.
  • We need to know a file path whenever we want to read, write or refer to a file using code rather than interactively pointing and clicking to navigate.
  • A path can be absolute or relative
    • absolute = whole path from root
    • relative = path from current directory

Absolute paths

  • An absolute path is given from the “root directory” of the object.

  • The root directory of a file system is the first or top directory in the hierarchy.

  • For example, C:\ or M:\ on Windows, or / on a Mac, which is displayed as Macintosh HD in Finder.

Absolute paths

The absolute path for a file, pigeon.txt could be:

  • windows: C:/Users/edbeck/Desktop/pigeons/data-raw/pigeon.txt
  • Mac/unix systems: /Users/edbeck/Desktop/pigeons/data-raw/pigeon.txt
  • web: http://github.com/emoriebeck/pigeons/data/pigeon.txt

What is a directory?

  • Directory is the old word for what many now call a folder 📂.

  • Commands that act on directories in most programming languages and environments reflect this.

  • For example, in R this means “tell me my working directory”:

  • getwd() get working directory in R

What is a working directory?

  • The working directory is the default location a program is using. It is where the program will read and write files by default. You have only one working directory at a time.

  • The terms ‘working directory’, ‘current working directory’ and ‘current directory’ all mean the same thing.

Find your current working directory with:

getwd()
[1] "/Users/emoriebeck/Documents/teaching/PSC290-cleaning-fall-2023/04-workshops/02-week2-dplyr"

Relative paths

A relative path gives the location of a filesystem object relative to the working directory, (i.e., that returned by getwd()).

  • When pigeon.txt is in the working directory, the relative path is just the file name: pigeon.txt

  • If there is a folder in the working directory called data-raw and pigeon.txt is in there, then the relative path is data-raw/pigeon.txt

Paths: moving up the hierarchy

  • ../ allows you to look in the directory above the working directory

  • When pigeon.txt is in the folder above the working directory, the relative path is ../pigeon.txt

  • And if it is in a folder called data-raw which is in the directory above the working directory, then the relative path is ../data-raw/pigeon.txt (see the sketch below)
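  • A minimal check from the R console, assuming the (illustrative) pigeon.txt layout described above:

# Do the relative paths above resolve from the current working directory?
file.exists("../pigeon.txt")          # file in the directory above
file.exists("../data-raw/pigeon.txt") # file in data-raw, one level up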

What’s in my directory?

You can list the contents of a directory using the dir() command

  • dir() list the contents of the working directory
  • dir("..") list the contents of the directory above the working directory
  • dir("../..") list the contents of the directory two directories above the working directory
  • dir("data-raw") list the contents of a folder call data-raw which is in the working directory.

Relative or absolute

  • Most of the time you should use relative paths because that makes your work portable (i.e. to a different machine / user / etc.).

  • 🎉 The tab key is your friend!

Relative or absolute

  • You only need to use absolute paths when you are referring to a filesystem outside the one you are using.

  • I often store the beginning of that path as an object.

    • web_wd <- "https://github.com/emoriebeck/pigeons"
    • Then I can use sprintf() or paste() to add different endings
web_wd <- "https://github.com/emoriebeck/pigeons"
sprintf("%s/data-raw/pigeon.txt", web_wd)
[1] "https://github.com/emoriebeck/pigeons/data-raw/pigeon.txt"

RStudio Projects

RStudio Projects

  • Project is obviously a commonly used word. When I am referring to an RStudio Project I will use the capitalised words ‘RStudio Project’ or ‘Project’.
  • In other cases, I will use ‘project’.
  • An RStudio Project is a directory with an .Rproj file in it.
  • The name of the RStudio Project is the same as the name of the top level directory which is referred to as the Project directory.

RStudio Projects

For example, if you create an RStudio Project ipcs_data_2019 your folder structure would look something like this:

-- ipcs_data_2019
   |__ipcs_data_2019.Rproj
   |__data
      |__raw_data
         |__2019-03-21_ema_raw.csv
         |__2019-03-21_baseline_raw.csv
      |__clean_data
         |__2019-06-21_ema_long.csv
         |__2019-06-21_ema_long.RData
         |__2019-06-21_baseline_wide.csv
         |__2019-06-21_baseline_wide.RData
   |__results
      |__01_models
      |__02_summaries
      |__03_figures
      |__04_tables
   |__README.md
   |__refs
      |__r_refs.bib
      |__proj_refs.bib
   |__analyses
      |__01_background.Rmd
      |__02_data_cleaning.Rmd
      |__03_models.Rmd
      |__04_summary.Rmd

RStudio Projects

  • The .Rproj file is the defining feature of an RStudio Project.

  • When you open an RStudio Project, the working directory is set to the Project directory (i.e., the location of the .Rproj file).

  • This makes your work portable. You can zip up the project folder and send it to any person, including future you, or any computer.

  • They will be able to unzip, open the project and have all the code just work.

  • (This is great for sending code and/or results to your advisors)

Directory structure

You are aiming for structured, systematic and repeatable. For example, the Project directory might contain:

  • .Rproj file
  • README - tell people what the project is and how to use it
  • License - tell people what they are allowed to do with your project
  • Directories
    • data/
    • prereg/
    • scripts/
    • results/
    • manuscript/

README

  • READMEs are a form of documentation which have been widely used for a long time. They contain all the information about the other files in a directory. They can be extensive.

  • Wikipedia README page

  • GitHub Docs’ About READMEs

  • OSF

README

A minimal README might give (one way to start such a file from R is sketched after this list):

  • Title
  • Description, 50 words or so on what the project is
  • Technical Description of the project
    • What software and packages are needed including versions
    • Any instructions needed to run the analysis/use the software
    • Any issues that a user might face in running the analysis/using the software
  • Instructions on how to use the work
  • Links to where other files, materials, etc. are stored
    • E.g., an OSF readme may point to GitHub, PsyArxiv, etc.
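As a sketch of what that might look like, here is one hypothetical way to start such a file from R with writeLines(); the section names simply mirror the list above:

# A minimal, hypothetical starter README.md written from R
readme_lines <- c(
  "# Project title",
  "",
  "## Description",
  "Roughly 50 words on what the project is.",
  "",
  "## Technical description",
  "- Software and package versions needed",
  "- Instructions for running the analysis",
  "- Known issues a user might face",
  "",
  "## How to use this work",
  "",
  "## Links",
  "- OSF / GitHub / PsyArXiv, etc."
)
writeLines(readme_lines, "README.md")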

Here’s an example from one of my webapps

License

A license tells others what they can and can’t do with your work.

choosealicense.com is a useful explainer.

I typically use:

Exercise

Exercise

  • You are going to create an RStudio Project with some directories and use it to organize a very simple analysis.
  • The analysis will import a data file, reformat it and write the new format to file. It will then create a figure and write the image to file.
  • You’ll get practice with tidying data (more on that next week) and plotting data.

RStudio Project infrastructure

Create a new Project called iris by:

  • clicking File->New Project…

  • or clicking on the little icon (second from the left) at the top

  • Choose New Project, then New Directory, then New Project. Name the RStudio Project iris.

  • Create folders in iris called data-raw, data-processed and figures.

  • Start new scripts called 01-import.R, 02-tidy.R, and 03-figures.R (or create the folders and scripts from the console, as sketched below)
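  • If you prefer the console to pointing and clicking, a minimal sketch run from inside the iris Project:

# Create the exercise folders and empty scripts from the Project directory
dir.create("data-raw")
dir.create("data-processed")
dir.create("figures")
file.create("01-import.R", "02-tidy.R", "03-figures.R")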

Save and Import

  • Save a copy of iris.csv to your data-raw folder. These data give the information about different species of irises.

  • In your 01-import.R script, load the tidyverse set of packages.

library(tidyverse)
write_csv(iris, file = "data-raw/iris.csv")

Save and Import

  • Add the command to import the data:
iris <- read_csv("data-raw/iris.csv")
  • The relative path is data-raw/iris.csv because your working directory is the Project directory, iris.

Reformat the data

This dataset has four measurements in each row - it is not ‘tidy’.

  • Open your 02-tidy.R script, and reshape the data using:
iris <- pivot_longer(data = iris, 
                     cols = -Species, 
                     names_to = "attribute", 
                     values_to = "value")
  • This reformats the dataframe in R but does not overwrite the text file of the data.

  • Don’t worry too much about this right now. We’ll spend a lot of time talking about reshaping data next week!

Writing files

Often we want to write to files.

  • My main reasons for doing so are to save copies of data that have been processed and to save manuscripts and graphics.
  • Also, as someone who collects a lot of data, the de-identified, fully anonymized data files I can share and the identifiable data I collect require multiple versions (and encryption, keys, etc.)

Writing files

  • Write your dataframe iris to a csv file named iris-long.csv in your data-processed folder:
file <- "data-processed/iris-long.csv"
write_csv(iris, file)
  • Putting file paths into variables often makes your code easier to read especially when file paths are long or used multiple times.

Create a plot

Open your 03-figures.R script and create a simple plot of this data with:

fig1 <- ggplot(
  data = iris
  , aes(y = Species, x = value, fill = Species)
  ) + 
  geom_boxplot() +                       
  facet_grid(attribute~.) + 
  scale_x_continuous(name = "Attribute") +
  scale_y_discrete(name = "Species") +
  theme_classic() + 
  theme(legend.position = "none")

View plot

View plot with:

fig1

Write ggplot figure to file

  • A useful function for saving ggplot figures is ggsave().

  • It has arguments for the size, resolution and device for the image. See the ggsave() reference page.

Write ggplot figure to file

  • Since I often make more than one figure, I might set these arguments first.
  • Assign ggsave argument values to variables:
# figure saving settings
units <- "in"  
fig_w <- 3.2
fig_h <- fig_w
dpi <- 600
device <- "tiff" 
  • Save the figure to your figures directory:
ggsave("figures/fig1.tiff",
       plot = fig1,
       device = device,
       width = fig_w,
       height = fig_h,
       units = units,
       dpi = dpi)
  • Check it is there!

Data Manipulation in dplyr

dplyr Core Functions

dplyr Core Functions

  1. %>%: The pipe. Read as “and then.”
  2. filter(): Pick observations (rows) by their values.
  3. select(): Pick variables (columns) by their names.
  4. arrange(): Reorder the rows.
  5. group_by(): Implicitly split the data set by grouping by names (columns).
  6. mutate(): Create new variables with functions of existing variables.
  7. summarize() / summarise(): Collapse many values down to a single summary.

Core Functions

  1. %>%
  2. filter()
  3. select()
  4. arrange()
  5. group_by()
  6. mutate()
  7. summarize()

Although each of these functions is powerful alone, they are incredibly powerful in conjunction with one another. So below, I’ll briefly introduce each function, then link them all together using an example of basic data cleaning and summary.

1. %>%

  • The pipe %>% is wonderful. It makes coding intuitive. Often in coding, you need to use so-called nested functions. For example, you might want to round a number after taking the square root of 43.
sqrt(43)
[1] 6.557439
round(sqrt(43), 2)
[1] 6.56

1. %>%

The issue with this comes whenever we need to do a series of operations on a data set or other type of object. In such cases, if we run it in a single call, then we have to start in the middle and read our way out.

round(sqrt(43/2), 2)
[1] 4.64

1. %>%

The pipe solves this by allowing you to read from left to right (or top to bottom). The easiest way to think of it is that each call of %>% reads and operates as “and then.” So with the rounded square root of 43, for example:

sqrt(43) %>%
  round(2)
[1] 6.56

2. filter()

Often times, when conducting research (experiments or otherwise), there are observations (people, specific trials, etc.) that you don’t want to include.

library(psych) # the bfi data live in the psych package
data(bfi)      # grab the bfi data from the psych package
bfi <- bfi %>% as_tibble()
head(bfi)
# A tibble: 6 × 28
     A1    A2    A3    A4    A5    C1    C2    C3    C4    C5    E1    E2    E3
  <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1     2     4     3     4     4     2     3     3     4     4     3     3     3
2     2     4     5     2     5     5     4     4     3     4     1     1     6
3     5     4     5     4     4     4     5     4     2     5     2     4     4
4     4     4     6     5     5     4     4     3     5     5     5     3     4
5     2     3     3     4     5     4     4     5     3     2     2     2     5
6     6     6     5     6     5     6     6     6     1     3     2     1     6
# ℹ 15 more variables: E4 <int>, E5 <int>, N1 <int>, N2 <int>, N3 <int>,
#   N4 <int>, N5 <int>, O1 <int>, O2 <int>, O3 <int>, O4 <int>, O5 <int>,
#   gender <int>, education <int>, age <int>

2. filter()

Often times, when conducting research (experiments or otherwise), there are observations (people, specific trials, etc.) that you don’t want to include.

summary(bfi$age) # get age descriptives
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   3.00   20.00   26.00   28.78   35.00   86.00 

2. filter()

Often times, when conducting research (experiments or otherwise), there are observations (people, specific trials, etc.) that you don’t want to include.

bfi2 <- bfi %>% # see a pipe!
  filter(age <= 18) # filter to age up to 18

summary(bfi2$age) # summary of the new data 
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
    3.0    16.0    17.0    16.3    18.0    18.0 

But this isn’t quite right. We still have folks below 12. But the beauty of filter() is that you can string together a sequence of OR and AND statements when there is more than one condition, such as up to 18 AND at least 12 (an OR example is sketched below).

2. filter()

Often times, when conducting research (experiments or otherwise), there are observations (people, specific trials, etc.) that you don’t want to include.

bfi2 <- bfi %>%
  filter(age <= 18 & age >= 12) # filter to age up to 18 and at least 12

summary(bfi2$age) # summary of the new data 
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   12.0    16.0    17.0    16.4    18.0    18.0 

Got it!
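The slides above show the AND case. As a sketch of the OR case (not in the original exercise), | keeps rows that match either condition, e.g., only the youngest and oldest respondents:

bfi2 <- bfi %>%
  filter(age < 12 | age > 65) # keep rows where age is below 12 OR above 65

summary(bfi2$age)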

2. filter()

  • But filter() works for more use cases than just numeric comparisons
    • <, >, <=, and >=
  • It can also be used for cases where we want values to match text.
  • To do that, let’s convert one of the variables in the bfi data frame to a string.
  • So let’s change education (coded 1-5) to text labels (we’ll get into factors later).
bfi$education <- plyr::mapvalues(bfi$education, 1:5, c("Below HS", "HS", "Some College", "College", "Higher Degree"))

2. filter()

Now let’s try a few things:

1. Create a data set with only individuals with some college (==).

bfi2 <- bfi %>% 
  filter(education == "Some College")
unique(bfi2$education)
[1] "Some College"

2. filter()

Now let’s try a few things:

2. Create a data set with only people age 18 (==).

bfi2 <- bfi %>%
  filter(age == 18)
summary(bfi2$age)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
     18      18      18      18      18      18 

2. filter()

Now let’s try a few things:

3. Create a data set with individuals with some college or above (%in%).

bfi2 <- bfi %>%
  filter(education %in% c("Some College", "College", "Higher Degree"))
unique(bfi2$education)
[1] "Some College"  "Higher Degree" "College"      

%in% is great. It compares a column to a vector rather than just a single value, so you can match against several values at once:

bfi2 <- bfi %>%
  filter(age %in% 12:18)
summary(bfi2$age)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
   12.0    16.0    17.0    16.4    18.0    18.0 

3. select()

  • If filter() is for pulling certain observations (rows), then select() is for pulling certain variables (columns).
  • Data sets often include variables you don’t need; it’s good practice to remove these columns to stop your environment from becoming cluttered and eating up your RAM.

3. select()

  • In our bfi data, most of these have been pre-removed, so instead, we’ll imagine we don’t want to use any indicators of Agreeableness (A1-A5) and that we aren’t interested in gender.
  • With select(), there are a few ways to choose variables. We can give the bare (unquoted) names of the columns we want to keep, the names of columns we want to remove (prefixed with -), or use any of a number of select() helper functions.

3. select():

A. Bare quote columns we want to keep:

bfi %>%
  select(C1, C2, C3, C4, C5) %>%
  print(n = 6)
# A tibble: 2,800 × 5
     C1    C2    C3    C4    C5
  <int> <int> <int> <int> <int>
1     2     3     3     4     4
2     5     4     4     3     4
3     4     5     4     2     5
4     4     4     3     5     5
5     4     4     5     3     2
6     6     6     6     1     3
# ℹ 2,794 more rows
bfi %>%
  select(C1:C5) %>%
  print(n = 6)
# A tibble: 2,800 × 5
     C1    C2    C3    C4    C5
  <int> <int> <int> <int> <int>
1     2     3     3     4     4
2     5     4     4     3     4
3     4     5     4     2     5
4     4     4     3     5     5
5     4     4     5     3     2
6     6     6     6     1     3
# ℹ 2,794 more rows

3. select():

B. Bare quote columns we don’t want to keep:

bfi %>% 
  select(-(A1:A5), -gender) %>% # Note the `()` around the columns
  print(n = 6)
# A tibble: 2,800 × 22
     C1    C2    C3    C4    C5    E1    E2    E3    E4    E5    N1    N2    N3
  <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1     2     3     3     4     4     3     3     3     4     4     3     4     2
2     5     4     4     3     4     1     1     6     4     3     3     3     3
3     4     5     4     2     5     2     4     4     4     5     4     5     4
4     4     4     3     5     5     5     3     4     4     4     2     5     2
5     4     4     5     3     2     2     2     5     4     5     2     3     4
6     6     6     6     1     3     2     1     6     5     6     3     5     2
# ℹ 2,794 more rows
# ℹ 9 more variables: N4 <int>, N5 <int>, O1 <int>, O2 <int>, O3 <int>,
#   O4 <int>, O5 <int>, education <chr>, age <int>

3. select():

C. Add or remove columns using select() helper functions (two examples below):

  • starts_with()
  • ends_with()
  • contains()
  • matches()
  • num_range()
  • one_of()
  • all_of()
bfi %>%
  select(starts_with("C"))
# A tibble: 2,800 × 5
      C1    C2    C3    C4    C5
   <int> <int> <int> <int> <int>
 1     2     3     3     4     4
 2     5     4     4     3     4
 3     4     5     4     2     5
 4     4     4     3     5     5
 5     4     4     5     3     2
 6     6     6     6     1     3
 7     5     4     4     2     3
 8     3     2     4     2     4
 9     6     6     3     4     5
10     6     5     6     2     1
# ℹ 2,790 more rows
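For instance, num_range() builds column names from a prefix and a numeric range, so this sketch should select the same C1-C5 columns as starts_with("C") does here:

bfi %>%
  select(num_range("C", 1:5)) # selects C1, C2, C3, C4, C5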

4. arrange()

  • Sometimes, either in order to get a better sense of our data or in order to, well, order our data, we want to sort it
  • Although there is a base R sort() function, the arrange() function is the tidyverse version that plays nicely with other tidyverse functions.

4. arrange()

So in our previous examples, we could also arrange() our data by age or education, rather than simply filtering. (Or as we’ll see later, we can do both!)

# sort by age
bfi %>% 
  select(gender:age) %>%
  arrange(age) %>% 
  print(n = 6)
# A tibble: 2,800 × 3
  gender education       age
   <int> <chr>         <int>
1      1 Higher Degree     3
2      2 <NA>              9
3      2 Some College     11
4      2 <NA>             11
5      2 <NA>             11
6      2 <NA>             12
# ℹ 2,794 more rows
# sort by education
bfi %>%
  select(gender:age) %>%
  arrange(education) %>%
  print(n = 6)
# A tibble: 2,800 × 3
  gender education   age
   <int> <chr>     <int>
1      1 Below HS     19
2      1 Below HS     21
3      1 Below HS     17
4      1 Below HS     18
5      1 Below HS     18
6      2 Below HS     18
# ℹ 2,794 more rows

4. arrange()

We can also arrange by multiple columns, like if we wanted to sort by gender then education:

bfi %>%
  select(gender:age) %>%
  arrange(gender, education) %>% 
  print(n = 6)
# A tibble: 2,800 × 3
  gender education   age
   <int> <chr>     <int>
1      1 Below HS     19
2      1 Below HS     21
3      1 Below HS     17
4      1 Below HS     18
5      1 Below HS     18
6      1 Below HS     32
# ℹ 2,794 more rows
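One addition not shown above: arrange() sorts in ascending order by default, and wrapping a column in desc() reverses that. A minimal sketch putting the oldest respondents first:

bfi %>%
  select(gender:age) %>%
  arrange(desc(age)) %>% # oldest first
  print(n = 6)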

Bringing it all together: Split-Apply-Combine

Bringing it all together: Split-Apply-Combine

  • Much of the power of dplyr functions lies in the split-apply-combine method

  • A given set of data is (as sketched below):

    • split into smaller chunks
    • then a function or series of functions is applied to each chunk
    • and then the chunks are combined back together
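As a preview of the next few slides, a minimal sketch of all three steps in one pipe, using the bfi data loaded earlier:

bfi %>%
  group_by(education) %>%                           # split into chunks by education
  summarize(mean_age = mean(age, na.rm = TRUE)) %>% # apply a summary to each chunk
  ungroup()                                         # combine: drop the grouping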

5. group_by()

  • The group_by() function is the “split” of the method
  • It implicitly breaks the data set into chunks defined by whatever bare (unquoted) column(s)/variable(s) are supplied as arguments.

5. group_by()

So imagine that we wanted to group_by() education levels to get average ages at each level

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  group_by(education) %>%
  print(n = 6)
# A tibble: 2,800 × 8
# Groups:   education [6]
     C1    C2    C3    C4    C5   age gender education   
  <int> <int> <int> <int> <int> <int>  <int> <chr>       
1     2     3     3     4     4    16      1 <NA>        
2     5     4     4     3     4    18      2 <NA>        
3     4     5     4     2     5    17      2 <NA>        
4     4     4     3     5     5    17      2 <NA>        
5     4     4     5     3     2    17      1 <NA>        
6     6     6     6     1     3    21      2 Some College
# ℹ 2,794 more rows

5. group_by()

  • Hadley’s first law of data cleaning: “What is split, must be combined”
  • This is super easy with the ungroup() function:
bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  group_by(education) %>%
  ungroup() %>%
  print(n = 6)
# A tibble: 2,800 × 8
     C1    C2    C3    C4    C5   age gender education   
  <int> <int> <int> <int> <int> <int>  <int> <chr>       
1     2     3     3     4     4    16      1 <NA>        
2     5     4     4     3     4    18      2 <NA>        
3     4     5     4     2     5    17      2 <NA>        
4     4     4     3     5     5    17      2 <NA>        
5     4     4     5     3     2    17      1 <NA>        
6     6     6     6     1     3    21      2 Some College
# ℹ 2,794 more rows

5. group_by()

Multiple group_by() calls overwrite previous calls:

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  group_by(education) %>%
  group_by(gender, age) %>%
  print(n = 6)
# A tibble: 2,800 × 8
# Groups:   gender, age [115]
     C1    C2    C3    C4    C5   age gender education   
  <int> <int> <int> <int> <int> <int>  <int> <chr>       
1     2     3     3     4     4    16      1 <NA>        
2     5     4     4     3     4    18      2 <NA>        
3     4     5     4     2     5    17      2 <NA>        
4     4     4     3     5     5    17      2 <NA>        
5     4     4     5     3     2    17      1 <NA>        
6     6     6     6     1     3    21      2 Some College
# ℹ 2,794 more rows
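If you instead want to add to the existing groups rather than replace them, recent versions of dplyr provide an .add argument to group_by(); a sketch:

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  group_by(education) %>%
  group_by(gender, .add = TRUE) %>% # now grouped by education AND gender
  print(n = 6)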

6. mutate()

  • mutate() is one of your “apply” functions
  • When you use mutate(), the resulting data frame will have the same number of rows as you started with
  • You are operating on the existing data frame, either modifying existing columns or creating new ones (assign the result to keep the changes)

6. mutate()

To demonstrate, let’s add a column that indicates the average age within each education group

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  arrange(education) %>%
  group_by(education) %>% 
  mutate(age_by_edu = mean(age, na.rm = T)) %>%
  print(n = 6)
# A tibble: 2,800 × 9
# Groups:   education [6]
     C1    C2    C3    C4    C5   age gender education age_by_edu
  <int> <int> <int> <int> <int> <int>  <int> <chr>          <dbl>
1     6     6     3     4     5    19      1 Below HS        25.1
2     4     3     5     3     2    21      1 Below HS        25.1
3     5     5     5     2     2    17      1 Below HS        25.1
4     5     5     4     1     1    18      1 Below HS        25.1
5     4     5     4     3     3    18      1 Below HS        25.1
6     3     2     3     4     6    18      2 Below HS        25.1
# ℹ 2,794 more rows

6. mutate()

mutate() is also super useful even when you aren’t grouping

We can create a new category

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  mutate(gender_cat = plyr::mapvalues(gender, c(1,2), c("Male", "Female")))
# A tibble: 2,800 × 9
      C1    C2    C3    C4    C5   age gender education    gender_cat
   <int> <int> <int> <int> <int> <int>  <int> <chr>        <chr>     
 1     2     3     3     4     4    16      1 <NA>         Male      
 2     5     4     4     3     4    18      2 <NA>         Female    
 3     4     5     4     2     5    17      2 <NA>         Female    
 4     4     4     3     5     5    17      2 <NA>         Female    
 5     4     4     5     3     2    17      1 <NA>         Male      
 6     6     6     6     1     3    21      2 Some College Female    
 7     5     4     4     2     3    18      1 <NA>         Male      
 8     3     2     4     2     4    19      1 HS           Male      
 9     6     6     3     4     5    19      1 Below HS     Male      
10     6     5     6     2     1    17      2 <NA>         Female    
# ℹ 2,790 more rows

6. mutate()

mutate() is also super useful even when you aren’t grouping

We could also just overwrite it:

bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  mutate(gender = plyr::mapvalues(gender, c(1,2), c("Male", "Female")))
# A tibble: 2,800 × 8
      C1    C2    C3    C4    C5   age gender education   
   <int> <int> <int> <int> <int> <int> <chr>  <chr>       
 1     2     3     3     4     4    16 Male   <NA>        
 2     5     4     4     3     4    18 Female <NA>        
 3     4     5     4     2     5    17 Female <NA>        
 4     4     4     3     5     5    17 Female <NA>        
 5     4     4     5     3     2    17 Male   <NA>        
 6     6     6     6     1     3    21 Female Some College
 7     5     4     4     2     3    18 Male   <NA>        
 8     3     2     4     2     4    19 Male   HS          
 9     6     6     3     4     5    19 Male   Below HS    
10     6     5     6     2     1    17 Female <NA>        
# ℹ 2,790 more rows

7. summarize() / summarise()

  • summarize() is one of your “apply” functions
  • The resulting data frame will have one row per group of your grouping variable(s)
  • The number of groups is 1 for ungrouped data frames
# group_by() education
bfi %>%
  select(starts_with("C"), age, gender, education) %>%
  arrange(education) %>%
  group_by(education) %>% 
  summarize(age_by_edu = mean(age, na.rm = T))  
# A tibble: 6 × 2
  education     age_by_edu
  <chr>              <dbl>
1 Below HS            25.1
2 College             33.0
3 HS                  31.5
4 Higher Degree       35.3
5 Some College        27.2
6 <NA>                18.0

7. summarize() / summarise()

  • summarize() is one of your “apply” functions
  • The resulting data frame will have one row per group of your grouping variable(s)
  • The number of groups is 1 for ungrouped data frames (see also the sketch after this example)
# no groups  
bfi %>% 
  select(starts_with("C"), age, gender, education) %>%
  arrange(education) %>%
  summarize(age_by_edu = mean(age, na.rm = T))  
# A tibble: 1 × 1
  age_by_edu
       <dbl>
1       28.8
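summarize() can also return several summaries at once. A minimal sketch (not in the original slides) adding group sizes and standard deviations alongside the means:

bfi %>%
  group_by(education) %>%
  summarize(
    n        = n(),                     # rows per education level
    age_mean = mean(age, na.rm = TRUE),
    age_sd   = sd(age, na.rm = TRUE)
  )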

Attributions

Part 1 of these slides was adapted from Emma Rand’s course on reproducibility at the University of York.

Rand E. (2023). White Rose BBSRC DTP Training: An Introduction to Reproducible Analyses in R (version v1.2). DOI: https://doi.org/10.5281/zenodo.3859818 URL: https://github.com/3mmaRand/pgr_reproducibility