Download .Rmd (won’t work in Safari or IE)
See GitHub Repository

What Are Data?

Data are the core of everything that we do in statistical analysis. Data come in many forms, and I don’t just mean .csv, .xls, .sav, etc. Data can be wide, long, documented, fragmented, messy, and about anything else that you can imagine.

Although data could arguably be more means than end in psychology, the importance of understanding the structure and format of your data cannot be overstated. Failure to understand your data can lead to improper techniques and, at worst, flagrantly wrong inferences.

In this tutorial, we are going to talk about data management and basic data cleaning. Other tutorials will go more in depth into data cleaning and reshaping. This tutorial is meant to prepare you to think about those in more nuanced ways and to help you develop a functional workflow for conducting your own research.

Workspace

When I create an rmarkdown document for my own research projects, I always start by setting up my workspace. This involves three steps:

  1. Packages
  2. Codebook(s)
  3. Data

Below, we will step through each of these separately, setting ourselves up to (hopefully) flawlessly communicate with R and our data.

Packages

Loading packages seems like the most basic step, but it is actually very important. ALWAYS LOAD YOUR PACKAGES IN A VERY INTENTIONAL ORDER AT THE BEGINNING OF YOUR SCRIPT. Package conflicts suck, so this needs to be shouted.

For this tutorial, we are going to keep it quite simple. We will load the psych package for data descriptives, some options for cleaning and reverse coding, and some evaluations of our scales. The plyr package is the predecessor of dplyr, a core tidyverse package that you will become quite familiar with in these tutorials. I like the plyr package because it contains a couple of functions (e.g. mapvalues()) that I find quite useful. Finally, we load the tidyverse package, which is actually a compilation of 8 packages. Some of these we will use today and some we will use in later tutorials. All are very useful and are arguably some of the most powerful tools R offers.
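Assuming all three packages are already installed, the loading chunk looks like this. Note the order: the tidyverse is loaded last so that its functions win any naming conflicts, which is what produces the masking messages below.

library(psych)     # descriptives, reverse coding, scale reliability
library(plyr)      # mapvalues() for recoding
library(tidyverse) # ggplot2, tibble, tidyr, readr, purrr, dplyr, stringr, forcats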

## ── Attaching packages ─────────────────────────────────────────────────────────────────── tidyverse 1.2.1 ──
## ✔ ggplot2 3.2.1     ✔ purrr   0.3.0
## ✔ tibble  2.0.1     ✔ dplyr   0.8.3
## ✔ tidyr   0.8.2     ✔ stringr 1.4.0
## ✔ readr   1.1.1     ✔ forcats 0.3.0
## ── Conflicts ────────────────────────────────────────────────────────────────────── tidyverse_conflicts() ──
## ✖ ggplot2::%+%()     masks psych::%+%()
## ✖ ggplot2::alpha()   masks psych::alpha()
## ✖ dplyr::arrange()   masks plyr::arrange()
## ✖ purrr::compact()   masks plyr::compact()
## ✖ dplyr::count()     masks plyr::count()
## ✖ dplyr::failwith()  masks plyr::failwith()
## ✖ dplyr::filter()    masks stats::filter()
## ✖ dplyr::id()        masks plyr::id()
## ✖ dplyr::lag()       masks stats::lag()
## ✖ dplyr::mutate()    masks plyr::mutate()
## ✖ dplyr::rename()    masks plyr::rename()
## ✖ dplyr::summarise() masks plyr::summarise()
## ✖ dplyr::summarize() masks plyr::summarize()

Codebook

The second step is a codebook. Arguably, this is the first step because you should create the codebook long before you open R and load your data.

In this case, we are going to use some data from the German Socioeconomic Panel Study (GSOEP), which is an ongoing panel study in Germany. Note that these data are for teaching purposes only, shared under the license for the Comprehensive SOEP teaching dataset, which I, as a contracted SOEP user, can use for teaching purposes. These data represent select cases from the full data set and should not be used for the purpose of publication. The full data are available for free at https://www.diw.de/en/diw_02.c.222829.en/access_and_ordering.html.

For this tutorial, I created the codebook for you (Download; the link won’t work in Safari or IE) and included what I believe are the core columns you may need. Some of these columns may not be particularly helpful for every dataset.

Here are my core columns that are based on the original data:
1. dataset: this column indexes the name of the dataset that you will be pulling the data from. This is important because we will use this info later on (see purrr tutorial) to load and clean specific data files. Even if you don’t have multiple data sets, I suggest including this column for the sake of consistency.
2. old_name: this column is the name of the variable in the data you are pulling it from. This should be exact. The goal of this column is that it will allow us to select() variables from the original data file and rename them something that is more useful to us.
3. item_text: this column is the original text that participants saw or a description of the item.
4. scale: this column tells you what the scale of the variable is. Is it a numeric variable, a text variable, etc.? This is helpful for knowing the plausible range.
5. reverse: this column tells you whether items in a scale need to be reverse coded. I recommend coding this as 1 (leave alone) and -1 (reverse) for reasons that will become clear later.
6. mini: this column represents the minimum value of scales that are numeric. Leave blank otherwise.
7. maxi: this column represents the maximum value of scales that are numeric. Leave blank otherwise.
8. recode: sometimes, we want to recode variables for analyses (e.g. for categorical variables with many levels where sample sizes for some levels are too small to actually do anything with it). I use this column to note the kind of recoding I’ll do to a variable for transparency.

Here are additional columns that will make our lives easier or are applicable to some but not all data sets:
9. category: broad categories that different variables can be put into. I’m a fan of naming them things like “outcome”, “predictor”, “moderator”, “demographic”, “procedural”, etc. but sometimes use more descriptive labels like “Big 5” to indicate the model from which the measures are derived.
10. label: label is basically one level lower than category. So if the category is Big 5, the label would be, for example, “A” for Agreeableness, “SWB” for subjective well-being, etc. This column is most important and useful when you have multiple items in a scale, so I’ll typically leave this blank when something is a standalone variable (e.g. sex, single-item scales, etc.).
11. item_name: This is the lowest level and most descriptive variable. It indicates which item in a scale something is. So it may be “kind” for Agreeableness or “sex” for the demographic biological sex variable.
12. year: for longitudinal data, we have several waves of data and the name of the same item across waves is often different, so it’s important to note to which wave an item belongs. You can do this by noting the wave (e.g. 1, 2, 3), but I prefer the actual year the data were collected (e.g. 2005, 2009, etc.)
13. new_name: This is a column that brings together much of the information we’ve already collected. Its purpose is to be the new, more useful and descriptive name that we will give to the variable. I like to make it a combination of “category”, “label”, “item_name”, and “year”, using varying combos of "_" and “.” that we can use later with tidyverse functions. I typically construct this variable in Excel using the CONCATENATE() function, but it could also be done in R (see the sketch after this list). The reason I do it in Excel is that it makes it easier for someone who may be reviewing my codebook.
14. meta: Some datasets have a meta name, which essentially means a name that a variable keeps across all waves, making it clear which variables are the same. These are not always useful, as some data sets have meta names but no good way of extracting variables using them, but they are still typically worth including in your codebook.
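As mentioned in the new_name entry above, here is a rough sketch of how that column could be built in R (once the codebook is read in, which happens below) rather than in Excel. The separators here are purely illustrative, not the exact convention used in this codebook:

codebook <- codebook %>%
  mutate(new_name = paste0(category, "_", label, "_", item_name, ".", year))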

Below, I’ll load in the codebook we will use for this study, which will include all of the above columns.
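The call looks something like this; the file name is an assumption, so use whatever you named your codebook file:

codebook <- read_csv("codebook.csv")  # readr prints the column specification below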

## Parsed with column specification:
## cols(
##   dataset = col_character(),
##   old_name = col_character(),
##   item_text = col_character(),
##   scale = col_character(),
##   category = col_character(),
##   label = col_character(),
##   item_name = col_character(),
##   year = col_integer(),
##   new_name = col_character(),
##   reverse = col_integer(),
##   mini = col_integer(),
##   maxi = col_integer(),
##   recode = col_character()
## )

Data

First, we need to load in the data. We’re going to use data from the German Socioeconomic Panel Study, a longitudinal study of German households that has been conducted since 1984. Specifically, we’ll use three waves of personality data collected between 2005 and 2013.

Note: we will be using the teaching set of the GSOEP data set, so I will not be pulling from the raw files. I will also not be mirroring the format in which you would usually load the GSOEP, because that is slightly more complicated and something we will return to in a later tutorial on purrr (link) after we have more skills. I’ve left that code in for now, but it won’t make a lot of sense yet.

This code below shows how I would read in and rename a wide-format data set using the codebook I created.
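For reference, a stripped-down sketch of that idea is below. The file name is an assumption, and for simplicity it ignores the fact that the codebook spans multiple data files (the real tutorial pulls the GSOEP teaching files a bit differently; see the purrr tutorial):

old.names <- codebook$old_name   # variable names as they appear in the raw file
new.names <- codebook$new_name   # descriptive names constructed in the codebook

soep <- read_csv("soep_teaching.csv") %>%  # file name is an assumption
  select(one_of(old.names)) %>%            # keep only codebook variables, in codebook order
  setNames(new.names)                      # rename them to the codebook's new_name values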

## Parsed with column specification:
## cols(
##   .default = col_integer()
## )
## See spec(...) for full column specifications.

Clean Data

Recode Variables

Many of the data we work with have observations that are missing for a variety of reasons. In R, we treat missing values as NA, but many other programs from which you may be importing your data may use other codes (e.g. 999, -999, etc.). Large panel studies tend to use small negative values to indicate different types of missingness. This is why it is important to note down the scale in your codebook. That way you can check which values may need to be recoded to explicit NA values.

In the GSOEP, -1 to -7 indicate various types of missing values, so we will recode these to NA. To do this, we will use one of my favorite functions, mapvalues(), from the plyr package. In later tutorials where we read in and manipulate more complex data sets, we will use mapvalues() a lot. Basically, mapvalues() takes four key arguments: (1) the variable you are recoding, (2) a vector of the original values you want to change, (3) a vector of new values in the same order as the old values, and (4) an option to turn off warnings when some of the old values are not present in your data (warn_missing = F).
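As a minimal illustration (the item name here is hypothetical), recoding the GSOEP missing-value codes for a single column looks like this:

soep$kind_2005 <- mapvalues(soep$kind_2005,
                            from = -1:-7,          # GSOEP missing-value codes
                            to   = rep(NA, 7),     # recode all of them to NA
                            warn_missing = FALSE)  # silence warnings for codes not present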

Reverse-Scoring

Many scales we use have items that are positively or negatively keyed. High ratings on positively keyed items are indicative of being high on a construct. In contrast, high ratings on negatively keyed items are indicative of being low on a construct. Thus, to create the composite scores of constructs we often use, we must first “reverse” the negatively keyed items so that high scores indicate being higher on the construct.

There are a few ways to do this in R. Below, I’ll demonstrate how to do so using the reverse.code() function in the psych package in R. This function was built to make reverse coding more efficient (i.e. please don’t run every item that needs to be recoded with separate lines of code!!).
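In its simplest form, and assuming a data frame of items called bfi_items in which the second and fourth columns are negatively keyed (the object name, keys, and scale endpoints are all illustrative; your codebook’s reverse, mini, and maxi columns supply the real values), reverse.code() works like this:

keys <- c(1, -1, 1, -1, 1)                        # -1 marks the items to reverse
bfi_recoded <- reverse.code(keys, bfi_items,      # returns the items with negatives reversed
                            mini = 1, maxi = 7)   # scale endpoints, e.g. from mini/maxi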

Before we can do that, though, we need to restructure the data a bit in order to bring in the reverse coding information from our codebook. We will talk more about what’s happening here in later tutorials on tidyr, so for now, just bear with me.

## Joining, by = "item"
## Warning: Expected 2 pieces. Missing pieces filled with `NA` in 19618 rows
## [452105, 452106, 452107, 452108, 452109, 452110, 452111, 452112, 452113,
## 452114, 452115, 452116, 452117, 452118, 452119, 452120, 452121, 452122,
## 452123, 452124, ...].

Create Composites

Now that we have reverse coded our items, we can create composites.

BFI-S

We’ll start with our scales – in this case, the Big 5 from the German translation of the BFI-S.

Here’s the simplest way, which is also the long way because you’d have to repeat it for each scale in each year – something I don’t recommend.
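That long way looks something like this for a single trait in a single year (the item names are hypothetical stand-ins for the renamed, reverse-coded BFI-S Agreeableness items):

soep$A_2005 <- rowMeans(
  soep[, c("Big5_A_kind.2005", "Big5_A_trusting.2005", "Big5_A_rude.2005")],  # hypothetical names
  na.rm = TRUE
)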

But personally, I don’t have a desire to do that 15 times, so we can use our codebook and dplyr to make our lives a whole lot easier.
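A rough sketch of that idea, assuming the reverse-coded items are in long format (soep_long) with columns SID, trait, year, and value (all hypothetical names), is below. The “Joining” message that follows comes from the original code merging the composites back with the procedural ID variable, which this sketch leaves out.

b5_composites <- soep_long %>%
  group_by(SID, trait, year) %>%                     # one group per person x trait x wave
  summarize(value = mean(value, na.rm = TRUE)) %>%   # composite = mean of that trait's items
  ungroup()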

## Joining, by = "Procedural__SID"

Life Events

We also want to create a variable that indexes whether our participants experienced any of the life events during the years of interest (2005-2015).
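One way to sketch this, assuming the life event items are in long format (le_long) with hypothetical columns SID, event, and value (1 = experienced in a given year, 0 = not):

le_any <- le_long %>%
  group_by(SID, event) %>%
  summarize(experienced = as.numeric(any(value == 1, na.rm = TRUE))) %>%  # 1 if it ever happened
  ungroup()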

Descriptives

Descriptives of your data are incredibly important. They are your first line of defense against things that could go wrong later on when you run inferential stats. They help you check the distribution of your variables (e.g. non-normal distributions) and look for implausible values.

There are lots of ways to create great tables of descriptives. My favorite way is using dplyr, but we will save that for a later lesson on creating great APA style tables in R. For now, we’ll use a wonderfully helpful function from the psych package called describe() in conjunction with a small amount of tidyr to reshape the data.
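Continuing with the hypothetical b5_composites object sketched above, that combination looks roughly like this:

b5_composites %>%
  unite(trait_year, trait, year, sep = "_") %>%  # e.g. "A_2005"
  spread(trait_year, value) %>%                  # one column per trait-year composite
  select(-SID) %>%                               # drop the ID before describing
  describe()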

For count variables, like life events, we need to use something slightly different. We’re typically more interested in counts – in this case, how many people experienced each life event in the 10 years we’re considering?

To do this, we’ll use a little bit of dplyr rather than the base R function table() that is often used for count data. Instead, we’ll use a combination of group_by() and n() to get the counts by group. In the end, we’re left with a nice little table of counts.
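With the hypothetical le_long object from above, that looks something like this:

le_long %>%
  filter(value == 1) %>%      # keep only rows where an event occurred
  distinct(SID, event) %>%    # count each person once per event
  group_by(event) %>%
  summarize(n = n())          # number of people who experienced each event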

Scale Reliability

When we work with scales, it’s often a good idea to check the internal consistency of your scale. If the scale isn’t performing how it should be, that could critically impact the inferences you make from your data.

To check the internal consistency of our Big 5 scales, we will use the alpha() function from the psych package, which will give us Cronbach’s alpha as well as a number of other indicators of internal consistency.

Here’s the way you may have seen / done this in the past.
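Something like this, run separately for each trait-year combination (the item names are hypothetical). Note the psych:: prefix: as the conflicts listed earlier showed, ggplot2::alpha() masks psych::alpha().

psych::alpha(
  soep[, c("Big5_A_kind.2005", "Big5_A_trusting.2005", "Big5_A_rude.2005")]  # hypothetical item names
)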

But again, doing this 15 times would be quite a pain and would open you up to the possibility of a lot of copy and paste errors.

So instead, I’m going to use a mix of tidyverse functions. At first glance, it may seem complex, but as you move through the other tutorials (particularly the purrr tutorial), it will begin to make much more sense.
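A rough sketch of the idea, again with hypothetical object and column names: nest the long-format items by trait and year, reshape each nested piece to wide, and map psych::alpha() over the pieces.

b5_alphas <- soep_items_long %>%
  group_by(trait, year) %>%
  nest() %>%                                                         # one row per trait-year
  mutate(items = map(data, ~ spread(.x, item, value) %>% select(-SID)),
         alpha = map_dbl(items, ~ psych::alpha(.x)$total$raw_alpha))  # pull raw alpha for each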

Zero-Order Correlations

Finally, we often want to look at the zero-order correlation among study variables to make sure they are performing as we think they should.

To run the correlations, we will need to have our data in wide format, so we’re going to do a little bit of reshaping before we do.
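Continuing the hypothetical b5_composites sketch, the reshaping and correlation step looks roughly like this (the demographic variables DOB and Sex in the output below would be joined in from their own data frame, which I omit here):

b5_composites %>%
  unite(trait_year, trait, year, sep = "_") %>%   # e.g. "A_2005"
  spread(trait_year, value) %>%                   # wide: one column per trait-year
  select(-SID) %>%                                # drop the ID before correlating
  cor(use = "pairwise.complete.obs") %>%          # pairwise deletion across waves
  round(2)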

##          DOB   Sex A_2005 A_2009 A_2013 C_2005 C_2009 C_2013 E_2005 E_2009
## DOB     1.00  0.00  -0.08  -0.07  -0.06  -0.13  -0.12  -0.14   0.10   0.12
## Sex     0.00  1.00   0.18   0.17   0.18   0.05   0.07   0.09   0.08   0.08
## A_2005 -0.08  0.18   1.00   0.50   0.50   0.32   0.20   0.19   0.10   0.06
## A_2009 -0.07  0.17   0.50   1.00   0.55   0.19   0.28   0.18   0.05   0.08
## A_2013 -0.06  0.18   0.50   0.55   1.00   0.18   0.19   0.29   0.04   0.06
## C_2005 -0.13  0.05   0.32   0.19   0.18   1.00   0.52   0.48   0.19   0.10
## C_2009 -0.12  0.07   0.20   0.28   0.19   0.52   1.00   0.55   0.12   0.16
## C_2013 -0.14  0.09   0.19   0.18   0.29   0.48   0.55   1.00   0.13   0.14
## E_2005  0.10  0.08   0.10   0.05   0.04   0.19   0.12   0.13   1.00   0.61
## E_2009  0.12  0.08   0.06   0.08   0.06   0.10   0.16   0.14   0.61   1.00
## E_2013  0.10  0.11   0.04   0.04   0.07   0.10   0.10   0.18   0.59   0.65
## N_2005  0.06 -0.18   0.10   0.06   0.02   0.09   0.06   0.03   0.18   0.10
## N_2009  0.03 -0.22   0.07   0.09   0.03   0.06   0.08   0.05   0.13   0.16
## N_2013  0.02 -0.21   0.06   0.06   0.10   0.04   0.06   0.08   0.10   0.10
## O_2005  0.11  0.06   0.12   0.09   0.07   0.17   0.12   0.08   0.40   0.29
## O_2009  0.10  0.05   0.05   0.11   0.07   0.06   0.13   0.08   0.26   0.36
## O_2013  0.05  0.07   0.08   0.09   0.13   0.07   0.08   0.15   0.24   0.28
##        E_2013 N_2005 N_2009 N_2013 O_2005 O_2009 O_2013
## DOB      0.10   0.06   0.03   0.02   0.11   0.10   0.05
## Sex      0.11  -0.18  -0.22  -0.21   0.06   0.05   0.07
## A_2005   0.04   0.10   0.07   0.06   0.12   0.05   0.08
## A_2009   0.04   0.06   0.09   0.06   0.09   0.11   0.09
## A_2013   0.07   0.02   0.03   0.10   0.07   0.07   0.13
## C_2005   0.10   0.09   0.06   0.04   0.17   0.06   0.07
## C_2009   0.10   0.06   0.08   0.06   0.12   0.13   0.08
## C_2013   0.18   0.03   0.05   0.08   0.08   0.08   0.15
## E_2005   0.59   0.18   0.13   0.10   0.40   0.26   0.24
## E_2009   0.65   0.10   0.16   0.10   0.29   0.36   0.28
## E_2013   1.00   0.11   0.13   0.15   0.26   0.28   0.35
## N_2005   0.11   1.00   0.55   0.53   0.09   0.08   0.06
## N_2009   0.13   0.55   1.00   0.60   0.06   0.07   0.07
## N_2013   0.15   0.53   0.60   1.00   0.05   0.05   0.05
## O_2005   0.26   0.09   0.06   0.05   1.00   0.58   0.55
## O_2009   0.28   0.08   0.07   0.05   0.58   1.00   0.61
## O_2013   0.35   0.06   0.07   0.05   0.55   0.61   1.00

This is a lot of values and a little hard to make sense of, so as a bonus, I’m going to give you a little bit of more complex code that makes this more readable (and publishable).

Correlations among Personality Indicators. Values on the diagonal represent Cronbach's alpha for each scale in each year. Within-trait correlations represent test-retest correlations.
