Week 3 Workbook
Week 3 - Data Quality and tidyr
Outline
- Welcome & Q’s on homework
- Part 1: Data Quality and Descriptives
- Part 2: tidyr
- Problem set & Q time
DATA QUALITY
What is data quality?
IBM’s definition of data quality:
“Data quality measures how well a dataset meets criteria for accuracy, completeness, validity, consistency, uniqueness, timeliness, and fitness for purpose”
Aspects of data quality
- Accuracy: Do the data reflect reality / truth?
- Completeness: Are the data usable and complete (no missing people, values, etc. beyond random)?
- Uniqueness: There is no duplicated data
- Validity: Do the data have the correct properties (values, ranges, etc.)
- Consistency: When integrating across multiple data sources, information should converge across sources and match reality
- Timeliness: Can the data be maintained and distributed within a specified time frame
- Fitness for purpose: Do the data meet your research need?
Why should we care about data quality?
- You aren’t responsible for poor quality data you receive, but you are responsible for the data products you work with – that is, you are responsible for improving data quality
- Poor quality data threatens scientific integrity
- Poor quality data are a pain for you to work with and for others to work with
What can data quality do for my career?
- The virtuous cycle of data cleaning
- Some people get a reputation for getting their data, analyses, etc. right
- This is important for publications, grant funding, etc.
- It tends to be inter-generational – you inherit some of your reputation on this from your advisor
- Start paying it forward now to build your own career, whether it’s in academia or industry
How do I ensure data quality?
The Towards Data Science website has a nice definition of EDA:
“Exploratory Data Analysis refers to the critical process of performing initial investigations on data so as to discover patterns, to spot anomalies, to test hypothesis and to check assumptions with the help of summary statistics”
- So EDA is basically a fancy word for the descriptive statistics you’ve been learning about for years
I think about “exploratory data analysis for data quality”
- Investigating values and patterns of variables from “input data”
- Identifying and cleaning errors or values that need to be changed
- Creating analysis variables
- Checking values of analysis variables against values of input variables
How I will teach exploratory data analysis
Will teach exploratory data analysis (EDA) in two sub-sections:
- Provide “Guidelines for EDA”
- Less about coding, more about practices you should follow and mentality necessary to ensure high data quality
- Introduce “Tools of EDA”:
- Demonstrate code to investigate variables and relationship between variables
- Most of these tools are just the application of programming skills you have already learned (or will learn soon!)
Guidelines for “EDA for data quality”
Assume that your goal in “EDA for data quality” is to investigate “input” data sources and create “analysis variables”
- Usually, your analysis dataset will incorporate multiple sources of input data, including data you collect (primary data) and/or data collected by others (secondary data)
EDA is not a linear process, and the process will vary across people and projects. Some broad steps:
- Understand how input data sources were created
- e.g., when working with survey data, have survey questionnaire and codebooks on hand (watch out for skip patterns!!!)
- For each input data source, identify the “unit of analysis” and which combination of variables uniquely identify observations
- Investigate patterns in input variables
- Create analysis variable from input variable(s)
- Verify that analysis variable is created correctly through descriptive statistics that compare values of input variable(s) against values of the analysis variable
It is critically important to step through EDA processes at multiple points during data cleaning, from the input / raw data to the output / analysis / clean data.
Always be aware of missing values
They will not always be coded as NA in input variables (e.g., some projects code them as 99, 99999, negative values, etc.)
“Unit of analysis” and which variables uniquely identify observations
“Unit of analysis” refers to “what does each observation represent” in an input data source
- If each obs represents a trial in an experiment, you have “trial level data”
- If each obs represents a participant, you have “participant level data”
- If each obs represents a sample, you have “sample-level data”
- If each obs represents a year, you have “year level data” (i.e. longitudinal)
How to identify the unit of analysis:
- data documentation
- investigating the data set
This is very important because we often conduct analyses that span multiple units of analysis (e.g., between- v within-person, person- v stimuli-level, etc.)
We have to be careful and thoughtful about identifiers that let us do that (important for joining data together, which will be the focus of our R workshop today)
Rules for creating new variables
Rules I follow for variable creation
- Never modify “input variable”; instead create new variable based on input variable(s)
- Always keep input variables used to create new variables
- Investigate input variable(s) and relationship between input variables
- Develop a plan for the creation of the analysis variable
- e.g., for each possible value of input variables, what should value of analysis variable be?
- Write code to create analysis variable
- Run descriptive checks to verify new variables are constructed correctly
- Can “comment out” these checks, but don’t delete them
- Document new variables with notes and labels
DESCRIPTIVES
Data we will use
Use the read_csv() function from readr (loaded with tidyverse) to import a .csv dataset into R.
Let’s examine the data [you must run this code chunk]
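A minimal sketch of what such an import chunk might look like; the file name soep_example.csv is hypothetical and stands in for the course data file.
Code
library(tidyverse)

soep_long <- read_csv("soep_example.csv")   # hypothetical file name; substitute the course data file
glimpse(soep_long)                          # quick look at column names and types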
Rule 1
- Never modify “input variable”; instead create new variable based on input variable(s)
- Always keep input variables used to create new variables
- I already did this before the data were loaded in. I renamed all the input variables with interpretable names and reshaped them so the time variable (year) is long and the other variables are wide
Rule 2
- Investigate input variable(s) and relationship between input variables
- We’ll talk more about this in a bit when we discuss different kinds of descriptives, but briefly let’s look at basic descriptives + zero-order correlations
This doesn’t look great: we have negative values where we shouldn’t, which are flags for different kinds of missing values. We’ll have to fix that.
- I’ll show you a better way later, but we haven’t learned everything we need to do it nicely yet. So instead, we’ll use cor.plot() from the psych package to make a simple heat map of the correlations.
- We shouldn’t see that many negative correlations, which flags that we need to reverse score some items
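A minimal sketch of these checks, assuming the soep_long data and the Big5__ item columns used later in this workbook:
Code
soep_long %>%
  select(contains("Big5__")) %>%
  psych::describe()                        # means, SDs, minima/maxima expose the negative missing-value codes

soep_long %>%
  select(contains("Big5__")) %>%
  cor(use = "pairwise.complete.obs") %>%
  cor.plot()                               # heat map; unexpected negative correlations suggest reverse-keyed items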
Rule 3
- Develop a plan for the creation of the analysis variable
- e.g., for each possible value of input variables, what should value of analysis variable be?
- I do this in my codebooks, and this topic warrants a discussion in itself. This is our focal topic for next week!
- In this case, we want Big Five (EACNO) composites for each wave and to create composites of life events experienced across all years
Rule 4
- Write code to create analysis variable
- From Rule 2, we know we need to recode missing values to NA and reverse code some items. From Rule 3, we know we need to create some composites.
- Let’s do that now!
Recoding:
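A hedged sketch of the recoding step, assuming the negative values are the only missing-data flags in these columns (the workbook's actual chunk may differ):
Code
soep_long <- soep_long %>%
  mutate(across(
    c(contains("Big5__"), contains("LifeEvent"))
    , ~ ifelse(. < 0, NA, .)   # negative values are missing-data flags, not real responses
  ))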
Reverse Coding:
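A hedged sketch of reverse coding; the specific reverse-keyed items and the 1-7 response scale below are assumptions, so check the codebook before using this:
Code
reverse_items <- c("Big5__E_reserved", "Big5__A_coarse", "Big5__C_lazy")   # assumed reverse-keyed items

soep_long <- soep_long %>%
  mutate(across(all_of(reverse_items), ~ 8 - .))   # 8 - x flips an (assumed) 1-7 response scale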
Let’s check to make sure some correlations just reversed:
Create Composites:
(Note: I honestly wouldn’t normally do it like this, but we haven’t learned how to reshape data yet! Check the online materials for code on how to do this)
Code
soep_long <- soep_long %>%
  group_by(year, Procedural__SID) %>%
  rowwise() %>%          # each composite is the mean of that row's (person-year's) items
  mutate(
    Big5__E = mean(cbind(Big5__E_reserved, Big5__E_communic, Big5__E_sociable), na.rm = T),
    Big5__A = mean(cbind(Big5__A_coarse, Big5__A_friendly, Big5__A_forgive), na.rm = T),
    Big5__C = mean(cbind(Big5__C_thorough, Big5__C_efficient, Big5__C_lazy), na.rm = T),
    Big5__N = mean(cbind(Big5__N_worry, Big5__N_nervous, Big5__N_dealStress), na.rm = T),
    Big5__O = mean(cbind(Big5__O_original, Big5__O_artistic, Big5__O_imagin), na.rm = T)) %>%
  group_by(Procedural__SID) %>%             # now work within person, across years
  mutate_at(
    vars(contains("LifeEvent"))
    , lst(ever = ~max(., na.rm = T))        # "_ever" indicators: did the event occur in any year?
  ) %>%
  ungroup() %>%
  filter(year %in% c(2005, 2009, 2013))     # keep only the focal waves
Rule 5
- Run descriptive checks to verify new variables are constructed correctly
- Can “comment out” these checks, but don’t delete them
- Uh oh, Inf values are popping up. What went wrong?
- -Inf pops up when there were no non-missing values and you use na.rm = T
- Let's recode those as NA and check out the descriptives again
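A minimal sketch of that fix, applied to the _ever columns created above:
Code
soep_long <- soep_long %>%
  mutate(across(
    contains("_ever")
    , ~ ifelse(is.infinite(.), NA, .)   # max(..., na.rm = T) returns -Inf when every value is NA
  ))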
Rule 6
- Document new variables with notes and labels
- Again, I do this in my codebooks, so more on this next week!!
EDA
- One-way descriptive analyses (i.e., focus on one variable)
  - Descriptive analyses for continuous variables
  - Descriptive analyses for discrete/categorical variables
- Two-way descriptive analyses (relationship between two variables)
  - Categorical by categorical
  - Categorical by continuous
  - Continuous by continuous
- Realistically, we’ve actually already covered all of this above, so we’ll loop back to it after learning tidyr
tidyr
- Now, let’s build off what we learned from dplyr and focus on reshaping and merging our data.
- First, the reshapers:
  - pivot_longer(), which takes a “wide” format data frame and makes it long.
  - pivot_wider(), which takes a “long” format data frame and makes it wide.
- Next, the mergers:
  - full_join(), which merges all rows in either data frame
  - inner_join(), which merges rows whose keys are present in both data frames
  - left_join(), which “prioritizes” the first data set
  - right_join(), which “prioritizes” the second data set
- (See also: anti_join() and semi_join())
Key tidyr Functions
1. pivot_longer()
- (Formerly gather()) Makes wide data long, based on a key
- Core arguments:
  - data: the data, blank if piped
  - cols: columns to be made long, selected via select() calls
  - names_to: name(s) of key column(s) in new long data frame (string or string vector)
  - values_to: name of values column in new long data frame (string)
  - names_sep: separator in column headers, if multiple keys
  - values_drop_na: drop missing cells (similar to na.rm = T)
Basic Application
Let’s start with an easy one – one key, one value:
Code
# A tibble: 69,492 × 6
SID gender education age item values
<chr> <int> <int> <int> <chr> <int>
1 61617 1 NA 16 A1 2
2 61617 1 NA 16 A2 4
3 61617 1 NA 16 A3 3
4 61617 1 NA 16 A4 4
5 61617 1 NA 16 A5 4
6 61617 1 NA 16 C1 2
7 61617 1 NA 16 C2 3
8 61617 1 NA 16 C3 3
# ℹ 69,484 more rows
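A hedged reconstruction of the chunk behind the output above; bfi_wide is an assumed name for the 2,800-person BFI data set with SID, gender, education, age, and the A1-O5 items:
Code
bfi_wide %>%
  pivot_longer(
    cols = A1:O5               # all 25 personality items become long
    , names_to = "item"        # key column holding the old column names
    , values_to = "values"     # value column holding the responses
    , values_drop_na = TRUE    # assumed, to match the row count shown above
  )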
More Advanced Application
Now a harder one – two keys, one value:
Code
# A tibble: 69,492 × 7
SID gender education age trait item_num values
<chr> <int> <int> <int> <chr> <chr> <int>
1 61617 1 NA 16 A 1 2
2 61617 1 NA 16 A 2 4
3 61617 1 NA 16 A 3 3
4 61617 1 NA 16 A 4 4
5 61617 1 NA 16 A 5 4
6 61617 1 NA 16 C 1 2
7 61617 1 NA 16 C 2 3
8 61617 1 NA 16 C 3 3
# ℹ 69,484 more rows
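A hedged reconstruction of the two-key version; names_pattern is one way to split "A1" into a trait letter and an item number (the original chunk may use names_sep instead):
Code
bfi_wide %>%
  pivot_longer(
    cols = A1:O5
    , names_to = c("trait", "item_num")   # two keys pulled from each column name
    , names_pattern = "([A-Z])([0-9])"    # "A1" -> trait "A", item_num "1"
    , values_to = "values"
    , values_drop_na = TRUE
  )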
2. pivot_wider()
- (Formerly spread()) Makes long data wide, based on a key
- Core arguments:
  - data: the data, blank if piped
  - names_from: key column(s) in the long data whose values become the new column names (string or string vector)
  - names_sep: separator in new column headers, if multiple keys
  - names_glue: build custom column names from multiple keys (alternative to names_sep)
  - values_from: column(s) in the long data whose values fill the new wide columns
  - values_fn: function applied when multiple values map to the same cell
Basic Application
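A minimal sketch of the basic application, widening the one-key long data created above (bfi_long is an assumed name for that data frame):
Code
bfi_long %>%
  pivot_wider(
    names_from = "item"        # values of the key column become column names again
    , values_from = "values"   # responses fill the new wide columns
  )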
More Advanced
A Little More Advanced
More dplyr Functions
The _join() Functions
Often we may need to pull different data from different sources
There are lots of reasons to need to do this
We don’t have time to get into all the use cases here, so we’ll talk about them in high level terms
We’ll focus on:
- full_join()
- inner_join()
- left_join()
- right_join()
Let’s separate demographic and BFI data
Code
# A tibble: 2,800 × 26
SID A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 E1 E2
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 61617 2 4 3 4 4 2 3 3 4 4 3 3
2 61618 2 4 5 2 5 5 4 4 3 4 1 1
3 61620 5 4 5 4 4 4 5 4 2 5 2 4
4 61621 4 4 6 5 5 4 4 3 5 5 5 3
5 61622 2 3 3 4 5 4 4 5 3 2 2 2
6 61623 6 6 5 6 5 6 6 6 1 3 2 1
# ℹ 2,794 more rows
# ℹ 13 more variables: E3 <int>, E4 <int>, E5 <int>, N1 <int>, N2 <int>,
# N3 <int>, N4 <int>, N5 <int>, O1 <int>, O2 <int>, O3 <int>, O4 <int>,
# O5 <int>
Code
# A tibble: 2,800 × 4
SID education gender age
<chr> <int> <int> <int>
1 61617 NA 1 16
2 61618 NA 2 18
3 61620 NA 2 17
4 61621 NA 2 17
5 61622 NA 1 17
6 61623 3 2 21
# ℹ 2,794 more rows
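A hedged sketch of the separation step shown above; bfi_wide, bfi_items, and bfi_demo are assumed names:
Code
bfi_items <- bfi_wide %>% select(SID, A1:O5)                    # 2,800 x 26: ID plus the 25 items
bfi_demo  <- bfi_wide %>% select(SID, education, gender, age)   # 2,800 x 4: ID plus demographics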
Before we get into it, as a reminder, this is what the data set looks like before we do any joining:
# A tibble: 2,800 × 29
SID A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 E1 E2
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 61617 2 4 3 4 4 2 3 3 4 4 3 3
2 61618 2 4 5 2 5 5 4 4 3 4 1 1
3 61620 5 4 5 4 4 4 5 4 2 5 2 4
4 61621 4 4 6 5 5 4 4 3 5 5 5 3
5 61622 2 3 3 4 5 4 4 5 3 2 2 2
6 61623 6 6 5 6 5 6 6 6 1 3 2 1
# ℹ 2,794 more rows
# ℹ 16 more variables: E3 <int>, E4 <int>, E5 <int>, N1 <int>, N2 <int>,
# N3 <int>, N4 <int>, N5 <int>, O1 <int>, O2 <int>, O3 <int>, O4 <int>,
# O5 <int>, gender <int>, education <int>, age <int>
3. full_join()
Most simply, we can put those back together keeping all observations.
# A tibble: 2,800 × 29
SID A1 A2 A3 A4 A5 C1 C2 C3 C4 C5 E1 E2
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 61617 2 4 3 4 4 2 3 3 4 4 3 3
2 61618 2 4 5 2 5 5 4 4 3 4 1 1
3 61620 5 4 5 4 4 4 5 4 2 5 2 4
4 61621 4 4 6 5 5 4 4 3 5 5 5 3
5 61622 2 3 3 4 5 4 4 5 3 2 2 2
6 61623 6 6 5 6 5 6 6 6 1 3 2 1
# ℹ 2,794 more rows
# ℹ 16 more variables: E3 <int>, E4 <int>, E5 <int>, N1 <int>, N2 <int>,
# N3 <int>, N4 <int>, N5 <int>, O1 <int>, O2 <int>, O3 <int>, O4 <int>,
# O5 <int>, education <int>, gender <int>, age <int>
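A minimal sketch, assuming the bfi_items and bfi_demo tables from the separation step:
Code
full_join(bfi_items, bfi_demo, by = "SID")   # keeps every SID that appears in either table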
4. inner_join()
We can also keep only the rows whose keys are present in both data frames
Code
# A tibble: 501 × 29
SID education gender age A1 A2 A3 A4 A5 C1 C2 C3
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 64151 3 2 18 1 5 6 5 5 5 6 5
2 64152 4 2 29 1 5 6 5 5 2 1 4
3 64154 5 1 46 2 5 6 5 6 6 6 6
4 64155 5 1 58 5 4 4 4 5 4 4 5
5 64156 5 2 38 1 4 6 6 6 4 4 5
6 64158 5 2 27 2 3 1 1 1 4 2 2
# ℹ 495 more rows
# ℹ 17 more variables: C4 <int>, C5 <int>, E1 <int>, E2 <int>, E3 <int>,
# E4 <int>, E5 <int>, N1 <int>, N2 <int>, N3 <int>, N4 <int>, N5 <int>,
# O1 <int>, O2 <int>, O3 <int>, O4 <int>, O5 <int>
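A minimal sketch; bfi_demo_sub stands in for whatever filtered subset of the demographics produced the smaller result above:
Code
inner_join(bfi_demo_sub, bfi_items, by = "SID")   # keeps only SIDs present in both tables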
5. left_join()
Or keep all rows present in the left (first) data frame, for example when the left data frame is a subset of people with complete data
# A tibble: 2,577 × 29
SID education gender age A1 A2 A3 A4 A5 C1 C2 C3
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 61623 3 2 21 6 6 5 6 5 6 6 6
2 61629 2 1 19 4 3 1 5 1 3 2 4
3 61630 1 1 19 4 3 6 3 3 6 6 3
4 61634 1 1 21 4 4 5 6 5 4 3 5
5 61640 1 1 17 4 5 2 2 1 5 5 5
6 61661 5 1 68 1 5 6 5 6 4 3 2
# ℹ 2,571 more rows
# ℹ 17 more variables: C4 <int>, C5 <int>, E1 <int>, E2 <int>, E3 <int>,
# E4 <int>, E5 <int>, N1 <int>, N2 <int>, N3 <int>, N4 <int>, N5 <int>,
# O1 <int>, O2 <int>, O3 <int>, O4 <int>, O5 <int>
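A minimal sketch, with an assumed complete-case filter on the left-hand table:
Code
bfi_demo %>%
  drop_na() %>%                       # assumed: keep only complete demographic rows
  left_join(bfi_items, by = "SID")    # every remaining SID is kept; items attach where they match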
6. right_join()
Or keep all rows present in the right (second) data frame, such as when I join a codebook with raw data
# A tibble: 2,800 × 29
SID education gender age A1 A2 A3 A4 A5 C1 C2 C3
<chr> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int> <int>
1 61623 3 2 21 6 6 5 6 5 6 6 6
2 61629 2 1 19 4 3 1 5 1 3 2 4
3 61630 1 1 19 4 3 6 3 3 6 6 3
4 61634 1 1 21 4 4 5 6 5 4 3 5
5 61640 1 1 17 4 5 2 2 1 5 5 5
6 61661 5 1 68 1 5 6 5 6 4 3 2
# ℹ 2,794 more rows
# ℹ 17 more variables: C4 <int>, C5 <int>, E1 <int>, E2 <int>, E3 <int>,
# E4 <int>, E5 <int>, N1 <int>, N2 <int>, N3 <int>, N4 <int>, N5 <int>,
# O1 <int>, O2 <int>, O3 <int>, O4 <int>, O5 <int>
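A minimal sketch; here the right (second) table determines which rows survive:
Code
right_join(bfi_demo_sub, bfi_items, by = "SID")   # keeps every SID in bfi_items, even without demographics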
EDA
- One-way descriptive analyses (i.e., focus on one variable)
  - Descriptive analyses for continuous variables
  - Descriptive analyses for discrete/categorical variables
- Two-way descriptive analyses (relationship between two variables)
  - Categorical by categorical
  - Categorical by continuous
  - Continuous by continuous
One-way descriptive analyses
- These are basically what they sound like – the focus is on single variables
- Descriptive analyses for continuous variables
- means, standard deviations, minima, maxima, counts
- Descriptive analyses for discrete/categorical variables
- counts, percentages
Continuous
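A hedged sketch of a continuous one-way descriptives chunk, using the soep_long composites from earlier and psych::describe():
Code
soep_long %>%
  filter(year == 2005) %>%
  select(Big5__E:Big5__O) %>%
  psych::describe()          # means, SDs, minima, maxima, counts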
Categorical / Count
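And a hedged sketch for a categorical variable, using count() to get counts and percentages:
Code
soep_long %>%
  filter(year == 2005) %>%
  count(Demographic__Sex) %>%          # counts per category
  mutate(perc = n / sum(n) * 100)      # percentages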
Two-way descriptive analyses
Aims to capture relationships between variables
- Categorical by categorical
  - cross-tabs, percentages
- Categorical by continuous
  - means, standard deviations, etc. within categories
- Continuous by continuous
  - correlations, covariances, etc.
Categorical x Categorical
Code
soep_long %>%
  select(Procedural__SID, Demographic__Sex, contains("_ever")) %>%
  distinct() %>%                       # one row per person (the _ever indicators are constant within person)
  pivot_longer(
    cols = contains("LifeEvent")
    , names_to = "event"
    , values_to = "occurred"
    , values_drop_na = T
  ) %>%
  mutate(Demographic__Sex = mapvalues(Demographic__Sex, c(1,2), c("Male", "Female"))   # mapvalues() is from plyr
         , occurred = mapvalues(occurred, c(0,1), c("No Event", "Event"))) %>%
  group_by(event, occurred, Demographic__Sex) %>%
  tally() %>%                          # counts per event x occurrence x sex cell
  group_by(event) %>%
  mutate(perc = n/sum(n)*100) %>%      # percentages within each event
  pivot_wider(
    names_from = c(occurred)
    , values_from = c(n, perc)
  )
Categorical x Continuous
Set up the data
Code
soep_twoway <- soep_long %>%
  filter(year == 2005) %>%                      # one wave of Big Five composites
  select(Procedural__SID, Big5__E:Big5__O) %>%
  pivot_longer(
    cols = contains("Big5")
    , names_to = "trait"
    , values_to = "value"
    , values_drop_na = T
  ) %>%
  left_join(                                    # attach the person-level life-event indicators
    soep_long %>%
      select(Procedural__SID, contains("_ever")) %>%
      distinct() %>%
      pivot_longer(
        cols = contains("LifeEvent")
        , names_to = "event"
        , values_to = "occurred"
        , values_drop_na = T
      ) %>%
      mutate(occurred = mapvalues(occurred, c(0,1), c("No Event", "Event")))
  )
Run the descriptives
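A hedged sketch of this step, summarizing each Big Five composite within event-occurrence groups (soep_twoway is the data set built above):
Code
soep_twoway %>%
  group_by(trait, event, occurred) %>%
  summarise(
    mean = mean(value, na.rm = TRUE)
    , sd = sd(value, na.rm = TRUE)
    , n = n()
    , .groups = "drop"
  )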
Continuous x continuous
Here’s how I create more customizable heat maps in ggplot2, for those who would like a reference for themselves.
Code
r <- soep_long %>%
filter(year == 2005) %>%
select(Big5__E:Big5__O) %>%
cor(., use = "pairwise")
r[lower.tri(r, diag = T)] <- NA   # keep only the upper triangle of the correlation matrix
vars <- rownames(r)
r %>%
data.frame() %>%
rownames_to_column("V1") %>%
pivot_longer(
cols = -V1
, names_to = "V2"
, values_to = "r"
) %>%
mutate(V1 = factor(V1, levels = vars)
, V2 = factor(V2, levels = rev(vars))) %>%
ggplot(aes(x = V1, y = V2, fill = r)) +
geom_raster() +
geom_text(aes(label = round(r, 2))) +
scale_fill_gradient2(
limits = c(-1,1)
, breaks = c(-1, -.5, 0, .5, 1)
, low = "blue", high = "red"
, mid = "white", na.value = "white") +
labs(
x = NULL
, y = NULL
, fill = "Zero-Order Correlation"
, title = "Zero-Order Correlations Among Variables"
) +
theme_classic() +
theme(
legend.position = "bottom"
, axis.text = element_text(face = "bold")
, axis.text.x = element_text(angle = 45, hjust = 1)
, plot.title = element_text(face = "bold", hjust = .5)
, plot.subtitle = element_text(face = "italic", hjust = .5)
, panel.background = element_rect(color = "black", linewidth = 1)   # linewidth replaces the deprecated size argument in ggplot2 3.4+
)
Attributions
Parts of Part 1 of these slides were adapted from Ozan Jaquette’s EDUC 260A at UCLA.