Chapter 1 Workspace

1.1 Packages

library(knitr)              # creating tables
library(kableExtra)         # formatting and exporting tables
library(rio)                # flexible data import
library(readxl)             # read excel codebooks and documentation
library(psych)              # BISCUIT / BISCWIT (bestScales)
library(glmnet)             # elastic net regression
library(glmnetUtils)        # extension of basic elastic net with CV
library(caret)              # train and test for random forest
library(vip)                # variable importance
library(Amelia)             # multiple imputation (of time series)
library(lubridate)          # date wrangling
library(gtable)             # ggplot friendly tables
library(grid)               # ggplot friendly table rendering 
library(gridExtra)          # more helpful ggplot friendly table updates
library(plyr)               # data wrangling
library(tidyverse)          # data wrangling
library(ggdist)             # distributional plots 
library(ggridges)           # more distributional plots 
library(cowplot)            # flexibly arrange multiple ggplot objects
library(tidymodels)         # tidy model workflow and selection
# library(modeltime)          # tidy models for time series
library(furrr)              # mapping many models in parallel 

1.2 Directory Path

res_path <- "https://github.com/emoriebeck/behavior-prediction/raw/main"
local_path <- "/Volumes/Emorie/projects/idio prediction"

1.3 Codebook

Each study has a separate codebook indexing matching, covariate, personality, and outcome variables. Moreover, these codebooks contain information about the original scale of the variable, any recoding of the variable (including binarizing outcomes, changing the scale, and removing missing data), reverse coding of scale variables, categories, etc.

# list of all codebook sheets
# ipcs_codebook <- import(file = sprintf("%s/01-codebooks/codebook.xlsx", res_path), which = 2) %>%
#   as_tibble()
ipcs_codebook <- sprintf("%s/01-codebooks/codebook_R1.xlsx", local_path) %>%
  readxl::read_xlsx(., sheet = "codebook")
ipcs_codebook
## # A tibble: 107 × 12
##    category trait facet   itemname  old_desc  scale  orig_scale_num description orig_itemname reverse_code long_trait long_name
##    <chr>    <chr> <chr>   <chr>     <chr>     <chr>  <chr>          <chr>       <chr>         <chr>        <chr>      <chr>    
##  1 BFI-2    E     scblty  Sociabil… Was outg… Liker… 1.             Is outgoin… E1            no           Extravers… Sociabil…
##  2 BFI-2    E     scblty  Sociabil… Was talk… Liker… 46.            Is talkati… E2            no           Extravers… Sociabil…
##  3 BFI-2    E     scblty  Sociabil… Tended t… Liker… 16r.           Tends to b… E3            yes          Extravers… Sociabil…
##  4 BFI-2    E     scblty  Sociabil… Was some… Liker… 31r.           Is sometim… E4            yes          Extravers… Sociabil…
##  5 BFI-2    E     assert  Assertiv… Had an a… Liker… 6.             Has an ass… E5            no           Extravers… Assertiv…
##  6 BFI-2    E     assert  Assertiv… Was domi… Liker… 21.            Is dominan… E6            no           Extravers… Assertiv…
##  7 BFI-2    E     assert  Assertiv… Found it… Liker… 36r.           Finds it h… E7            yes          Extravers… Assertiv…
##  8 BFI-2    E     assert  Assertiv… Preferre… Liker… 51r.           Prefers to… E8            yes          Extravers… Assertiv…
##  9 BFI-2    E     enerLev Energy L… Was full… Liker… 41.            Is full of… E9            no           Extravers… Energy L…
## 10 BFI-2    E     enerLev Energy L… Showed a… Liker… 56.            Shows a lo… E10           no           Extravers… Energy L…
## # … with 97 more rows
outcomes <- ipcs_codebook %>% filter(category == "outcome") %>% select(trait, long_name)

# ftrs <- import(file = sprintf("%s/01-codebooks/codebook.xlsx", res_path), which = 3) %>%
#   as_tibble()
ftrs <- sprintf("%s/01-codebooks/codebook_R1.xlsx", local_path) %>%
  readxl::read_xlsx(., sheet = "names")
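
Downstream recoding is driven by these codebook columns. For instance, the reverse_code column flags negatively keyed items; a minimal sketch of how it could be applied, assuming a hypothetical long-format data frame esm_long with itemname and value (1-5 Likert) columns:

# join the reverse-coding flags onto hypothetical long-format ESM responses
esm_long <- esm_long %>%
  left_join(ipcs_codebook %>% select(itemname, reverse_code), by = "itemname") %>%
  mutate(value = ifelse(reverse_code == "yes", 6 - value, value))  # reverse the 1-5 scale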

1.3.1 Measures

Participants responded to a large battery of trait and ESM measures as part of the larger study. The present study focuses on ESM measures whose use we preregistered. A full list of the collected measures for the study can be found in supplementary codebooks in the online materials on the OSF and GitHub. The measures collected at each wave were identical. ESM measures were used to estimate idiographic personality prediction models.

1.3.1.1 ESM Measures

1.3.1.1.1 Personality

Personality was assessed using the full BFI-2 (Soto & John, 2017). The scale was administered using a planned missing data design (Revelle et al., 2016). We have previously demonstrated both the between- and within-person construct validity of assessing personality with planned missing designs using the BFI-2 (https://osf.io/pj9sy/). The planned missingness was implemented within each Big Five trait separately, with three items from each trait included at each timepoint (75% missingness). Each item was answered relative to what a participant was just doing on a 5-point Likert-type scale from 1 “disagree strongly” to 5 “agree strongly.” Items for each person at each assessment were determined by pulling three numbers (1 to 12) from a uniform distribution. The order of the resulting 15 items was then randomized before they were displayed to participants.
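As a rough R analogue of that sampling scheme (the production surveys handled this in JavaScript; the seed below is purely for the example), the per-trait draw could look like:

# draw 3 of the 12 BFI-2 items within each Big Five trait,
# then shuffle the display order of the resulting 15 items
set.seed(2018)  # illustrative seed only
beep_items <- ipcs_codebook %>%
  filter(category == "BFI-2") %>%
  group_by(trait) %>%
  slice_sample(n = 3) %>%   # 3 items per trait (75% planned missingness)
  ungroup() %>%
  slice_sample(prop = 1)    # randomize item order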

ipcs_codebook %>% filter(category == "BFI-2")
## # A tibble: 60 × 12
##    category trait facet   itemname  old_desc  scale  orig_scale_num description orig_itemname reverse_code long_trait long_name
##    <chr>    <chr> <chr>   <chr>     <chr>     <chr>  <chr>          <chr>       <chr>         <chr>        <chr>      <chr>    
##  1 BFI-2    E     scblty  Sociabil… Was outg… Liker… 1.             Is outgoin… E1            no           Extravers… Sociabil…
##  2 BFI-2    E     scblty  Sociabil… Was talk… Liker… 46.            Is talkati… E2            no           Extravers… Sociabil…
##  3 BFI-2    E     scblty  Sociabil… Tended t… Liker… 16r.           Tends to b… E3            yes          Extravers… Sociabil…
##  4 BFI-2    E     scblty  Sociabil… Was some… Liker… 31r.           Is sometim… E4            yes          Extravers… Sociabil…
##  5 BFI-2    E     assert  Assertiv… Had an a… Liker… 6.             Has an ass… E5            no           Extravers… Assertiv…
##  6 BFI-2    E     assert  Assertiv… Was domi… Liker… 21.            Is dominan… E6            no           Extravers… Assertiv…
##  7 BFI-2    E     assert  Assertiv… Found it… Liker… 36r.           Finds it h… E7            yes          Extravers… Assertiv…
##  8 BFI-2    E     assert  Assertiv… Preferre… Liker… 51r.           Prefers to… E8            yes          Extravers… Assertiv…
##  9 BFI-2    E     enerLev Energy L… Was full… Liker… 41.            Is full of… E9            no           Extravers… Energy L…
## 10 BFI-2    E     enerLev Energy L… Showed a… Liker… 56.            Shows a lo… E10           no           Extravers… Energy L…
## # … with 50 more rows
1.3.1.1.2 Affect

Items capturing affect were initially pulled from the PANAS-X (Watson & Clark, 1994). To reduce redundancy, these were cross-referenced with the BFI-2, and duplicated items (e.g., “excited”) were asked only once. Because we were interested in items rather than scale scores, we further had research participants examine the remaining items and indicate any that were not relevant to their experience. Finally, we added two “neutral” affect-related terms, goal-directed and purposeful. Each item was rated on a scale from 1 “disagree strongly” to 5 “agree strongly.”

ipcs_codebook %>% filter(category == "Affect")
## # A tibble: 10 × 12
##    category trait      facet itemname old_desc scale orig_scale_num description orig_itemname reverse_code long_trait long_name
##    <chr>    <chr>      <chr> <chr>    <chr>    <chr> <chr>          <chr>       <chr>         <chr>        <chr>      <chr>    
##  1 Affect   angry      angry angry    Angry    Like… <NA>           <NA>        A1            no           Angry      Negative 
##  2 Affect   afraid     afra… afraid   Afraid   Like… <NA>           <NA>        A3            no           Afraid     Negative 
##  3 Affect   happy      happy happy    Happy    Like… <NA>           <NA>        A5            no           Happy      Positive 
##  4 Affect   excited    exci… excited  Excited  Like… <NA>           <NA>        A7            no           Excited    Positive 
##  5 Affect   proud      proud proud    Proud    Like… <NA>           <NA>        A9            no           Proud      Positive 
##  6 Affect   guilty     guil… guilty   Guilty   Like… <NA>           <NA>        A10           no           Guilty     Negative 
##  7 Affect   attentive  atte… attenti… Attenti… Like… <NA>           <NA>        A11           no           Attentive  Positive 
##  8 Affect   content    cont… content  Content  Like… <NA>           <NA>        A12           no           Content    Positive 
##  9 Affect   purposeful purp… purpose… Purpose… Like… <NA>           <NA>        A13           no           Purposeful Neutral  
## 10 Affect   goaldir    goal… goaldir  Goal-di… Like… <NA>           <NA>        A14           no           Goal-dire… Neutral
1.3.1.1.3 Binary Situations

Binary situation indicators were derived by asking undergraduate research assistants to provide a list of the common social, academic, and personal situations in which they tended to find themselves. From these, we derived a list of 19 unique situations. Separate items for arguing with or interacting with friends or relatives were composited into overall argument and interaction items. Participants checked a box for each event that occurred in the last hour (1 = occurred, 0 = did not occur).
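A minimal sketch of that compositing, assuming a hypothetical wide-format data frame esm containing the binary item columns named in the codebook below:

# an event counts as occurring if it occurred with either friends or family
esm <- esm %>%
  mutate(argument   = pmax(argFrnd, argFam, na.rm = TRUE),
         interacted = pmax(IntFrnd, IntFam, na.rm = TRUE))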

ipcs_codebook %>% filter(category == "sit")
## # A tibble: 20 × 12
##    category trait      facet itemname old_desc scale orig_scale_num description orig_itemname reverse_code long_trait long_name
##    <chr>    <chr>      <chr> <chr>    <chr>    <chr> <chr>          <chr>       <chr>         <chr>        <chr>      <chr>    
##  1 sit      study      study study    Was stu… 0 = … <NA>           <NA>        sit_01        no           Studying   <NA>     
##  2 sit      argument   argu… argFrnd  Had an … 0 = … <NA>           <NA>        sit_02        no           Argument   Argument 
##  3 sit      argument   argu… argFam   Had an … 0 = … <NA>           <NA>        sit_03        no           Argument   Argument 
##  4 sit      interacted inte… IntFrnd  Interac… 0 = … <NA>           <NA>        sit_04        no           Interacted Interact…
##  5 sit      interacted inte… IntFam   Interac… 0 = … <NA>           <NA>        sit_05        no           Interacted Interact…
##  6 sit      lostSmthng lost… lostSmt… Lost so… 0 = … <NA>           <NA>        sit_06        no           Lost some… Lost som…
##  7 sit      late       late  late     Was lat… 0 = … <NA>           <NA>        sit_07        no           Late       Late     
##  8 sit      frgtSmthng frgt… frgtSmt… Forgot … 0 = … <NA>           <NA>        sit_08        no           Forgot so… Forgot s…
##  9 sit      brdSWk     brdS… brdSWk   Was bor… 0 = … <NA>           <NA>        sit_09        no           Bored wit… Bored wi…
## 10 sit      excSWk     excS… excSWk   Was exc… 0 = … <NA>           <NA>        sit_10        no           Excited a… <NA>     
## 11 sit      AnxSWk     AnxS… AnxSWk   Was anx… 0 = … <NA>           <NA>        sit_11        no           Anxious a… <NA>     
## 12 sit      tired      tired tired    Felt ti… 0 = … <NA>           <NA>        sit_12        no           Tired      Tired    
## 13 sit      sick       sick  sick     Felt si… 0 = … <NA>           <NA>        sit_13        no           Sick       Sick     
## 14 sit      sleeping   slee… sleeping Was sle… 0 = … <NA>           <NA>        sit_15        no           Sleeping   Sleeping 
## 15 sit      class      class class    Was in … 0 = … <NA>           <NA>        sit_16        no           In Class   In Class 
## 16 sit      music      music music    Was lis… 0 = … <NA>           <NA>        sit_17        no           Listening… Listenin…
## 17 sit      internet   inte… internet Was on … 0 = … <NA>           <NA>        sit_18        no           On the in… On the i…
## 18 sit      TV         TV    TV       Was wat… 0 = … <NA>           <NA>        sit_19        no           Watching … Watching…
## 19 sit      prcrst     prcr… prcrst   Procras… 0 = … <NA>           <NA>        sit_20        no           Procrasti… Procrast…
## 20 sit      lonely     lone… lonely   Felt lo… 0 = … <NA>           <NA>        sit_21        no           Lonely     Lonely
1.3.1.1.4 DIAMONDS Situation Features

Psychological features of situations were measured using the ultra-brief version of the “Situational Eight” DIAMONDS (Duty, Intellect, Adversity, Mating, pOsitivity, Negativity, Deception, and Sociality) scale (S8-I; Rauthmann & Sherman, 2015). Items were measured on a 3-point scale from 1 “not at all” to 3 “totally.”

ipcs_codebook %>% filter(category == "S8-I")
## # A tibble: 8 × 12
##   category trait       facet itemname old_desc scale orig_scale_num description orig_itemname reverse_code long_trait long_name
##   <chr>    <chr>       <chr> <chr>    <chr>    <chr> <chr>          <chr>       <chr>         <chr>        <chr>      <chr>    
## 1 S8-I     Duty        Duty  Duty     Work ha… 1 (=… <NA>           <NA>        D1            no           Duty       Duty     
## 2 S8-I     Intellect   Inte… Intelle… Deep th… 1 (=… <NA>           <NA>        D2            no           Intellect  Intellect
## 3 S8-I     Adversity   Adve… Adversi… Somebod… 1 (=… <NA>           <NA>        D3            no           Adversity  Adversity
## 4 S8-I     Mating      Mati… Mating   Potenti… 1 (=… <NA>           <NA>        D4            no           Mating     Mating   
## 5 S8-I     pOsitivity  pOsi… pOsitiv… The sit… 1 (=… <NA>           <NA>        D5            no           pOsitivity pOsitivi…
## 6 S8-I     Negativity  Nega… Negativ… The sit… 1 (=… <NA>           <NA>        D6            no           Negativity Negativi…
## 7 S8-I     Deception   Dece… Decepti… Somebod… 1 (=… <NA>           <NA>        D7            no           Deception  Deception
## 8 S8-I     Sociability Soci… Sociabi… Social … 1 (=… <NA>           <NA>        D8            no           Sociality  Sociality
1.3.1.1.5 Timing Features

The final set of features was created from the timestamps collected with each survey, based on approaches used in other studies of idiographic prediction (Fisher & Soyster, 2019). To create these, we created time-of-day (4: morning, midday, evening, night) and day-of-the-week (7) dummy codes. Next, we created a cumulative time variable (in hours) from the first beep (not itself used in analyses), which we used to create linear, quadratic, and cubic time trends (3) as well as 1- and 2-period sine and cosine functions across each 24-hour period (e.g., the 2-period sine = sin(4π × {cumulative time}_t / 24) and the 1-period sine = sin(2π × {cumulative time}_t / 24)).
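A minimal sketch of constructing these 18 features, assuming a hypothetical data frame esm with one row per beep and a POSIXct StartDate column (the time-of-day cut points are illustrative):

esm <- esm %>%
  arrange(SID, StartDate) %>%
  group_by(SID) %>%
  mutate(
    hr      = hour(StartDate),
    # time-of-day dummies (4); cut points illustrative
    morning = as.numeric(hr >= 6  & hr < 11),
    midday  = as.numeric(hr >= 11 & hr < 16),
    evening = as.numeric(hr >= 16 & hr < 21),
    night   = as.numeric(hr >= 21 | hr < 6),
    # day of the week (7 dummies once expanded to indicators)
    dow     = wday(StartDate, label = TRUE),
    # cumulative time (hours) since each person's first beep
    cumtime = as.numeric(difftime(StartDate, min(StartDate), units = "hours")),
    t1 = cumtime, t2 = cumtime^2, t3 = cumtime^3,   # linear, quadratic, cubic trends (3)
    # 1- and 2-period sine and cosine terms over each 24-hour cycle (4)
    sin1 = sin(2 * pi * cumtime / 24), cos1 = cos(2 * pi * cumtime / 24),
    sin2 = sin(4 * pi * cumtime / 24), cos2 = cos(4 * pi * cumtime / 24)
  ) %>%
  ungroup()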

1.3.2 Procedure

Participants in this study were drawn from a larger personality study. All responded to two types of surveys: trait and state (Experience Sampling Method; ESM) measures, for which they were paid separately. Participants completed three waves of trait measures and two waves of state measures. For the first two waves, trait surveys were collected immediately before beginning the ESM protocol.

1.3.2.1 Main Sample

For the main sample, participants were recruited from the psychology subject pool at Washington University in St. Louis. Participants were told that the study posted on the recruitment website was the first wave of a longer longitudinal study they would be offered the opportunity to take part in.

Participants were brought into the lab between October 2018 and December 2019, where a research assistant or the first author explained the study procedure to them and walked them through the consent procedure. If they consented, participants were led to a room where they could fill out a form to opt into the ESM portion of the study. They then completed baseline trait measures using the Qualtrics Survey Platform. Afterward, participants were debriefed and paid $10 in cash, and, if they had opted into the ESM portion of the study, the ESM survey procedure was explained to them.

Participants then received ESM surveys four times per day for two weeks (target n = 56). The survey platform was built by the first author using the jsPsych library (de Leeuw, 2015). Additional JavaScript controllers were written for the purpose of this study and are available on the first author’s GitHub. Start times were based on when participants indicated they would like to receive their first survey, given their personal wake times. Surveys were sent every 4 hours, meaning that the surveys spanned a 12-hour period from the indicated start time. Participants received their first survey at their chosen time on the Monday following their in-lab session. They were compensated $0.50 for each completed survey, for a maximum of $28. To incentivize responding, participants who completed at least 50 surveys received a “bonus” for a total compensation of $30, which was distributed as an Amazon Gift Card.

1.3.3 Analytic Plan

The present study tested three machine learning classification methods, some of which have been used for idiographic prediction in other studies (Fisher & Soyster, 2019; Kaiser & Butter, 2020): (1) elastic net regression (Friedman, Hastie, & Tibshirani, 2010), (2) the Best Items Scale that is Cross-validated, Correlation-weighted, Informative and Transparent (BISCWIT; Elleman, McDougald, Condon, & Revelle, 2020), and (3) random forest models (Kim et al., 2019).
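For concreteness, a minimal sketch of the first of these, assuming hypothetical objects x_train (a feature matrix) and y_train (a binary outcome vector):

# cross-validated elastic net over a grid of alpha values (glmnetUtils)
fit <- glmnetUtils::cva.glmnet(x_train, y_train, family = "binomial")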

Because we have a large number of indicators to test, each of the methods used has variable selection features and, in some instances, other means of reducing overfitting, as detailed below. To both reduce the number of indicators used in each test and to test which group of indicators is most predictive of procrastination and loneliness, we will also test these in several sets (sketched below): (1) personality indicators (15), (2) affective indicators (10), (3) binary situation indicators (16), (4) DIAMONDS situation indicators (8), (5) psychological indicators (personality + affect; 25), (6) situation indicators (binary + DIAMONDS; 24), and (7) the full set (personality + affect + binary situations + DIAMONDS; 49). We will additionally test each of these with and without the 18 timing indicators, for a total of 14 combinations of the 67 features.
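A sketch of assembling those sets, assuming hypothetical character vectors of column names (pers_vars, aff_vars, sit_vars, dmnd_vars, time_vars):

# seven base indicator sets, each crossed with timing in/out = 14 combinations
base_sets <- list(
  pers  = pers_vars,                                   # (1) personality
  aff   = aff_vars,                                    # (2) affect
  sit   = sit_vars,                                    # (3) binary situations
  dmnds = dmnd_vars,                                   # (4) DIAMONDS
  psych = c(pers_vars, aff_vars),                      # (5) psychological
  sits  = c(sit_vars, dmnd_vars),                      # (6) situations
  full  = c(pers_vars, aff_vars, sit_vars, dmnd_vars)  # (7) full set
)
feature_sets <- c(base_sets, lapply(base_sets, function(x) c(x, time_vars)))
names(feature_sets)[8:14] <- paste0(names(base_sets), "_time")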

In each of these methods, we used cumulative rolling origin forecast validation on a training set comprising the first 75% of the time series, holding out the remaining 25% as the test set. Within the rolling origin forecast validation, we used the first one-third of the training series as the initial analysis set and five observations as the validation set, and we set the skip to one, which resulted in roughly 10-15 rolling origin “folds.”
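With rsample (loaded via tidymodels), this scheme could look like the following sketch, assuming a hypothetical single-person time series d ordered by beep:

n     <- nrow(d)
train <- d[1:floor(.75 * n), ]          # first 75% for training
test  <- d[(floor(.75 * n) + 1):n, ]    # final 25% held out

cv <- rsample::rolling_origin(
  train,
  initial    = floor(nrow(train) / 3),  # first third as the initial analysis set
  assess     = 5,                       # five observations per validation set
  skip       = 1,                       # thin the resamples by skipping one origin
  cumulative = TRUE                     # origin accumulates all earlier observations
)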

Out-of-sample prediction was evaluated using classification error and the area under the ROC (receiver operating characteristic) curve (AUC). Classification error is a simple estimate of the percentage of the test sample that was correctly classified by the model. The AUC captures the trade-off between sensitivity and specificity across classification thresholds. In the present study, we used an AUC reference value of .5, which indicates binary classification at chance levels. ROC visualizations plot 1 - specificity (i.e., the false positive rate: false positives / (false positives + true negatives)) against sensitivity (i.e., the true positive rate: true positives / (true positives + false negatives)).
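With yardstick (also loaded via tidymodels), both metrics could be computed as in this sketch, assuming a hypothetical data frame preds holding the observed class truth (a factor with levels "0" and "1"), the predicted class .pred_class, and the predicted probability of the event .pred_1:

# classification error is 1 - accuracy on the held-out test set
yardstick::accuracy(preds, truth = truth, estimate = .pred_class)
# AUC; event_level = "second" because "1" is the second factor level here
yardstick::roc_auc(preds, truth = truth, .pred_1, event_level = "second")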

1.4 Demographics

1.4.0.1 Trait

# participant roster linking raw SIDs to anonymized sequential IDs
participants <- googlesheets4::read_sheet("https://docs.google.com/spreadsheets/d/1r808gQ-LWfG98J9rvt_CRMHtmCFgtdcfThl0XA0HHbM/edit?usp=sharing", sheet = "ESM") %>%
  select(SID, Name, Email) %>%
  mutate(new = seq(1, n(), 1),
         new = ifelse(new < 10, paste("0", new, sep = ""), new))

old_names <- trait_codebook$`New #`

# wave 1 trait
baseline <- sprintf("%s/04-data/01-raw-data/baseline_05.07.20.csv", res_path) %>% 
  read_csv() %>%
  filter(!row_number() %in% c(1,2) & !is.na(SID) & SID %in% participants$SID) %>% 
  select(SID, StartDate, gender, YOB, race, ethnicity) %>%
  mutate(SID = mapvalues(SID, participants$SID, participants$new)) %>%
  mutate(wave = 1,
         gender = factor(gender, c(1,2), c("Male", "Female")),
         YOB = substr(YOB, nchar(YOB)-4+1, nchar(YOB)),
         race = mapvalues(race, 1:7, c(0,1,3,2,3,3,3)),
         ethnicity = ifelse(!is.na(ethnicity), 3, NA),
         race = ifelse(is.na(ethnicity), race, ifelse(ethnicity == 3, ethnicity)))  %>%
  select(-ethnicity)

# save a local copy; analyses load the hosted copy from GitHub
save(baseline, 
     file = sprintf("%s/04-data/01-raw-data/cleaned_combined_2020-05-06.RData", local_path))
load(url(sprintf("%s/04-data/01-raw-data/cleaned_combined_2020-05-06.RData", res_path)))
dem <- baseline %>%
  select(SID:race) %>%
  mutate(age = year(ymd_hms(StartDate)) - as.numeric(YOB),
         StartDate = as.Date(ymd_hms(StartDate)),
         race = factor(race, 0:3, c("White", "Black", "Asian", "Other"))) %>%
  select(-YOB) 

dem %>% 
  summarize(n = length(unique(SID)),
            gender = sprintf("%i (%.2f%%)",sum(gender == "Female"), sum(gender == "Female")/n()*100),
            age = sprintf("%.2f (%.2f)", mean(age, na.rm = T), sd(age, na.rm = T)),
            white = sprintf("%i (%.2f%%)"
                            , sum(race == "White", na.rm = T)
                            , sum(race == "White", na.rm = T)/n()*100),
            black = sprintf("%i (%.2f%%)"
                            , sum(race == "Black", na.rm = T)
                            , sum(race == "Black", na.rm = T)/n()*100),
            asian = sprintf("%i (%.2f%%)"
                            , sum(race == "Asian", na.rm = T)
                            , sum(race == "Asian", na.rm = T)/n()*100),
            other = sprintf("%i (%.2f%%)"
                            , sum(race == "Other", na.rm = T)
                            , sum(race == "Other", na.rm = T)/n()*100),
            StartDate = sprintf("%s (%s - %s)", median(StartDate), 
                                min(StartDate), max(StartDate)))
## # A tibble: 1 × 8
##       n gender       age          white       black       asian       other       StartDate                           
##   <int> <chr>        <chr>        <chr>       <chr>       <chr>       <chr>       <chr>                               
## 1   208 154 (71.96%) 19.51 (1.27) 69 (32.24%) 34 (15.89%) 67 (31.31%) 30 (14.02%) 2019-03-29 (2018-10-17 - 2019-12-05)
dem %>%
  kable(., "html"
        , col.names = c("ID", "Start Date", "Gender", "Race/Ethnicity", "Age")
        , align = rep("c", 5)
        , caption = "<strong>Table S1</strong><br><em>Descriptive Statistics of Participants at Baseline</em>") %>%
  kable_styling(full_width = F) %>%
    scroll_box(height = "900px")
Table 1.1: Table S1
Descriptive Statistics of Participants at Baseline
ID Start Date Gender Race/Ethnicity Age
02 2018-10-17 Female White 18
01 2018-10-17 Female Black 19
03 2018-10-17 Female Asian 19
04 2018-10-18 Male Other 19
05 2018-10-18 Male White 19
06 2018-10-18 Female Asian 20
07 2018-10-18 Female Black 20
08 2018-10-18 Female Black 18
09 2018-10-18 Female 18
10 2018-10-19 Female 19
11 2018-10-19 Female Asian 18
12 2018-10-19 Female White 20
13 2018-10-19 Female White 18
14 2018-10-19 Female Black 19
16 2018-10-19 Female White 18
15 2018-10-19 Female 20
17 2018-10-19 Male Asian 18
18 2018-10-22 Female White 19
19 2018-10-22 Female Black 20
20 2018-10-22 Female Asian 18
21 2018-10-22 Female Asian 19
22 2018-10-22 Female Black 19
23 2018-10-22 Male White 21
24 2018-10-22 Male Black 20
25 2018-10-22 Female White 18
27 2018-10-23 Female 18
26 2018-10-23 Female 18
28 2018-10-23 Female Other 21
29 2018-10-23 Female Asian 19
30 2018-10-23 Female Asian 20
31 2018-10-24 Female White 18
32 2018-10-24 Female Black 20
33 2018-10-24 Female Asian 18
34 2018-10-24 Female Black 19
35 2018-10-26 Female Black 18
36 2018-10-29 Female Asian 21
37 2018-10-29 Male Other 18
38 2018-10-29 Male Asian 19
36 2018-10-29 Female Other 20
37 2018-10-29 Female Asian 18
41 2018-10-29 Male White 19
38 2018-10-29 Female Black 19
43 2018-10-29 Female White 18
44 2018-10-30 Female Asian 18
45 2018-11-01 Female 18
46 2018-11-01 Female Asian 22
48 2018-11-01 Female Asian 21
47 2018-11-01 Male Asian 23
49 2018-11-02 Female Asian 20
51 2018-11-02 Female White 20
50 2018-11-02 Female Other 19
52 2018-11-05 Male Other 21
53 2018-11-05 Female Asian 19
52 2018-11-05 Male Asian 21
53 2018-11-05 Female White 19
56 2018-11-05 Male Asian 18
58 2018-11-05 Female Asian 21
57 2018-11-05 Female Asian
59 2018-11-06 Female Asian 21
60 2018-11-06 Male White 20
61 2018-11-06 Male White 18
62 2018-11-06 Male Black 18
63 2018-11-06 Female White 20
64 2018-11-07 Female Other 21
65 2018-11-07 Male White 20
67 2018-11-07 Female Black 19
66 2018-11-07 Male White 20
68 2018-11-07 Female Asian 18
69 2018-11-07 Female Asian 18
70 2018-11-08 Female 19
72 2018-11-08 Female Other 18
71 2018-11-08 Female Other 19
74 2018-11-08 Female Other 22
73 2018-11-08 Female White 18
75 2018-11-08 Female Asian 19
76 2018-11-08 Female Black 20
77 2018-11-08 Male Black 21
79 2018-11-08 Male Asian 18
80 2018-11-08 Female White 21
82 2018-11-09 Female Black 20
81 2018-11-09 Female Asian 22
83 2018-11-09 Female White 18
84 2018-11-09 Female White 20
85 2018-11-14 Female White 18
86 2018-11-14 Female Asian 21
87 2018-11-14 Female Asian 21
89 2018-11-14 Female White 19
88 2018-11-14 Female Asian 18
90 2018-11-20 Female Other 21
91 2018-11-20 Female Black 21
93 2018-11-28 Female White 20
92 2018-11-28 Male Black 20
94 2018-11-28 Female Asian 21
95 2018-11-28 Female Black 21
96 2018-11-28 Female White 21
97 2018-11-28 Male Asian 19
98 2018-11-29 Female Asian 18
99 2018-11-29 Female Other 21
100 2018-11-29 Female Black 19
101 2018-11-29 Female White 18
102 2018-11-29 Female White 19
103 2019-03-15 Female White 20
104 2019-03-15 Female 22
106 2019-03-22 Female Asian 21
105 2019-03-22 Female White 20
107 2019-03-22 Female White 19
109 2019-03-29 Female White 19
108 2019-03-29 Female White 20
111 2019-04-05 Female White 19
110 2019-04-05 Male Other 22
112 2019-04-05 Female Black
113 2019-04-05 Female Asian 23
114 2019-04-05 Male White 20
116 2019-04-12 Female White 21
115 2019-04-12 Female White 19
118 2019-04-12 Female Asian 21
117 2019-04-12 Female White 20
119 2019-04-12 Male Asian 19
121 2019-04-12 Female White 20
122 2019-04-12 Male White 20
123 2019-04-12 Female Asian 20
124 2019-04-12 Female Black 23
126 2019-04-19 Male Other 22
125 2019-04-19 Female Asian 21
128 2019-04-19 Male Black 23
127 2019-04-19 Female White 21
129 2019-04-19 Female Other 20
130 2019-04-19 Female White 19
131 2019-04-19 Male White 20
133 2019-04-26 Male Asian 20
132 2019-04-26 Female White 21
134 2019-04-26 Female Asian
136 2019-09-11 Male Asian 20
135 2019-09-11 Male White 19
138 2019-09-13 Female White 19
137 2019-09-13 Female White 19
139 2019-09-17 Female Black 19
141 2019-09-18 Female White 21
142 2019-09-13 Male Asian 18
143 2019-09-19 Female 18
146 2019-09-19 Female Asian 20
148 2019-09-20 Female Black 20
147 2019-09-20 Female White 20
149 2019-09-20 Female White 22
150 2019-09-20 Female Asian 18
151 2019-09-20 Female Other 18
152 2019-09-20 Male White 18
153 2019-09-27 Female Asian 21
154 2019-09-27 Male White 18
155 2019-09-27 Male White 19
156 2019-09-27 Male White 20
157 2019-09-27 Female White 21
158 2019-09-27 Female Asian 21
159 2019-09-27 Female Other 18
160 2019-09-27 Female Asian 18
161 2019-09-27 Female White 18
162 2019-09-27 Female Black 19
164 2019-10-04 Female 18
163 2019-10-04 Male Black 19
165 2019-10-04 Female White 22
166 2019-10-04 Male Other 19
168 2019-10-04 Female Asian 19
167 2019-10-04 Male Asian 19
169 2019-10-18 Male Asian 21
170 2019-10-18 Female Asian 19
171 2019-10-18 Male Other 18
172 2019-10-18 Female Other 19
174 2019-10-18 Male White 19
173 2019-10-18 Female Asian 18
175 2019-10-28 Male Black 19
176 2019-10-28 Male Other 19
177 2019-10-30 Male Asian 21
179 2019-11-02 Male Other 21
181 2019-11-02 Male Black 21
180 2019-11-02 Female White 20
182 2019-11-03 Female Black 19
184 2019-11-03 Female Asian 20
183 2019-11-03 Female Black 21
185 2019-11-07 Female Other 19
186 2019-11-07 Female Asian 21
188 2019-11-08 Female Other 18
187 2019-11-08 Male White 20
190 2019-11-08 Male White 21
189 2019-11-08 Female Other 20
192 2019-11-08 Female Black 18
191 2019-11-08 Male Other 18
199 2019-11-11 Female Asian 19
200 2019-11-11 Female Asian 18
201 2019-11-11 Female Asian 19
202 2019-11-11 Male Asian 18
203 2019-11-14 Male White 18
204 2019-11-14 Female Other 20
205 2019-11-14 Female Asian 19
206 2019-11-15 Female White 19
197 2019-11-15 Female 18
207 2019-11-15 Female Other 18
193 2019-11-15 Male White 18
196 2019-11-15 Female White 21
195 2019-11-15 Male Black 21
197 2019-11-15 Male White 18
209 2019-11-20 Female Asian 19
210 2019-11-20 Female 18
211 2019-11-20 Female 19
212 2019-11-20 Male White 20
213 2019-11-22 Male Asian 19
214 2019-11-22 Female Other 19
216 2019-11-22 Female Asian 19
215 2019-11-22 Female White 19
217 2019-12-02 Male 19
218 2019-12-03 Male Asian 19
219 2019-12-05 Male White 19
220 2019-12-05 Female Other 20
221 2019-12-05 Female Asian 21
222 2019-12-05 Female Black 20