Racial Profiling in Connecticut?

Exploring the statistical research methods from the Institute for Municipal and Regional Policy Report

Story by the Connecticut Data Collaborative
September 22, 2015

Over the past fifteen years, racial profiling has been recognized as an issue of national, state, and local importance. Members of the public have increasingly questioned whether police officers target individuals based on their race, ethnicity, age, gender, or membership in a protected class. Nationally, disparities found in traffic stops have come under scrutiny by the public, policymakers, and civil rights groups.

The intention of this data story is to walk you through the statistical analysis conducted by the Institute for Municipal and Regional Policy and provide you with a better understanding of the following:

  1. Findings from the statewide analysis
  2. The strengths and limitations of two statistical tests: the Veil of Darkness and the KPT Hit Rate
  3. The conclusions and next steps for continued analysis

Minority groups have historically expressed lower levels of trust and confidence in law enforcement. Conversely, while acknowledging that 'bad actors' do exist, law enforcement often feel as though legitimate police work can be mistakenly perceived as bias, or even overt racism.

In accordance with changes to the Alvin W. Penn Racial Profiling Prohibition Act (Public Act 99-198) made by the Connecticut General Assembly in 2012 and 2013, statewide data collection on traffic stops began on October 1, 2013. During the first year of data collection, data from approximately 595,000 traffic stops was recorded.

The data collected by Connecticut is the most detailed and comprehensive of any state in the country. The analysis of that data by the Institute for Municipal and Regional Policy at Central Connecticut State University is the most sophisticated effort conducted on a statewide basis for all local police departments.

Researchers want to be able to test whether, holding other factors constant, the race and/or ethnicity of a motorist increases the probability that they are stopped by the police.

Determining if profiling is taking place in a given department requires comparing stop activity against some reference data. A natural inclination would be to use the resident population of the town as a proxy for the driving population in order to compare the racial/ethnic composition of stopped motorists to the racial composition of the town BUT:

  • The population that lives in a town is different from the population that drives through it;
  • There are seasonal variations in driving patterns;
  • Driving patterns change depending on the day of the week and also the time of day;

AND it is impossible to obtain detailed enough data about the demographic makeup of the driving population to make any inference.

In a laboratory setting, randomized controlled experiments help scientists isolate cause and effect.1 The problem with creating a randomized controlled trial to test for racial bias in traffic stops is that it would be both expensive and ethically troublesome. As a result, social scientists use research strategies that mimic the nature of a randomized controlled trial using the data that are available.

Exploring the Veil of Darkness Test

The statistical question being answered is:

Is there a significant disparity in daytime stops of minority drivers when compared to Caucasian drivers?

The veil of darkness test (developed by Grogger and Ridgeway in 2006) compares traffic stops during daylight and evening hours. If a police officer is inclined to profile, they are only able to do so during the day, when they can perceive the race and/or ethnicity of the driver before making a stop. Therefore, darkness establishes a 'natural experiment' that can be used to mimic a randomized control study.2

For example, the researchers can look at stops occurring at 6pm in December—when it is dark outside—versus stops at 6pm in July—when it is light outside—and see if there is a statistical difference. (Night stops are the control group and day stops are the treatment group; this is an example of a natural experiment.)
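A minimal sketch of the kind of comparison this natural experiment enables: a two-proportion z-test on the minority share of stops made in daylight versus in darkness. The counts below are hypothetical, not Connecticut data, and the actual IMRP analysis uses a regression with additional controls.

```python
import math

def two_proportion_z(x_day, n_day, x_night, n_night):
    """Z-statistic for the difference between two proportions:
    the minority share of daylight stops vs. darkness stops."""
    p_day = x_day / n_day
    p_night = x_night / n_night
    # Pooled proportion under the null hypothesis of no difference
    p_pool = (x_day + x_night) / (n_day + n_night)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_day + 1 / n_night))
    return (p_day - p_night) / se

# Hypothetical counts: minority stops out of all stops in each condition
z = two_proportion_z(300, 1000, 250, 1000)
```

A z-statistic beyond roughly ±1.96 would mark the day/night difference as significant at the 5% level; in this toy example the daylight minority share is noticeably higher.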

Researchers wanted to compare times when the racial and ethnic composition of the driving population was likely to remain constant throughout the year. To do this they examined stops that occurred during the inter-twilight period (shown as civil twilight in the figure).

Researchers want to control for or ‘hold factors constant’ in order to determine the effect of the treatment. If a factor that is related to the result is not controlled for, it could bias the results. In this analysis the following factors were held constant:

  • time of day (since traffic volume can vary with time)
  • day of week (again weekday volume is different than weekend volume)
  • daily volumetric measure of stops (to control for seasonal variation that might impact the proportion of minority drivers)
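In practice, 'holding factors constant' means entering these variables as controls in a regression; the veil of darkness test is typically estimated as a logistic regression of driver minority status on a darkness indicator plus the controls above. The sketch below shows, under illustrative assumptions (the field names and dummy-variable encoding are hypothetical, not the IMRP specification), how one stop's record becomes a row of such a design matrix:

```python
def design_row(stop):
    """Build one design-matrix row for the logit specification
    minority ~ darkness + time-of-day + day-of-week + stop volume.
    `stop` is a dict with keys: dark (bool), hour (0-23),
    weekday (0-6), volume (float) — hypothetical field names."""
    row = [1.0]                                  # intercept
    row.append(1.0 if stop["dark"] else 0.0)     # treatment: darkness indicator
    # Time-of-day dummies (hour 0 is the omitted reference category)
    row += [1.0 if stop["hour"] == h else 0.0 for h in range(1, 24)]
    # Day-of-week dummies (day 0 is the omitted reference category)
    row += [1.0 if stop["weekday"] == d else 0.0 for d in range(1, 7)]
    row.append(stop["volume"])                   # daily volumetric measure
    return row

row = design_row({"dark": True, "hour": 18, "weekday": 2, "volume": 0.8})
```

The coefficient on the darkness indicator is then the quantity of interest: with the other columns held constant, it captures how the odds of a stopped driver being a minority change when officers cannot see the driver.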

The results of the analysis provide a departmental average. The veil of darkness test provides no insight into the source of these disparities – specifically, whether particular officers are driving the results or whether they are the result of department-wide patterns. The test only identifies racial disparities that are large enough to affect the department-level average.

Four police departments were identified with significant racial disparities.3

  1. Granby
  2. Groton Town
  3. State Police Troop C
  4. State Police Troop H

The Waterbury police department was added to the list when the data collected from that department showed signs of a statistically significant disparity under a more restrictive specification. Specifically, researchers were concerned that vehicular equipment violations (like headlights) might create a bias in the test.4

When these stops are removed, Waterbury shows racial disparity across several minority groups; however, the sample size is small, so it will be interesting to see whether the disparity persists as more data is collected.

The gray bar (representing the standard error) indicates that the TRUE coefficient could lie anywhere along the line. The circle size represents the sample size. As the sample size increases, the gray bar shrinks and the confidence that the TRUE value of the coefficient has been found increases.

Sample size affects reliability. In statistical analysis, the larger the sample size, the better. This is due to the 'law of large numbers.' When a large number of random variables with the same mean are averaged together, the large values balance the small values and their sample average is close to their common mean.
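A quick illustration of the law of large numbers (a toy simulation, not traffic-stop data): averaging many draws from a distribution with a known mean of 0.5 pulls the sample average close to that mean.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

def sample_mean(n):
    # Average n draws from Uniform(0, 1), whose true mean is 0.5
    return sum(random.random() for _ in range(n)) / n

small = sample_mean(10)       # small sample: the average can wander widely
large = sample_mean(100_000)  # large sample: the average hugs the true mean
```

With only 10 draws the average can easily land far from 0.5; with 100,000 draws it is almost always within a fraction of a percent, which is why the estimates for small departments like Granby carry much wider error bars.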

Granby has a small sample size, so it will be interesting to see whether the disparity persists as more traffic stop data are collected.

KPT Hit Rate

The statistical question being answered is:

Are the deviations between the observed data and the expected data the result of chance (randomness), or are the differences due to other factors (e.g. racial profiling)?

The KPT Hit Rate analyzes post-stop data: it calculates the probability that a search following a stop results in a hit (finding contraband) across different racial groups.

The KPT Hit Rate model (developed by Knowles, Persico, and Todd in 2001) assumes that in conducting vehicular searches, police maximize the likelihood of successfully finding contraband while drivers minimize their risk of getting caught. If motorists believe that they are more or less likely to be searched, they are assumed to change their behavior accordingly (carry contraband less or more frequently).
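Under the KPT assumptions, this mutual adjustment pushes hit rates toward equality across groups, so unequal hit rates are the signal of interest. A hedged sketch of how that prediction can be checked, using a Pearson chi-square test of homogeneity on hypothetical search counts (not Connecticut data; the actual KPT estimation is more involved):

```python
def chi_square_hit_rates(groups):
    """Pearson chi-square statistic for the null hypothesis that
    hit rates are equal across groups.
    groups: dict mapping group name -> (hits, searches)."""
    total_hits = sum(h for h, n in groups.values())
    total_searches = sum(n for h, n in groups.values())
    p_overall = total_hits / total_searches  # pooled hit rate under the null
    stat = 0.0
    for hits, n in groups.values():
        expected_hits = n * p_overall
        expected_misses = n * (1 - p_overall)
        stat += (hits - expected_hits) ** 2 / expected_hits
        stat += ((n - hits) - expected_misses) ** 2 / expected_misses
    return stat

groups = {  # hypothetical search counts, not Connecticut data
    "white": (120, 400),
    "black": (90, 400),
    "hispanic": (85, 400),
}
stat = chi_square_hit_rates(groups)
```

A statistic above the chi-square critical value with k − 1 degrees of freedom (5.99 at the 5% level for three groups) indicates hit rates too unequal to attribute to chance alone; in this toy example the statistic exceeds that threshold.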

The KPT Hit Rate model is a more controversial test. Critics object that it does not reflect the full range of possible discriminatory behavior by police officers, such as the length of time drivers are subject to searches. In addition, researchers were unable to estimate the effect for many departments since the sample size was too small. More stops produce a more robust finding when running this analysis. As more traffic stop data is collected, it will be important to rerun this test. One year of traffic stop data did not produce large enough sample sizes for the majority of the departments in the state.

  1. The X and Y coordinates of each point represent the proportion of searches resulting in contraband being found (i.e. hits) by race/ethnicity subgroup for each department in the state.
  2. The dotted line expresses the hypothesis that hit rates across groups will be the same. Points above show the potential for racial bias. Points below show little to no evidence of racial bias.

For most departments, points cluster around the expected equilibrium (represented by the dotted line in the figure).

Five departments were identified as having a statistically significant disparity in the hit rate for minority groups relative to their nonminority counterparts (the rate at which minority drivers were unsuccessfully searched was higher when compared to Caucasian drivers).5

  • West Hartford has a disparity in the hit rate for Hispanic motorists, significant at the 99% confidence level
  • The disparity for State Police Troop I is driven by the hit rate for Black motorists
  • State Police Troops C and F and Waterbury have a statistically significant disparity in the hit rate across all demographic groups.6 The results in this analysis are significant at the 99% confidence level.

As previously mentioned, tests for taste-based discrimination are sensitive to the assumptions underlying the model. For example, in other states differences in the gender and race of the officer have led to different observable biases. In addition, biases that exist in the aggregate might not exist when examined at a lower level of detail. Moreover, bias patterns can change over time, so a bias observed in one period might not appear in another.

However, the KPT hit rate analysis provides good supporting evidence when viewed in conjunction with other tests, and as more data is collected, further refinements can be made to the model, improving the applicability of the test for determining policy actions.

The statistical evaluation of policing data is an important step towards developing a transparent dialogue between law enforcement and the public at large. The next step in the research by IMRP will be to examine whether individual officers are driving departmental disparities by looking at officer-level data.

The Collaborative’s mission is to advocate for open data, promote data standards, and make data accessible. We view accessibility as not only being a resource for data but also making the data understandable and relevant to the user. In this project we provide users access to the open data and will continue to provide quarterly updates as new data are released. By creating this data story we wanted to make the rigorous statistical analysis accessible to a broad audience. We’d love to hear your feedback and comments.

This report was written by the Institute for Municipal and Regional Policy at Central Connecticut State University with the help of the Connecticut Economic Resource Center, Inc. (CERC). The authors from CERC applied the statistical tests known as the "Veil of Darkness" and "KPT Hit Rate."