A great deal of attention has been paid to the problems of carceral injustice and the increasing use of AI for things such as predictive policing. Much of this research has revealed that these digital technologies serve to recreate economic disparities, racism, and other forms of social discrimination while removing the stain of human agency in pursuit of a flawed ideal of objectivity. Less attention has been paid to the use of these digital technologies in pre-employment background checks. This essay examines the use of AI and algorithmic data analysis and the ways these technologies and procedures create a caste of humans who are barred from employment and rendered economically invalid. In the final analysis, AI and algorithmic data analysis in the service of pre-employment background checks reproduce Foucault’s human monster in a contemporary form, a human monster that bears the stigmata of digital unpredictability.
More than 90 percent of all new hires are subjected to some type of background check prior to employment. These background checks search criminal history and records, including non-convictions, debt history, credit ratings, and other data that can offer a picture of the financial health of a potential new hire.1 The idea behind background checks is to ensure the safety of employees and, in the case of schools and hospitals, students and patients. While many states have laws that limit both the reach and use of background checks, the practice of investigating a potential employee’s background is now standard and widespread. In a short piece in the journal Academe, Ann D. Springer explains that universities might be looking for information that would indicate a potential hire’s “character, general reputation, personal characteristics or mode of living.”2 A university may deem it important to determine exactly what kind of person they are considering, and this may include that person’s “character.”3 While the point of Springer’s article is to reveal the potential dangers of background checks, she also pins down one of the main issues in performing such checks: “What if an employee commits a crime or breaks the law? An employer who knew of such past bad acts may be held responsible for failing to act on that knowledge, even if future actions were and are difficult to predict.” Liability can take many forms: the risk of theft posed by people with a criminal history of crimes against property or by people whose financial instability makes them a theft risk, and the physical danger posed by people with a history of violent offenses. Predicting such danger and liability has proven elusive, and companies have generally decided to err on the side of caution and refuse to hire anyone whose background check reveals something that could be seen as dangerous. But prediction is the key to understanding how background checks function in contemporary culture.
In a sociological study on criminality and recidivism, Devah Pager explains that “there are currently over 12 million ex-felons in the United States, representing roughly 8 percent of the working-age population.”4 If we add the number of people convicted of violent misdemeanors, crimes that are specifically flagged in background checks, the number of potentially unemployable people is staggering. Putting this into context with the ever-increasing use of background checks to screen potential hires, this population of ex-felons constitutes a caste of humans who are likely to be deemed unacceptable for hire by most employers. The background check presumably combs criminal histories and assembles this information in a report so employers can evaluate the risks of a potential new hire. Those deemed unacceptable are not considered beyond the initial screening.
Algorithmic data analysis and background checks are becoming more common. Uber now uses the background checking company Checkr to perform its checks. Checkr uses an algorithmic system to search and process information on potential drivers. The company claims that the use of algorithms and AI in background checks increases efficiency, reduces “friction” (which is to say the work and difficulty involved in obtaining information), and removes the problem of human error in evaluating background check information. Checkr offers a guide to how information can legally be used in any given state. It also performs international searches of things like global watchlists and criminal history abroad. Checkr provides a full screening of criminal history, education and employment verification, civil search, drug and health, and motor vehicle reports.5 According to Backgroundchecks.com, “Checkr’s system uses AI to perform more than a million background checks per month, including checks for new Uber drivers and annual repeat checks for existing drivers. The checks incorporate multiple searches, including county and national criminal history searches, sex offender registry checks, terrorist watchlist checks, driving record checks, and Social Security Number verifications.”6
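To make the shape of such a pipeline concrete, the sketch below imagines, in schematic Python, how a service of this kind might fan a single candidate out across several record searches and merge the results into one report. Every name and function here is a hypothetical illustration drawn from the list of searches quoted above, not Checkr’s actual system or API.

```python
from dataclasses import dataclass, field

@dataclass
class Report:
    """One consolidated dossier assembled from many separate searches."""
    candidate_id: str
    findings: dict = field(default_factory=dict)

# Hypothetical stand-ins for the kinds of searches named above; a real
# service would query court systems, registries, and data brokers here.
SEARCHES = {
    "county_criminal": lambda cid: [],        # county court records
    "national_criminal": lambda cid: [],      # national criminal databases
    "sex_offender_registry": lambda cid: [],  # registry check
    "watchlist": lambda cid: [],              # global/terrorist watchlists
    "driving_record": lambda cid: [],         # motor vehicle reports
    "ssn_verification": lambda cid: True,     # identity verification
}

def run_background_check(candidate_id: str) -> Report:
    """Fan one candidate out across every search and merge the results."""
    report = Report(candidate_id)
    for name, search in SEARCHES.items():
        report.findings[name] = search(candidate_id)
    return report

print(run_background_check("candidate-001"))
```

The structural point is that, whatever any single search returns, the candidate is reconstituted as one merged dossier before any human judgment enters the process.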
An article in Qrius, formerly The Indian Economist, offers a glowing portrait of the potential for using AI algorithmic data analysis in pre-employment background checks. While the article ostensibly touts the efficiency of AI and algorithms, it nevertheless alludes to the predictive capacities of these systems by explaining how AI and algorithmic data analysis make it possible for recruiters and employers “to improve their understanding of the risk of negative behavior patterns.”7 These systems can not only reveal potential risks, but also calculate the level of risk: “AI can help recruiters make more informed decisions by telling them what a candidate’s risk level is.” Algorithmic data analysis offers a way for employers to utilize the same technology that is used for algorithmic policing and the forms of risk analysis utilized by national security agencies. Louise Amoore and Rita Raley point out that it is sophisticated forms of algorithmic data analysis that guide military strategies regarding things like lethal drone strikes.8 Even as trade journals and sympathetic news media portray the use of algorithmic data analysis as a welcome and efficient technological improvement to pre-employment screening, a pre-employment background check service such as Checkr seems to take on darker implications.
With the use of algorithmic data analysis, Checkr can scour a person’s life for anything that could flag them as a potential danger or liability. What is more, algorithmic analysis eliminates the problem cited by Springer of the difficulty in predicting potential danger. In fact, algorithmic data analysis is designed to render everything predictable. The purpose of algorithmic data analysis is of course to make it possible to search and make sense of vast amounts of data that are beyond human comprehension, but the goal of this data analysis is to determine patterns and tendencies that can be harnessed and used. This is to say that the algorithm makes it possible to predict behavior based on previous behavior. Algorithms create what Tarleton Gillespie calls “cycles of anticipation” in which the vast data collection from sites like Google and Facebook makes it possible for businesses (and others) to “thoroughly know and predict their users.”9 It is the predictive capacity of algorithms that is important, and when algorithmic data analysis is brought to things like employment background checks, the process takes on a new dimension. The background check does not simply offer an account of past crimes, financial problems, and potential character defects that could be problematic; it makes it possible for employers to predict future problems in a potential employee. As Gillespie explains, algorithms and information systems produce “shadow bodies” by emphasizing some aspects of their subjects and overlooking others, and these shadow bodies persist and proliferate through information systems.9 The potential hire is both the person who currently exists and the shadow body the algorithm creates who may exist in the future, which is to say the sum total of who this person is can never be much more than who they have been in the past. What we are, and what we will be, is only ever what we were.
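A toy model makes this logic concrete. In the sketch below (my own illustration, assuming invented categories and weights, not any vendor’s actual scoring model), the risk score is a pure function of past records: no present fact about the person can enter the calculation, so the predicted future can only ever be a weighted echo of the past.

```python
from datetime import date

# Hypothetical weights assigning each category of past event a "risk" value.
EVENT_WEIGHTS = {
    "felony_conviction": 1.0,
    "violent_misdemeanor": 0.8,
    "eviction": 0.4,
    "credit_default": 0.3,
}

def risk_score(past_events, half_life_years=7.0):
    """Score a candidate using only their past.

    Each past event contributes its category weight, decayed by age, so
    the "prediction" is nothing but a recency-weighted sum of history.
    """
    today = date.today()
    score = 0.0
    for category, event_date in past_events:
        age_in_years = (today - event_date).days / 365.25
        score += EVENT_WEIGHTS.get(category, 0.0) * 0.5 ** (age_in_years / half_life_years)
    return score

# The "shadow body": two data points stand in for an entire life.
print(risk_score([("credit_default", date(2015, 3, 1)),
                  ("eviction", date(2019, 8, 15))]))
```

Note what the function cannot do: nothing about the candidate’s present life can lower the score, because nothing about the present is an input.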
Jackie Wang explores the problem of predicting criminal activity in the digital age to demonstrate that algorithmic and cyber technology has succeeded in re-framing the racism inherent in legal apparatuses and dispersing this racist violence so as to obfuscate both its presence and its agents. Wang explains that practices like predictive analysis, made possible by advanced forms of data mining and algorithmic analysis, are being used to predict crimes before they happen. But Wang points out that these practices “are much more about constructing the future through the present management of subjects categorized as threat risks.”10 In constructing the future, AI and algorithmic analysis reconstruct the inherent inequities and racist tendencies that presently exist. Far from objective and disinterested computerized tools, these systems re-inscribe forms of racial and economic exclusion as they exist in the present into a form of the future that is nothing more than a projection of the past. What is more, “as these technologies of control are perfected, carcerality will bleed into society. In this case the distinction between inside and outside the prison will become blurrier.”11 The pre-employment background check that utilizes AI data analysis exists within this blurry region in which carcerality has begun to diffuse across society. AI and algorithms make it possible to search vast amounts of data for anything that could potentially serve as a red flag for liability and danger. The predictive capabilities believed to inhere in algorithms and data analysis all rely on the capacity of these systems to use relevant data to make these connections and predictions.
There is a vast, seemingly infinite sea of data available for AI and algorithms to make use of, but the data that matter for predictive policing and the data that matter for background checks are of a specific order. The data that these processing systems flag, process, and order are those bits of information that would indicate some type of danger, whether it be danger in the form of financial liability or danger in the form of a potential threat to life and body. Checkr, and virtually all background services, search specific records like court records. But with algorithmic analysis, they are able to comb anything on the web that might indicate the potential for danger. These processes are in excess of any individual agent since they operate in the realm of digital analysis and numbers. Once programmed and set in motion, it is the algorithm that determines threats and risks, not individuals or living human beings. As Manuel Abreu says, “algorithms escape the laws of cause and effect and operate in a fluid state of exception, encompassing the financial sector, the military-security nexus, and the entertainment industry.”12 Still, following Abreu’s analysis, the state of exception of the algorithm works with forms of data that are rigorously defined. While it is true that the algorithms generate profiles of people that present “reasonable suspicion,” the data that signal reasonable suspicion are of a specific kind. As Wang shows us, this specific kind of data may have more to do with the racisms engrained in society than with any kind of “objective” symptom of threat. These same data also operate as symptoms of other kinds of threats that are neither dangerous nor even criminal. In the case of background checks, the data flagged as symptomatic may be behavioral patterns that mark someone as a bad financial risk. Someone with a bad credit history, for example, could potentially be a liability to an employer. In all, the data seized upon by algorithms in the service of background checks and policing are symptomatic of a potential problem that is increasingly being understood as a present problem. There can be no future problem since the future is decided ahead of time as a mathematical probability based on the past.
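This selectivity, in which only pre-designated categories of data register at all, can be sketched as a simple filter-and-threshold routine. The categories and threshold below are hypothetical illustrations of the logic, not any company’s actual rules.

```python
# Only these pre-programmed categories are legible as "symptoms";
# every other fact about a life simply does not register.
SYMPTOMATIC = {"conviction", "non_conviction_arrest", "eviction",
               "credit_default", "watchlist_hit"}

FLAG_THRESHOLD = 1  # a single symptom is enough to flag the dossier

def screen(records):
    """Reduce a life's records to flagged symptoms, then to a verdict.

    The "future problem" is collapsed into a present one: crossing the
    threshold today is treated as the risk itself, decided in advance.
    """
    symptoms = [r for r in records if r["category"] in SYMPTOMATIC]
    verdict = "flagged" if len(symptoms) >= FLAG_THRESHOLD else "clear"
    return verdict, symptoms

verdict, evidence = screen([
    {"category": "employment_history", "detail": "10 years, no gaps"},
    {"category": "credit_default", "detail": "medical bill, 2016"},
])
print(verdict)  # prints "flagged": the employment history never registered
```

Everything outside the pre-programmed categories, a decade of steady employment included, simply never registers; the one flagged “symptom” decides the verdict in the present tense.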
All of these techniques amount to Deleuze’s technologies of the society of control, in which algorithmic prediction “substitutes for the individual or numerical body the code of a ‘dividual’ material to be controlled.”13 The dividual is the atomized and coded image of the individual, who no longer matters as algorithmic analysis steps in to provide something more valid than the word of a human. The algorithmic rendering of who one is can be made more predictable based on specific data points that mark the dividual as a liability or a threat. One’s suitability for employment is determined by how one has adjusted one’s life to a societal system that demands that we “behave” according to economic and social standards that are determined by the financial sector more than any older system of morality and ethics.
This morality and ethics, and the endless reconstitution of the past as a mathematically rendered present lacking a future, is bound up with the technologies of control that come out of the debt relation threaded through contemporary life. Maurizio Lazzarato makes it clear that the debt relation underpins contemporary life since to engage the world means to engage economically, and to engage economically means to enter into the debt relation. As Lazzarato explains, “(d)ebt… is the economic and subjective engine of the modern-day economy,” and this means we are all captured in this system as soon as we enter into the world of money and employment. (Lazzarato, The Making of the Indebted Man, p. 25.) However, entering into the debt relation is not simply a matter of economics separate from domains like character, liability, and criminality. Debt is a technology of control specific to societies of control. The debt relation makes possible “specific relations of power that entail specific forms of production and control of subjectivity.”14 The purpose of the employment background check is to assess and evaluate precisely this “subjectivity,” a subjectivity that must adhere to standards of behavior and productivity that do not emit signs of risk and liability. Within this system, there is no central agency of discipline or control. Rather, the debt relation initiates a technology of control in which the “debtor is ‘free,’ but his actions, his behavior are confined to the limits defined by the debt he has entered into.”15 In technologies of control, “you are free insofar as you assume the way of life (consumption, work, public spending, taxes, etc.) compatible with reimbursement.”15 The practices of daily life (consumption, work, public spending, taxes) are precisely the forms of data analyzed and assessed by algorithms and background checks, along with, of course, criminal history. What background checks analyze and assess is the measure of one’s validity, and to fail to measure up as valid necessarily renders one invalid.
To the 12 million ex-felons in the United States, we can add the vast number of people who are increasingly determined to be invalid for failing to measure up to the standards of behavior inscribed by an economic system that prescribes our conduct. The background check and the algorithms that are increasingly a part of the process are not evaluating just any data points. These systems are written and programmed to hit upon and process very specific bits of information. These are signs that can be interpreted as symptoms. To be rendered invalid, to be assessed and determined to be a criminal or financial threat, demands that the digital systems know which signs to read as symptoms and which to reject. Wang harkens back to the older paradigm of discipline in relation to digital technologies of control as she points out that “(i)f Jeremy Bentham’s eighteenth-century design of the ‘panopticon’ is the architectural embodiment of Michel Foucault’s conception of disciplinary power, then algorithmic policing represents the inscription of disciplinary power across the entire terrain that is being policed.”16 This apparent return to Foucault raises the specter of other features of older paradigms buried in the contemporary societies of control. What are these data points, these signs that serve as symptoms, if not the same forms of symptomatology of monstrosity that once gave rise to Foucault’s ideas of the abnormal? While algorithmic policing obfuscates rather than supplants the old mechanisms of discipline, the background check, and the ways algorithmic analysis has come to influence it, re-orients the old ideas of the abnormal and the human monster.
Foucault explains that the medicalization—the psychiatrization—of crime as it is expressed through mental illness demanded a sign that marked the criminal, the abnormal, as legible. The emergent discourse of psychiatry, criminology, and social hygiene demanded some kind of visible sign that could be read and understood as the mark of an abnormal, criminal type who is disposed to commit crimes. What was required was something that made it possible for psychiatrists to identify and properly isolate the abnormal and the criminal, something that stands out over and above the process of abnormal behavior: “What psychiatrists look for in order to demonstrate that they are dealing with someone who can be psychiatrized,… what they need, is not a process, but a permanent stigmata that brand the individual structurally.”17 The practitioners of social hygiene demanded stigmata, a brand of criminality, of monstrosity, of abnormality that would always render visible and legible not just the criminal potential of the abnormal, but the physical sign of this criminal potential. These lines of thought led, of course, to nonsense like phrenology, but they also led to an entire scientific taxonomy of criminality that sought to locate features of criminal behavior in criminals themselves, features that could allow scientists and law enforcement to predict criminal behavior before it occurs, and it is precisely these processes of prediction that are now thought to be made possible by background checks and algorithmic analysis. The problem of social hygiene is recast as behavioral determinations that adhere to the demands of the debt relation as a measure of one’s validity as a subject and a viable agent in modern life. The data points registered and flagged by the algorithm and the background check operate as the stigmata of abnormality far more efficiently and permanently than the old psychiatric methods. What is more, we have dispensed with the psychiatrist, or any other authority, as the digital mechanisms remove all human agency from the process of invalidation.
Algorithmic analysis and background checks operate as forms of social antiseptic that maintain the purity of the closed system of organizational integrity. Businesses can dispense with liability, both economic and criminal, by utilizing a digital system that reads the stigmata of danger before it can enter the digital system that stands in for the organizational structure inhabited by living bodies. And they can do this without ever having to dirty themselves by examining pieces of information in criminal records, credit checks, and personality indexes. The algorithm performs these functions, and it is the algorithm that makes the determination of invalidity based on the assigned stigmata. The uncertainty that the criminal and economic invalid could introduce into the sterile integrity of the organization is neutralized. A criminal infraction, a default on a credit card or medical bill, an eviction—all these transgressions become stigmata of invalidity. There can be no risk of future liability or unpredictability since these stigmata render the invalid the present “being” of the past, and this stops the possibility of a future in its tracks. By reading the stigmata of past behavior, the algorithm makes it possible to create an image of a present that forecloses the possibility of a future. These systems provide “the mastery of uncertainty” that “proceeds by way of the representation and memory storage of the past,” as Tiqqun explains.18 The algorithmically enhanced background check “aims at making living beings into mechanics, at mastering, programming, determining humans and their life, society and its ‘future.’”19 These processes form part of the cybernetic hypothesis in which the problem is “no longer forecasting the future but reproducing the present.”20 The use of algorithms and background checks ferrets out the “risky dividual” toward a “balanced society.”21 The algorithms and background checks, the cybernetic mechanisms, read the stigmata of risk and allow the neutral system to operate as the firewall against risk. Those humans, those living bodies who are now digital renderings in the form of data points that signal the stigmata of invalidity, remain locked, captured, forever as invalid. Perhaps it makes more sense to use the old terms. These living human bodies that must nevertheless live as the abnormals are the same human monsters. They are the same forms of madness and degeneration that operate as the “bearer of a number of dangers.” They are the same degenerates that operate as the “bearer of risks.”22 They are the same monsters who, since they pose the threats of uncertainty to a cybernetic system of absolute balance, emerge from nature but against nature. Only now they are the isolated dividuals captured within societies of control, whom the algorithm has determined to be unpredictable invalids who resist predictability. Bearing the digital stigmata of monstrosity that signal their impending disturbance, these monsters can be safely monitored, tracked, and returned to disciplinary restraint as is economically expedient.
It is the digital realm that finds the invalid intolerable, because the invalid present a kind of unpredictability that digital systems cannot abide. While companies, organizations, and universities advertise the justification that the background check is in the interest of safety, it is in fact the intolerable danger of the unpredictable that must be ferreted out by the background check. The primary reason for adding algorithmic technology to background checks can only be the elimination of unpredictability; otherwise a simple rap sheet would suffice. The reason a simple rap sheet is insufficient is that a human being must look at a list of past offences and make a judgment call as to the likelihood of future danger, and this would only compound the levels of unpredictability with the addition of a secondary human consciousness. Above all else, the system must control, neutralize, and lock out any threats to absolute predictability. Thus, we have a caste of people who are determined to be invalid by a system that is no longer bound by human consciousness. Since no human makes this determination, the status of invalidity is the fault of the invalid, who have only themselves to blame for their behavior, be it bad credit or a felony conviction. In the final analysis, we are left with a caste of untouchables who will forever remain economically externalized, in that they are forbidden entry into economic viability, yet completely captured and internalized, since they are digitally quantified and categorized. Their status as invalid is dependent on a detailed record of their failures and transgressions. It is the invalid who have taken over as the abnormal, the moral degenerates, and the human monsters.
Just as the process of abnormalizing individuals who resisted the discipline of the factory served to eliminate what could not be disciplined, so the process of invalidating the contemporary abnormal serves to isolate and remove individuals who represent forms of unpredictability within the totalizing techniques of societies of control. There is no need to place these kinds of individuals in spaces of discipline and confinement since the mechanisms that isolate them are the very same mechanisms of current financial valorization. The forms of “biocapitalism” described by Marazzi, which capture value in social processes themselves, are also the mechanisms of capture that remove the invalid from the closed cybernetic system of financialization and the creation of indebted man.23 The technology that creates algorithms to drive financial surplus value in the form of the debt relation is also the technology that creates algorithms to remove threats to absolute predictability within global capital, where there is no room for forms of flawed human judgment that may allow contagions that produce risk. The system works because it removes all risk, and the population of invalids, abnormals, and monsters demonstrates that the system works. It is flawless, antiseptic, objective, and removed from the stain of human unpredictability from start to finish. Above all, AI and algorithmic data analysis provide absolute visibility and predictability for digital systems that cannot tolerate the unpredictable. Just as, during the seventeenth century and the time of the “Great Confinement,” “madness threatens modern man… with the return of the bleak world of beasts and things, to their unfettered freedom,” so the contemporary technologies of control isolate and capture the bleak world of human fallibility and neutralize the threat of its return to errant weakness.24
- “Background Checks and AI.” It is useful to examine industry-generated sources on background checks since these are the precise sources used as companies and other organizations design and implement background check processes and policies.
- Springer, Ann D. “Legal Watch: Background Checks: When the Past Isn’t Past.” Academe, vol. 89, no. 2, 2003, p. 110.
- In 2009, The University of Akron implemented a policy that would require potential new hires to submit to a DNA screening as a condition of employment. This came just before passage of the federal Genetic Information Nondiscrimination Act (GINA). An essay in The Hastings Center Report details the ethical and legal ramifications of such a policy. While analysis of DNA screening raises obvious questions about the ethics of these kinds of privacy intrusions, the use of AI and algorithms can be performed using information outside the control and even knowledge of those being screened. See Callier, Shawneequa L., John Huss, and Eric T. Juengst. “GINA and Preemployment Criminal Background Checks.” The Hastings Center Report 40, no. 1 (2010): 15–19.
- Pager, Devah. “The Mark of a Criminal Record.” American Journal of Sociology, vol. 108, no. 5, 2003, pp. 937–75.
- https://checkr.com/background-check
- Backgroundchecks.com.
- Qrius (formerly The Indian Economist). “How Artificial Intelligence is Improving Background Checks.” November 11, 2020. Accessed August 18, 2022.
- Amoore, Louise, and Rita Raley. “Securing with Algorithms: Knowledge, Decision, Sovereignty.” Security Dialogue 48, no. 1 (2017): 4.
- Gillespie. “The Relevance of Algorithms.”
- Wang, Jackie. Carceral Capitalism. p. 47.
- Wang. 39-40.
- Abreu. “Incalculable Loss.”
- Deleuze, Gilles. “Postscript on the Societies of Control.”
- Lazzarato. p. 30.
- Lazzarato. p. 31.
- Wang. p. 243.
- Foucault, Michel. Abnormal. p. 297.
- The Cybernetic Hypothesis. p. 40.
- The Cybernetic Hypothesis. p. 41.
- The Cybernetic Hypothesis. p. 56.
- The Cybernetic Hypothesis. p. 75.
- Foucault. p. 118.
- Wang, Jackie. Carceral Capitalism. South Pasadena: Semiotext(e) Intervention Series 21, 2018.
- Foucault, Michel. Madness and Civilization. p. 83.
This content originally appeared on Dissident Voice and was authored by Mike Templeton.