By Emnet Almedom, Nandita Sampath, Joanne Ma
In this report, we examine the use of algorithm-based risk assessments in the U.S. child welfare system, particularly through the example of the Allegheny Family Screening Tool (AFST). First, we conducted a literature review on the history of the child welfare system to uncover the system’s complexities and the values at play at various turning points in history. We then reviewed the history of both analog and automated risk assessment tools in child welfare in order to engage with the new questions raised when delegating (partial) decision-making to a machine learning technology. This analysis revealed to us the systemic loss of privacy for poor families of color interfacing with government-run programs, especially when juxtaposed with the rights afforded to wealthy, white families. In the short term, we recommend that child risk assessments be subject to greater regulatory standards, that the incentive structure for designing such tools be investigated, and that tool designers acknowledge their duty to leverage their power to protect the most vulnerable subjects of their creations. Ultimately, massive structural reform, such as a shifting of power to the families most impacted and a society that meets material needs rather than punishes, is needed in order to address the root causes of child maltreatment and abuse.
Child Protective Services (CPS) is a United States government agency that is in charge of investigating and assessing reports of maltreatment in many states and cities, and also intervening to ensure that children are protected from further maltreatment. Generally, a mandated reporter (someone who, because of their profession, is legally required to report any suspicion of child abuse or neglect) will alert the local CPS agency of a potential situation. From there, a caseworker will take on the investigation of the family situation. Because of data accumulated over time, routine processes have been established in many areas of child protection. However, because cases vary widely and involve many interacting factors, even the most skilled caseworker may not be able to correctly diagnose a situation. There are often false positives (a threat to a child is identified, but the child has not actually been maltreated) and false negatives (no threat is identified, but the child is indeed in danger). Many agencies have adopted assessment tools to assist with detecting harm to a child.
Because many counties are often overflowing with calls about potential child neglect, some have implemented predictive-risk modeling algorithms for child safety with the intention of assessing more cases quickly. In this article, we will primarily be discussing the Allegheny Family Screening Tool (AFST) because of the availability of information regarding this tool, and because Allegheny County was the first jurisdiction to use an automated assessment to assist with child safety risk screenings, starting in 2016. Since then, several U.S. child welfare agencies have begun using algorithm-based screening tools, such as Eckerd Connects’ Rapid Safety Feedback tool used in Illinois, Connecticut, Louisiana, Maine, Oklahoma, and Tennessee. In this paper, we do not review each tool relative to the other, as there are many common features and implications for use regardless of the type of tool. However, it is worth noting that the AFST is owned and operated by the county, while Eckerd Connects’ tool is privately owned and contracted out to government agencies. This difference in governance has huge implications for transparency and accountability: Illinois’ child welfare agency terminated its contract with Eckerd Connects due to the tool’s failure to predict child fatalities and the private company’s failure to disclose details on the inner workings of the algorithm. Conversely, the AFST is drawing interest from counties in Colorado and from California’s Department of Social Services as a role model for similar uses of predictive analytics. Given these developments and the fact that one in three children in the United States is the subject of a child welfare investigation by age 18, we believe it is clear that the use of algorithms in child protective services is a growing trend to watch. However, an algorithm is only as good as the data that is fed into it, and algorithmic risk assessments pose some serious dilemmas regarding the biases in the decisions they make.
In the coming sections we will discuss how this algorithm has been implemented into caseworkers’ workflow as well as the criticisms of its usage. We will then offer policy recommendations regarding the use of the algorithm.
History of U.S. Child Welfare System
Since the algorithm-based tool attempts to quantify the risk of child abuse and then trigger a human-based system response (i.e. an investigation), the context of how the U.S. child welfare system has historically developed becomes relevant. No tool, regardless of the perceived degree of separation from human judgment, can operate in a vacuum. Thus, we begin our analysis with a broad look at the child welfare system and its history, rather than simply the relationship between the tool’s designer and its subjects. Our main objective is to provide background on the child welfare system’s main actors, history, past reforms, and the values embedded throughout. Through this context, we can better understand this era’s latest turning point, which shifts some decision-making power from caseworkers to technology.
Early responses to child poverty included jailing children for vagrancy or warehousing them in poorhouses . It was not until the 19th century that the first child welfare organizations in the United States were created. Institution-based child welfare has its roots in charities such as New York’s Children’s Aid Society, which was founded in 1853 by Charles Loring Brace and still exists today. Under the argument that it would be more humane to place children in homes than in jail, Brace created the concept of “Orphan Trains” in which groups of children were sent to the frontier regions of the West away from cities.
The roots of Brace’s argument become clearer when accompanied by the fact that he referred to these children as “the dangerous classes.” In his 1872 essay “The Life of the Street Rats,” Brace wrote: “These boys and girls, it should be remembered, will soon form the great lower class of our city. They will influence elections; they may shape the policy of the city; they will assuredly, if unreclaimed, poison society all around them.” This idea of removing children from working-class families was not implemented to address the poverty endangering destitute children; rather, it implicated their parents and created an alternative in “out-of-home placement.” Brace’s model created the foundation for modern-day foster care, a system that displaces children and separates families at alarming rates. This shift marked a turning point in which the act of child removal transitioned from imprisonment to out-of-home placement. The transition perhaps improved the physical conditions of child removal, but it ignored the material conditions of the families subject to the child welfare system.
The American conceptualization of child protective services has specific ties to chattel slavery and colonialism that cannot be ignored. During the bondage of Africans under chattel slavery, families were routinely broken up and members sold as commodities, while Black women had their reproductive capacities reduced to reproducing new free labor for slave owners. After Emancipation in the 1860s, Black children were “apprenticed” for cheap labor, one of the many ways that subjugation endured beyond legalized enslavement. Since then, the penal system and the child welfare system have been co-designed to be remarkably similar: both institutionalize Black families at rates disproportionate to the total population. In the case of Indigenous families, the child welfare system has served as one extension of the state’s genocidal intent to destroy language, culture, and society since the beginning of settler colonialism in North America. Specifically, starting in the late 1870s, the Bureau of Indian Affairs (BIA) created nearly 100 boarding schools in which Native American children lived completely immersed in white American culture. In the 1960s, in partnership with the Child Welfare League of America (CWLA), the BIA expanded its child removal efforts to adoption; 85 percent of these were adoptions by non-Native couples. The impacts of these roots are observable in the disproportionate number of Black and Indigenous children in the current child welfare system, particularly in foster care. In Minnesota, Indigenous children are represented at 10 times their percentage of the child population, and in Alaska at more than three times.
The Welfare System Today
Today’s child welfare system is a complex network of public and private entities, inclusive of government agencies, non-profit organizations, private foster homes, group homes, treatment facilities, schools, and law enforcement. As evidenced by the historical roots of child welfare, child protection services are predicated on the control of poor, minority, and (im)migrant families. In Wisconsin in 2016, a child living in a home with less than $15,000 in household income was six times as likely to be involved with the system as a child from a home with a higher household income. In 2011, a yearlong investigation by the Applied Research Center found that more than 5,000 children of undocumented parents were remanded into foster care when their parents were detained for deportation. Much remains unknown about the children impacted by the increase in family separations along the U.S.-Mexico border in 2017, but these numbers can only be expected to have grown. These disparities appear in national child welfare statistics, as seen below.
Disproportionality in Child Welfare Involvement by Race/Ethnicity, 2014 Census Data

| Race/Ethnicity | Percent of total child population | Percent of children identified by CPS as victims of neglect or abuse | Percent of children in foster care |
| --- | --- | --- | --- |
| American Indian/Alaska Native | 0.9% | 1.3% | 2.4% |
| Black or African-American | 13.8% | 22.6% | 24.3% |
| Native Hawaiian/Other Pacific Islander | 0.2% | 0.2% | 0.2% |
| Hispanic (of any race) | 24.4% | 24.0% | 22.5% |
| Two or More Races | 4.1% | 4.7% | 6.8% |
Complexities embedded in the U.S. child welfare system
Building on this understanding, we identify specific complexities within the child welfare system and illustrate the dangers of system stakeholders relying on data that mask the system’s complexities. We specifically address the broad definition of child welfare, the interaction between child welfare and policing, the lack of discretion and autonomy for caseworkers and parents, and the loss of privacy for poor families.
Broadened definition of child welfare: In the early 1970s, there was increasing public awareness of child abuse, and the Child Abuse Prevention and Treatment Act (CAPTA) was one result. CAPTA lumped child abuse and neglect into an umbrella category of child maltreatment. In the process, CAPTA obscured the relationship of race and poverty to allegations of neglect, which account for the vast majority of state interventions into families. CAPTA sets up a “treatment model” for child maltreatment, which essentially conflates abuse and neglect.
Interaction between child welfare and policing: Police produce about one-fifth of all reports of child abuse and neglect investigated by local child welfare agencies. Low-level interactions with police often result in the initiation of a child welfare investigation. Because police contact is not randomly or equitably distributed across populations, policing likely has spillover consequences for racial inequities in child welfare outcomes.
Lack of discretion and alternatives for caseworkers: Caseworkers are taught to identify “risk factors,” including past family conflict and reliance on public services, and to locate those threats within the only person over whom they have power: the parent. For instance, Lash recounts a 2013 case of suspected child abuse involving a homeless mother and baby who were living doubled-up with a friend in New York City public housing (NYCHA). A NYCHA home investigation led to the baby being examined at an emergency room. After the child was found to be healthy and well cared for, the caseworkers hesitated to close the case due to other “risk factors” that showed up in the system’s many years of data on the mother: she had previously been a victim of domestic violence, she had only one relative in the city, and she had been homeless before. Lash notes that caseworkers know that many of the root causes of neglect and even abuse, which show up as risk factors in the child welfare paradigm, are outside of the parent’s control. However, the existing system and caseworker training offer few alternatives other than opening an investigation, providing parent-specific “in-home” interventions, or transitioning the child to “out-of-home care.” This has long been a critique of the child welfare system, even before algorithmic tools came into play.
Limited legal protection for parents and children: It is fundamental to understand the child welfare system as parallel to the legal system for both juveniles and adults. For the purpose of our analysis, we will focus on the legal rights afforded in child welfare proceedings. In 32 states, both children and parents have the categorical right to representation during proceedings. In the remaining 19 states, access to legal representation for children and parents is at the discretion of the court. However, this representation is often inadequate in comparison to the resources of the child welfare system. For instance, in 2000, Washington State was spending three times more on lawyers representing its child welfare system than on lawyers representing parents fighting that system.
Compliance as the key metric of progress: If a court requires a specific intervention, parents are then judged primarily on their compliance, even if the required services (e.g., a parenting class) have no connection to the most urgent needs (e.g., childcare, housing) that led to the report of maltreatment. Caseworkers focus on compliance with court-appointed interventions because it is often the only thing they can measure. There is a lack of autonomy embedded in the system response. This translates into limited metrics of progress (e.g., parenting class attendance, physical presence in court), which are then easily converted into data points in a metric-based tool, without addressing any of the underlying challenges families face.
Loss of privacy for poor families: Underlying this limited legal protection and these few options to customize care is a more overarching systemic issue on which we center our analysis going forward: a historical loss of privacy rights for poor families. Reproductive justice scholar Khiara Bridges has researched this phenomenon through the experience of pregnant mothers seeking prenatal care through state-provided Medicaid. In her ethnographic fieldwork, she observes caseworkers ask patients questions ranging from past sexual abuse to intimate partner violence, what they ate, how they make their money, and how their partner makes their money. In the process of accessing services to which they are entitled, poor families become “public families” and lose the privacy rights afforded to their wealthy counterparts. Bridges theorizes that poor families lose their privacy because their reliance on public assistance is thought to signify a “moral laxity” that makes “mistreatment and exploitation of [their] children sufficiently probable.” In this analysis, Bridges reveals a linkage to child protective systems: probability of harm is used to justify inquiry into a parent’s choices during any interaction with public systems, from public insurance to drug treatment. Again, from this complexity in the broader system, we identify a source of data for the algorithmic tool. By making disclosure a requirement for assistance, the state has collected exorbitant amounts of data and found them to correlate with outcomes of interest. The ability to identify and code data that makes maltreatment “sufficiently probable” incentivizes a self-reinforcing cycle of surveillance.
We review this history and present context in order to better understand the impact that algorithmic tools have on the present and future of the child welfare system. Our goal is to approach this question under a post-colonial framework, one which, in the words of Lilly Irani and her co-authors, recognizes that “all design research and practice is culturally located and power laden.” In our review of the history, we see myriad power imbalances: between the ruling class and the working class, between parents and the system, and between caseworkers and the system. The future of the child welfare system is unknown, but we can consider ongoing reforms for guidance. In 2013, the Journal of Indigenous Social Development pointed to the importance of decolonizing social work in order to recover from the reality that, just under 100 years ago, European colonies and former colonies encompassed 84 percent of the land in the world. This decolonization process will require a return to community self-determination within, or as an alternative to, colonial systems. Our question is this: in what ways is algorithmic risk assessment laden with the complexities of the status quo, rather than supporting new values in the process of decolonizing systems like child protective services? In our historical analysis, we identify a reliance on immediately available and easily measurable data to reach decisions. Could putting effort into optimizing risk scores and mathematical definitions of fairness distract from other goals? Or could this work further uncover the system’s complexities?
With this grounding in the roots, complexities, and current realities of the American child welfare system, we can now move into examining the tools that are used within the system — from analog risk assessments to today’s automated decision systems.
History of Risk Assessments in the United States
There are several kinds of assessments that agencies in charge of child well-being use in order to assess the risk a child might be experiencing. For example, the variety of assessments include identifying “dysfunctional parent-child systems,” determining “threat of immediate harm and to identify steps needed to protect children,” and even looking at the “potential influences of substance use and substance use disorders for risks of maltreatment.” Generally, the assessments we found are done by hand, usually by the caseworker (and sometimes by parents or involved family members), and consist of standard yes/no questions where a “yes” counts as one and a “no” as zero. The final risk score is simply the sum of the “yes” answers. There is generally a threshold for what constitutes high versus medium versus low risk based on these simple answers.
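Mechanically, such an analog checklist reduces to a sum and a pair of cutoffs. The sketch below illustrates this; the question names, cutoff values, and answers are our invented examples, not those of any agency’s actual instrument:

```python
def risk_score(answers):
    """Each 'yes' counts as 1, each 'no' as 0; the score is the sum."""
    return sum(1 for yes in answers.values() if yes)

def risk_level(score, medium_cutoff=2, high_cutoff=4):
    """Map the raw sum onto low/medium/high bands (cutoffs are invented)."""
    if score >= high_cutoff:
        return "high"
    if score >= medium_cutoff:
        return "medium"
    return "low"

# Hypothetical caseworker answers for one referral:
answers = {
    "prior_report_of_maltreatment": True,
    "caregiver_substance_use": False,
    "domestic_violence_in_home": True,
    "child_under_five": False,
}

print(risk_score(answers), risk_level(risk_score(answers)))  # 2 medium
```

The simplicity is the point: every question is weighted equally, so the instrument encodes its designers’ judgments entirely in which questions appear and where the cutoffs sit.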
It is important to note that all assessments contain different questions, assess different factors, and set different cutoffs for what constitutes a high likelihood that a child is being abused and/or neglected. Essentially, there is no foolproof method for correctly diagnosing a situation. Because of this lack of agreement, and because each situation is nuanced, these assessments are only one part of what goes into the diagnosis of each case. A holistic approach is also essential, involving data collected on the family by the local government, the caseworker’s interaction with the family (although in most cases, the caseworker does not interact with the family at all), and the caseworker’s intuition about the situation. Therefore, while the handwritten assessment is part of the process, it is certainly not the only part and can be overridden by the caseworker’s judgment.
Current Child-Safety Risk Algorithms and How They Work
In 2014, the Department of Human Services (DHS) in Allegheny County, Pennsylvania began soliciting proposals to “better use data already available to us to improve decision-making through predictive-risk modeling.” Researchers from Auckland University were awarded this contract, and began looking at data through the county’s Data Warehouse, which is the central repository of social and human services data related to DHS clients. The data includes “service information received through DHS as well as many other publicly funded entities including the local housing authorities, the criminal justice system, and local school districts with which there are data-sharing agreements.”  Here is what the county says about the tool:
“The final product was named the Allegheny Family Screening Tool, and it uses information already contained in our data systems to inform call-screening decisions when allegations of maltreatment are received. A Family Screening Score is calculated by integrating and analyzing hundreds of data elements on each person added to the referral to generate an overall Family Screening Score. The score predicts the long-term likelihood of re-referral, if the referral is screened out without an investigation, or home removal, if the referral is screened in for investigation.
The higher the score, the greater the chance for a future event (e.g. abuse, placement, re-referral), according to the algorithm. If the Family Screening Score is at the highest levels, meeting the threshold for ‘mandatory screen in’, the call must be investigated. In all other circumstances, however, the Score provides additional information to assist in the call-screening decision-making process. It does not replace clinical judgment. The Family Screening Score is only intended to inform call-screening decisions and is not used to make investigative or other child welfare decisions.” 
Using publicly available data like court records, social media, and information from the Data Warehouse, the screeners can then run the model. There are 131 indicators available in the county data that are correlated with child maltreatment. The model outputs a score from 1 (lowest risk) to 20 (highest risk) by weighing “predictive variables” like “receiving county health or mental health treatment; being reported for drug or alcohol abuse; accessing supplemental nutrition assistance program benefits, cash welfare assistance, or Supplemental Security Income; living in a poor neighborhood; or interacting with the juvenile probation system.” Above a certain threshold, an investigation is automatically triggered.
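The screening logic described above can be sketched as a threshold rule layered over the 1–20 score. The cutoff value and the way the score combines with screener judgment below are our illustrative assumptions; the county’s actual implementation is not public in this form:

```python
SCORE_MIN, SCORE_MAX = 1, 20   # AFST scores run from 1 (lowest) to 20 (highest risk)
MANDATORY_SCREEN_IN = 18       # hypothetical 'mandatory screen-in' cutoff

def screening_decision(score, screener_says_investigate):
    """Combine the model score with the call screener's judgment.

    Scores at or above the mandatory threshold force an investigation;
    otherwise the score merely informs the screener's own decision.
    """
    if not SCORE_MIN <= score <= SCORE_MAX:
        raise ValueError("score must be between 1 and 20")
    if score >= MANDATORY_SCREEN_IN:
        return "screen in (mandatory)"
    return "screen in" if screener_says_investigate else "screen out"
```

For example, `screening_decision(19, False)` would return `"screen in (mandatory)"`: above the cutoff, the screener’s contrary judgment no longer matters, which is precisely the discretion-narrowing dynamic discussed below.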
Much of the criticism of the algorithm holds that it does not actually model child maltreatment, but instead models community and family court decisions. For example, the algorithm uses proxy variables to stand in for maltreatment, including re-referral (when a call to the hotline about a child was initially screened out but another call is made regarding the same child within two years) and child placement (when a call about a child results in the child being placed in foster care within two years). The model also does not take into account the fact that community members tend to call in about children from Black families at a much higher rate. And many of the variables used to predict neglect and abuse concern whether a family has taken advantage of public services like food stamps, county medical assistance, and Supplemental Security Income. This means that poorer families are penalized more harshly than families who may be getting similar assistance with finances or health through private means.
Thankfully, the algorithm is currently being used alongside the judgment of a call screener. However, according to The New York Times, “call screeners and their supervisors will now be given less discretion to override the tool’s recommendations — to screen in the lowest-risk cases and screen out the highest-risk cases, based on their professional judgment.” It is concerning, and perhaps telling, that these tools are intended to eventually phase out call screeners entirely. Importantly, though, according to the AFST website, “more than one-third of children classified as highest risk by the AFST were screened out by the intake manager.” One would expect the extreme cases, those classified by the AFST as highest or lowest risk, to be the ones where the algorithm’s decision comes closest to a human judge’s, compared with more nebulous cases in the middle; yet even here, one-third of highest-risk classifications were negated by the call screeners. So long as this tool remains in use, it is imperative that one or more human professionals give a second opinion, independently of the tool, and override the algorithm’s decision if necessary.
The subjectivity and complexity we see in reviewing the history of risk assessments and their stakeholders lead us back to the loss of privacy. We believe that part of why human judgment is so critical in this arena stems from the historical reliance on data collected by the government, rather pervasively, from only one segment of the population. This raises a concern about the data that is so foundational to the viability of automated risk assessments. Since families of color tend to be called in disproportionately, if a child is indeed removed from their family as a result of one of these calls, does this create a positive feedback loop for the algorithm that continues to target families of color? We believe it could: even though white and more privileged families also have issues with neglect or abuse, because they are called in less frequently, the algorithm might skew away from these families due to referral bias.
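The feedback-loop concern can be made concrete with a toy simulation. Every number below is invented: we assume two groups with an identical true rate of maltreatment but unequal referral rates, and a model repeatedly retrained on the outcomes that its own screen-ins generate:

```python
# Toy simulation (all rates invented) of referral bias compounding:
# group A is called in three times as often as group B, even though
# the assumed true maltreatment rate is identical.
TRUE_RATE = 0.05
REFERRAL_RATE = {"A": 0.30, "B": 0.10}

estimated_risk = {"A": TRUE_RATE, "B": TRUE_RATE}

for _ in range(10):  # each pass stands in for one retraining cycle
    for group in estimated_risk:
        # Recorded "positives" scale with how often the group is referred
        # and with how high the model already scores it (more screen-ins
        # mean more placements entering the training data as outcomes).
        observed = REFERRAL_RATE[group] * (TRUE_RATE + estimated_risk[group])
        # Retraining nudges the estimate toward the observed rate.
        estimated_risk[group] = 0.5 * estimated_risk[group] + 0.5 * observed

# Despite identical true rates, the model keeps scoring group A higher.
print(estimated_risk["A"] > estimated_risk["B"])  # True
```

The absolute numbers are meaningless; the point is structural: when the outcome labels themselves depend on who gets referred and screened in, the model’s estimates track surveillance intensity, not underlying harm.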
Recommendations & Key Takeaways
Create more checks in the system.
Due to the complexities of each case involving children and their caregivers, we do not think that relying solely on an algorithm constitutes a holistic and fair approach to making these decisions. Human decision-makers should be present at every major decision point in the process in order to provide perspectives that are not based entirely on numbers and historical data. The current “phasing out” of call screeners for the lowest-risk and highest-risk cases is concerning, because even theoretically “easy” decisions can be edge cases that need to be looked at more closely.
Set regulatory standards for using tainted data.
Our analysis ultimately led us to focus on the unequal distribution of privacy rights regarding family life. A history of collecting data only on the most marginalized serves as the foundation for the strong correlations between poverty and child maltreatment identified in today’s system. We are not reflecting on whether the correlations are accurate or inaccurate, but rather that the status quo has created a tainted and one-sided dataset to feed risk assessment technologies. In the vein of Frank Pasquale’s latest publication in the Columbia Law Review, we believe there are “data-informed duties in AI development.” He suggests that organizations “relying on faulty data can be required to compensate those harmed by that data use.” Though Pasquale focuses on industry uses of artificial intelligence, the spirit of his message could apply to the case of child welfare risk assessments which both cause families harm and feed off of the harm done to families outside of the child welfare system.
We believe that if machine learning is to continue to be used in social services, the history of the data must be considered. Through our literature review, we did not find evidence of regulation over the child welfare data used in machine learning technologies. At the time of writing, Pennsylvania’s statutes on Child Protective Services did not include any guidance on the use of machine learning or artificial intelligence. Searches for the words “automated” and “algorithm” revealed zero results. Even a search for “risk assessment” revealed limited state-level guidance on the data used in analog tools: “Each county agency shall implement a State-approved risk assessment process in performance of its duties under this subchapter.” Government agencies using algorithmic technology have a duty to transparently share regulatory guidelines, as well as a path forward, such as victim restitution, when a risk assessment tool leads to a destructive choice.
The creators of algorithmic tools matter.
The vendor and designer of the tool matters. Each stakeholder has different priorities — a state agency designing the tool has different incentives from a contracted private company designing the tool, both of which may have interests that do not match those of impacted children and families. It is important to keep in mind who controls the data depending on who created the system. For example, fitness trackers are often used in workplaces for health insurance purposes, but the vendors of the trackers are often the ones who own the data collected because it is stored on their servers. Local governments need to pay attention to whether the data they are inputting into models designed by third parties, as well as the outcomes of these models, will be stored on the company’s servers and therefore owned by the company itself. This has massive implications for privacy — much of this information is extremely sensitive and could have huge consequences if a third-party has access to it.
Designers can and should push back on this “new normal.”
Designers should commit to thinking critically about the power differentials inherent to their work and collectivize their power through codes of conduct. Virginia Eubanks has suggested a “Hippocratic oath for coders” that could promote a “do no harm” ethos within the data science community. She proposes two simple questions: “One is, does it increase the self-determination of poor and working people? And the second is, if the system was aimed at anyone but poor and working people, would it be tolerated?” Another approach is the Feminist Data Manifesto-No, created by feminist data scholars to “refuse harmful data regimes and commit to new data futures.” Their approach embraces the politicized nature of data and commits to working with marginalized and “minoritized” people, rather than about them. In considering these possibilities, we imagine what it might look like to place such levels of surveillance on wealthy, majority-white communities. The results strike us as dystopian and unimaginable: a dashboard tracking a parent’s every move, every choice, every mistake (see Figure 1 below). The figure is the result of using speculative design methods to reimagine caseworker portals when pushed to the extremes of privacy-harming practices. We show a risk prediction tool that explicitly utilizes metrics that do not provide meaningful representations of risk or harm, such as the age of mothers and invasive surveillance of families in their homes. When calculating risk, age has been weaponized particularly against “younger” or “older” mothers. We featured age as a risk factor in the dashboard interface to subvert risk assessment by applying such calculations of risk to wealthier counterparts.
However, it is not lost on us that these technologies are already having nightmarish impacts on the communities who have always been under the surveillance of the child welfare system. The cost-cutting, science-advancing fantasies of the creators of the Allegheny Family Screening Tool (AFST) are the nightmares of families struggling under the weight of systemic oppression and racialized capitalism. We do not wish to understate the power dynamics embedded in design organizations, but we do wish to call out the power embedded in the skill of designing.
It is important to note that this tool is only one part of an already deeply flawed system, yet it perpetuates the inequity within that system. We should perhaps be thinking about ways to fundamentally change how we view child safety, including methods that prioritize family rehabilitation, restorative justice, and family preservation, rather than automating the prediction of intervention for family-child separation. Our historical analysis and values analysis reveal that this predictive power is predicated upon centuries of unjust economic and geopolitical conditions. Massive structural reform, including the centering of the families most impacted, is needed in order to address the root causes of child maltreatment and abuse. Tools like the AFST are only one part of the problem, but they could exacerbate the situation if they become an accepted part of child welfare agencies throughout the country. Our intention is to further a more nuanced, values-centric conversation that helps designers and policymakers deeply interrogate their roles in the future of child welfare.
Emnet Almedom and Nandita Sampath are recent Master of Public Policy graduates of UC Berkeley’s Goldman School of Public Policy, Class of 2020. Joanne Ma is a Master of Information Management and Systems candidate at the UC Berkeley School of Information, Class of 2022.
The views expressed in this article do not necessarily represent those of the Berkeley Public Policy Journal, the Goldman School of Public Policy, or UC Berkeley.
1. What is Child Protective Services? https://www.stopitnow.org/ohc-content/what-is-child-protective-services
2. Hurley, Dan. Can an Algorithm Tell When Kids are in Danger? The New York Times. Jan 2018.
3. Kim, Hyunil, et al. Lifetime Prevalence of Investigating Child Maltreatment Among US Children. American Journal of Public Health. Jan 2017.
4. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Pages 43-49.
5. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 70.
6. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 81.
7. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 79.
8. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 16.
9. Wessler, Seth Freed. “Thousands of Kids Lost from Parents in U.S. Deportation System.” Colorlines. Nov 2011.
10. Racial Disproportionality and Disparity in Child Welfare. U.S. Department of Health and Human Services, Administration for Children and Families, Children’s Bureau. Nov 2016.
11. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 66.
12. Edwards, Frank. Family Surveillance: Police and the Reporting of Child Abuse and Neglect. The Russell Sage Foundation Journal of the Social Sciences. Feb 2019.
13. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Pages 10-12.
14. National Coalition for a Civic Right to Counsel Status Map. Accessed December 2019.
15. Blustain, Rachel. Defending the Family: The Need for Legal Representation in Child-Welfare Proceedings. The Nation. Jan 2018.
16. Lash, Don. When the Welfare People Come: Race and Class in the U.S. Child Protection System. 2017. Page 67.
17. Bridges, Khiara M. Privacy Rights and Public Families. Harvard Journal of Law and Gender, Vol. 34. 2011. Page 113.
18. Medicaid is quite literally an “entitlement” program, which means that anyone who meets the eligibility rules has the right to enroll in Medicaid coverage.
19. Bridges, Khiara M. Privacy Rights and Public Families. Harvard Journal of Law and Gender, Vol. 34. 2011. Pages 163-164.
20. Irani, Lilly et al. Postcolonial Computing: A Lens on Design and Development. CHI 2010.
21. Tamburro, Andrea. Including Decolonization in Social Work Education and Practice. Journal of Indigenous Social Development, Vol. 2. Sept 2013.
22. Examples of Safety and Risk Assessments for Use by Child Welfare Staff. National Center on Substance Abuse and Child Welfare. 2018.
23. Mickelson, Nicole. Assessing Risk: A Comparison of Tools for Child Welfare Practice with Indigenous Families. Center for Advanced Studies in Child Welfare. Jan 2018.
24. Pecora, Peter. Safety and Risk Assessment Frameworks: Overview and Implications for Child Maltreatment and Fatalities. Child Welfare. 2013.
25. The Allegheny Family Screening Tool. Allegheny County. 2019.
26. The Allegheny Family Screening Tool. Accessed December 2019.
27. Eubanks, Virginia. A Child Abuse Prediction Model Fails Poor Families. Wired. Jan 2018.
28. Roberts, Dorothy et al. Black Families Matter: How the Child Welfare System Punishes Poor Families of Color. The Appeal. Mar 2018.
29. Eubanks, Virginia. A Child Abuse Prediction Model Fails Poor Families. Wired. Jan 2018.
30. Hurley, Dan. Can an Algorithm Tell When Kids are in Danger? The New York Times. Jan 2018.
32. Pasquale, Frank. Data-Informed Duties in AI Development. Columbia Law Review, Vol. 119. Page 1920.
34. Note we are not suggesting whether machine learning should or should not be used in social services, but rather providing reflection on the cases in which it is used.
35. State of Pennsylvania Consolidated Statutes, Title 23, Chapter 63: Child Protective Services. Accessed Dec 2019.
36. Bogle, Ariel. “The digital poorhouse: coders need a Hippocratic oath to protect disadvantaged people.” ABC Science. Jan 29, 2018.
38. Cifor, Marika and Garcia, Patricia. Data Manifest-No. Accessed Dec 2019.
39. Language paraphrased from Ruha Benjamin. “And one of the things we have to come to grips with is how the nightmares that many people are forced to endure are the underside of an elite fantasy about efficiency, profit, and social control.” How Race and Technology ‘Shape Each Other.’ Oct 18, 2019.