The AI Debate in Oregon’s Child Welfare: Upstream vs. Downstream Solutions

Takkeem Morgan
3 min read · Apr 24, 2023

--

Artificial Intelligence (AI) has increasingly been integrated into various aspects of our lives, and the child welfare system is no exception. Recently, Oregon’s Department of Human Services (DHS) decided to discontinue the use of an algorithm that had been employed in the child welfare system, sparking a conversation around the ethical implications and effectiveness of AI-based solutions. Here I’ll explore the events that led to this decision and discuss the differences between upstream and downstream solutions in the context of child welfare.

Oregon’s AI Algorithm in Child Welfare

Child welfare officials in Oregon had been using an AI algorithm to help them decide which families should be investigated by social workers. However, concerns were raised about the potential for racial bias, transparency, and reliability in the use of such technology. In response, DHS announced that the algorithm would be discontinued, with the agency opting for a new process that aims to make more racially equitable decisions.

The decision to discontinue the algorithm came after an Associated Press review found that a similar algorithmic tool in Pennsylvania (my home state) had flagged a disproportionate number of Black children for “mandatory” neglect investigations. As a result, Oregon officials decided to replace the algorithm and develop a new screening process.

Upstream vs. Downstream Solutions

When discussing the use of AI in child welfare, it’s essential to differentiate between upstream and downstream solutions. Upstream solutions involve addressing problems at their root cause, focusing on preventive measures that can help avoid negative outcomes. In contrast, downstream solutions focus on addressing problems after they have occurred, managing the consequences rather than preventing them.

AI algorithms in child welfare, like the one used in Oregon, can be seen as downstream solutions. They are often used to identify high-risk situations and help officials decide which families require intervention. While this approach has its merits, it is likely to reinforce existing biases and disparities, since the algorithm relies on the data it has been trained on.

On the other hand, upstream solutions might involve using AI to educate parents about available resources, such as the child tax credit, or to provide guidance on preventive measures that can help avoid negative outcomes in the first place. By focusing on prevention and addressing root causes, upstream solutions can help reduce the need for downstream interventions and potentially lead to more equitable outcomes.
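To make the upstream idea concrete, here is a minimal sketch of the kind of screening helper an AI assistant could use to flag families who may qualify for a benefit like the child tax credit, so they can be connected to resources before a crisis. The function name, inputs, and thresholds are illustrative simplifications, not real eligibility logic or tax guidance.

```python
# Illustrative sketch of an "upstream" benefits screen: flag whether a
# family may qualify for the U.S. child tax credit so an assistant can
# refer parents to a benefits navigator. The rules here are deliberately
# simplified for illustration; real eligibility is more detailed.

def may_qualify_for_child_tax_credit(num_children_under_17: int,
                                     annual_income: float,
                                     married_filing_jointly: bool) -> bool:
    """Rough screen: at least one qualifying child and income below
    the point where the credit begins to phase out."""
    if num_children_under_17 < 1:
        return False
    # Simplified phase-out starting points used for this sketch.
    phase_out_start = 400_000 if married_filing_jointly else 200_000
    return annual_income < phase_out_start

# Example: a single parent of two earning $45,000 would be flagged
# as worth referring to a benefits navigator.
print(may_qualify_for_child_tax_credit(2, 45_000, False))  # True
```

The point of a sketch like this is not the tax math; it is that the system reaches out with resources proactively, rather than scoring families for investigation after a report has been made.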

The debate surrounding the use of AI in Oregon’s child welfare system highlights the importance of carefully considering the potential consequences and ethical implications of integrating AI-based tools in such sensitive areas. While AI algorithms can be powerful tools, their effectiveness relies heavily on the quality and representativeness of the data they use. That’s precisely why my team and I are focused on ensuring that young people and parents who have firsthand experience are part of the development of these solutions. Furthermore, by focusing on upstream solutions, we can work towards a more equitable and preventative approach to child welfare, leveraging AI’s potential to empower parents and create positive change.

As parents, staying informed about these developments and engaging in conversations around AI and child welfare can help shape a more inclusive and just future for our children.

Discussion: What are some upstream ideas for how AI and large language models like ChatGPT can be used in and around child welfare to empower parents and address key drivers of vulnerability such as poverty, education, and access to critical resources?

Please comment below.

Written by Takkeem Morgan

I am working to bring world-class innovation and ingenuity into the child welfare ecosystem.
