By Flávia Fonseca

What Can Go Wrong in AI Decision-Making for Good?

Updated: Nov 6

Photo by Elimende Inagella on Unsplash

We often hear about AI’s potential for harm in large-scale scenarios, such as creating destructive technologies, facilitating scams, or even posing an existential threat to humanity. But what about the immediate, real-world impacts that are already happening today?


Imagine an AI system designed to recommend the right opioid prescription. At first glance, it sounds beneficial—helping to prevent overdose and reduce the risk of addiction. What could go wrong in a project with such positive intentions?


This is not just hypothetical; it’s happening in the U.S. today. One of these tools, called Narx Score, is used to regulate controlled substance prescriptions. Similar to a credit score, it generates a score and a dosage estimate based on a patient’s prescription history and other data. Amid an opioid epidemic, this tool initially seems like a valuable support for healthcare providers. However, patients with chronic conditions are now experiencing restricted access to necessary medication, enduring untreated pain because the AI system flags their dosage as dangerously high in its effort to curb opioid misuse.


Take, for example, a case from 2022 in Fort Wayne, Indiana. A patient awaiting hip replacement surgery was left in pain because her Narx Score was deemed too high for her to receive pain relief. This woman had suffered from chronic pain for years, and the system’s automated scoring decision blocked the care she needed.


Why is this happening? A few reasons include:


  • Lack of transparency and explanation: Without clear explanations, no one understands why certain decisions are made. It’s an opaque process.

  • Overtrust in AI: Many people place blind faith in AI, even when there’s evidence of its limitations and biases.

  • Fear among doctors: Physicians may be afraid of being red-flagged if they continue to prescribe doses they feel are appropriate, despite what the system recommends.

  • Automated decision-making by AI: Without a human to make the final call or to flag when an AI decision is wrong, people lose their agency.


From a technical perspective, it’s clear that this algorithm was not designed with chronic pain patients in mind. Research finds that, to estimate patients’ risk scores, factors such as the following are considered (a hypothetical sketch of how such factors might be combined follows this list):


  • The distance a patient travels to reach a physician or pharmacy;

  • The number of specialists consulted within a specific timeframe;

  • The payment method used to purchase medicines;

  • The number of prescribers;

  • Whether the patient has a history of sexual abuse, other traumatizing events, or a criminal history, among other factors.
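To make the concern concrete, here is a minimal, purely hypothetical sketch of how a proxy-based risk score might weigh factors like these. NarxCare’s actual model, features, and weights are proprietary and undisclosed; every name, threshold, and weight below is invented for illustration only.

```python
# Hypothetical sketch of a proxy-based risk score. This is NOT NarxCare's
# actual formula (which is proprietary); the factors and weights are
# invented to show how such criteria can penalize legitimate patients.
from dataclasses import dataclass

@dataclass
class PatientRecord:
    travel_distance_km: float   # distance to physician or pharmacy
    specialists_last_year: int  # specialists consulted in a timeframe
    paid_cash: bool             # payment method used for medicines
    num_prescribers: int        # distinct prescribers on record
    trauma_history: bool        # e.g., sexual abuse or other trauma
    criminal_history: bool

def risk_score(p: PatientRecord) -> float:
    """Weighted sum over proxy factors (weights are illustrative)."""
    score = 0.0
    score += 0.5 * min(p.travel_distance_km / 50, 1.0)  # long travel flagged
    score += 0.3 * min(p.specialists_last_year / 5, 1.0)
    score += 0.4 * p.paid_cash
    score += 0.6 * min(p.num_prescribers / 4, 1.0)
    score += 0.5 * p.trauma_history
    score += 0.5 * p.criminal_history
    return score

# A chronic pain patient in a rural area who sees several specialists
# can score as "high risk" without any misuse at all.
patient = PatientRecord(80.0, 4, False, 3, True, False)
print(f"risk score: {risk_score(patient):.2f}")
```

Even this toy version makes the problem visible: every input is a proxy rather than direct evidence of misuse, so the people most burdened by illness and circumstance are exactly the ones the score flags.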


In aiming to identify potential drug addiction, the system has disproportionately impacted chronic pain patients. In particular, less advantaged patients are penalized by criteria such as travel distance and criminal history, and factors like a history of sexual abuse or other traumatic events act as proxies that disproportionately affect women.


It is possible to check whether the rules created within a machine learning algorithm are harmful, and whether they actually perform better than guessing; checklists like this one can be helpful. Research provides recommendations for preventing situations like this, but it seems that these measures were not a priority for certain AI tools, such as Narx Score.
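Here is a minimal sketch of one such sanity check: comparing a model against a naive baseline that always predicts the majority class. It assumes a labeled dataset and scikit-learn; the data below is random stand-in data, not anything from Narx Score, so the model should do no better than the baseline, which is exactly the failure mode this check is meant to catch.

```python
# Sanity check: does the model beat a baseline that always guesses the
# majority class? A model that cannot is no better than guessing, and
# its learned "rules" deserve scrutiny before deployment.
import numpy as np
from sklearn.dummy import DummyClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 6))            # stand-in patient features
y = (rng.random(1000) < 0.1).astype(int)  # rare "misuse" label (imbalanced)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

baseline = DummyClassifier(strategy="most_frequent").fit(X_tr, y_tr)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Balanced accuracy punishes models that just predict "low risk" for all,
# which plain accuracy would reward on an imbalanced label like this one.
print("baseline:", balanced_accuracy_score(y_te, baseline.predict(X_te)))
print("model:   ", balanced_accuracy_score(y_te, model.predict(X_te)))
```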


Furthermore, since neither the dataset used nor how it was labeled and collected has been disclosed, the algorithm operates as a black box, closed to outside understanding or intervention.


So, the question remains: How much good and harm is this project actually doing? Are they only measuring its positive impacts while ignoring the negative outcomes?


When an AI tool benefits some people over others, we can’t simply look at one side of the coin. Sometimes, consciously or unconsciously, we create metrics and priorities that highlight positive results while remaining blind to the harm the tool might be causing. The good news is: we can do things differently, and I invite you to join that effort.
