Thursday, February 29, 2024

Artificial Intelligence: A Powerful Diagnostic Tool or a Biased Privacy Invader?


What is artificial intelligence (AI)? While we might think of robots or recent developments like ChatGPT, artificial intelligence has been around for decades and is seemingly here to stay. If AI is so widespread, then why is one of the biggest industries in the United States on the fence about expanding its use?

Healthcare accounts for almost 20% of this country's gross domestic product (GDP). The US spends more per capita on healthcare than any other country in the world, and yet there is a vast shortage of healthcare professionals. Could AI be the answer to this growing shortage? AI is already used in healthcare, albeit cautiously. It can read and interpret diagnostic tests like X-rays, formulate treatment plans based on diagnoses, answer some basic patient questions, schedule appointments, and more. However, the next step for AI in healthcare lies in risk prediction.

Pancreatic cancer is an elusive and deadly disease. It is very difficult to diagnose, and there is no real cure. Only about 5% of people diagnosed, at any stage, survive five years after their diagnosis. It is widely recognized that the main "treatment" for pancreatic cancer is early detection, but this proves very difficult because of the pancreas's location deep in the abdomen. However, AI offers a possible solution to this detection problem. AI is already used in cancer screenings, such as reading mammograms and CT scans, but the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) is taking prevention a step further.

The MIT CSAIL team's unique approach lies in the enormous amount of data collected via partnerships with electronic health record companies. They were able to obtain access to 5 million patient records spanning varied populations, locations, and demographics across the country, which, according to Kai Jia, a PhD student and MIT CSAIL member, "surpasses the scale of most prior research in the field" (Gordon, 2024). They created two programs, PrismNN and PrismLR, which work together to provide risk scores based on data in electronic medical records such as age, sex, race, medications, and medical history. PrismNN takes the more advanced approach, analyzing patterns in the data with artificial neural networks, while PrismLR uses the simpler technique of logistic regression. The goal is for these Prism systems to be incorporated into healthcare settings and help professionals identify high-risk patients earlier, thereby improving patient outcomes without adding stress to already overworked physicians.
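To make the logistic-regression half of this concrete, here is a minimal Python sketch. It is entirely my own illustration, not the MIT team's code: the feature names, coefficients, and data are invented, and the real Prism models are trained on millions of de-identified records with far richer feature sets.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-ins for EHR-derived features (hypothetical names).
rng = np.random.default_rng(0)
n = 1000
age = rng.integers(30, 90, n)
sex = rng.integers(0, 2, n)                 # encoded 0/1
on_metformin = rng.integers(0, 2, n)        # illustrative medication flag
prior_pancreatitis = rng.integers(0, 2, n)  # illustrative history flag
X = np.column_stack([age, sex, on_metformin, prior_pancreatitis])

# Fabricated labels in which age and prior pancreatitis raise risk --
# purely to make the example run, not a clinical claim.
logit = -8.0 + 0.06 * age + 1.2 * prior_pancreatitis
y = rng.random(n) < 1 / (1 + np.exp(-logit))

# A logistic-regression risk model: fit, then score a new record as a
# probability between 0 and 1.
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)
patient = [[72, 1, 0, 1]]
print(f"risk score: {model.predict_proba(patient)[0, 1]:.3f}")

PrismNN differs mainly in swapping the single linear layer for a neural network that can pick up nonlinear patterns across many more features, but the output is the same kind of object: a risk score per patient.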

While programs like Prism are very promising, there is significant pushback against using AI in this way. In Joe Vukov's interview with Blake Lemoine, Lemoine explains his worries about AI bias and sentience. There is mounting evidence of possible racial and gender bias in AI algorithms. People of color and those with diverse gender identities have a long history of being discriminated against in healthcare, from being ignored to not receiving proper care. Lemoine claims that AI learns from these social patterns and histories and can fold them into its algorithms (the sketch below shows how easily that happens). Could this lead to even more discrimination and mistreatment of already marginalized groups by AI? I think this is something the developers of these programs need to be aware of, or racism and sexism in healthcare will be perpetuated.

Another objection to using AI in this way concerns privacy. Data must be collected from patients and shared to train AI. Could this be happening without our knowledge?
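Lemoine's worry about bias is easy to demonstrate in miniature. The sketch below is again my own invention with made-up numbers: it fabricates a history in which one group's illness is recorded only half as often, then shows that a model trained on those records scores that group as lower risk for identical symptoms.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 5000
group = rng.integers(0, 2, n)   # 1 = historically under-diagnosed group
severity = rng.normal(0, 1, n)  # true underlying illness signal
truly_sick = severity > 1.0

# Biased historical labels: the under-diagnosed group is recorded as
# sick only half as often, even when truly sick.
recorded = truly_sick & (rng.random(n) < np.where(group == 1, 0.4, 0.8))

model = LogisticRegression()
model.fit(np.column_stack([severity, group]), recorded)

# Two patients with identical symptoms, different group membership.
for g in (0, 1):
    risk = model.predict_proba([[1.5, g]])[0, 1]
    print(f"group {g}: predicted risk {risk:.3f}")
# The model replays the recorded bias as a lower score for group 1.

Nothing in the code "hates" anyone; the model simply reproduces the pattern it was given, which is exactly Lemoine's point.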

AI is a rapidly developing tool for almost any industry, including healthcare. While there are numerous benefits to letting AI help doctors and other healthcare professionals with diagnosis and screening, the availability and use of this personal data raise serious ethical concerns.


References

“Appleseeds to Apples: Catholicism and the Next ChatGPT.” Nexus Journal, www.hanknexusjournal.com/appleseedstoapples. Accessed 29 Feb. 2024.

Gordon, Rachel. “New Hope for Early Pancreatic Cancer Intervention via AI-Based Risk Prediction.” MIT News | Massachusetts Institute of Technology, 18 Jan. 2024, news.mit.edu/2024/new-hope-early-pancreatic-cancer-intervention-ai-based-risk-prediction-0118.




