Social Media As a Bioethics Issue

swisscognitive.ch

Health Editor’s Note: Social media plays a large part in all of our lives….some okay and some not so okay. You can look up a type of shoe and find a picture of that shoe, and where you can buy it, on the side of your computer screen for many weeks to come. AI is here, and we should make sure to use it to help and not to harm….Carol

Hastings Center Report, January-February 2019

Social media as a bioethics issue: several articles examine concerns raised by integrating social media platforms and artificial intelligence into medical practice, research, and public health.

Social Media, E-Health, and Medical Ethics



Mélanie Terrasse, Moti Gorin, and Dominic Sisti

Given the profound influence of social media and emerging evidence of its effects on human behavior and health, bioethicists have an important role to play in the development of professional standards of conduct for health professionals using social media and in the design of online systems themselves. The authors examine several ethical issues: the impact of social networking sites on the doctor-patient relationship, the development of e-health platforms to deliver care, the use of online data and algorithms to inform health research, and the broader public health consequences of widespread social media use. The authors also make recommendations for addressing bias and other ethical challenges.

Mélanie Terrasse is a PhD candidate in sociology and social policy at Princeton University, Moti Gorin is an assistant professor of philosophy at Colorado State University, and Dominic Sisti is an assistant professor of medical ethics and health policy at the University of Pennsylvania.

Several articles respond to Terrasse, Gorin, and Sisti. In “Welcoming the Intel-ethicist,” John Banja argues that two concerns—that digital platforms will diminish the therapeutic value of medicine and that artificial intelligence algorithms will increase errors and unfair decision-making—may be exaggerated. Patients are already adapting to AI systems that serve particular medical uses, such as screening for diabetic retinopathy, and health care providers, just like AI tools, are prone to making errors and acting on biases when diagnosing conditions and recommending treatments. With ethical oversight, AI systems can learn from their mistakes, too, Banja writes.

In “Deep Ethical Learning: Taking the Interplay of Human and Artificial Intelligence Seriously,” Anita Ho writes that while AI technologies can pose ethical problems, such as perpetuating human biases, it would be irresponsible not to employ them where they can improve care—for example, using electronic monitoring systems to track the long-term care of seniors. AI tools should be part of a larger effort at health care quality improvement, with administrators and developers monitoring and addressing the ethical problems that can arise from their use.

In “Ethical Use of Social Media Data: Beyond the Clinical Context,” Catherine M. Hammack argues that the use of social media and other digital tools in research poses new and distinct challenges, in part because the law offers less protection of individual privacy in research than in clinical care. One relevant risk is that privacy settings on digital platforms may not prevent companies from sharing data with third parties or using it for marketing and product development and other types of research.

Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability

Alex John London

While many AI systems can provide highly accurate diagnoses and other predictions critical for medical care, the reasoning by which they arrive at their findings can be inscrutable, leading some commentators to question whether health care practitioners should trust these systems. But knowing how a machine arrived at its decision is less relevant than its ability to produce accurate results, writes London. Opaque clinical judgment and uncertainty about how decisions are made are commonplace in many non-AI aspects of health care, and it would be misguided to devalue an AI system simply because its decision-making process is inaccessible. London is the Clara L. West Professor of Ethics and Philosophy at Carnegie Mellon University, where he directs the Center for Ethics and Policy.

The Hastings Center is a nonpartisan bioethics research institution dedicated to bioethics and the public interest since 1969. The Center is a pioneer in collaborative interdisciplinary research and dialogue on the ethical and social impact of advances in health care and the life sciences. The Center draws on a worldwide network of experts to frame and examine issues that inform professional practice, public conversation, and social policy. Learn more about The Hastings Center at www.thehastingscenter.org.
Follow us on Twitter at @hastingscenter.
