Unfortunately, and perhaps inevitably, race enters into the discussion. Whether it is Bay Area nerds coding up healthcare infrastructure or a nurse adding a little something extra to the chart of a patient who was just a bit too snooty for their liking, implicit and explicit biases enter our systems. What is at least one effective way to root some of this bias out of a system?
Class and economic inequity often enter into these sorts of discussions, with the implication that since The Rich are so rich, they will have access to the latest and greatest in healthcare, and that this advantage is why they will outlive us, The Meek & Mild Discussers of Bioethics. I contend there is scant evidence that the latest and greatest in most medical enterprises does much to help those quite literally on the bleeding edge. How will artificial intelligence in medicine help the rich, and can we ensure it will also help the not-rich?
A black box is presented to you. You are told it has exceedingly intelligent insides, which you cannot see. The box says you have a 48% chance of catastrophe. How do you respond?
Can you torture a computer?
Does the rate at which a theoretical artificial intelligence operates warp the ethical calculus we must program into it? Put differently, how ought we program computers to act ethically when they operate thousands, billions, or trillions of times faster than any human being or group of human beings could ever hope to, even if each of them lived a hundred lifetimes? Put succinctly, how do we consider ethics at different time scales?
The regulatory considerations of a practical implementation of medical artificial intelligence include contextual biases, data import/integrity/privacy/security, explanation of information “learned” from the data obtained, exportation of information to “others”, sampling skews, training set biases, trade secrets, and the uncertainty of the American healthcare system (mostly from Minssen et al. 2020). Of these, which do you think poses the greatest practical ethical hurdle, and why?
What makes something “artificial”? Is it the same in every case as “man-made”? What respect do humans owe to things which are artificial but not human-made?
By what means can a non-artificial intelligence determine what counts as an “artificial intelligence”?
In what topsy-turvy world could an “artificial intelligence” be held culpable for a crime? Would an A.I.’s “programmers”/“creators” take the responsibility? When, if ever, could such designers be relieved of that responsibility?
When the human race encounters its first honest-to-goodness “alien” lifeform, do you think it will be of an “artificial” or “organic”/”natural” composition?
Do moral actions require conscious creatures?
Is there any way to unhitch the wagon of “artificial intelligence” from advertising or are we destined to see ever more personalized ads that seem targeted at our ever more personalized problems/desires/hopes/plans?
As a follow-on: is there such a thing as a “good” pharmaceutical advertisement?
Just between us, now that you’re scanning this far down in the question sheet, how much of this artificial intelligence stuff do you think is just the new “hocus pocus”, the new “snake oil”, the new “set and forget” solution to all your life’s problems? How much of it do you think will end up living up to the hype?
To what degree of granularity should the sewage company be able to tell – via new and improved artificial intelligence-enhanced “surveillance testing” methods – what the neighborhood has been eating?