047. Artificial intelligence

A discussion on what we know machines know.


Questions to ponder

  1. Unfortunately, and perhaps inevitably, race enters into the discussion. Whether it is Bay Area nerds coding up healthcare infrastructure or that nurse adding a little something extra about that patient who was being just a bit too snooty for their liking, implicit and explicit biases enter our systems. What is at least one effective way to root some of that bias out of a system?
  2. Class and economic inequity often enter into these sorts of discussions with the implication that since The Rich are so rich they will have access to the latest and greatest in healthcare, and therefore that is what will give them yet another advantage in this life, and that is why they will outlive us, The Meek & Mild Discussers of Bioethics. I contend there is scant evidence to suggest that the latest and greatest in most medical enterprises does anything to help those quite literally on the bleeding edge. How will artificial intelligence in medicine help the rich, and can we ensure it will also help the not-rich?
  3. A black box is presented to you. You are told it has exceedingly intelligent insides, which you cannot see. The box says you have a 48% chance of catastrophe. How do you respond?
  4. Can you torture a computer?
  5. Does the rate at which a theoretical artificial intelligence operates warp the ethical calculus we must program into it? Put differently, how ought we human beings program computers to act ethically when those computers operate at thousands to billions to trillions of times faster than any human being or group of human beings could ever hope to, even if they each lived a hundred lifetimes? Put succinctly, how do we consider ethics at different time scales?
  6. The regulatory considerations of a practical implementation of medical artificial intelligence include contextual biases, data import/integrity/privacy/security, explanation of information “learned” from data obtained, exportation of information to “others”, sampling skews, training set biases, trade secrets, and the uncertainty of the American healthcare system (mostly from Minssen et al. 2020). Of these, which do you think poses the greatest practical ethical hurdle, and why?
  7. What makes something “artificial”? Is it the same in every case as “man made”? What respect do humans owe to things which are artificial but not human made?
  8. By what means can a non-artificial intelligence identify an “artificial intelligence”?
  9. In what topsy-turvy world could an “artificial intelligence” be held culpable for a crime? Would an A.I.’s “programmers”/”creators” bear the responsibility? When, if ever, could such designers be relieved of that responsibility?
  10. When the human race encounters its first honest-to-goodness “alien” lifeform, do you think it will be of an “artificial” or “organic”/”natural” composition?
  11. Do moral actions require conscious creatures?
  12. Is there any way to unhitch the wagon of “artificial intelligence” from advertising, or are we destined to see ever more personalized ads that seem targeted at our ever more personalized problems/desires/hopes/plans?
  13. Following on from that, is there a “good” pharmaceutical advertisement?
  14. Just between us, now that you’re scanning this far down in the question sheet, how much of this artificial intelligence stuff do you think is just the new “hocus pocus”, the new “snake oil”, the new “set and forget” solution to all your life’s problems? How much of it do you think will end up living up to the hype?
  15. To what degree of granularity should the sewage company be able to tell – via new and improved artificial intelligence-enhanced “surveillance testing” methods – what the neighborhood has been eating?

Readings to consider