Broussard had also recently recovered from breast cancer, and after reading the fine print of her electronic medical records, she realized that an AI had played a part in her diagnosis—something that is increasingly common. That discovery led her to run her own experiment to learn more about how well AI performs in cancer diagnostics.
We sat down to talk about her findings, as well as the problems with police use of the technology, the limits of “AI fairness,” and the solutions she sees for some of the challenges posed by AI. The conversation has been edited for clarity and length.
I was struck by a personal story you shared in the book about AI as part of your own cancer diagnosis. Can you tell our readers what you did and what you learned from that experience?
At the beginning of the pandemic, I was diagnosed with breast cancer. I wasn’t just stuck inside because the world was closed; I was also stuck inside because I had major surgery. As I was reviewing my chart one day, I noticed that one of my scans said, This scan is read by an AI. I thought, Why did AI read my mammogram? No one mentioned it to me. It’s in some obscure part of my electronic medical record. I was really curious about the state of the art in AI-based cancer detection, so I did an experiment to see if I could reproduce my results. I took my own mammograms and ran them through an open-source AI to see if it could detect my cancer. What I discovered was that I had a lot of misconceptions about how AI works in cancer diagnosis, which I explore in the book.
[Once Broussard got the code working, AI did ultimately predict that her own mammogram showed cancer. Her surgeon, however, said the use of the technology was entirely unnecessary for her diagnosis, since human doctors already had a clear and precise reading of her images.]
One of the things I realized, as a cancer patient, is that the doctors and nurses and health care workers who supported me during my diagnosis and recovery were wonderful and invaluable. I don’t want some kind of sterile, computational future where you go and get your mammogram and then a little red box says It’s probably cancer. That’s not really a future anyone wants when we’re talking about a life-threatening disease. But there aren’t that many AI researchers out there who have their own mammograms.
You’ll sometimes hear that when AI bias is sufficiently “fixed,” the technology can become more ubiquitous. You write that this argument is problematic. Why?
One of the big issues I have with this argument is this idea that AI will somehow reach its full potential, and that that’s the goal everyone should be striving for. AI is just math. I don’t think everything in the world should be governed by mathematics. Computers are really good at solving math problems, but they are not very good at solving social problems—yet they keep being applied to social problems. This kind of imagined endgame of Oh, we’ll just use AI for everything is not a future I support.
You also wrote about facial recognition. I recently heard an argument that the movement to ban facial recognition (especially in policing) discourages efforts to make the technology fairer or more accurate. What do you think about that?
I definitely fall into the camp of people who don’t support the use of facial recognition in policing. I understand that it discourages people who really want to use it, but one of the things I did while researching the book was to dive deep into the history of policing technology, and what I found was not encouraging.
I started with a very good book, Black Software, by [NYU professor of Media, Culture, and Communication] Charlton McIlwain. He wrote about IBM wanting to sell a lot of their new computers at the same time as the so-called War on Poverty in the 1960s. We had people who really wanted to sell machines looking around for a problem to apply them to, but who didn’t understand the social problem. Fast-forward to today—we are still living with the disastrous consequences of the decisions made back then.