We need to design distrust into AI systems to make them safer

Link: https://www.technologyreview.com/2021/05/13/1024874/ai-ayanna-howard-trust-robots/

Excerpt:

Since that experiment, have you seen this phenomenon replicated in the real world?

Every time I see a Tesla accident. Especially the earlier ones. I was like, “Yep, there it is.” People are trusting these systems too much. And I remember after the very first one, what did they do? They were like, now you’re required to hold the steering wheel for something like five-second increments. If you don’t have your hand on the wheel, the system will deactivate.

But, you know, they never came and talked to me or my group, because that’s not going to work. And why that doesn’t work is because it’s very easy to game the system. If you’re looking at your cell phone and then you hear the beep, you just put your hand up, right? It’s subconscious. You’re still not paying attention. And it’s because you think the system’s okay and that you can still do whatever it was you were doing—reading a book, watching TV, or looking at your phone. So it doesn’t work because they did not increase the level of risk or uncertainty, or disbelief, or mistrust. They didn’t increase that enough for someone to re-engage.

Author(s): Karen Hao

Publication Date: 13 May 2021

Publication Site: MIT Tech Review

Error-riddled data sets are warping our sense of how good AI really is

Link: https://www.technologyreview.com/2021/04/01/1021619/ai-data-errors-warp-machine-learning-progress/

Paper link: https://arxiv.org/pdf/2103.14749.pdf

Excerpt:

Yes, but: In recent years, studies have found that these data sets can contain serious flaws. ImageNet, for example, contains racist and sexist labels as well as photos of people’s faces obtained without consent. The latest study now looks at another problem: many of the labels are just flat-out wrong. A mushroom is labeled a spoon, a frog is labeled a cat, and a high note from Ariana Grande is labeled a whistle. The ImageNet test set has an estimated label error rate of 5.8%. Meanwhile, the test set for QuickDraw, a compilation of hand drawings, has an estimated error rate of 10.1%.
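
The linked paper (Northcutt, Athalye, and Mueller) arrives at these error-rate estimates by flagging suspect labels with confident learning and then having human reviewers verify the flagged examples; the authors ship the technique in their open-source cleanlab library. Below is a minimal sketch of that flagging step, assuming cleanlab 2.x and a hypothetical six-example toy dataset (none of this code is from the article itself):

```python
# Sketch of the confident-learning step behind the linked paper, using the
# authors' open-source cleanlab library (API as of cleanlab 2.x). The toy
# labels/probabilities below are hypothetical, purely for illustration.
import numpy as np
from cleanlab.filter import find_label_issues

# Given (possibly noisy) labels for six examples across three classes.
labels = np.array([0, 0, 1, 1, 2, 2])

# Out-of-sample predicted probabilities from any trained classifier,
# e.g. obtained via cross-validation so no example is scored by a model
# that was trained on it.
pred_probs = np.array([
    [0.90, 0.05, 0.05],  # confidently class 0: given label agrees
    [0.10, 0.85, 0.05],  # model is confident this is class 1: label 0 is suspect
    [0.10, 0.85, 0.05],
    [0.15, 0.80, 0.05],
    [0.05, 0.05, 0.90],
    [0.10, 0.10, 0.80],
])

# Boolean mask marking examples whose given label likely disagrees with
# what the data supports.
issues = find_label_issues(labels=labels, pred_probs=pred_probs)
print(np.flatnonzero(issues))  # expected: [1], the mislabeled example
```

Any classifier that can produce out-of-sample predicted probabilities can feed this step; the paper's striking finding is that after correcting the flagged test-set labels, the relative ranking of benchmark models can change.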

Author(s): Karen Hao

Publication Date: 1 April 2021

Publication Site: MIT Tech Review