The AI Now Institute is a research institute dedicated to understanding the social implications of AI technologies. It is doing our society a great service: filling the gap in ethical scrutiny as the Fourth Industrial Revolution rapidly approaches.
I recently read their 2018 annual report. Here are my highlights…
Who is responsible when AI systems harm us?
The AI accountability gap is widening. The technology scandals of 2018 have shown that the gap between those who develop and profit from AI—and those most likely to suffer the consequences of its negative effects—is growing larger, not smaller.
Civil rights concerns
Earlier this year, it was revealed that Pearson, a major AI-education vendor, inserted “social-psychological interventions” into one of its commercial learning software programs to test how 9,000 students would respond—without the consent or knowledge of students, parents, or teachers. Such psychological testing on unknowing populations, especially young people in the education system, raises significant ethical and privacy concerns.
Power asymmetries between companies and the people
There are huge power asymmetries between companies and the people they serve, which currently favor those who develop and profit from AI systems at the expense of the populations most likely to be harmed.
Moreover, there is a stark cultural divide between the engineering cohort responsible for technical research and the vastly diverse populations where AI systems are deployed. These gaps are fueling growing concern about bias, discrimination, due process, liability, and overall responsibility for harm.
The full report: https://ainowinstitute.org/AI_Now_2018_Report.pdf