It Can Only Be Attributed to Human Error: Artificial Intelligence and Civic Responsibility
By Michael Fiorito, MDS
Leading artificial-intelligence researchers gathered this week for the prestigious Neural Information Processing Systems (NeurIPS) conference. In addition to presenting the latest research, attendees discussed the need for civic-mindedness to check and restrain the power of AI.
Keynote speaker Kate Crawford of Microsoft addressed a crowd of 8,000 researchers, urging them to consider, and find ways to mitigate, the accidental or intentional harms caused by their creations. “Amongst the very real excitement about what we can do there are also some really concerning problems arising,” Crawford said. Crawford is also a co-founder of the AI Now Institute at NYU, which studies the social implications of artificial intelligence.
In one study, researchers found that image-processing algorithms both learned and amplified unintended gender stereotypes. Similar distortions could have serious consequences in domains such as criminal justice and finance.
Others have noted that researchers must remain cognizant of legal barriers, such as the Civil Rights Act and the Genetic Information Nondiscrimination Act. One concern is that even when machine-learning systems are programmed to be blind to race or gender, they may use other signals in the data, such as the location of a person’s home, as a proxy for those protected attributes.
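The proxy problem can be seen in a minimal sketch. The data below is entirely synthetic and the scoring rule is a hypothetical stand-in for a trained model: the score never sees the protected group label, yet because neighborhood correlates with group membership, the "group-blind" score still produces sharply different outcomes for each group.

```python
import random

random.seed(0)

# Hypothetical synthetic population: each record has a protected group label
# ("A" or "B") and a neighborhood code. Group membership is never given to
# the scoring rule, but it strongly drives which neighborhood a person lives
# in -- making neighborhood an unintended proxy for the group.
def make_record():
    group = random.choice(["A", "B"])
    # 90% of group A lives in neighborhood 1; 90% of group B in neighborhood 0.
    in_own_area = random.random() < 0.9
    neighborhood = 1 if (group == "A") == in_own_area else 0
    return group, neighborhood

records = [make_record() for _ in range(10_000)]

# A deliberately "group-blind" score: it uses only the neighborhood feature.
def score(neighborhood):
    return neighborhood  # favors residents of neighborhood 1

avg_a = sum(score(n) for g, n in records if g == "A") / \
        sum(1 for g, _ in records if g == "A")
avg_b = sum(score(n) for g, n in records if g == "B") / \
        sum(1 for g, _ in records if g == "B")

# Despite never seeing the group label, the score's averages differ by group:
# roughly 0.9 for group A versus roughly 0.1 for group B.
print(f"group A avg score: {avg_a:.2f}")
print(f"group B avg score: {avg_b:.2f}")
```

Removing a sensitive column from the training data is therefore not enough; the disparity survives through whatever correlated features remain.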
One goal is to make the people building AI better reflect humanity. Like computer science as a whole, machine learning skews white, male, and Western. There is now a conscious effort to bring more women and minorities into AI research.
Hanna Wallach, co-chair of NIPS, co-founder of Women in Machine Learning, and a researcher at Microsoft, says those diversity efforts both help individuals and make AI technology better. “If you have a diversity of perspectives and background you might be more likely to check for bias against different groups.”
Ultimately, companies, civil-society groups, citizens, and governments all need to engage with the issue.
Towards the end of her talk, Crawford suggested civil disobedience could shape the uses of AI. She told today’s AI engineers to consider the lines they don’t want their technology to cross. “Are there some things we just shouldn’t build?” she asked.