“People seem ready for artificial intelligence (AI) in just about every aspect of their lives,” says Aleksander Mądry, associate professor in the Department of Electrical Engineering and Computer Science and leader of the Center for Deployable Machine Learning. “But what drives my work is the question: Is AI ready for us?”
Mądry and his group seek to uncover the key principles underlying the modern machine learning at the forefront of current AI use, with a particular focus on system robustness and reliability. “Many of the intended applications for AI, such as analysis of health care data or autonomous driving, are high-stakes situations—if you make a mistake, human lives are endangered,” Mądry observes. “We need the systems that use AI to be dependable and tamper-resistant.”
Building such systems, however, is a challenge. “Until recently,” Mądry points out, “our focus was on getting machine learning algorithms to work at all. Now that this goal has been largely achieved, attention is shifting to questions of resilience and worst-case performance.”
AI systems are extremely brittle: even a small perturbation of the input data can dramatically degrade performance. Mądry and his team “want to grasp how much reliability and robustness we can build into machine learning models,” he says, “and what other features we might need to sacrifice to get there.”
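The brittleness described above can be illustrated with a toy sketch. This is an illustrative example only, not a model from Mądry's group: a simple linear classifier whose prediction flips when each input coordinate is nudged by a small amount in an adversarially chosen direction (the sign of the weight, as in gradient-sign attacks). All numbers are made up for the illustration.

```python
import numpy as np

# Toy linear classifier (illustrative assumption, not a real deployed model)
w = np.array([1.0, -1.0])  # weights
b = 0.0                    # bias

def predict(x):
    """Return class +1 or -1 from the sign of the linear score w.x + b."""
    return 1 if w @ x + b >= 0 else -1

x = np.array([0.55, 0.5])  # a "clean" input; score = 0.05, so class +1
eps = 0.1                  # small per-coordinate perturbation budget

# Worst-case perturbation: move each coordinate by eps against the sign
# of its weight, pushing the score across the decision boundary.
x_adv = x - eps * np.sign(w)  # becomes [0.45, 0.6]; score = -0.15

print(predict(x))      # clean input: class +1
print(predict(x_adv))  # perturbed input: class -1
```

A shift of just 0.1 per coordinate, invisible in many real data domains, is enough to change the output, which is the worst-case behavior robustness research tries to rule out.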
Mądry notes that there are many societal and ethical implications of AI, citing, for example, problems that have emerged in resume screening by AI technologies. “Companies thought it would make hiring impartial, but it reinforced the biases and negatives that they were trying to avoid,” he says. “One resume screening tool came to the conclusion that the two most important factors predicting job performance are being named Jared and playing lacrosse in high school!”
Mądry emphasizes that researchers and engineers need to be mindful of confidentiality and of preventing the malicious use of data. Ultimately, he wants AI to be more accessible. “Right now, the average person can’t use AI without special training,” he remarks, “and this does not need to be the case. It’s time to develop AI 2.0—AI that is truly ready to be broadly and safely deployed in the real world.”
This story was originally published in January 2020.