
Good Code is a weekly podcast about ethics in our digital world. We look at ways in which our increasingly digital societies could go terribly wrong, and speak with those trying to prevent that. Each week, host Chine Labbé engages with a different expert on the ethical dilemmas raised by our ever-more pervasive digital technologies. Good Code is a dynamic collaboration between the Digital Life Initiative at Cornell Tech and journalist Chine Labbé.

Follow @goodcodepodcast on Twitter, Facebook, and Instagram.

On this episode:

Though they are a hot topic right now, classes on ethics in data science are not new: computer ethics courses first appeared in the 1970s. In his new class, Solon Barocas wants aspiring computer scientists not just to think of ways to mitigate the harms of their work, but also to reject certain applications of machine learning, and to learn to say no.

Data scientists are in high demand nowadays. As a result, they have a fair amount of power, Barocas says.

But it’s not enough to train students in ethics and expect them to go and change the world, he adds. To exert their power, young computer scientists need to have role models: important people in the field that they can point to when saying no.

You can listen to this episode on iTunes, Spotify, SoundCloud, Stitcher, Google Play, TuneIn, YouTube, and on all of your favorite podcast platforms.

We talked about:

  • Solon Barocas taught his Cornell University class on ethics in data science for the first time in the fall of 2017. Here is the syllabus and class description. As you can see, the last session is dedicated to “rejecting certain applications of machine learning.”
  • Similar classes are being taught at several universities around the country, including Harvard, MIT, and Stanford. Read this New York Times article about the rise of these classes.
  • Cornell University’s class focuses on fairness and non-discrimination in machine learning, which is Barocas’ main area of research. He co-founded the Association for Computing Machinery (ACM) Conference on Fairness, Accountability, and Transparency (FAT).
  • In this episode, Barocas mentions a study that claimed to be able to use machine learning to infer someone’s sexual orientation. Read about it in TechCrunch. He also talks about AI systems used to predict recidivism. Read about it in Forbes.
  • Barocas refers to an article by Kevin Haggerty that he assigns in his class. It shows how anti-CCTV advocates actually helped the tool’s supporters by shifting the debate away from the original question (should this technology exist?) to a question of efficiency (how do you improve the tool?). Read the piece here.
  • Our guest also talks about HireVue. This start-up does AI-powered analyses of video interviews of job candidates. For Barocas, the technology is based on an “unreasonable belief” that true characteristics of a person can be inferred from the way they look or present themselves.
  • Grace Hopper was an American computer programmer and a United States Navy rear admiral. She reportedly said: “It’s easier to ask forgiveness than it is to get permission.” Many articles on ethics in the field quote her on this, and ask: can ethics and technology really be partners?
  • Big tech companies are all trying to find the best ways to tackle ethics in their field. Many have joined the Partnership on AI, created in late 2016 by a group of scientists at Apple, Amazon, DeepMind, Google, Facebook, IBM, and Microsoft to bring together researchers, companies, and non-profits to share best practices. Several companies have also developed, or announced they are working on, tools to detect bias in AI: IBM did, Google did, and so did Microsoft.
  • We ask Solon Barocas about a two-year grant from the National Science Foundation that he’s been awarded with his Cornell University colleague Karen Levy. Read about it here. Their mission: “assess the state, structure, and substance of data ethics in both educational and industrial contexts.”

Read More:

  • Craig Newmark, the founder of Craigslist, has given a lot of money to journalistic ventures recently, including investigative newsroom The Markup (now in turmoil after its co-founder and editor-in-chief Julia Angwin was fired) and a journalism program at CUNY. Last October, he announced that he would also help fund a competition for ethics in computer science. Other sponsors of the competition are Omidyar Network, Mozilla, and Schmidt Futures. We look forward to seeing what comes out of it!
  • In this episode, Solon Barocas underlines how crucial it is for leaders in the field to send a signal to young computer scientists, by saying no to certain applications of machine learning. Yoshua Bengio, a pioneer in Artificial Intelligence and co-recipient of the 2018 ACM Turing Award (sometimes called the Nobel Prize of Computing), is sending such a signal. He has said on many occasions that he stands against killer robots.