The Ethical Dimension of Artificial Intelligence
Artificial intelligence (AI) is a rising field in the technology world that aims to teach machines how to learn, or ‘think,’ for themselves. Often, when we think of AI we imagine the voice-activated assistant JARVIS from the Iron Man movies or Steven Spielberg’s 2001 film A.I. Artificial Intelligence. In reality, AI looks quite different, and chances are you have already encountered it.
Canada, surprisingly, is a global leader in AI. Montreal has the highest concentration of AI researchers and students in the world, while Toronto has the highest concentration of AI start-ups. According to Ashley Casovan, the executive director of the non-profit AI Global and former Director of Data Architecture and Innovation for the Government of Canada, Canada frequently uses AI for everyday tasks. For instance, let’s say you’re trying to figure out how to file your taxes while you’re on a train home. When you visit a webpage, a chatbot may pop up to explain the process, while your Canadian Pacific Railway train uses its sensors to detect potential blockages on the tracks and responds accordingly. Both of these technologies employ machine learning, a technique that trains computers on manually labelled data so that they can respond to new, unlabelled information. The more new information the program responds to, the more it ‘learns.’ The Canadian government is applying this in increasingly innovative ways: with predictive analytics, Canadian scientists were able to identify Zika virus patterns and help mitigate the spread of the virus. Canadian health services are also employing predictive analytics to aid in suicide prevention.
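The idea described above — training a program on manually labelled examples so it can respond to new, unlabelled input — can be sketched in a few lines. This is a minimal illustration only, using an invented toy dataset loosely inspired by the track-sensor example (the feature values and labels are hypothetical, not from any real railway system):

```python
# A minimal sketch of supervised machine learning: the program is
# "trained" on manually labelled examples, then labels new, unseen
# input by similarity to what it has already seen (nearest neighbour).

def train(examples):
    """'Training' here is simply storing (features, label) pairs."""
    return list(examples)

def predict(model, point):
    """Label a new point with the label of its nearest training example."""
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = min(model, key=lambda ex: distance(ex[0], point))
    return nearest[1]

# Hypothetical labelled sensor readings: (signal strength, vibration) -> status
labelled = [((0.1, 0.2), "clear"), ((0.9, 0.8), "blocked"), ((0.2, 0.1), "clear")]
model = train(labelled)
print(predict(model, (0.85, 0.9)))  # a new, unlabelled reading -> "blocked"
```

Real systems use far richer models than this nearest-neighbour toy, but the principle is the same: labelled data in, generalization to new data out.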
However, the frequency with which the Canadian government employs AI is worrying for some. Fears of governments using AI to infringe on private freedoms are very real, as some countries, such as China, have begun to use facial recognition software for police surveillance. Meanwhile, people are rapidly losing confidence in social media platforms and Internet security, often citing the absence of human intervention in the decisions that algorithms make as the cause. Indeed, 54 per cent of North Americans express concern about their online privacy, and the non-consensual use of personal data by social media companies and federal governments does little to ease these fears. While Canadians are most concerned about threats posed by internet companies, at least 59 per cent also fear that their personal information could be used by their own government.
More and more people are worried about how their online information is used, perhaps in light of the Cambridge Analytica scandal, which implicated Facebook in the improper harvesting of millions of users’ personal data. Furthermore, Russian interference in the 2016 US Presidential Election undoubtedly shook the confidence many users have in their social media platforms of choice. It is therefore understandable that many people are hesitant to embrace AI and the idea of non-human machines processing their information. However, there is little risk that computers will enslave us all. Rather, the prevalence of AI may damage society in other ways, such as by propagating increased bias.
As AI systems are created by humans, there is always the possibility of inherent bias in a program, introduced either through the data on which it is trained or through how it is applied. As most computer programmers are white men, a lack of diversity in AI may reinforce existing gender and racial biases. Various organizations have formed to address this problem, such as AI4ALL, which encourages underrepresented demographics, such as people of colour and women, to pursue careers in AI.
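How bias enters through training data can be made concrete with a deliberately simple, entirely invented example: a “model” that learns the most common outcome per group from skewed historical records will simply reproduce that skew in its predictions. The groups, outcomes, and records below are hypothetical:

```python
# A minimal sketch of data-driven bias: a majority-vote "model"
# trained on skewed historical records reproduces the historical
# skew rather than any notion of merit.

from collections import Counter

def train_majority(records):
    """'Learn' the most common outcome for each group in the records."""
    votes = {}
    for group, outcome in records:
        votes.setdefault(group, Counter())[outcome] += 1
    return {g: c.most_common(1)[0][0] for g, c in votes.items()}

# Hypothetical historical hiring data that encodes a biased pattern.
history = [("A", "hired"), ("A", "hired"), ("A", "rejected"),
           ("B", "rejected"), ("B", "rejected"), ("B", "hired")]
model = train_majority(history)
print(model)  # the model has learned the skew: A -> hired, B -> rejected
```

Real machine-learning models are far more complex, but the mechanism is the same: a model trained on biased data has no way of knowing the data is biased.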
As AI quickly integrates itself into society, the need for a comprehensive ethical code arises. According to Casovan, while Canada does have a government policy on responsible AI, it is difficult to enforce because implementation often needs to be case-specific. Furthermore, there is little authority restricting what companies can and cannot do, and evidently even less restricting the government itself. Casovan thus proposes a solution: the creation of ethical models that are agile, inclusive, collaborative, and open-source, to best provide companies with the resources to create ethical AI.
Montreal recently hosted the RE-WORK Deep Learning and Responsible AI Summit from October 24 to 25, a conference for AI-related industry professionals, from computer scientists to journalists to policymakers. During the summit, our team had the opportunity to interview AI professionals, including representatives of McGill AI, about their views on potential ethical concerns. McGill AI is a student association, started in 2017, that aims to bridge the gap between undergraduate students and the field of AI. To this end, McGill AI offers students opportunities to learn about AI and machine learning: through annual workshops, bootcamps, courses, lectures, and more, students can work in groups on an AI-related idea and build functional prototypes.
“This is also a way for first-year undergraduate students to learn about machine learning and network with companies,” Jenny Long, a representative of McGill AI, tells the MIR. Through its company crawls, McGill AI also connects students with companies that conduct AI research and with professors who offer advice on specific topics. The association also organizes workshops targeting a general audience through its “Machine Learning 101” initiative, which aims to give a general feel for what machine learning is and to demystify it. Likewise, McGill AI is already reaching out to students about potential initiatives, like reading groups on ethics in AI.
“Ethical issues are clearly one of the trending topics at the moment,” Long affirmed. “As a society, I don’t think we see any concerns specifically for the bootcamps but we do hope to make them more accessible in general.”
Students interested in artificial intelligence, machine learning, and data science should also consider attending the Centre for Social and Cultural Data Science Expo on January 21, 2020. Hosted by McGill University in New Residence Hall, the Expo will host a variety of talks about the uses of data science in computer science, politics, and other fields.
With many opportunities for students to get involved in AI and machine learning, Canada is evidently working to maintain its status as a leader in AI. However, those drawn to the exciting prospects AI offers must also consider its ethical dimensions. AI is bringing the technological world into contact with the political and social spheres, and its development therefore cannot be concerned solely with technological progress. AI researchers must also weigh the applications of such technology, and what it means for future generations.
Edited by Alec Regino.