Research
The Responsible Autonomous & Intelligent System Ethics (RAISE) lab at McGill University, directed by Dr. AJung Moon, studies how autonomous intelligent systems interact with and influence human decisions and behaviour. We explore ways to maximize the value of robots and other intelligent machines while minimizing their risks to society. Through research in AI ethics and collaborative robotics, we aim to provide technology developers with practical tools for responsible innovation. Bridging knowledge between engineering and ethics, our studies also empower policymakers and business leaders to make informed decisions in the design and deployment of intelligent systems.
Project Team: Jin Guo, AJung Moon, Karyn Moffatt, Jutta Treviranus
Project Abstract: When AI systems are adopted in critical applications, any failure can pose serious risks to the health, safety, and well-being of users and other stakeholders. Accurately estimating the severity of these risks and thoroughly planning to mitigate them are indispensable but extremely challenging, especially for marginalized and minority communities. To address this gap, we aim to accelerate the design of inclusive AI systems through a concrete case study of an accessible form of payment for the elderly population. As a first step, we plan to identify the barriers that prevent marginalized users, in particular older adults, from participating in the design process of such AI systems. The outcomes of this research will contribute to improving existing co-design practices for the development of inclusive AI, and will inform a shared vocabulary that technologists and policymakers can use at design time to prevent the harms and risks AI can pose to minority and marginalized stakeholders.
Project Team: Jocelyn Wong, Jocelyn Maclure, AJung Moon
Project Overview: The increase in regulatory activity around AI has led to a rise in auditing frameworks intended to hold designers of machine learning systems accountable, most visibly through a growing number of AI ethics consultancies offering audit services. Unlike traditional engineering domains, the AI industry has yet to develop a standard for conducting meaningful and effective audits of ML systems. As a result, audit reports can provide a false sense of security to those who commission them, regardless of the quality and rigour of the audit conducted on a system or company. This project aims to hold AI ethics auditors accountable and to accelerate the standardisation of AI ethics audits by investigating what constitutes a “thoughtful” audit report.
Project Team: Jin Guo, AJung Moon, Karyn Moffatt, Jutta Treviranus
Project Abstract: The general context of this study is the use of a robot as a shopping assistant (akin to a roboticized shopping cart) that helps pick up and carry items, with the goal of examining whether stigma arises from its use, from both the shoppers’ and external observers’ points of view. The end goal is to help people with disabilities (e.g., the blind and visually impaired (BVI) population) overcome barriers in shopping tasks such as navigation, item identification, and pickup.
01. ROBOTS THAT INTERACT WITH PEOPLE
Robot Signatures In Our Behaviors
May 2020 – April 2025
Investigator(s): AJung Moon
Given the influence of autonomous intelligent systems on our decisions and actions, how can we design these systems to maximize benefits while protecting users from potential harm? We investigate the influence robots have on people so that technologists can make appropriate decisions in the design of interactive robotic systems, and policymakers can make evidence-based technology policy for the deployment of these systems in our society.
Funding Sources: Natural Sciences and Engineering Research Council of Canada (NSERC), Discovery Grants Program
Social Hierarchy in Mobile Robotic Telepresence Use
May 2021 – Present
Investigator(s): Cheng Lin, Jimin Rhim, AJung Moon
Mobile Robotic Telepresence (MRP) systems—devices typically characterized by a videoconference system mounted on a mobile robotic base (often called “Skype on wheels”)—have been adopted and studied in an increasing number of settings over the past decade (e.g., offices, education, elderly care, long-distance relationships, and academic conferences). However, little work has investigated what social norms govern human-MRP interactions. Do MRP pilots and co-located humans expect the same norms, and if not, how do we address these norm conflicts? For MRPs to successfully increase the accessibility of the spaces they are used in, MRP designers and organizations considering MRPs must understand and address these questions.
This project aims to study the social hierarchy expected by the remotely-located MRP pilot and co-located human during human-MRP interactions. A better understanding of these social norms and the factors that influence them may guide future MRP designs and future decisions to adopt MRPs in organizations. We also hope to contribute to a broader discussion of how social norm conflicts may arise when embodied technology (i.e., robots) mediate social interactions, and what research methods we can use to measure such effects. You can view the paper, poster and presentation for the RO-MAN 2021 Workshop on Robot Behavior Adaptation to Human Social Norm for more details on this project.
Funding Sources: Natural Sciences and Engineering Research Council of Canada (NSERC)
02. INTEGRATING ETHICS INTO AI SYSTEMS
AI Ethics Frameworks Case Study
May 2020 – August 2020
Investigator(s): Vivian Qiang, AJung Moon
Project Description: Current AI ethics principles lack applicable guidelines and enforcement mechanisms. While numerous ethical frameworks have been developed to promote responsible innovation in AI technology, the application and effectiveness of these frameworks remain largely unexplored. This project aims to evaluate the efficacy of existing AI ethics frameworks by applying their checklists and recommendations to startups’ AI-powered products. Through these case studies, we will identify and analyze the ethical issues the frameworks surface, and create practical, actionable solutions in collaboration with the companies. By publicly conducting a comprehensive review of a company’s ethical issues, this project will help business leaders and researchers in AI recognize and manage the risks associated with their products and services. Furthermore, as government officials develop policies to regulate the fast-growing field of AI, this research will help determine which existing ethical guidelines are most effective at discovering and mitigating ethical risks.
Funding Sources: McGill University Arts Research Internship Award
An Investigation of Ethical Risk
May 2020 – August 2020
Investigator(s): Jake Chanenson, Shalaleh Rismani, AJung Moon
Project Description: Over the past several years there has been a groundswell of interest, from both academia and mainstream discourse, in the potential and real harms that narrow autonomous/intelligent systems (A/IS) represent. Despite numerous attempts to identify and mitigate ethical harms and risks, it is unknown how much direct attention ethical risk receives in the AI ethics discourse in academia and industry. Moreover, it is unknown whether there is a widely accepted definition of ethical risk, which is crucial given that AI ethics is an interdisciplinary field that requires a common set of definitions. This project seeks to answer both of these unknowns through a scoping review of the existing literature.
Funding Sources: The Lang Center for Civic & Social Responsibility’s Social Impact Summer Scholarship
Can We Measure the Ethics of AI Systems?
January 2020 – January 2021
Investigator(s): Shalaleh Rismani
Over the past few years, numerous AI organizations have developed or adapted AI ethics principles. While the notion of ethical AI has been heavily emphasized in the development and deployment of AI, we have yet to establish a systematic understanding of how an AI system’s adherence to existing AI ethics principles is and should be assessed. This gap has led to various forms of ethics washing and ethics bashing by actors within the larger tech community. By understanding the gaps in how we evaluate AI systems for adherence to AI ethics principles, we can move toward making the implementation of those principles concrete.
Funding Sources: NSERC, McGill Vadasz Scholarship, McGill Engineering Doctoral Award
Building an Adaptive Bilingual AI Competency Framework with Machine Learning
Jan. 2020 – Dec. 2021
Investigator(s): Ivan Ivanov (Principal Applicant), Sandi Mark, AJung Moon, Shalaleh Rismani, Laurent Charlin, Hugo Larochelle
This project develops and validates a bilingual AI competency ontology by using machine learning algorithms to analyze job postings from Montreal AI companies and course frameworks from local educational institutions. By affording seamless and accurate exchange of competency data in a standardized language, the ontology will provide the data infrastructure needed to close, much more quickly, the training-occupation competency gaps opening up as the new AI industries reshape job and skill demands.
Funding Sources: Pôle montréalais d’enseignement supérieur en intelligence artificielle (December 2019 – December 2021), AI competency framework projects
03. RETAIL INNOVATION LAB
Data Science for Socially Responsible Food Choices
April 2020 – March 2022
Investigator(s): Saibal Ray (Principal Investigator), AJung Moon, Maxime Cohen, James Clark
In this research program, we investigate the use of AI techniques, involving data, models, behavioural analysis, and decision-making algorithms, to efficiently provide greater convenience for retail customers while prioritizing social responsibility. In particular, the research objective of this multidisciplinary team is to study, implement, and validate systems that guide customers toward healthy food choices in a convenience-store setting, both online and in a physical store environment, while remaining cognizant of privacy concerns. The creation of digital infrastructure and decision-support systems that encourage people and organizations to make health-promoting choices should result in a healthier population and reduce the cost of chronic diseases to the healthcare system. These systems should also foster the competitiveness of organizations operating in the agri-food and digital technology sectors.
Funding Sources: IVADO, Fundamental Research Project Grant