11/07/2024 | Press release
As part of course work for StFX computer science professor Dr. Jacob Levman's Social Issues in the Information Age class, StFX student Kevin Saville built Artificial Intelligence (AI) technology that can predict the outcome of a traffic stop, a first step towards a 'robocop' or an assistive device that would help a traffic officer avoid discriminating against a citizen they've pulled over on the basis of race or gender.
Now, Dr. Levman and Mr. Saville, of Cambridge, ON, have had an article resulting from the work published in the journal Information. The article can be found HERE.
"When I first saw Kevin's findings, I was struck by how original the analysis was, and how it complements existing knowledge on biases in traffic stop outcomes, and does so with AI, which is very popular in the news right now. This combines the popular topic of AI with the burgeoning issue of whether AI is discriminating against people or not. I think a lot of the population will find this work interesting," says Dr. Levman, who had tasked his students to choose a dataset to base their AI class project on, for which the application has real world social implications.
Mr. Saville, now a fourth-year student completing a Bachelor of Arts degree with a double major in mathematics and computer science, says that while researching datasets and ideas for his project, he came across the 'Stanford Open Policing Project,' an initiative that aims to improve transparency and accountability in policing by collecting and analyzing data on police stops and searches across the United States.
"It inspired me to consider how machine learning could be used in a similar fashion."
Dr. Jacob Levman (left) and Kevin Saville, with an online copy of their article in the journal, Information.
What's his reaction to seeing his course work published in a journal?
"Thrilled, it feels incredible to see not only the finished project of your hard work, but that there are others who feel this is something interesting enough to be showcased. I feel very honoured and thankful for the opportunity."
Mr. Saville says the experience has impacted the way he is approaching new projects. "I'd love to have more research opportunities in the future and now with some experience on what it takes and the process, I'm looking forward to working on whatever comes next."
Mr. Saville completed the project in the 2024 winter term, in a course that makes use of research technology from Dr. Levman's lab that facilitates the creation of high-quality AI for focused applications.
Dr. Levman says it was his first time teaching CSCI 215, a second-year, discussion-based course that is unusual for a computer science department. The course has no prerequisites, so anyone can take it, including students who have not completed a single course in computer programming.
"Fortunately, my research lab had created software to make the creation of AI technology easy, towards empowering clinicians and medical professionals, who might be computer-phobic, towards creating their own high-quality AI that will help them to improve the standard of patient care for those that they treat," Dr. Levman says.
"We added features to the software so that someone with no programming abilities could create their own AI by assembling the data they need to train it in a template spreadsheet file. We introduced this software in the 215 class, so students could experience themselves how easy it is to create AI technology with our software, merely by collecting the appropriate data to train the learning machines with."
Mr. Saville proposed building the AI technology as would normally be done, using all measurements available in the dataset, as well as alternative versions that force the AI to be blind to the race and/or gender of the individual who has been pulled over.
Dr. Levman says Mr. Saville's work demonstrated a clear effect on the AI's predictive performance/accuracy depending on whether race or gender is included in the model.
"This means that our AI analysis provides support for existing findings in traditional statistical analyses of this issue, which demonstrated clear biases in outcomes from traffic stops (was the person arrested, ticketed, warned, etc.) based on race and/or gender. Our analysis also demonstrates that AI can easily reproduce biases of traffic officers by relying on interpreting their choices as ground truth.
"We then demonstrate that we can mitigate this bias by forcing the machine to not know a citizen's race or gender and training the AI model on a very large sample size of 600,000 traffic stop outcomes. Thus, we can make AI technology that is prevented from discriminating against citizens in a major way, while still making similarly accurate predictions (computer scientists are often concerned with creating the most accurate learning technologies as possible)," he says.
Dr. Levman says it is expected that one day either traffic stop officers will have assistive devices that include AI technology that helps guide them to prevent discrimination, violent escalations, etc., or that the traffic stop officer will be replaced entirely by a 'robocop'. In either situation, it is imperative that the AI technology not discriminate against citizens for any reason, with race and gender being the most obvious potential sources of that discrimination. "The approach Kevin took, blinding the AI to a citizen's race and gender status, is a method that is expected to be particularly effective in preventing the machine from reproducing any race- and gender-based discrimination that is embedded in the ground 'truth' predictions the machine makes."