A Canadian Ministry Employee Leaked AI-Filtered Immigration Scores—And Sparked a National Inquiry
The package arrived anonymously: a zip file, a brief note, and a spreadsheet of information pulled from immigration applications, each row tagged with a “risk level.” It wasn’t the existence of a scoring system that was startling. It was the note’s claim that the scores had been produced not by a human officer but by an AI model quietly embedded in the ministry’s internal systems.
Initially dismissed as an employee’s attempt to embarrass Immigration, Refugees and Citizenship Canada (IRCC), the leak soon took on a new weight. Journalists authenticated the documents. Immigration lawyers noticed troubling patterns. Within days, opposition MPs were demanding a parliamentary investigation.
| Detail | Description |
|---|---|
| Incident | AI-filtered immigration score data leaked by a Canadian Ministry employee |
| System Involved | IRCC’s Chinook & automated triage tools |
| Impact | Triggered a national inquiry into algorithmic bias and transparency |
| Key Concerns | Discriminatory flagging, lack of human review, violation of fairness norms |
| Public Response | Legal experts, migrant rights groups, and privacy watchdogs demanded reform |
| External Reference | Global News – Canada’s AI Use in Immigration |
This leak wasn’t the beginning of Canada’s AI-immigration scandal, but it was its first indisputable spark. For years, IRCC has relied on tools like Chinook to speed up the processing of high volumes of visa applications. The government framed the approach as pragmatic: automation, officials argued, could work through a backlog of nearly a million applications without sacrificing fairness.
That assumption fell apart once it became clear how these tools actually worked. Applicants weren’t just waiting in line. They were being sorted, and invisibly labeled. Some were marked “standard,” others “at-risk,” a term with no precise meaning but far-reaching consequences.
In the leaked data, applicants with common South Asian surnames appeared disproportionately in the “at-risk” column. Applications from certain countries clustered near the bottom of the priority queue. Applicants with nearly identical qualifications saw wait times that differed by months. None of this was visible to the people who applied, but the pattern in the leaked file made it plain.
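To see how quickly such a pattern surfaces, here is a minimal sketch of the kind of disparity check a journalist or lawyer could run on rows like those in the leaked spreadsheet. The data, column names, and labels below are invented for illustration and are not taken from the actual file.

```python
import pandas as pd

# Hypothetical stand-in for the leaked spreadsheet; values and column names
# are illustrative only, not drawn from the real IRCC data.
records = pd.DataFrame({
    "country_of_citizenship": ["IN", "IN", "IN", "FR", "FR", "US", "US", "NG", "NG", "NG"],
    "risk_label": ["at-risk", "at-risk", "standard", "standard", "standard",
                   "standard", "standard", "at-risk", "standard", "at-risk"],
    "days_to_decision": [210, 195, 188, 62, 71, 58, 66, 230, 120, 241],
})

# Share of applications flagged "at-risk" per country: the most basic
# disparity check one could run on the leaked rows.
flag_rate = (
    records.assign(flagged=records["risk_label"].eq("at-risk"))
    .groupby("country_of_citizenship")["flagged"]
    .mean()
    .sort_values(ascending=False)
)

# Median processing time by label, to see whether a flag alone tracks with
# months-long differences in wait times.
wait_by_label = records.groupby("risk_label")["days_to_decision"].median()

print(flag_rate)
print(wait_by_label)
```

Even on a handful of rows, grouping by country and by label makes the skew, and the wait-time gap that comes with it, visible at a glance.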
The ministry’s first response was cautious denial: the AI tools, it said, only support officers; they do not make decisions. Critics argue that framing is misleading. “You’ve already been disadvantaged if a machine flags you as high-risk before a human ever sees your file,” says Petra Molnar, a researcher at the University of Toronto’s Citizen Lab.
She is not alone in thinking that. Already wary of a rising number of refusals with no apparent explanation, immigration lawyers began to suspect that a covert triage system was shaping outcomes long before cases ever reached an officer’s desk. “We used to say every file is assessed on its merits,” one lawyer told me. “Now, it feels like they’re assessed on metadata.”
The government maintains that these tools are safeguards against bulk applications and fraud. But as transparency advocates point out, it is hard to assess safeguards that are not publicly documented. There are no published criteria for what makes an applicant “at-risk.” No independent audit trail. And no straightforward way to challenge a score that applicants never knew existed.
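For contrast, here is an illustrative sketch, in no way based on IRCC’s actual systems, of what a single entry in such an audit trail could contain: the model version, the inputs it saw, the score and label it produced, and whether a human has reviewed it. All field names are assumptions made for the example.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit record for one automated triage decision. Field names
# are invented for illustration; they do not describe any real IRCC system.
@dataclass
class TriageAuditRecord:
    application_id: str
    model_version: str
    input_features: dict
    risk_label: str
    risk_score: float
    reviewed_by: str | None      # None means no human has looked at it yet
    timestamp: str

record = TriageAuditRecord(
    application_id="APP-2025-001234",
    model_version="triage-v3.1",
    input_features={"stream": "caregiver", "country": "IN", "prior_refusals": 0},
    risk_label="at-risk",
    risk_score=0.82,
    reviewed_by=None,
    timestamp=datetime.now(timezone.utc).isoformat(),
)

# An append-only log of entries like this is what would let an applicant,
# an auditor, or a commission reconstruct why a file was flagged.
print(json.dumps(asdict(record), indent=2))
```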
Those questions are now the focus of the national inquiry, formally launched two weeks after the leak. Led by former Supreme Court Justice Diane Koenig, the commission has a broad mandate: procedural fairness, algorithmic bias, and the wider implications of AI governance in public systems.
Inside Parliament, the debate has scrambled party lines. Some Conservatives have praised the digital tools for their efficiency; members of both major parties worry that automation has outpaced policy. “It’s not just immigration,” MP Jaida Marceau stated at one hearing. “This is about the kind of nation we become when we let invisible systems determine who is entitled to what.”
I remember pausing when she said that, not because it was dramatic, but because it was so plainly true.
The emotional toll has fallen hardest on migrant communities. Organizations like the Migrant Workers Alliance for Change have long criticized opaque procedures; now they have evidence, not only of bias but of how systematically it was applied. The idea that an algorithm could quietly decide your worth as a newcomer has cut deep.
It has also rekindled old grievances. Thousands were left in limbo after the abrupt termination of the Home Care Worker Immigration Pilots in late 2025; many had spent months preparing, hiring consultants, and gathering paperwork, only to see the program close within hours of opening. For those affected, the AI controversy confirms a long-held suspicion: the system is not malfunctioning; it is working exactly as it was quietly redesigned to work.
Technologists have entered the fray as well. Some argue that AI, by removing human bias, can make the process fairer. That is not what happened here. The problem, software ethicist Anita Dhillon said, is not that machines are making bad decisions. “It’s that, under the pretense of objectivity, they are making biased decisions more quickly.”
IRCC has responded by promising to temporarily halt AI triage for all caregiver and humanitarian streams. Some call it a small gesture given the scale of the harm. Others see it as the start of a reckoning: a moment when institutions are forced, by the erosion of public trust, to pause, reassess, and acknowledge the human cost of optimization.
The employee who leaked the files has still not been identified, and it remains unclear whether they acted out of fear, frustration, or conscience. What is clearer is that their choice, quiet and risky, has forced scrutiny of one of the most consequential experiments in public governance.
The inquiry may not report for months, but the questions it has raised have already shifted the conversation. If AI is going to shape who gets to start a life in Canada, the public wants to know who wrote the code, and who answers for it when it fails.