Hiring is one of the most time-consuming parts of running a business. It is also one of the riskiest.
So it is no surprise that employers are turning to AI to speed things up. Resume screening tools. Automated shortlists. Chatbots that pre-screen candidates. Applicant tracking systems that rank people before a human ever looks at an application.
While AI can streamline hiring workflows, it can also create legal exposure for employers.
If you use AI in hiring, or plan to, there are new rules you need to understand, especially in Ontario. And even if you outsource hiring to software or recruiters, the risk still lands on you.
How AI Shows Up in Hiring Decisions
Many common hiring platforms now use AI or algorithmic decision-making behind the scenes. That includes tools that:
- Screen resumes
- Rank candidates
- Filter applicants based on keywords
- Assess video interviews
- Predict “job fit”
In many cases, employers do not control how these systems are trained or what data they rely on. They just see the output.
That is where legal problems can start.
Using AI Does Not Remove Your Human Rights Obligations
Canadian employment law does not care whether discrimination was intentional, unintentional, or based on a machine’s output.
If a hiring process screens out candidates in a way that disproportionately impacts protected groups, that can still lead to a human rights complaint.
AI tools often rely on historical data. If past hiring decisions reflected bias, the AI may simply repeat that bias, faster and at scale.
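To make "disproportionate impact" concrete, one rough check is to compare how often applicants from different groups clear an automated screen. The short sketch below uses made-up data and an illustrative 80% review trigger; it is not a legal test, only a way to see the arithmetic behind the concern.

```python
# Minimal, hypothetical sketch: comparing pass rates of an automated resume
# screen across self-identified applicant groups. Group labels, data, and the
# 0.8 review trigger are illustrative assumptions, not a legal standard.
from collections import Counter

# (group, advanced_past_screen) pairs -- made-up data for illustration
outcomes = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

applied = Counter(group for group, _ in outcomes)
advanced = Counter(group for group, passed in outcomes if passed)
rates = {group: advanced[group] / applied[group] for group in applied}
print("Screen pass rates by group:", rates)

# Flag any group whose pass rate falls well below the highest group's rate.
top_rate = max(rates.values())
for group, rate in rates.items():
    if top_rate > 0 and rate / top_rate < 0.8:
        print(f"Closer review warranted: {group} passes at {rate:.0%} vs. {top_rate:.0%}")
```

A check like this does not prove or disprove discrimination, but it is the kind of basic monitoring an employer can ask a vendor whether it performs.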
One thing is clear: employers cannot hide behind technology. If your hiring process discriminates, you are ultimately responsible.
The Ontario Disclosure Rule
Under the Working for Workers Four Act, 2024, employers now have a new, explicit obligation when using AI in hiring.
Effective January 1, 2026, employers must disclose the use of artificial intelligence in job postings if AI is used to screen, assess, or select applicants, though there are some specific exceptions (consult your lawyer for details!).
The law requires employers to include a statement in publicly advertised job postings disclosing the use of AI. It applies even if:
- The AI is provided by a third party
- The AI only assists part of the process
- A human still makes the final decision (which should always be the case anyway)
What Employers Get Wrong About Disclosure
Disclosure does not remove liability. It does not fix bias. It does not protect you if the tool itself is flawed.
What disclosure does is provide transparency.
Once applicants know AI is involved, they may ask:
- How does the tool work?
- What data does it rely on?
- Does it disadvantage certain groups?
If you cannot answer those questions, that is a problem.
Bias Claims Are No Longer Hypothetical
We are already seeing lawsuits in the U.S. tied to algorithmic hiring tools. Canadian lawyers are watching closely.
The risk is not limited to obvious discrimination. AI tools can unintentionally screen out candidates based on:
- Race
- Gender
- Other protected human rights grounds
- Proxies for human rights grounds, such as languages spoken
An employer may never have intended to discriminate, but that does not stop a claim, nor is it a defence against one.
Outsourcing Hiring Does Not Outsource Liability
A common mistake employers make is assuming liability shifts to the software provider. It does not.
If you use an AI-powered hiring platform, you are still responsible for ensuring your hiring process complies with:
- Human rights legislation
- Employment standards
- Privacy obligations
Before using any AI hiring tool, employers should be asking:
- What data trained this system?
- Has it been audited for bias?
- Can decisions be explained?
- Can humans override outcomes?
- Does it promote equity and diversity?
If the answers are vague, that is a red flag.
Human Oversight Is Not Optional
One of the most important safeguards employers can implement is meaningful human review.
That means:
- Humans understand how the tool works
- Humans review outputs critically
- Humans can intervene
- Humans make the final decision
- Humans remain accountable
Blind reliance on rankings or scores is risky. Courts expect employers to exercise judgment, not defer to technology.
The Bottom Line for Employers
AI can make hiring faster. It can also make mistakes faster.
Ontario employers now face a clear regulatory obligation to disclose AI use in hiring. Beyond that, human rights exposure remains ever present.
If you are using AI in hiring and you are not sure whether you are compliant, be sure to consult your lawyer.
Employers should review their hiring tools, update job posting practices, and put guardrails in place.
If you want help reviewing AI hiring practices, drafting compliant disclosures, or building policies that actually work in real workplaces, contact us or schedule an appointment. Fixing this after a complaint is far more expensive than getting it right now.
