
AI in the Hiring Process – Legislative Changes and Risks for Employers to Consider

Proposed new job posting requirements regarding AI disclosure

The Ontario government recently introduced Bill 149 – Working for Workers Four Act, 2023, which includes planned amendments to the rules regarding job postings in the Employment Standards Act. The planned amendments include a requirement that employers disclose the use of artificial intelligence (“AI”) in the hiring process. The specific language proposed for this amendment in Bill 149 is as follows:


Every employer who advertises a publicly advertised job posting and who uses artificial intelligence to screen, assess or select applicants for the position shall include in the posting a statement disclosing the use of the artificial intelligence.

Regardless of whether the bill is passed, the proposed amendment is noteworthy simply because it is early (at least in the employment law sphere) Canadian legislation regarding AI. It represents an acknowledgement of the potential risks for employees and employers that will need to continue to be assessed, and it has been prepared in line with the province’s stated priorities from its Trustworthy Artificial Intelligence (AI) Framework (the “Framework”).

As part of the Framework, the province has prepared Beta Principles for Ethical Use of AI, which state that AI use should be:

  1. Transparent and explainable
  2. Good and fair
  3. Safe
  4. Accountable and responsible
  5. Human-centric
  6. Sensible and appropriate

The proposed disclosure obligation checks the transparency box, but it is relatively toothless as far as legislative obligations go. If the majority of the employers in a given field are using AI in their hiring process, this provision won’t give employees very much choice or protection, regardless of any personal privacy and fairness concerns they may have. The province does have to start somewhere, though, and this proposed amendment suggests the potential for a trend.

What does this mean for employers?

This AI disclosure obligation will very likely become a requirement for employers soon. It would also not be surprising to see further legislative protections introduced in line with the beta principles, so it is worthwhile for employers to start considering how this may impact their hiring processes.

Why would an employer be interested in using AI in the hiring process in the first place? The simple answer is efficiency. Bringing on a new employee is expensive and time-consuming. AI can dramatically reduce the resource drain of the hiring process simply by sorting through applications and producing a far smaller list of suitable candidates. Many employers won’t care how the AI does it if they end up with a great new hire after an AI hiring tool narrows a field of one hundred candidates down to five top prospects in a matter of seconds.

But they should care. The use of AI in the hiring process can create unintended consequences and risk. AI hiring systems are not perfect, and the potential for unintended discrimination should not be discounted. Several years ago, reports arose that a well-known tech company scrapped the development of an AI hiring tool after discovering that it was discriminating against female candidates. The reports indicated that the company trained its AI using applications submitted for past job postings. Those applications were predominantly from men, and the AI subsequently taught itself to prefer masculine language, resulting in an unintended, but serious, gender bias.

If you’re an employer intending to use an AI hiring tool, you absolutely should be inquiring into how that tool works. The potential for systemic issues is too significant to ignore. Aside from the serious ethical issues, a failure by an employer to take proper care in the use of an AI hiring tool could have serious and costly legal consequences. Picture this scenario: a large-scale employer uses an AI hiring tool to efficiently produce lists of qualified applicants for internal job postings. The company uses the tool for many years before discovering that a flaw in the algorithm graded a specific class of people less favourably, with the result that they were recommended for promotion less often. The intention won’t matter if the consequence is significant discrimination. Can you imagine the class action lawsuit?
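To make the auditing point concrete, here is a minimal sketch of one well-known heuristic for spotting possible adverse impact in a screening tool's output: the "four-fifths rule" from US equal-employment guidance, under which a group's selection rate falling below 80% of the highest group's rate is treated as a red flag warranting closer review. This is offered purely as an illustration (it is a US guideline, not Ontario law), and the group labels and numbers below are hypothetical.

```python
# Illustrative sketch only: flag possible adverse impact in an AI screening
# tool's output using the four-fifths rule heuristic. All data is hypothetical.

def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number of applicants)."""
    return {group: selected / total for group, (selected, total) in outcomes.items()}

def adverse_impact_flags(outcomes, threshold=0.8):
    """Return {group: True} where the group's selection rate is below
    `threshold` times the highest group's selection rate."""
    rates = selection_rates(outcomes)
    top_rate = max(rates.values())
    return {group: rate / top_rate < threshold for group, rate in rates.items()}

# Hypothetical screening results: 50 of 100 applicants selected from one
# group, but only 20 of 100 from another.
outcomes = {"group_a": (50, 100), "group_b": (20, 100)}
print(adverse_impact_flags(outcomes))  # group_b is flagged (0.20 < 0.8 * 0.50)
```

A flag from a check like this does not itself prove discrimination, but it is the kind of question an employer should be able to put to an AI vendor before, and while, using the tool.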

Employers should also inquire about the privacy of the data provided to any AI hiring tool. What happens to all the data, including identifying personal information from applicants, that gets uploaded to the system? Does that data stay private, or is it used to further train the tool for every organization that uses it? These are important questions to ask before adopting AI into your hiring process.

What, then, can an employer do to protect their organization in the long run? The first step is to implement a robust AI policy that can inform organizational decisions about AI. There will always be some risk of unintended consequences, but if the process is approached with thoughtful intention, you will be much less likely to rush past key considerations when assessing the implementation of new AI into your hiring process.

Mitigate risks and stay compliant! Secure your AI template, or get in touch with our team if you have further questions or would like to discuss.
