Across industries, artificial intelligence is rapidly transforming the hiring process, enticing hiring managers with the promise of streamlining recruitment, eliminating inefficiencies and uncovering talent that might otherwise go unnoticed. From scanning resumes to conducting virtual interviews, AI tools are beginning to play a central role in how companies identify and evaluate job candidates. However, while these technologies can offer considerable benefits, they also bring significant risks, predominantly around bias, transparency and accountability. So when the question inevitably arises, whether in a board meeting or a weekly staffing video conference, hiring managers must be prepared to answer: How can our financial institution legally and ethically make AI work in our hiring processes?
How AI Tools Are Used in Hiring
AI-powered hiring tools use artificial intelligence, including machine learning and natural language processing (NLP), to streamline, automate and enhance parts of the recruitment process. Specifically, these NLP and machine learning models are trained on an institution’s historical hiring data to identify patterns associated with successful employees.
These tools can analyze thousands of resumes in seconds, then score and rank candidates based on how well their qualifications match a job description. They can also weed out unqualified candidates through chatbot screening questions and automated resume filters. Some platforms go further, using NLP to evaluate communication skills in video interviews or to predict job performance based on facial expressions and tone of voice. These tools can also recommend individuals for promotion or internal mobility. Employers see AI as a way to reduce human error and make the hiring process more efficient for all involved.
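To make the scoring step concrete, the sketch below shows one simple way a screening tool might rank resumes against a job description, using keyword-based TF-IDF similarity in Python. The job description, resumes and scores are invented for illustration, and commercial platforms use far more sophisticated models than this.

```python
# Minimal sketch of resume-to-job-description scoring using TF-IDF similarity.
# All text below is invented; real platforms use far richer models and signals.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

job_description = (
    "Commercial loan officer: credit analysis, loan underwriting, "
    "customer relationship management, knowledge of banking regulations."
)

resumes = {
    "Candidate A": "Five years of credit analysis and loan underwriting at a community bank.",
    "Candidate B": "Retail sales associate with strong customer relationship skills.",
    "Candidate C": "Compliance analyst experienced with banking regulations and audits.",
}

# Vectorize the job description and resumes together so they share one vocabulary.
vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([job_description] + list(resumes.values()))

# Score each resume by cosine similarity to the job description, then rank high to low.
scores = cosine_similarity(matrix[0:1], matrix[1:]).flatten()
for name, score in sorted(zip(resumes, scores), key=lambda pair: pair[1], reverse=True):
    print(f"{name}: {score:.2f}")
```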
Potential Pitfalls
Despite these advancements, AI in hiring is far from risk-free. One of the most pressing concerns is the potential for algorithmic bias. AI systems learn from historical data, which may reflect past implicit or explicit discriminatory practices, such as favoring candidates from certain schools, neighborhoods or demographic groups. If those biases are embedded in the training data, the AI will likely perpetuate them. In 2018, for example, Amazon infamously scrapped its AI recruiting tool after it became apparent that it was biased against women. Amazon had trained its algorithms to rate resumes based on patterns in past applicants’ resumes, but because women were underrepresented in that dataset, the algorithm learned to favor male candidates and downgraded resumes that referenced women. This is not a technical glitch or a sci-fi storyline in which robots develop a discriminatory personality of their own; it is an extension of historical inequalities in the hiring process, and one that many companies have sought to change.
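To see how that can happen mechanically, consider the deliberately simplified, synthetic sketch below: when historical hiring decisions disfavored resumes containing a particular term, a model trained on those decisions learns a negative weight for that term, even though it says nothing about qualifications. The data, feature names and numbers are entirely invented.

```python
# Synthetic illustration of how biased historical hiring data yields a biased model.
# The dataset, features and outcomes below are entirely invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

years_experience = rng.integers(0, 15, n)      # a legitimate, job-related signal
mentions_womens_club = rng.integers(0, 2, n)   # an irrelevant proxy correlated with gender

# Historical "hired" labels: driven by experience, but past decisions also
# rejected most otherwise-qualified resumes that mentioned a women's organization.
qualified = years_experience > 5
biased_rejection = (mentions_womens_club == 1) & (rng.random(n) < 0.7)
hired = (qualified & ~biased_rejection).astype(int)

X = np.column_stack([years_experience, mentions_womens_club])
model = LogisticRegression().fit(X, hired)

print("weight on years_experience:    ", round(model.coef_[0][0], 2))  # positive
print("weight on mentions_womens_club:", round(model.coef_[0][1], 2))  # negative, learned from biased labels
```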
Another critical issue is transparency. Many AI hiring systems operate as “black boxes”: it is difficult to understand the rationale behind their decisions, and employers may not fully grasp how the algorithm weighs different factors, making hiring decisions hard to justify or explain. This opacity becomes especially problematic when candidates are rejected or when legal questions arise regarding discrimination or equality.
Legal compliance is also evolving quickly. Jurisdictions across the U.S. and abroad are beginning to regulate the use of AI in employment. New York City’s Local Law 144, for example, requires employers to conduct bias audits of automated employment decision tools and to disclose their use, assessing whether the tools produce disparate impacts based on sex, race or ethnicity. The European Union’s AI Act goes further, classifying certain employment-related AI systems as “high risk” and subjecting them to strict oversight.
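The core of a Local Law 144-style bias audit is an impact-ratio calculation: the selection rate for each demographic category divided by the rate for the most-selected category. The sketch below illustrates the arithmetic with hypothetical counts; it is not a substitute for a compliant audit, which must be performed by an independent auditor and cover the categories the law specifies.

```python
# Hypothetical impact-ratio calculation of the kind a bias audit might report.
# Counts are invented; a real audit has additional statutory requirements.

outcomes = {
    # group: (candidates advanced by the tool, total candidates in the group)
    "Group A": (120, 400),
    "Group B": (70, 350),
    "Group C": (40, 250),
}

selection_rates = {group: advanced / total for group, (advanced, total) in outcomes.items()}
highest_rate = max(selection_rates.values())

for group, rate in selection_rates.items():
    impact_ratio = rate / highest_rate
    # The EEOC's "four-fifths rule" (0.8) is a common rule of thumb, not a legal threshold.
    flag = "  <- warrants review" if impact_ratio < 0.8 else ""
    print(f"{group}: selection rate {rate:.1%}, impact ratio {impact_ratio:.2f}{flag}")
```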
AI also raises questions of accountability. If an algorithm makes an unlawful decision that results in the rejection of a qualified candidate for discriminatory reasons, who is responsible? The software vendor? The HR department? The company leadership? Without clear accountability structures and a justice system slow to catch up with the fast pace of AI, employees and applicants may find it difficult to seek recourse.
How Financial Institutions Can Implement AI Tools in Hiring
Despite these risks, AI will likely remain a central fixture in hiring. Organizations seeking to use these tools must take a cautious, informed approach built on careful oversight. To mitigate bias and ensure fairness, institutions should conduct regular audits, use AI only as a decision-support tool and maintain human oversight throughout the hiring process.
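One practical way to keep AI in a decision-support role is to treat its score as advisory only: the tool can surface strong matches, but no candidate is rejected without human review. The snippet below is a hypothetical illustration of that routing principle, not a recommended implementation; the threshold and queue names are invented.

```python
# Hypothetical routing logic that keeps a human in the loop: the model's score
# is advisory, and no candidate is rejected by the algorithm alone.

def route_candidate(name: str, model_score: float, shortlist_threshold: float = 0.75) -> str:
    """Assign a review queue; a low score triggers human screening, not rejection."""
    if model_score >= shortlist_threshold:
        return f"{name}: shortlisted for recruiter review (score {model_score:.2f})"
    return f"{name}: routed to full human screening (score {model_score:.2f})"

for name, score in [("Candidate A", 0.91), ("Candidate B", 0.42)]:
    print(route_candidate(name, score))
```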
Ultimately, transparency is key. Candidates should be made aware when AI is involved in a hiring process and offered explanations or alternatives where possible. Compliance with federal, state and local employment and data privacy laws is essential, as is ensuring that AI systems evaluate only job-relevant criteria. By building an internal governance framework and ensuring that policies and monitoring tools are adapted over time, financial institutions can successfully harness the efficiency of AI while still upholding their obligations and ethical standards.
Information in this article is provided for general information purposes only and does not constitute legal advice or an opinion of any kind. You should consult with legal counsel for advice on your institution’s specific legal issues.

Joey K. Wright, Attorney, Amundsen Davis LLC
As an attorney with nearly a decade of experience, Joey uses her knowledge and voice to make a difference for her clients and their businesses. She thoughtfully represents employers facing a variety of employment issues, including hiring and firing, discrimination and harassment, compensation, and discipline.
Email Joey at JWright@AmundsenDavisLaw.com.
Amundsen Davis LLC is a Diamond Associate Member of the Indiana Bankers Association.