The Bias Detection Tool by Symphony Talent leverages artificial intelligence (AI) to identify and mitigate biased language in job descriptions and requirements, ensuring they are inclusive and free of gender, racial, and other biases. This case study focuses on the user experience (UX) design and functionality of the AI-powered tool, which is primarily used in the job details section of HR tech products.
(Screenshot: the tool's old UI in the web application)
Problem Statement
Background
In today's competitive job market, companies strive to attract diverse talent to foster innovation and creativity within their teams. However, many job descriptions unintentionally contain biased language that can deter qualified candidates from underrepresented groups. This unintentional bias can stem from gender-specific terms, age-related language, and other forms of subtle discrimination.
Context and Purpose
Purpose: To provide a bias-free job description and requirement section in job postings.
Target Users: HR professionals, recruiters, and hiring managers.
Use Case: Using AI to detect and correct biased language in job descriptions before they are posted.
The Problem
Despite efforts to create inclusive workplaces, many organizations struggle to ensure their job descriptions are free from biased language. This challenge is compounded by the volume of job postings and the subjective nature of detecting bias, making it difficult for HR professionals and hiring managers to consistently identify and eliminate biased terms.
Key Issues
Unintentional Bias: Job descriptions often contain language that unintentionally excludes certain groups, such as gender-specific terms like "salesman" or age-related phrases.
Manual Detection Challenges: Manually reviewing job descriptions for bias is time-consuming and prone to human error, leading to inconsistent results.
Impact on Diversity: Biased job descriptions can discourage diverse candidates from applying, limiting the talent pool and impacting the company's diversity and inclusion goals.
Lack of Awareness: HR professionals and hiring managers may not always be aware of the biased language in their job postings, further perpetuating the issue.
Objective
To address the challenges outlined above, there is a need for an automated solution that leverages AI to detect and suggest corrections for biased language in job descriptions. This solution should do the following (a minimal sketch appears after the list):
Identify Biased Terms: Accurately detect biased language in job descriptions using AI algorithms.
Provide Inclusive Alternatives: Suggest neutral and inclusive terms to replace biased language.
Enhance Usability: Offer an intuitive and user-friendly interface for HR professionals and hiring managers.
Ensure Consistency: Provide consistent and reliable bias detection across all job postings.
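As a concrete illustration of these requirements, here is a minimal TypeScript sketch of what the detection contract could look like. This is not Symphony Talent's implementation: every name here is an assumption, and a small hard-coded rule table stands in for the AI model that does the real work.

```typescript
// Illustrative sketch only: a hard-coded rule table stands in for the AI model.
type Severity = "low" | "moderate" | "severe";

interface BiasFinding {
  term: string;          // the flagged word or phrase
  biasType: string;      // e.g. "gender", "age"
  severity: Severity;    // urgency of correction
  suggestions: string[]; // neutral, inclusive alternatives
}

// Hypothetical rules; the production tool would consult an AI model instead.
const RULES: Record<string, Omit<BiasFinding, "term">> = {
  "salesman": { biasType: "gender", severity: "severe", suggestions: ["salesperson", "sales executive"] },
  "young and energetic": { biasType: "age", severity: "moderate", suggestions: ["motivated", "enthusiastic"] },
};

// Scan a job description and return every detected bias in it.
function detectBias(text: string): BiasFinding[] {
  const findings: BiasFinding[] = [];
  for (const [term, rule] of Object.entries(RULES)) {
    if (new RegExp(`\\b${term}\\b`, "i").test(text)) {
      findings.push({ term, ...rule });
    }
  }
  return findings;
}
```

For example, calling detectBias on a description containing "salesman" would return a single gender finding with "salesperson" as the first suggested replacement, matching the behavior described later in this study.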
Goals
Improve Diversity and Inclusion: By eliminating biased language, the tool aims to attract a broader and more diverse pool of candidates.
Increase Efficiency: Automate the bias detection process to save time and reduce the burden on HR professionals and hiring managers.
Enhance Awareness: Educate users on the presence of biased language and promote the use of inclusive terms.
Foster Fair Hiring Practices: Ensure that job descriptions are fair and equitable, supporting the organization's diversity and inclusion objectives.
My Role
I spearheaded the design of this AI-powered bias detection tool, working closely with a fellow designer, the product owner, and the front-end development team. I also collaborated with the Director of Innovation to ensure the tool's functionality and effectiveness.
Research and Discovery
Initial Research:
To design an effective AI-powered bias detection tool, I began with comprehensive research to understand the scope and impact of biased language in job descriptions. This involved:
Literature Review: Studying existing research on bias in job descriptions and its effects on diversity and inclusion.
Competitive Analysis: Analyzing similar tools (Omdena, Envisioning.io, Quillbot) on the market to identify strengths, weaknesses, and opportunities for improvement.
User Interviews: Conducting interviews with HR professionals, recruiters, and hiring managers to gather insights on their challenges with creating unbiased job descriptions and their needs for a bias detection tool.
Stakeholder Meetings: Engaging with key stakeholders, including the product owner, Director of Innovation, and front-end development team, to align on goals and expectations.
Key Findings
From the research phase, several critical insights emerged:
Prevalence of Unintentional Bias: Many job descriptions contained unintentional biased language that could deter diverse candidates.
Need for Automation: HR professionals expressed a need for an automated solution to save time and ensure consistency in detecting and correcting biased language.
Importance of User Education: There was a significant need to educate users on the impact of biased language and the importance of using inclusive terms.
Integration with Existing Systems: The tool needed to seamlessly integrate with existing HR systems and workflows to ensure adoption and ease of use.
Personas and User Journeys
Based on the findings, I developed personas and user journeys to guide the design process. Key personas included:
HR Professional: Responsible for creating and reviewing job descriptions.
Recruiter: Focused on attracting a diverse pool of candidates.
Hiring Manager: Ensures job descriptions are fair and inclusive before posting.
Requirements Gathering
Collaborating with the product owner and the Director of Innovation, I outlined the functional and non-functional requirements for the tool. Key requirements mirrored the objectives above: accurate bias detection, inclusive alternative suggestions, an intuitive interface, and consistent results across job postings.
User Flow
I mapped the flow from one section to the next so that users could navigate the tool easily.
This covered the steps from typing or pasting a job description, through reviewing the detected issues, to applying the AI-generated suggestions, as sketched below.
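In code terms, this flow could be summarized as a simple pipeline, reusing the hypothetical detectBias and BiasFinding from the earlier sketch: analyze the text, prompt the user on each finding, and apply each accepted suggestion.

```typescript
// Hypothetical review flow: analyze, ask the user about each finding,
// and substitute the first suggested alternative when accepted.
async function reviewJobDescription(
  text: string,
  userDecides: (finding: BiasFinding) => Promise<boolean>, // UI prompt: accept this suggestion?
): Promise<string> {
  let result = text;
  for (const finding of detectBias(result)) {
    if (await userDecides(finding)) {
      result = result.replace(
        new RegExp(`\\b${finding.term}\\b`, "gi"),
        finding.suggestions[0],
      );
    }
  }
  return result;
}
```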
Design
I created wireframes to visualize the basic layout and structure of the tool.
These wireframes focused on positioning key elements such as content type selection, filters, analysis section, and action buttons.
Low-Fidelity Mockups
Once the basic structure and flow were established through wireframes, I proceeded to create low-fidelity mockups.
These mockups provided a more detailed visualization of the tool while still allowing for flexibility in design adjustments.
Simplified Visuals: The low-fidelity mockups were kept simple, focusing on functionality and user interaction rather than detailed visual design. This allowed for quick iterations based on feedback.
Key Components:
Content Type Selection: A dropdown menu for selecting the type of content to analyze.
Filters: Additional dropdown menus for applying filters.
Analysis Section: A clear area where the job description is displayed, with biased terms highlighted and AI-generated suggestions shown.
Severity Indicators: Icons or labels indicating the severity of detected biases.
Feedback Mechanism: Thumbs up/down icons for users to provide feedback on the suggestions.
Action Buttons: Clear buttons for accepting a suggestion, loading a new set, or clearing suggestions.
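These components map naturally onto a small UI state model. The sketch below continues the hypothetical types from the earlier examples; the field names are illustrative, not the production schema.

```typescript
// Hypothetical UI state mirroring the mockup's key components.
interface AnalysisState {
  contentType: "jobDescription" | "requirements"; // dropdown selection
  filters: string[];                              // active bias-type filters, e.g. ["gender", "age"]
  text: string;                                   // the description under review
  findings: BiasFinding[];                        // highlighted terms with severity and suggestions
  feedback: Record<string, "up" | "down">;        // thumbs up/down keyed by flagged term
}

// The action buttons and feedback icons, expressed as state transitions.
type Action =
  | { kind: "accept"; term: string; replacement: string }
  | { kind: "feedback"; term: string; vote: "up" | "down" }
  | { kind: "loadNewSet" }
  | { kind: "clearSuggestions" };
```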
User Testing
Feedback Collection: Shared the low-fidelity mockups with key stakeholders, including the product owner, front-end development team, and Director of Innovation, to gather feedback.
Iterative Improvement: Based on the feedback, I made necessary adjustments to the design. This iterative process ensured that the design met user needs and expectations before moving on to high-fidelity mockups and final implementation.
Usability and Accessibility
Ease of Use: The tool is straightforward, with clear instructions and an easy-to-navigate interface.
Accessibility: Considered adding tooltips and help icons to explain filter functions and severity levels.
Final Designs
Settings Page: The bias detection tool is located on the settings page.
Landing Page
Intuitive Design: The layout is clean, with a clear separation of functionalities.
Filters: Multiple filters to narrow down specific areas or types of biases to check.
Highlight and Suggestions
Bias Detection: Biased terms are highlighted in the text for easy identification.
Severity Levels: Each detected bias has a severity level (e.g., severe) indicating the urgency of correction.
Bias Types: The tool categorizes the type of bias detected, making it easier for users to understand the context of the bias.
Suggestion Box: Provides alternatives to biased words (e.g., "salesman" to "salesperson" or "sales executive").
Provide Feedback: Users can like or dislike the suggestions, providing valuable feedback for continuous improvement of the tool.
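Continuing the hypothetical sketches above, these final-design behaviors (accepting an alternative, voting on a suggestion, loading or clearing suggestions) could be wired up as a single state transition:

```typescript
// Hypothetical reducer tying the final-design behaviors to the earlier state model.
function reduce(state: AnalysisState, action: Action): AnalysisState {
  switch (action.kind) {
    case "accept": {
      // Swap the flagged term for the chosen alternative and retire its finding.
      const pattern = new RegExp(`\\b${action.term}\\b`, "gi");
      return {
        ...state,
        text: state.text.replace(pattern, action.replacement),
        findings: state.findings.filter((f) => f.term !== action.term),
      };
    }
    case "feedback":
      // Record the like/dislike vote so it can feed continuous improvement.
      return { ...state, feedback: { ...state.feedback, [action.term]: action.vote } };
    case "loadNewSet":
      // Re-run detection to load a fresh set of suggestions.
      return { ...state, findings: detectBias(state.text) };
    case "clearSuggestions":
      return { ...state, findings: [] };
  }
}
```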
(Video: walkthrough of the bias detection tool)
Conclusion
By developing an AI-powered bias detection tool for job descriptions, organizations can create a more inclusive and diverse workplace. This tool will help HR professionals and hiring managers identify and correct biased language, ensuring that job postings are welcoming to all candidates, regardless of gender, age, or other characteristics.
The UX design is intuitive, allowing users to easily identify and correct biased language. Future enhancements could include more detailed filter explanations, improved accessibility features, and more granular feedback options.
View live prototype