Research

Hiring with Algorithmic Fairness Constraints (Job Market Paper)

Prasanna Parasurama and Panos Ipeirotis

This study is motivated by the observation that firms are adopting various diversity policies to increase their workforce diversity. As hiring becomes increasingly aided by algorithms, some of these diversity policies are being inscribed as algorithmic fairness constraints in hiring systems (e.g., in LinkedIn Recruiter). At the same time, algorithms rarely work in isolation; almost always, a human decision-maker (such as a hiring manager) makes the ultimate hiring decision based on the algorithm’s recommendations. The effectiveness of these fairness constraints therefore depends on how they interact with the human decision-maker. When fairness constraints do not work as well as intended, the conventional wisdom is to attribute the ineffectiveness to the human decision-maker’s inherent bias. In this paper, we seek to understand what other factors contribute to the (in)effectiveness of these fairness constraints. To do so, we first develop a theoretical model of hiring with fairness constraints involving an algorithmic recommender and an unbiased human decision-maker. We show theoretically that the human decision-maker need not be biased for the fairness constraint to be ineffective. The effectiveness of the fairness constraint depends on (1) the size of the applicant pool, (2) the predictive power of the algorithmic recommender, and (3) the correlation between the algorithmic recommender’s and the human decision-maker’s assessment criteria. Interestingly, the more correlated the two assessment criteria are, the less effective the fairness constraint becomes. We then estimate the model parameters and empirically test our theoretical predictions using hiring data from IT firms. Counterfactual policy simulations show that the fairness constraint can modestly improve the gender diversity of hires, but its effectiveness varies substantially across job types. These findings offer several practical implications for the design and implementation of fairness constraints in algorithmic hiring systems.
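To make the correlation mechanism concrete, here is a minimal simulation sketch (not the paper’s actual model): an algorithm shortlists candidates under a demographic quota, and an unbiased human hires the shortlisted candidate with the highest human score. The Gaussian scores, pool size, quota, and group shares are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(rho, n_apps=200, k=10, quota=0.5, minority_share=0.3, n_trials=2000):
    """Share of hires from the minority group under a shortlist quota.

    Algorithm and human scores are correlated standard normals (correlation
    rho). The algorithm shortlists k candidates, reserving quota*k slots for
    the minority group; the human hires the shortlisted candidate with the
    highest human score. All parameter values are illustrative.
    """
    hires_minority = 0
    for _ in range(n_trials):
        minority = rng.random(n_apps) < minority_share
        algo = rng.standard_normal(n_apps)
        human = rho * algo + np.sqrt(1 - rho**2) * rng.standard_normal(n_apps)
        # Constrained shortlist: the best minority candidates fill the
        # reserved slots; remaining slots go to the best of everyone else.
        n_res = int(quota * k)
        min_idx = np.where(minority)[0]
        reserved = min_idx[np.argsort(algo[min_idx])[::-1][:n_res]]
        rest = np.setdiff1d(np.arange(n_apps), reserved)
        open_slots = rest[np.argsort(algo[rest])[::-1][: k - len(reserved)]]
        shortlist = np.concatenate([reserved, open_slots])
        hired = shortlist[np.argmax(human[shortlist])]
        hires_minority += minority[hired]
    return hires_minority / n_trials

for rho in (0.0, 0.5, 0.9):
    print(f"rho={rho:.1f}: minority hire share = {simulate(rho):.2f}")
```

When rho is high, the human’s ranking mirrors the algorithm’s, so the quota-added candidates rarely win the final choice; when rho is low, the human’s independent assessment gives shortlisted minority candidates a real chance, reproducing the qualitative prediction that higher correlation weakens the constraint.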

Role of Online Talent Sourcing in Occupational Gender Segregation

Prasanna Parasurama, Anindya Ghose, and Panos Ipeirotis

The underrepresentation of women in IT is well documented. Studies have considered different factors that lead to this underrepresentation: demand-side factors such as discrimination and supply-side factors such as self-selection in job applications. However, the extant literature has focused narrowly on the traditional applicant-initiated hiring process (who applies to which jobs; who receives an interview once they apply). With the advent of professional networking sites such as LinkedIn, it is increasingly common for recruiters to initiate the hiring process by inviting passive candidates to apply or interview. Unlike the traditional hiring process, this type of sourcing is inherently facilitated and constrained by LinkedIn’s recommendation algorithm. To understand whether this contributes to occupational segregation, we study demand-side and supply-side choices in hiring on LinkedIn (who is contacted; who responds). A main challenge in studying hiring choices on LinkedIn with HR data is that researchers typically observe only the candidates who were sourced on LinkedIn, not the choice set of candidates that an employer considers there. We address this challenge by taking a dataset containing the universe of public LinkedIn profiles and matching it with hiring data from IT firms to reconstruct the choice set. Our results suggest that LinkedIn does not contribute to the existing occupational segregation in IT and can, in fact, help mitigate it. We find that women are more likely than men to be contacted on LinkedIn and, once contacted, equally or more likely to receive an interview and an offer. Women are also more likely to respond to recruiter invitations, suggesting that online talent sourcing can be a viable strategy to increase diversity in tech.
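As a stylized illustration of the choice-set construction and the demand-side analysis, the sketch below merges hypothetical profile and requisition tables and fits a contact-probability model. All table names, column names, and values are invented for illustration and are not the paper’s data.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical, in-memory stand-ins for (1) the universe of public LinkedIn
# profiles and (2) the candidates a recruiter actually contacted.
profiles = pd.DataFrame({
    "member_id": range(8),
    "gender":    ["F", "M", "F", "M", "F", "M", "F", "M"],
    "title":     ["SWE"] * 8,
    "metro":     ["NYC"] * 8,
    "yrs_exp":   [2, 3, 5, 4, 6, 2, 3, 7],
})
reqs = pd.DataFrame({"req_id": [101], "title": ["SWE"], "metro": ["NYC"]})
contacted_ids = {0, 3, 4}  # members the recruiter actually messaged

# Choice set: every profile matching the requisition's title and metro area,
# not just the candidates who were contacted.
pool = profiles.merge(reqs, on=["title", "metro"])
pool["contacted"] = pool["member_id"].isin(contacted_ids).astype(int)

# Demand side: within the choice set, does gender predict being contacted?
fit = smf.logit("contacted ~ C(gender) + yrs_exp", data=pool).fit(disp=0)
print(fit.params)
```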

Gendered Information in Resumes and its Role in Hiring Bias

Prasanna Parasurama, João Sedoc, and Anindya Ghose

In this study, we ask whether men and women applying to the same job write their resumes differently – and if so, what role this plays in hiring bias. Resumes are an integral part of hiring, and applicants often engage in impression management when writing them. For example, a female applicant might choose to highlight masculine characteristics or downplay feminine hobbies when applying to a male-typed IT job. On the one hand, this can increase her chances of getting hired, since masculine characteristics align with the gender type of the job. On the other hand, it can create a backlash effect, since those same masculine characteristics are incongruent with female gender stereotypes, decreasing her chances. We build on a long line of literature on norm violation and study how gender incongruence in resumes (i.e., female applicants with masculine characteristics and male applicants with feminine characteristics) affects callback rates. To measure gender incongruence, we train a deep-learning model on anonymized resumes to predict the gender of candidates. Using this model, we develop a measure of gender incongruence – i.e., how much the self-presented gender characteristics in a resume deviate from the self-reported gender of the candidate. Combining this measure with historical hiring data from technology firms, we test whether applicants whose resume gender characteristics deviate from their actual gender are less likely to receive a callback. We find three main results: (1) there is a significant amount of gendered information in resumes – even among anonymized applicants with similar job-relevant characteristics, our model can distinguish between genders with a high degree of accuracy; (2) women who exhibit masculine characteristics in their resumes are less likely to receive a callback, after controlling for job-relevant characteristics; and (3) extant dictionary-based (LIWC) methods underestimate this effect.
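A minimal sketch of how such an incongruence measure can be computed: the paper uses a deep-learning model, but any classifier that outputs P(female | anonymized resume text) supports the same construction. The toy data and the TF-IDF stand-in below are illustrative assumptions, not the paper’s pipeline.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy stand-ins for anonymized resume text and self-reported gender (1 = F).
texts = ["placeholder resume text alpha", "placeholder resume text beta"]
is_female = [1, 0]

# Stand-in classifier for the paper's deep-learning model.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, is_female)

# Gender incongruence: how far the gender signal expressed in the resume
# deviates from the self-reported gender. 0 = fully congruent; 1 = the
# resume reads entirely like the other gender. In practice the probability
# would come from held-out (cross-validated) predictions, not training data.
p_female = clf.predict_proba(texts)[:, 1]
incongruence = [abs(y - p) for y, p in zip(is_female, p_female)]
print(incongruence)
```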

Designing Fair Resume Screening Algorithms

Prasanna Parasurama and João Sedoc

Advances in language models have fundamentally changed the nature of many natural language tasks. In resume screening, for example, simple keyword-based matching has been replaced by sophisticated NLP models, promising higher-quality matches and increased efficiency. At the same time, the black-box nature of these models has raised concerns about the potential for bias in downstream algorithmic hiring applications. For example, in 2018, Amazon came under fire for a resume screening tool that was reportedly biased against women. The model had learned from historical hiring data that men were more likely to be hired and therefore rated male resumes higher than female resumes. Although candidate gender was not explicitly included in the model, it learned to discriminate between male and female resumes based on proxies for gender such as hobbies, lexical choice, writing style, etc. To address this challenge, we propose two debiasing methods for algorithmic resume screening and experimentally evaluate their effectiveness on a large hiring dataset from U.S. technology firms. Our first method relies on gender obfuscation: we iteratively remove features from resumes that are predictive of gender while preserving job-relevant features. Our second method relies on adversarial debiasing: we train a screening algorithm to be good at predicting an applicant’s hiring outcome but poor at predicting the applicant’s gender. Preliminary results show that removing names from resumes (a practice currently used in industry) only minimally reduces bias, whereas our proposed debiasing methods can fully eliminate bias with only a small accuracy tradeoff.
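A minimal PyTorch sketch of the second (adversarial) method, using the standard gradient-reversal construction: a shared encoder feeds a hiring-outcome head and a gender adversary, and the reversed gradient pushes the encoder to remove gender signal from the representation. The architecture, dimensions, and random data are placeholders, not the paper’s implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient's sign on the
    backward pass, so the encoder is trained to *hurt* the gender adversary."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

encoder = nn.Sequential(nn.Linear(300, 64), nn.ReLU())  # resume features -> representation
hire_head = nn.Linear(64, 1)    # predicts callback/hire outcome
gender_head = nn.Linear(64, 1)  # adversary: predicts gender

opt = torch.optim.Adam(
    [*encoder.parameters(), *hire_head.parameters(), *gender_head.parameters()],
    lr=1e-3,
)
bce = nn.BCEWithLogitsLoss()

# Placeholder data: resume feature vectors and binary labels.
x = torch.randn(128, 300)
y_hire = torch.randint(0, 2, (128, 1)).float()
y_gender = torch.randint(0, 2, (128, 1)).float()

for step in range(200):
    z = encoder(x)
    loss_hire = bce(hire_head(z), y_hire)
    # The adversary learns to predict gender from z, while the reversed
    # gradient pushes the encoder to scrub gender information from z.
    loss_adv = bce(gender_head(GradReverse.apply(z, 1.0)), y_gender)
    opt.zero_grad()
    (loss_hire + loss_adv).backward()
    opt.step()
```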