
Can we prevent age bias in AI technologies?

We hear a lot about the power of AI to transform the business landscape into the future, but instances of bias, including the potential for age bias, are increasingly emerging. These biases are unconsciously built into algorithms. An added complication is that the tech sector generally lacks diversity, especially in relation to older workers and women. Not surprisingly, AI throws up novel challenges which understandably create fears about its widespread use, including what it might mean for people's jobs in the long term and the impact of unidentified biases on people's working lives.

It's encouraging to note that these concerns appear to have had an impact. There is growing awareness of the need to better educate people about AI and how it works, as well as the necessity of addressing the potential for bias.

Among leaders in the field, there is an emphasis on understanding AI as an augmentation to human decision making rather than technology that overrides it. There is also a call for ethical AI, which incorporates the requirement for organisations to address bias in their systems and responsibly manage AI security. Big tech companies are addressing concerns through the co-creation of a group called Fairness, Accountability and Transparency in Machine Learning. This group is made up of a community of researchers and practitioners who want to ensure non-discriminatory processes in the development of machine learning.

Bias can be related to how data is coded, collected, selected or used to train an algorithm, reinforcing existing social biases around gender, age, ethnicity and sexuality. For example, an AI system that reviews job applications by drawing on a company's historical data could discriminate against women and older workers if the majority of employees have historically been younger and male. It's worth noting, though, that human screening of job applicants is not always non-discriminatory either.
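To make the mechanism concrete, here is a minimal sketch in Python of how a naive screener trained on skewed historical hiring data ends up rewarding similarity to past hires rather than merit. All the data, names and numbers below are synthetic and purely hypothetical, not drawn from any real system.

```python
# Illustration: a naive screener "trained" on historical hiring data
# learns the demographics of past hires, not applicant merit.
# All data here is synthetic and hypothetical.

from collections import Counter

# Historical hires: mostly younger and male (the skew described above).
historical_hires = (
    [{"gender": "male", "age_band": "under_40"}] * 80
    + [{"gender": "female", "age_band": "under_40"}] * 15
    + [{"gender": "male", "age_band": "over_40"}] * 4
    + [{"gender": "female", "age_band": "over_40"}] * 1
)

def train_screener(hires):
    """Learn how often each attribute value appears among past hires."""
    n = len(hires)
    freq = Counter()
    for hire in hires:
        for key, value in hire.items():
            freq[(key, value)] += 1
    return {k: count / n for k, count in freq.items()}

def score(model, applicant):
    """Score an applicant by similarity to the historical hire profile."""
    return sum(model.get((k, v), 0.0) for k, v in applicant.items())

model = train_screener(historical_hires)

young_male = {"gender": "male", "age_band": "under_40"}
older_female = {"gender": "female", "age_band": "over_40"}

# Two equally qualified applicants receive very different scores purely
# because of demographic correlations in the training data.
print(score(model, young_male))    # 1.79
print(score(model, older_female))  # 0.21
```

The point of the sketch is that no one wrote a rule saying "prefer younger men"; the preference emerges entirely from the composition of the historical data, which is why auditing training data matters as much as auditing the algorithm itself.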

Various AI technologies already in use have had to be turned off, modified or fine-tuned as a result of identified biases. For example, Amazon turned off its job screening AI system because it was found to discriminate against women. LinkedIn had to modify its system because it recommended male variations of women's names but not vice versa: a search for Anthea would prompt Andrew, but a search for Andrew would not prompt Anthea.

While AI offers significant potential to tackle complex business problems and help drive business growth, the effectiveness of these technologies relies, to a large degree, on the people who write the algorithms and those tasked with deploying them.

Two things seem critical: diversity among those who develop algorithms, and the establishment of diverse teams to identify and address bias. This reinforces the message that the human element is essential to AI.

What do you think? Leave a comment below.

Concerned about issues impacting experienced professionals?
Join War on Wasted Talent to help drive change

