[A]rtificially Sex[I]st

Can artificial intelligence be sexist? What about racist? In a previous article, “HELP! The Robots are Coming!”, we touted the numerous benefits of using artificial intelligence in the workplace. We even used cute references to Will Smith’s film I, Robot to drive home the point that there may come a time when it’s People v. Robots. Finally, we noted that AI needs individuals to create the programs it runs on, and that you can train your workers to do just that. All of that is still true, but no system is perfect, and AI does not let you escape discrimination or other legal issues. What do I mean? Take a look at Amazon:

Recently, Amazon had to scrap a recruiting project because it unintentionally discriminated against women. Amazon had spent years developing software to automate the recruiting process. The software was designed to select the best candidates from hundreds of incoming applications by checking them against keywords critical to the job, and it was trained on successful applications from existing employees. This may come as a shock to you, but the technology industry is predominantly male, and so is the majority of Amazon’s workforce. The software recognized that most of the successful applications from existing employees came from men, concluded that men were better suited for the job, and filtered out applications from women.
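To make that mechanism concrete, here is a minimal, hypothetical sketch in Python using scikit-learn. The resumes, labels, and keywords are all invented; this illustrates how a model trained on a mostly-male hiring history can latch onto gendered words, not how Amazon’s actual system worked.

```python
# Hypothetical toy example of a screening model absorbing bias from
# historical data. All resumes, labels, and tokens below are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# "Historical" resumes from a mostly-male pool of past hires, so
# gendered tokens end up correlated with the hired/not-hired label.
resumes = [
    "software engineer java mens chess club captain",    # hired
    "backend developer python mens rowing team",         # hired
    "software engineer java womens coding society",      # not hired
    "data analyst sql womens chess club captain",        # not hired
]
hired = [1, 1, 0, 0]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(resumes)
model = LogisticRegression().fit(X, hired)

# Inspect the learned weights: the gender-correlated tokens get strong
# weights even though gender says nothing about job performance.
for word, weight in zip(vectorizer.get_feature_names_out(), model.coef_[0]):
    print(f"{word:>10}  {weight:+.2f}")
```

On this toy data, the word “womens” receives a negative weight simply because it appeared only on rejected applications, which is the same pattern that reportedly sank Amazon’s tool.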

Depending on the data it is fed, the same kind of algorithm Amazon used could just as easily discriminate by race. Suppose you feed it data showing that individuals with longer commutes to work are more likely to quit, or that individuals with certain socio-economic characteristics or a criminal background are more likely to quit or be disciplined. Although your intent was to prevent turnover and strengthen your workforce, the algorithm could end up discriminating against an entire group of applicants.
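Here is a small, self-contained sketch (invented numbers, hypothetical groups “A” and “B”) of how a facially neutral feature like commute time can act as a proxy for a protected characteristic and produce exactly that kind of disparate impact:

```python
# Hypothetical sketch of proxy discrimination. Suppose commute time
# correlates with group membership because of residential patterns;
# a turnover-based screen on commute time then filters one group out
# at a far higher rate. All data below is invented.
applicants = [
    # (group, commute_minutes)
    ("A", 15), ("A", 20), ("A", 25), ("A", 30),
    ("B", 45), ("B", 50), ("B", 55), ("B", 60),
]

# A naive rule learned from turnover data: reject anyone predicted
# likely to quit, i.e. anyone with a long commute.
def passes_screen(commute_minutes):
    return commute_minutes <= 35

for group in ("A", "B"):
    results = [passes_screen(c) for g, c in applicants if g == group]
    print(f"group {group}: {sum(results)}/{len(results)} pass the screen")
# group A: 4/4 pass, group B: 0/4 pass -- disparate impact from a
# feature that never mentions group membership at all.
```

Note that the rule never looks at race, gender, or any other protected characteristic; the harm comes entirely from what the commute feature correlates with.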

Computers aren’t inherently racist or sexist, but they can learn to be. We are all a bit biased, and when we feed machines data that reflects our own prejudices, they mimic them. Don’t believe me? Turn on Netflix and compare your home screen to that of a friend, co-worker, or even your significant other. Looks different, doesn’t it? Based on your biases (i.e., the movies you’ve watched, liked, and disliked), Netflix customizes what it thinks you want to see and hides what it thinks you don’t. That may streamline your browsing, but there may be a country western film out there (no offense to anyone who likes country westerns) that would have suited my entertainment needs for the night, one I never had the opportunity to watch. The same is true of employment screening software.

That said, it is important to keep these risks in mind when creating the algorithms behind these programs. One simple way to head off discrimination issues is to do test runs of the algorithm or software and audit the results for discriminatory patterns before relying on them, as sketched below. Another is to continuously update the algorithm with fresh, representative streams of data to help wash out some of its biases so it can run at peak efficiency. Remember, you’re only as strong as your weakest link (or in this case, your most biased employee)!
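As a sketch of what such a test run might look like, the audit below implements the EEOC’s four-fifths (80%) rule of thumb: flag the screen if any group’s selection rate falls below 80% of the highest group’s rate. The groups and numbers are illustrative, and this assumes you can tag applicants by group for auditing purposes:

```python
# Minimal bias audit using the EEOC four-fifths rule of thumb.
# selected_by_group maps group -> (number selected, total applicants);
# all figures in the example below are invented.
def four_fifths_check(selected_by_group: dict[str, tuple[int, int]]) -> bool:
    rates = {g: sel / total for g, (sel, total) in selected_by_group.items()}
    top = max(rates.values())
    flagged = {g: r for g, r in rates.items() if r < 0.8 * top}
    for g, r in flagged.items():
        print(f"WARNING: group {g} selected at {r:.0%} vs. top rate {top:.0%}")
    return not flagged

# Example audit of one screening run:
ok = four_fifths_check({"men": (50, 100), "women": (20, 100)})
print("passes four-fifths check:", ok)  # False: women selected at 20% vs. 50%
```

A failing check like this is a signal to go back and examine what the algorithm learned before any hiring decision relies on it.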
