First, understanding an “algorithm”
According to the Oxford Dictionary, the definitions of ALGORITHM and LAW are very similar.
Algorithm
“A process or set of rules to be followed in calculations or other problem-solving operations, especially by a computer.”
Law
“The system of rules . . . regulating the actions of its members and which . . . may [be] enforce[d] by the imposition of penalties.”
The only difference, really, is that in a democracy, citizens have a right to set the laws that govern them. Not so with algorithms – yet.
Letting math break the law
Democracies the world over have enacted laws to protect human rights. When people in a corporation make decisions that breach those laws, there is generally a price to pay. But when a person writes an algorithm that breaks the law and a corporation then profits from that algorithm, the damage gets mostly shrugged off. Ergo, breaking the law using math can be reasonably profitable. For now.
As Cathy O’Neil so clearly highlights in her book, “Weapons of Math Destruction,” algorithms now determine many aspects of our lives. Pointing out that these math equations come with hidden and not-so-hidden biases, she exposes the myriad ways that math can undermine people’s individuality and hard work based on biased assumptions baked into the code. Algorithms now:
- run company hiring practices,
- decide who gets a loan,
- determine how student papers are graded,
- decide who gets accepted to the most prestigious schools,
- decide what sentence to give to someone convicted of a crime,
- determine who should be cropped out of photos (spoiler alert: brown-skinned people),
- and the list goes on.
Yes, there are lots of beneficial algorithms, too, but not one algorithm, good or bad, is required to function transparently, so the public has no way to see how their lives are being affected or whether the math is abiding by the laws of the land.
Algorithms are opinions embedded in code.
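O’Neil’s point is easy to make concrete. The toy loan-scoring rule below is entirely invented – the features, weights, and threshold are assumptions for illustration, not any lender’s real model – but it shows how every number in an algorithm is a human judgment call:

```python
# A hypothetical loan-scoring rule (all names and numbers invented).
# Every value below is a human choice: which features count, how much they
# weigh, where the cut-off sits. None of it is neutral just because it is math.

def loan_score(income: float, years_at_address: float, postal_code_risk: float) -> float:
    return (
        0.6 * (income / 100_000)         # someone decided income matters most
        + 0.3 * (years_at_address / 10)  # someone decided "stability" is worth 30%
        - 0.5 * postal_code_risk         # someone decided your postal code counts against you
    )

def approve(score: float, threshold: float = 0.5) -> bool:
    # The threshold is an opinion too: move it, and a different set of people gets loans.
    return score >= threshold

print(approve(loan_score(income=85_000, years_at_address=3, postal_code_risk=0.8)))
```

Change any one of those numbers and a different group of applicants is approved. That is what it means for an opinion to be embedded in code.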
Bad math hurts real people
Why does transparency matter? Bad math extends into the real world with real consequences. Yet when it is discovered that an algorithm has broken the law, as in the case of Amazon’s “secret AI recruiting tool that showed bias against women,” regulators seldom offer up more than a mild shrug. It will be the companies themselves who improve the situation, and it starts with requirements: require that products be developed and tested to ensure they respect diversity and human rights legislation.
In Amazon’s case, its algorithm-driven recruiting system “was not rating candidates for software developer jobs and other technical posts in a gender-neutral way.” Many female engineers lost out on great job opportunities and, equally, Amazon lost out on a huge set of engineers who simply didn’t fit the math because they were female. Amazon’s recruiting engine learned to “penalize resumes including the word ‘women’s’” until the company discovered the problem (a toy sketch of how this kind of bias gets learned appears just below). As a result, too many women engineers had their resumes shoved to the bottom of a very large pile. Hiring managers simply never saw them.
The public can look at the overwhelming number of male engineers at Amazon and not realize that a faulty bit of math made sure that women were seldom hired. How many women’s careers were harmed because the math evaluating their resumes was biased in a way that breached their jurisdiction’s human rights codes? Bias gets baked into lots of things – including hand sanitizing dispensers, it turns out.
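Here is that sketch. It is a toy illustration, not Amazon’s actual code or data – the tokens and weights are invented – but it shows how a scorer trained on a decade of mostly-male hires can end up with a negative weight on a word like ‘women’s’:

```python
# A toy bag-of-words resume scorer. The weights are invented, not Amazon's,
# but they show the reported failure mode: a model trained on ten years of
# mostly-male hires learns that words common in those resumes predict "hire",
# while words correlated with female candidates (like "women's") get penalized.

LEARNED_WEIGHTS = {
    "python": 1.2,
    "distributed": 0.9,
    "captain": 0.4,     # "chess club captain" showed up in past (mostly male) hires
    "women's": -1.5,    # "women's chess club captain" did not, so the token scores negative
}

def score_resume(text: str) -> float:
    return sum(LEARNED_WEIGHTS.get(token, 0.0) for token in text.lower().split())

print(score_resume("python distributed chess club captain"))          # ranks high
print(score_resume("python distributed women's chess club captain"))  # same skills, ranks lower
```

No one typed “penalize women” into the system; the penalty fell out of biased training data that nobody was required to check.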
Can a hand sanitizing dispenser be racist?
Yes it can. And soap dispensers, too. A friend’s husband, a surgeon in private practice, came home and said, “I think my hand sanitizers are racist.”
The new hand sanitizing dispensers that the surgeon recently purchased work fine for light-skinned people. But for his darker-skinned staff and patients, the machine will not dispense any sanitizer. (The doctor offered us this video as an example, using a dark glove instead of asking his staff to yet again put their hands under a dispenser that won’t work for them.) This issue has been going on for years. In a 2017 video, purportedly filmed at a soap dispenser in a men’s washroom at Facebook, the machine simply does not provide soap for a darker-skinned man. To get some soap, he has to place a paper towel under the machine’s sensor.
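A common explanation for these failures – an assumption on my part, not something the manufacturers of these particular machines have confirmed – is that the dispenser uses an infrared reflectance sensor: it shines a light and fires the pump only if enough light bounces back. Darker skin reflects less of that light, so a threshold calibrated only on light skin never trips. A minimal sketch of that logic, with invented numbers:

```python
# Hypothetical dispenser logic; the reflectance values and threshold are invented.
# The failure mode: if the activation threshold was only ever tuned against
# light-skinned hands, lower-reflectance hands never trigger the pump.

ACTIVATION_THRESHOLD = 0.45  # calibrated during testing... on whose hands?

def should_dispense(reflected_light: float) -> bool:
    # Fire the pump only if enough emitted light bounces back to the sensor.
    return reflected_light >= ACTIVATION_THRESHOLD

print(should_dispense(0.70))  # lighter skin reflects more: dispenses
print(should_dispense(0.30))  # darker skin reflects less: nothing happens
print(should_dispense(0.85))  # a paper towel: works every time
```

The fix is not exotic: test the threshold against the full range of skin tones before the product ships.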
Twitter crops out black faces
In September 2020, a number of Twitter users noticed that the company’s algorithms were racist, as the math routines cropped out black faces, favoring white faces in their users’ feeds. One user, Colin Madland, looked for this racial bias and quickly found it. Why was he specifically looking? Earlier, he’d noticed that a black colleague had been algorithmically erased from a Zoom conference, so he began to check other services. Of course, Twitter apologized and claimed it had tested these math rules.
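Twitter’s preview cropping relied on a “saliency” model that predicts where a viewer’s eye will land and crops around the highest-scoring region. The sketch below is not Twitter’s code – the scores are invented – but it shows why any bias in those saliency scores becomes a bias in who stays inside the preview:

```python
# Sketch of saliency-based cropping, not Twitter's actual code. The crop is
# centred on whatever region the model scores highest, so any bias in the
# saliency scores becomes a bias in who stays inside the preview.

from typing import List, Tuple

def choose_crop(saliency_by_row: List[float], crop_height: int) -> Tuple[int, int]:
    # Centre the crop on the row the model finds most "interesting".
    image_height = len(saliency_by_row)
    best_row = max(range(image_height), key=lambda r: saliency_by_row[r])
    top = min(max(best_row - crop_height // 2, 0), image_height - crop_height)
    return top, top + crop_height

# A tall image with one face near the top and another near the bottom.
# If the model systematically scores one face higher, the other is cropped away.
scores = [0.2] * 1000
scores[50] = 0.9    # face the model happens to find more "salient"
scores[950] = 0.6   # face the model under-scores
print(choose_crop(scores, crop_height=400))  # -> (0, 400): the bottom face is gone
```

The cropping code itself contains no slur and no intent; the harm rides in on the scores it was never required to audit.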
Racist and sexist algorithms are not new
Do you recall Microsoft’s robot twitterer, “TayTweets,” in 2016? After going through a “learning” exercise, TayTweets spewed out unbelievably racist and sexist tweets, some of which were captured in a video featured on YouTube as a warning about the dangers of AI – however, the tweets were so nasty that if you follow the link to the evidence on YouTube today, you simply find that the video is no longer available.
Convenient, in a way, that the evidence of the dangers of AI has likely been hidden by . . . AI. Bad math has been impacting our lives for years, with the burden falling mostly on people of colour, religious and ethnic minorities, and, of course, on women.
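The reported failure mode was simple: Tay learned from whatever users sent it, with no adequate filter in between. The toy bot below is not Microsoft’s code, just a sketch of why unmoderated learning from the public goes wrong so quickly:

```python
# Not Microsoft's code: a toy chatbot that "learns" by adding whatever users
# say to its pool of possible replies. With no moderation step, a coordinated
# group of users can get it to repeat anything within hours.

import random

replies = ["hello!", "humans are great"]

def learn_from(user_message: str) -> None:
    replies.append(user_message)  # the missing requirement: filter before learning

def respond() -> str:
    return random.choice(replies)

learn_from("something hateful a troll typed")
print(respond())  # sooner or later, the bot says it back
```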
Companies don’t set out to break laws (usually)
Algorithms that breach human rights legislation are algorithms that have a glitch.
In cases where math is breaking the law, there are missing criteria, or coding requirements, that no one told the programmers to follow. So they didn’t.
Having worked shoulder to shoulder with many programmers, I haven’t met one who would purposefully write a racist or sexist set of math rules to determine mortgage qualifications or evaluate resumes. At the same time, I haven’t worked at any company that mandated its code be tested for this sort of programmed bias. In fact, there seems to be some magical thinking around this issue: if math or a robot is making a decision, it must be de facto neutral because, after all, it is a machine, not a human. Time and again, decision-makers forget that it is us, flawed humans, who code the machines. There is no magic. And, yes, when people argue, “No. We’re now so advanced that programs are being written by other programs, not people,” they forget that the original programs were coded by, yes, flawed humans.
If a company wishes to be truly diverse, not only in its hiring practices but in what it produces, establishing proper requirements that govern the development and testing of non-racist, non-sexist products is key. Requirements of this sort, setting out the criteria against which an algorithm must be tested to prove that it is not harming people or society, would have helped Twitter properly test and fix the racist algorithm before offering it to its users. Amazon might have hired some incredibly gifted female engineers had it first taken the time to ensure the data sets it was using to score candidates did not already contain implicit bias in favour of male engineers. Such requirements might also have helped Microsoft executives think twice about using Twitter users to “train” its Twitter bot. Had the company been expected to first prove (test) that the product it was delivering to the public passed certain hurdles that reflected human rights, more robust verifications would have taken place. Instead, as The New York Times reported, by “learning” from Twitter users, Microsoft’s Twitter bot quickly became a “racist jerk.”
Good leaders always strive to produce products and services that respect the law. It has caught a lot of us off guard, but one way of helping companies respect diversity is to explicitly bake these requirements into their development processes. That way, they can ensure their HR departments won’t be tossing aside resumes from people who don’t look like the programmers who built the hiring algorithm, and they won’t be selling dispensers that act like a “racist jerk” and withhold hand sanitizer based on the color of a person’s skin.
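What would such a requirement look like in practice? One possibility, sketched below, is a pre-release acceptance test that feeds the model pairs of inputs differing only in a gendered term and fails the build if the scores diverge. The scorer and the 0.01 tolerance here are stand-ins for illustration, not any company’s real pipeline:

```python
# A hypothetical acceptance test of the kind a "develop and test for human
# rights" requirement could mandate before release. The scorer is a stand-in
# for whatever model a team actually ships; the tolerance value is invented.

def score_resume(text: str) -> float:
    # Stand-in for the production model under test.
    skills = {"python", "distributed", "systems"}
    return float(sum(token.strip(",") in skills for token in text.lower().split()))

def test_gendered_term_does_not_change_score() -> None:
    base = "captain of the chess club, python, distributed systems"
    variant = base.replace("chess club", "women's chess club")
    diff = abs(score_resume(base) - score_resume(variant))
    assert diff < 0.01, "resumes differing only by a gendered term must score the same"

test_gendered_term_does_not_change_score()  # a biased model would fail this check
print("bias check passed")
```

A check like this costs a few lines and a few minutes in a build pipeline; the point is that someone has to require it before anyone writes it.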
Key Point
Product development requirements should be enriched to reflect genuine diversity and to align with human rights legislation.