Bias in algorithms
Last week a colleague of mine shared the news that the UK Home Office has agreed to scrap its controversial ‘visa-streaming’ immigration algorithm after a successful legal challenge.
🚨Breaking news🚨
We've got the Home Office to stop using its racist algorithm to sift visa applications!
The algorithm gave “speedy boarding” to white people – the Home Office has been forced to scrap it after we & @foxglovelegal launched legal action https://t.co/qKSr6gEkGQ
— JCWI (@JCWI_UK) August 4, 2020
So what did it do? Essentially, it classified applications according to a traffic light system: “Green light” applicants were fast-tracked through the visa process, while “Red light” applicants were subjected to increased scrutiny. Once assigned by the algorithm, the classification played a major role in the outcome of the visa application.
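To make the mechanics concrete, here is a minimal sketch of how such a streaming tool might assign categories. Everything in it is an assumption for illustration: the risk score, the thresholds, and the “amber” middle tier are invented, since the actual Home Office system was never made public.

```python
# Hypothetical sketch of a traffic-light streaming tool. The scoring
# model, the thresholds, and the "amber" tier are invented for
# illustration; the real Home Office system was never published.

def stream_application(risk_score: float) -> str:
    """Map a model's risk score to a traffic-light category."""
    if risk_score < 0.2:
        return "green"  # fast-tracked with minimal checks
    if risk_score < 0.6:
        return "amber"  # standard processing
    return "red"        # escalated for heightened scrutiny

print(stream_application(0.1))  # green
print(stream_application(0.8))  # red
```

The point is how innocuous the mechanism looks: the discrimination lives not in the thresholds but in whatever produced the risk score.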
So why the legal challenge? Surely a supervised classification algorithm that achieves a low error rate against historic, human-assessed decisions on the same applications is a good thing? Not when the algorithm perpetuates institutional bias and sets up a toxic feedback loop of reinforced prejudice.
In practice, this meant that the traffic light classification was highly correlated with whether an applicant came from a “suspect” country. Applications from these countries received more scrutiny, experienced more delay, and were more likely to be refused. The algorithm then “learned” from those very decisions, reinforcing the weight of nationality in future predictions.
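The feedback loop is easy to demonstrate with a toy simulation. Every number below is invented, and the flat “extra scrutiny penalty” stands in for the real-world effect of red-streamed applications being refused more often; only the mechanism matters.

```python
# Toy simulation of the feedback loop described above. All figures
# are invented; the point is the mechanism: biased historic decisions
# train the model, and the model's streaming then produces even more
# biased decisions.

refusal_rate = {"suspect_country": 0.30, "other": 0.10}  # biased starting point

for generation in range(5):
    # 1. The model "learns" current refusal rates from past decisions.
    model = dict(refusal_rate)

    # 2. Groups the model rates as high-risk are streamed "red" and
    #    face extra scrutiny, which itself raises their refusal rate.
    extra_scrutiny_penalty = 0.05
    refusal_rate = {
        group: min(1.0, rate + (extra_scrutiny_penalty if model[group] > 0.2 else 0.0))
        for group, rate in refusal_rate.items()
    }
    print(generation, refusal_rate)

# The gap between the groups widens with every retraining cycle,
# even though the underlying applications never changed.
```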
“The Home Office’s own independent review of the Windrush scandal found that it was oblivious to the racist assumptions and systems it operates. This streaming tool took decades of institutionally racist practices, such as targeting particular nationalities for immigration raids, and turned them into software. The immigration system needs to be rebuilt from the ground up to monitor for such bias and to root it out.” — Chai Patel, Legal Policy Director of JCWI
Whilst the successful legal challenge is good news, it is a chilling reminder of the new challenges of the data era. Or is it an old challenge reframed? With these new machine learning tools, we simply have the ability to do what we always did, but at scale, much more efficiently and much faster. It sounds like all our private and public institutions could do with an algorithm audit.
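What might such an audit look like in practice? One common starting point is a disparate-impact check: compare approval rates across groups and flag large gaps. Below is a minimal sketch using the “four-fifths rule” heuristic with invented data; a real audit would of course go much further than a single ratio.

```python
# Minimal sketch of one common audit check: the "four-fifths rule"
# comparing approval rates across groups. The data is invented.

from collections import defaultdict

decisions = [
    ("suspect_country", "approved"), ("suspect_country", "refused"),
    ("suspect_country", "refused"), ("other", "approved"),
    ("other", "approved"), ("other", "refused"),
]

totals, approved = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    approved[group] += outcome == "approved"

rates = {g: approved[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"impact ratio = {ratio:.2f}")

# A ratio below 0.8 is a conventional red flag for disparate impact
# and would warrant a deeper look at how the decisions are made.
```

A check like this would not have fixed the streaming tool, but it would have made the bias visible long before a court case did.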