ADCU launches legal action against Uber’s “unfair facial recognition dismissal” of a driver

Updated: Oct 6, 2021

Image credit: ADCU/Pixabay remixed

In what the App Drivers & Couriers Union (ADCU) is calling a “global first for the gig economy”, the Union has launched legal action against Uber over the “unfair dismissal” of a driver and a courier after the company’s facial recognition system failed to identify them. The case is also being supported by the Equality & Human Rights Commission (EHRC) and Worker Info Exchange.

In March 2020, Uber introduced a facial recognition system incorporating Microsoft’s Face API. Workers are prompted to provide a real-time selfie and face dismissal if the system fails to match the selfie with a stored reference photo. In turn, private hire drivers who have been dismissed have also faced automatic revocation of their private hire driver and vehicle licences by Transport for London.

In September 2020, the Westminster Magistrates' Court renewed Uber’s licence for London but set a condition that it must “maintain appropriate systems, processes and procedures to confirm that a driver using the app is an individual licensed by TfL and permitted by ULL to use the app”. This condition facilitated the introduction of “harmful” facial recognition systems.

Lawyers for the Union will argue that facial recognition systems, including those operated by Uber, are inherently faulty and generate particularly poor accuracy results when used with people of colour.

The US National Institute of Standards and Technology (NIST) Face Recognition Vendor Test indicates that faces classified in NIST’s database as African American or Asian were 10 to 100 times more likely to be misidentified than those classified as white.

A 2020 report by the Alan Turing Institute explored the discriminatory effect of facial recognition software and noted in particular the conclusion of the Equality and Human Rights Commission, which expressed disappointment in the Government’s continued use of facial recognition software despite evidence that the software is susceptible to error on the grounds of skin colour.

A 2018 study conducted by the Massachusetts Institute of Technology concluded that three facial recognition programmes (including the Microsoft software introduced by the Respondent) produced errors at a rate of 0.8% for men with light skin, whereas the error rate rose to 20–34% for women with dark skin.

Such is the discriminatory impact of the software that large organisations have reconsidered their use of it:

  • Amazon has announced a moratorium on the sale of facial recognition software;

  • IBM has discontinued its facial recognition business over concerns that it was implicated in racial profiling;

  • Axon, a maker of body cameras for the police in the United States, has refused to deploy facial recognition in its cameras due to concerns as to its accuracy (and the racial consequences of any such inaccuracies);

  • Microsoft has withdrawn sales of its software to US police departments in the wake of the Black Lives Matter movement.

The case has been filed at the Central London Employment Tribunal on behalf of Pa Edrissa Manjang, a former UberEats courier, and Imran Javaid Raja, a former Uber private hire driver. The union has launched a Crowdjustice campaign to help fund the case.

Yaseen Aslam, President of ADCU, said: “Last year Uber made a big claim that it was an anti-racist company and challenged all who tolerate racism to delete the app. But rather than root out racism, Uber has bedded it into its systems, and workers face discrimination daily as a result.”

James Farrar, General Secretary of ADCU and Director of Worker Info Exchange, said: “To secure renewal of their licence in London, Uber introduced a flawed facial recognition technology which they knew would generate unacceptable failure rates when used against a workforce mainly composed of people of colour. Uber then doubled down on the problem by not implementing appropriate safeguards to ensure appropriate human review of algorithmic decision making.”

Paul Jennings, Partner, Bates Wells, said: “It is clear that artificial intelligence and automated decision making can have a discriminatory impact. The consequences, in the context of deciding people’s access to work, can be devastating. These cases are enormously important. AI is rapidly becoming prevalent in all aspects of employment and important principles will be established by the courts when determining these disputes.”
