LTDA: Removing the human element from the decision-making process poses many dangers to passengers
Artificial Intelligence is coming, we are told, and in some instances it is already here. It will have many uses in the modern world – many of them good, such as in the diagnosis of serious illnesses. Some of them are frankly idiotic – using AI to stop an ex from sharing nude pictures of you online entails sharing nudes with Facebook itself. Awkward enough perhaps, until you learn that Facebook says a real live human will have to check them out.
Artificial intelligence, it seems, has a hard time telling the difference between a real naked body and a painting.
Some uses for AI are bad, which is where, as usual, Uber comes in.
Uber’s latest wheeze is the creation of AI technology that will be able to detect whether a passenger is drunk or not, and thus refuse them a ride.
Currently, passengers can put themselves at the mercy of an Uber driver at any time of the day or night. Anybody can, whether they are smashed off their faces or not, unless of course they have a guide dog and then frequently they are left at the side of the road as their driver speeds off into the distance.
Uber drivers, of course, can only guess whether their prospective passenger is sober or not, depending on whether they themselves are sober, not blind, or not completely brain dead.
Uber wants to end this game of chance. Helpfully, this will give predators the chance to know first-hand how likely their passenger is to be easy prey.
Uber’s patent never says that it is targeting drunk passengers, but this is clearly intended as one of the uses. The patent itself refers to people who are “tired,” although it’s unclear how this would be useful, unless of course you were hoping your passenger might doze off, so you could take them on a detour around the M25 to boost your coffers.
The technology analyses a user’s behaviour based on previous uses and patterns. This could include how someone holds their phone, whether they make excessive typos, and even how they are walking down a street near a nightclub – slowly, quickly or staggering.
If a user exhibits abnormal behaviour, then the Uber app could alert the driver that their potential passenger is “in an unusual state,” or only allow drivers with experience of dealing with drunken passengers to pick them up.
Someone in an “unusual state” could also be blocked from joining a carpool or just flat out denied a ride.
Obviously, there are many potential pitfalls, and the data that Uber wants people to sign away the rights to is quite extensive. Do users want companies to know the angle at which they are holding their phone? Or how many typos they’re making? Do users want companies to gather intimate personal data to make inferences about their mental and physical state?
Clearly, telling drivers which passengers could be drunk poses a danger to people and could encourage predatory behaviour.
Drivers could become approved to pick up drunk passengers by racking up a positive track record, only to assault someone at a later date.
What if the AI misjudges someone’s physical state and prevents someone who is sober from rightfully hailing an Uber? Does this mean that in order to get an Uber at specific dates, times, and locations, one must try to appear upright and proper?
Removing the ability of drivers to make the decision in person also removes the human element, which risks leaving a vulnerable person at the mercy of the night.
If Uber really cared about its passengers as it claims, then perhaps it could allow this technology to operate in reverse. It could show potential passengers whether their drivers were behaving erratically and careering all over the place – but then perhaps they would get no passengers at all.