London Underground Is Testing Real-Time AI Surveillance Tools To Spot Crime

Thousands of people using the London Underground had their movements, behavior, and body language watched by AI surveillance software designed to see if they were committing crimes or were in unsafe situations, new documents obtained by WIRED reveal. The machine learning software was combined with live CCTV footage to detect aggressive behavior and the brandishing of guns or knives, as well as to spot people falling onto Tube tracks or dodging fares.
From October 2022 until the end of September 2023, Transport for London (TfL), which operates the city’s Tube and bus network, tested 11 algorithms to monitor people passing through Willesden Green Tube station, in the northwest of the city. The proof-of-concept trial is the first time the transport body has combined AI and live video footage to generate alerts that are sent to frontline staff. More than 44,000 alerts were issued during the test, with 19,000 delivered to station staff in real time.
In the trial at Willesden Green—a station that had 25,000 visitors per day before the Covid-19 pandemic—the AI system was set up to detect potential safety incidents so staff could help people in need, but it also partly focused on criminal and antisocial behavior. Three documents provided to WIRED detail how AI models were used to detect wheelchairs, prams, vaping, people accessing unauthorized areas, and people putting themselves in danger by getting close to the edge of the train platforms.
Privacy experts who reviewed the documents question the accuracy of object detection algorithms. They also say it is not clear how many people knew about the trial, and warn that such surveillance systems could easily be expanded in the future to include more sophisticated detection systems or face recognition software that attempts to identify specific individuals. “While this trial did not involve facial recognition, the use of AI in a public space to identify behaviors, analyze body language, and infer protected characteristics raises many of the same scientific, ethical, legal, and societal questions raised by facial recognition technologies,” says Michael Birtwistle, associate director at the independent research institute the Ada Lovelace Institute.