At stations and along rail lines, cameras help to recognise conditions that could hamper operations, from heavy snowfall to a fallen branch on the track. These cameras can also help find out how many people use stairs or lifts, or congregate in a particular area. A vast number of images have to be stored to determine, for example, whether service providers have cleared the snow. The problem is that data protection law prohibits using images or videos in which individuals can be identified, and also forbids storing media of this kind for analysis purposes. However, the image and video analysis team vsion.ai (website currently in German only) at DB Systel, headed by Dr. Holger Oberender and Sarah Möhle, has developed a solution that tackles this issue. Automated image recognition using artificial intelligence (AI) allows huge volumes of data to be processed, accelerating and enhancing decision processes based on image analysis. What’s more, people can be automatically rendered unrecognisable.
The idea was developed in the Skydeck Accelerator in September 2017 and resulted in the creation of an internal start-up, which is part of the ZERO.ONE.DATA organisation. In future, other possible applications and value-added services that leverage AI to automatically analyse images and video will be developed here for the whole Group.
Ideally prepared for next winter
Weather cameras are one such development and were deployed for automatic analysis for the first time this winter. During pilot operation at 22 stations, the cameras were used in conjunction with AI to recognise whether there was snow on the platforms and to call in the snow-clearing services. The cameras capture an image every 15 minutes and transfer the photos via the mobile phone network to DB Systel’s cloud, where a self-learning system reliably checks whether there is snow on the ground.
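The article does not disclose how the snow check itself works. Purely as an illustration of the kind of decision involved, a naive heuristic could flag an image as snowy when a large share of its pixels is bright and low in colour saturation. The function names and thresholds below are illustrative assumptions, not DB Systel’s method, which uses a trained model:

```python
import numpy as np

def snow_fraction(rgb):
    """Fraction of pixels that are bright and near-grey (near-white).

    rgb: uint8 array of shape (H, W, 3).
    """
    rgb = rgb.astype(float)
    brightness = rgb.mean(axis=-1)                  # per-pixel average intensity
    spread = rgb.max(axis=-1) - rgb.min(axis=-1)    # low spread = low saturation
    return ((brightness > 200) & (spread < 30)).mean()

def looks_snowy(rgb, threshold=0.4):
    """Flag an image when enough of it is near-white (threshold is arbitrary)."""
    return snow_fraction(rgb) > threshold
```

A trained model handles lighting changes, wet asphalt and white paint far better than such a heuristic, which is exactly why the production system relies on learning from real camera images.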
The cameras were previously installed at the end of the platform, minimising the likelihood of inadvertently capturing images of individuals. But this also meant the entire platform could not be checked for snow, making it impossible to reliably gauge the quality of the snow-clearing services’ work. With just 22 stations involved, this is not a major issue. But as many as 500 stations are to be equipped with cameras by next winter. This is why the AI has been trained to recognise not just snow, but also people. If a person is recognised, the system automatically renders them unrecognisable with a soft-focus filter. Thanks to this anonymisation, the cameras can now be set up to cover the entire platform.
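The source does not detail the soft-focus step. As a minimal sketch, assuming a person detector has already returned a bounding box, the detected region can be softened with a simple box blur; `blur_region` and the box format are illustrative assumptions, shown here for a single-channel image (a colour image would be blurred per channel):

```python
import numpy as np

def blur_region(image, box, kernel=9):
    """Box-blur the region box=(y0, y1, x0, x1) of a 2-D grayscale image.

    The rest of the image is left untouched; only the detected person
    (here assumed to be inside `box`) is rendered unrecognisable.
    """
    y0, y1, x0, x1 = box
    region = image[y0:y1, x0:x1].astype(float)
    pad = kernel // 2
    padded = np.pad(region, pad, mode="edge")   # replicate edges for the border
    blurred = np.zeros_like(region)
    h, w = region.shape
    for dy in range(kernel):                    # sum the kernel x kernel window
        for dx in range(kernel):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= kernel * kernel                  # mean of the window
    out = image.copy()
    out[y0:y1, x0:x1] = blurred.astype(image.dtype)
    return out
```

In practice a library routine (e.g. a Gaussian blur) and a real detector would be used; the point is only that the anonymisation is a local operation applied per detection, so the rest of the frame stays sharp for snow analysis.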
Different modules, one goal
“We use neural networks that operate in the same way as the human brain,” says Dr. Holger Oberender. Various modules available on the market were used to develop these networks. The basic system is open source, with a great deal currently coming from Google. There are even pre-trained models that are already capable of recognising people. These parts of the program are then adapted to the specific circumstances, optimised and trained further. The AI is trained using images showing people in a wide range of situations, including on beaches, at shopping centres and on the street. In this way, the 10,000 images needed are quickly gathered. In general, this already works very well. But to optimise the system for deployment in stations, “we are continuing to train the AI with images captured by the very cameras used there,” says Holger Oberender, who adds: “We’re fine-tuning and refining the pre-trained models, thereby making recognition more accurate.”
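The fine-tuning idea can be sketched in miniature: keep a pre-trained feature extractor frozen and train only a small new head on domain-specific labelled data. Everything below – the random stand-in “backbone”, the synthetic labels, the tiny logistic head – is a toy illustration of the principle, not the team’s actual models or data:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for a frozen, pre-trained backbone (the real system adapts
# open-source models, e.g. person detectors). Its weights are never updated.
W_frozen = rng.normal(size=(4, 3))

def features(x):
    return np.tanh(x @ W_frozen)

# Tiny synthetic "domain" dataset standing in for labelled camera images.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

# Fine-tuning here = training only a new logistic-regression head
# on top of the frozen features, via plain gradient descent.
F = features(X)                 # computed once: the backbone stays frozen
w, b = np.zeros(3), 0.0
for _ in range(300):
    p = 1 / (1 + np.exp(-(F @ w + b)))   # sigmoid predictions
    grad = p - y                          # logistic-loss gradient signal
    w -= 0.5 * F.T @ grad / len(X)
    b -= 0.5 * grad.mean()

accuracy = ((F @ w + b > 0) == (y == 1)).mean()
```

Real fine-tuning would update (some of) the backbone weights too, with a deep-learning framework; the frozen-backbone-plus-new-head pattern is simply the cheapest end of that spectrum.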
But that’s not all the system has to offer. Service providers can use photos taken using the winter maintenance app to prove they have done their job properly. These images vary considerably as they are not captured from one fixed perspective and are taken by different employees. It is impossible to predict which section of the platform will be visible in the photo. Although this makes the analysis more complex for the AI, even these images can be anonymised.
Despite the high success rate, there will occasionally be images containing a recognisable individual. This is where humans come into play as the final control mechanism. The images from the weather cameras are uploaded to an internal portal. If an authorised employee finds a photo in which someone is recognisable, the image is reported and deleted.
First pixelate, then count
Anonymisation also plays a role in quite different areas. In one use case, being tested jointly with Swiss Federal Railways, AI determines how many people go up or down a staircase. To enable the associated video material to be used and archived in line with data protection law, it was first anonymised. “In this case, we pixelated the entire video and not just the area where people are,” explains Holger Oberender. The AI model was accordingly trained not on images of people, but on pixelated people. Even so, there was still sufficient information to serve the intended purpose – in this case, counting people.
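The article does not show the pixelation itself. As a small sketch, whole-frame pixelation can be done by block averaging: each tile of the frame is replaced by its mean value, discarding fine detail such as faces while keeping the coarse shapes and motion a counting model can still use. The function below is an illustrative assumption, operating on single-channel frames:

```python
import numpy as np

def pixelate(frame, block=8):
    """Replace each block x block tile of a 2-D frame with its mean value.

    The output has the same resolution but carries only tile-level
    information, so individuals are no longer identifiable.
    """
    h, w = frame.shape
    h, w = h - h % block, w - w % block      # crop to a multiple of the tile size
    tiles = frame[:h, :w].astype(float).reshape(h // block, block, w // block, block)
    means = tiles.mean(axis=(1, 3))          # one value per tile
    # blow each tile mean back up to full resolution
    return np.repeat(np.repeat(means, block, axis=0), block, axis=1).astype(frame.dtype)
```

Applied to every frame of a video, this yields footage that can be stored and analysed in line with data protection law, which is why the counting model is then trained on pixelated people rather than on raw images.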
Automatic anonymisation of individuals in videos using self-programmed AI is currently being tested on archive footage and is not yet being deployed live in the field. But initial experience shows the results to be better than those provided by many solutions available on the market. And it makes no difference whether the photos or videos come from the Group’s own cameras or whether passengers or employees take them on their smartphones. AI-assisted image analysis anonymises people in the images, allowing the evaluation to be performed. This gives a tantalising glimpse of other application areas where intelligent image and video analysis could be used in the future.