Quantifying street safety opens avenues for AI-assisted urban planning

Urban revitalization is getting an update that combines crowdsourcing and machine learning.

Urban-planning theories that infer a neighborhood’s safety from its visual characteristics have just received support from a research team from MIT, the University of Trento, and the Bruno Kessler Foundation, who have developed a system that assigns safety scores to images of city streets.
The work stems from a database of images of several major cities that the MIT Media Lab had been gathering for years. These images have now been scored on how safe they look, how affluent, how lively, and so on.
Adjusted for factors such as population density and distance from city centers, the correlation between perceived safety and visitation rates was strong overall, and particularly strong for women and people over 50. For people under 30, however, the correlation was negative; men in their 20s were actually more likely to visit neighborhoods generally perceived as unsafe than neighborhoods perceived as safe.
César Hidalgo, one of the senior authors of the paper, has noted that their work is connected to two urban planning theories – the defensible-space theory of Oscar Newman, and the eyes-on-the-street theory of Jane Jacobs.
Jacobs’ theory, Hidalgo says, is that neighborhoods in which residents can continuously keep track of street activity tend to be safer; a corollary is that buildings with street-facing windows tend to create a sense of safety, since they imply the possibility of surveillance. Newman’s theory is an elaboration on Jacobs’, suggesting that architectural features that demarcate public and private spaces, such as flights of stairs leading up to apartment entryways or archways separating plazas from the surrounding streets, foster the sense that crossing a threshold will bring on closer scrutiny.
The researchers identified features that align with both theories, finding that buildings with street-facing windows do appear to increase people’s sense of safety, and that, in general, upkeep seems to matter more than distinctive architectural features.
Hidalgo’s group launched its project to quantify the emotional effects of urban images in 2011, with a website that presents volunteers with pairs of images and asks them to select the one that ranks higher according to some criterion, such as safety or liveliness. On the basis of these comparisons, the researchers’ system assigns each image a score on each criterion.
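One common way to turn pairwise "which looks safer?" votes into per-image scores is a rating update in the style of Elo, where each comparison nudges the winner's score up and the loser's down. The sketch below illustrates that idea; the researchers' actual ranking method is not specified in this article, so the function and parameter names here are illustrative only.

```python
def elo_scores(comparisons, k=32, start=1500.0):
    """Assign each image a score from pairwise comparisons.

    comparisons: list of (winner_id, loser_id) tuples, where the winner
    is the image a volunteer judged higher on some criterion (e.g. safety).
    Uses a simple Elo-style update as a stand-in for whatever ranking
    method the researchers actually used.
    """
    scores = {}
    for winner, loser in comparisons:
        rw = scores.setdefault(winner, start)
        rl = scores.setdefault(loser, start)
        # Expected probability that the winner would win, given current scores.
        expected_w = 1.0 / (1.0 + 10 ** ((rl - rw) / 400.0))
        # Shift scores toward the observed outcome.
        scores[winner] = rw + k * (1.0 - expected_w)
        scores[loser] = rl - k * (1.0 - expected_w)
    return scores

# Hypothetical votes: "A" judged safer than "B" twice, "B" safer than "C" once.
votes = [("A", "B"), ("A", "B"), ("B", "C")]
safety = elo_scores(votes)
```

After these three votes, the scores order the images A above B above C, matching the voting pattern. More comparisons per image make the inferred ordering more reliable.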
So far, volunteers have performed more than 1.4 million comparisons, but that’s still not nearly enough to provide scores for all the images in the researchers’ database. For instance, the images in the data sets for Rome and Milan were captured every 100 meters or so. And the database includes images from 53 cities.
So three years ago, the researchers began using the scores generated by human comparisons to train a machine-learning system that would assign scores to the remaining images. “That’s ultimately how you’re able to take this type of research to scale,” Hidalgo says. “You can never scale by crowdsourcing, simply because you’d have to have all of the Internet clicking on images for you.”
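The scaling workflow Hidalgo describes is standard supervised learning: images scored by the crowd become training examples, and a model then scores the rest. The sketch below uses a linear least-squares fit over two made-up image features; the features, values, and model here are all hypothetical stand-ins, since the article does not describe the actual system's internals.

```python
import numpy as np

# Hypothetical feature vectors for images already scored by volunteers
# (e.g., fraction of greenery, density of street-facing windows).
X_scored = np.array([[0.1, 0.8], [0.7, 0.2], [0.4, 0.5], [0.9, 0.1]])
y_scores = np.array([7.5, 3.0, 5.2, 2.1])  # crowdsourced safety scores

# Fit a linear model with an intercept by least squares. The real system
# was a more sophisticated machine-learning model, but the workflow is the
# same: learn from human-scored images, then score the rest automatically.
A = np.hstack([X_scored, np.ones((len(X_scored), 1))])
w, *_ = np.linalg.lstsq(A, y_scores, rcond=None)

def predict_safety(features):
    """Estimate a safety score for an image that volunteers never saw."""
    return float(np.dot(features, w[:-1]) + w[-1])

# Score a new, unscored image from its feature vector.
estimated = predict_safety(np.array([0.5, 0.4]))
```

Once trained, the model can score every image in the 53-city database in one pass, which is exactly what crowdsourcing alone could not deliver.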
To determine which features of visual scenes correlated with perceptions of safety, the researchers designed an algorithm that selectively blocked out apparently continuous sections of images — sections that appear to have clear boundaries. The algorithm then recorded the changes to the scores assigned to the images by the machine-learning system.
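This masking-and-rescoring procedure is a form of occlusion analysis: regions whose removal changes the score the most are the ones driving the model's perception. A minimal sketch, using fixed square patches and a toy scoring function rather than the researchers' boundary-aware segments and trained model:

```python
import numpy as np

def occlusion_importance(image, score_fn, patch=4):
    """Estimate which image regions drive a model's score.

    Slides a patch over the image, blanks it out (here: zeroed), and
    records how the score drops. The researchers masked visually
    continuous segments with clear boundaries rather than fixed squares,
    but the logic -- occlude, rescore, compare -- is the same.
    """
    base = score_fn(image)
    h, w = image.shape
    importance = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = 0.0
            # Large positive drop = region was important to the score.
            importance[i // patch, j // patch] = base - score_fn(masked)
    return importance

# Toy "model": score is simply the image's mean brightness.
toy_score = lambda img: float(img.mean())
img = np.zeros((8, 8))
img[0:4, 0:4] = 1.0  # one bright quadrant
imp = occlusion_importance(img, toy_score)
```

In this toy case only the bright quadrant matters, so masking it produces the only nonzero score drop; on real street images, the same procedure highlights which visual elements, such as windows or signs of disrepair, move the safety score.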