This is the third and final part of my case study about information design in public transportation (see Part 1 and Part 2).
Paris has always been small but densely populated, its population packed within the city walls since the Middle Ages. As late as 1610 it was possible to walk from one side of the city to the other in about thirty minutes. As the city grew, urban transport developed along with it, and Paris has now become a major railway, highway and air-transport hub that generates large population flows. The Paris Metro serves 5.23 million passengers daily and is the second busiest metro system in Europe after Moscow’s.
92 minutes is the average time people in the Paris area spend on public transport. Traffic congestion only adds to the many problems faced by commuters, resulting in overcrowded platforms, full trains, and accumulated delays due to the time it takes such large numbers of people to get on and off the trains.
As transportation infrastructure is pushed to its limit, demand for an alternative means of transport has emerged and pressure to optimize the existing system continues to grow.
Recently, the RATP (Paris’ public transport operator) asked the public for ideas on how to improve the commuting experience and received 2,196 suggestions. These were narrowed down to 15, and only 3 will be put into action.
Well, I also have a suggestion.
And this solution only requires the understanding of data and how the information is encoded…
… and a better understanding of transit maps.
The schematic map is a simplified and geometric representation of a city that helps travelers quickly understand the urban network infrastructure and navigate through it. However, my first assumption implies that transit maps are not effective planning tools anymore. Remember?
Paris has evolved drastically since the 1930s and we can no longer efficiently apply Beck’s principles. The map of the city’s public transportation system now mixes in other information such as regional trains, bus lines, night bus services, airports, tourist venues… The result is a dense mesh of information that is difficult to read.
The second part of this case study focused on the impact of map design on traffic congestion. My analysis of traveling habits, conducted over the last few months, highlighted that people do not spread homogeneously across the urban infrastructure. As a result, some stations are very full while others are left empty, leading to traffic congestion.
This can be explained by the lack of diversity of tools that provide guidance.
Indeed, you can either opt for schematic maps (publicly displayed inside each metro station) or transport apps which, in reality, just give you a digital interpretation of the schematic map.
However, if you remember part one, schematic maps are convenient yet imperfect visual representations of spatial information that offer a distorted picture of reality.
If travelers rely on a distorted map as their single resource for route decisions, they tend to choose a path that seems convenient on the map but is not necessarily the optimal one in reality.
I live in Barbès-Rochechouart (lines 2 and 4), a well-connected metro station in the north of the city, and I work at Pont de Sèvres (line 9), the last station on its line, in the western suburbs of Paris.
As you can see on the map, the network infrastructure gives me plenty of alternative routes to go from Home to Work.
The most common transit apps available in Paris (Google Maps, Citymapper, GroupeRATP, Transit), however, all suggest the same path for this given route: the path of least resistance, which is assumed to be the quickest when compared to the other routes suggested below.
But from personal experience, I happen to know that this route is one of the busiest. This may be because it stops at very busy stations in the city center, but it may also be because it is the most common route suggested by transit apps.
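This convergence is easy to see mechanically: when estimated travel time is the only cost, a shortest-path search has exactly one winner, so every app hands every commuter the same route. Here is a minimal sketch in Python; the station names are real but the network and travel times are invented for illustration.

```python
import heapq

def shortest_path(graph, start, goal):
    """Plain Dijkstra over estimated travel times (minutes).

    With travel time as the only cost, every traveler gets the
    same "path of least resistance", regardless of crowding.
    """
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in seen:
                heapq.heappush(queue, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical sub-network: real station names, made-up travel times.
graph = {
    "Barbes-Rochechouart": [("Pigalle", 3), ("Strasbourg-Saint-Denis", 6)],
    "Pigalle": [("Trocadero", 12)],
    "Strasbourg-Saint-Denis": [("Trocadero", 16)],
    "Trocadero": [("Pont de Sevres", 14)],
}

cost, route = shortest_path(graph, "Barbes-Rochechouart", "Pont de Sevres")
```

Crowding never enters the cost function, so every user of every app is pointed down the same corridor.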
Of course, there are other intangible factors passengers take into account when making route decisions (hours, available seats, waiting times, station accessibility, for instance) that you won’t find depicted on a transit map. If you tried to balance all of these deciding factors, you would probably end up with an endless variety of routes.
With this in mind, I took some time to consider which factors would influence my decision to choose one route over another.
Given my limited resources, I selected 4 different routes among the alternatives for my commuting routine and analyzed them according to the metrics mentioned above.
As you can see, the optimal path varies according to the metric used, and the pros and cons of each route become obvious.
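To make this concrete, here is a small sketch of how such trade-offs could be scored. The route names and every metric value below are invented for illustration; the point is simply that weighting the metrics differently changes which route wins.

```python
# Hypothetical scores (0-10, higher is better) for four candidate routes.
# All numbers are invented for illustration.
routes = {
    "via line 2 + line 9": {"speed": 8, "comfort": 4, "crowding": 3, "transfers": 7},
    "via line 4 + line 9": {"speed": 7, "comfort": 6, "crowding": 5, "transfers": 7},
    "via line 2 + line 6": {"speed": 5, "comfort": 8, "crowding": 8, "transfers": 6},
    "bus + line 9":        {"speed": 4, "comfort": 7, "crowding": 9, "transfers": 5},
}

def best_route(routes, weights):
    """Rank routes by a weighted sum of their metric scores."""
    score = lambda metrics: sum(weights.get(k, 0) * v for k, v in metrics.items())
    return max(routes, key=lambda name: score(routes[name]))

# A commuter in a hurry weights speed heavily...
print(best_route(routes, {"speed": 1.0}))                    # via line 2 + line 9
# ...while someone avoiding crowds gets a different answer.
print(best_route(routes, {"crowding": 1.0, "comfort": 0.5}))  # bus + line 9
```

Change the weights and the "best" route changes with them, which is exactly why a single one-size-fits-all suggestion funnels everyone onto the same path.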
This exercise was done manually, taking only a few variables into account. But what if we could process millions of data points instantaneously to provide an even more personalized path?
Data can be used as a lens to better understand commuters’ behavior and anticipate their willingness to pay based on various intangible variables that are not yet taken into account by transport apps.
Turning the reliable information available from mobile networks into statistical indicators would make it possible to collect millions of technical measurements. By gathering all relevant and useful information about the city (geolocation of attractions and areas of interest, traffic reports, environmental data, …), the network infrastructure (station accessibility, stairs, lifts, security, large platforms, …) and transportation systems (overcrowded trains, automatic trains, air conditioning, available seats, …), data can make it possible to improve the flow of people through the network.
With the increased quantity and improved accessibility of data, I believe we can use it as a design tool to create real value. But the struggle is real. How do we process thousands of data points, defined by multiple constantly evolving variables, and encode them to provide a seamless user experience?
“Successful information design depends on understanding the data, understanding the audience and understanding how people process information.” (Joel Katz)
While I don’t have a concrete solution, my assumption is that we should collect the data and encode it in a way that: 1) never requires users to consider the complex algorithm behind the resulting information, and 2) limits the data to exactly what is needed to fulfill the value proposition at the time it is requested.
“Data actually tell the designers a lot of things but data doesn’t give the designers the initial inspirations […]. The experiences that we design define the kind of data that we can and do collect on them.” (Rochelle King, Global VP of User Experience and Design at Spotify)
We have to be careful, because it is easy to get lost in all these variables, and the risk is that the data gets used incorrectly. We have to handle the data in a way that does the thinking proactively, so the user doesn’t have to.
All of that said, my assumption is that by handcrafting data progressively we can do a better job of processing the right information at the right time. We should not ask for all the data up front, but rather ask for access to a token covering a verified, context-dependent group of attributes.
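As a sketch of what that could look like in code, the app never holds a traveler's full profile; each context only receives the attributes it is scoped to. The profile, contexts and attribute names below are all hypothetical.

```python
from dataclasses import dataclass

# Hypothetical traveler profile. In practice this would live with a
# trusted party, not inside the app itself.
PROFILE = {
    "needs_step_free_access": True,
    "prefers_seat": True,
    "home_station": "Barbes-Rochechouart",
    "payment_method": "navigo",
}

# Each context is scoped to exactly the attributes it needs.
CONTEXT_SCOPES = {
    "route_planning": {"needs_step_free_access", "prefers_seat"},
    "ticketing": {"payment_method"},
}

@dataclass
class AttributeToken:
    context: str
    attributes: dict

def request_token(context):
    """Release only the verified attributes relevant to this context."""
    scope = CONTEXT_SCOPES[context]
    return AttributeToken(context, {k: v for k, v in PROFILE.items() if k in scope})

token = request_token("route_planning")
# The route planner sees accessibility preferences, but never the
# payment method or the home station.
```

The design choice is that data access is requested at the moment it is needed and scoped to the task at hand, rather than collected up front.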
Above all, the experience design should be as seamless and malleable as possible: a simple yes-no question at the right time could be enough to deliver the most personalized and optimized path from point A to point B.
The goal here is to focus on a personalized content experience, rather than customization of the interface itself. The interface is just the medium that delivers the algorithm.
I am open to collaboration and obviously open to discussion; feel free to reach out to me, or to Goodpatch, if you want to dig deeper into this topic together!
As urban populations continue to grow, innovations in technology and increasing access to real-time data can help municipalities to improve mobility access and efficiency.
Understanding the right data at the right time will accelerate the rollout of smarter initiatives and provide more tangible and efficient solutions to common mobility issues. What’s more, this data will help us optimize infrastructure assets with better knowledge of travel flows throughout the city.
Additionally, Mobility-as-a-Service is a rising trend in the transit space. In a smart city, public transportation providers no longer compete against each other; the competition is now about how best to complement each other. By linking various transit providers together, such as public transit, car sharing, bike sharing, e-scooters and taxi services, users are empowered to spread into the city homogeneously, beyond the network infrastructure.
Ultimately, by creating ways for residents to personalize their commutes, cities will reduce traffic congestion, lower emissions and eliminate pain points for commuters. The result is a win-win for municipalities, businesses and anyone trying to get from point A to point B.