AccessLondon – an Android app


Logo of the app


Based on our initial visualisations, we observed that many accessible tube stations were used less than we had anticipated. We therefore thought it could be beneficial to create an app that points the user towards the closest accessible stations, along with pertinent information about each station and its context. The idea continued to evolve, and nearest-bus-stop information was added, both because of the popularity of buses and because all London buses offer disabled access. Many applications focus on trip planning; this one is different in that it focuses on finding the nearest accessible stations and bus stops for disabled users. The process is fast and simple, requiring just two clicks (taps).
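The "two taps" lookup at the heart of the app can be sketched in a few lines. This is a minimal illustration, not the app's actual code: the station list and its step-free flags are hypothetical placeholder values, and straight-line (haversine) distance stands in for walking distance.

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hypothetical station records: (name, lat, lon, step_free_access)
STATIONS = [
    ("Westminster", 51.5010, -0.1254, True),
    ("Covent Garden", 51.5129, -0.1243, False),
    ("King's Cross St. Pancras", 51.5308, -0.1238, True),
]

def nearest_accessible(lat, lon):
    """Return the closest station flagged as step-free."""
    step_free = [s for s in STATIONS if s[3]]
    return min(step_free, key=lambda s: haversine_km(lat, lon, s[1], s[2]))
```

A real implementation would also need the pertinent station details (lifts, staffing, context) attached to each record.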

Screenshot of app home page



To add to this concept, the user also receives live updates from the Twitter accounts @TfLAccess and @FreedomPassLDN, two channels dealing with the accessibility of public transport in London, where users can post queries that are answered frequently. As an extra informative feature, the app gives users data on the popularity of each mode of transport.


The source code can be found here and the application can be downloaded from here.

Travel Patterns / Revealed

Travel Patterns / Screenshot



This Processing application reveals the travel patterns of Disabled Freedom Pass holders (DFPH) and compares them to those of non-disabled Oyster card users (NDOCU). When launched, the left half of the screen shows trips as lines from station to station through each minute of each day of a week. The trips are coloured differently: red represents DFPH trips and white represents NDOCU trips.

On the right half of the screen, seven graphs are plotted, representing the days of the week from Sunday to Saturday. The length of each vertical line represents the load on the tube at each time step. Again, the white lines represent NDOCU and the red lines DFPH. As the week progresses, we clearly see that the white lines have notable peaks during the morning and afternoon rush hours, whereas the red lines have none. This is likely because people with limited mobility tend to avoid the Underground when other options are available to them, particularly at peak times; indeed, as indicated in the data, 83.9% of DFPH trips are made on London buses.

The app is interactive: by hovering the mouse over a station, the user can see the station's name, whether it is accessible, and two line graphs showing the station's load compared to the mean load of all stations (i.e. its popularity), one for DFPH and one for NDOCU. These graphs help us understand how much a station is actually used relative to its general passenger load. Finally, two further options are added: the user can choose to show all step-free-access stations, or all stations ranked according to their total passenger load (where the radius of each circle represents the percentage).
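The popularity comparison behind those line graphs amounts to normalising each station's load against the mean load of all stations. A minimal sketch, with illustrative numbers rather than the project's data:

```python
def relative_load(station_loads):
    """Load of each station divided by the mean load across all stations;
    values above 1 indicate above-average popularity."""
    mean = sum(station_loads.values()) / len(station_loads)
    return {name: load / mean for name, load in station_loads.items()}

# Illustrative counts only: station "A" is 1.5x as busy as the average.
ratios = relative_load({"A": 30, "B": 10, "C": 20})
```

Computing the ratio separately for DFPH and NDOCU counts yields the two curves plotted per station.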


P.S.: a GitHub link to the code will follow soon.

The active “city” beyond barriers. Commuters and citizens.


We live above the ground. It is from above that we start and end our trips, and it is what is above that drives our decisions. This part of the project focuses on what is above ground: the distribution and composition of the population. The distribution, and the ratio of disabled to non-disabled residents, are geographically visualised here to explore the dynamic spatial and temporal patterns that emerge. This part of the project can be summarised by the accompanying logo displayed in Figure 1. Above the ground we have the living, active city, which depends for its functioning on a below-ground transportation network. People are constantly moving from place to place, thereby changing the composition of the population in space and time in the proximity of stations. We establish a link between above-ground and below-ground data by joining census data with TFL data.

Disability in the 2011 census

TFL offers two versions of the Freedom Pass, one for the elderly and the other for the disabled. For our exploratory analysis, we assume an overlap between those eligible for the Disabled Freedom Pass and those who report their day-to-day activities as “limited a lot” or “limited a little”. These categorisations require a long-term health problem or disability lasting more than 12 months, so they do not directly include elderly people unless they have a specific disability or a significant age-related ailment of long duration. The fact that Freedom Passes granted for old age are easier to obtain than those granted for disability creates a possible misalignment between the two datasets. A further distinction is that the TFL data includes trips made by non-residents[1]. In general terms, our exploration assumes that:

Assumption 1: (Day-to-day activities limited a lot + Day-to-day activities limited a little) = Eligible for freedom pass. 

We investigate, at the MSOA[2] level, the spatial relationship amongst populations with different percentages of people reporting disabilities. In Figure 2, some patterns can already be seen: the more central areas, which have better transportation access, are characterised by a lower proportion of the population reporting limitations in their day-to-day activities.


Figure 2 Spatial patterns of disabilities

The construction of the geodata

The analysis proceeds by creating catchment areas around all tube stations, to explore workflows, gain insight from merging the two datasets, and investigate opportunities for visualisation. A buffer of 2 km around stations identifies the approximate area served by all stations (Figure 3).


Figure 3 Stations (red dots) and buffered area


Assumption 2: Every station has an exclusive “catchment area”

The space around the stations is assigned to each station using a Voronoi tessellation (Figure 4).


Figure 4 Voronoi from stations

The Voronoi and buffer are then linked with census data using the centre points of the Output Areas (Figure 5), the smallest spatial units available in the census.


Figure 5 Voronoi and OA centroids (blue dots)

For every catchment area, the census data for the resident population, and for residents with long-term health problems or disabilities limiting their activities a lot or a little, are summarised. This is used as a starting point for exploring how resident populations may vary throughout the day based on tube trip data.
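Assigning each OA centroid to its nearest station is equivalent to testing Voronoi-cell membership, and the 2 km buffer then filters out centroids beyond the served area. The sketch below illustrates that summarisation step with invented projected coordinates and census counts (the actual work was done in GIS, not in this code):

```python
import math

# Hypothetical projected coordinates (easting, northing, in metres) and
# OA centroids as (easting, northing, residents, residents_with_limiting_condition).
stations = {"A": (530000, 180000), "B": (533000, 181000)}
oa_centroids = [
    (530500, 180200, 1200, 150),
    (532800, 180900, 900, 80),
    (531500, 180400, 1000, 120),
]

def assign_catchments(stations, centroids, buffer_m=2000):
    """Sum census counts per station: each OA centroid is allocated to its
    nearest station (Voronoi membership) and kept only if it lies within the
    2 km buffer around that station."""
    totals = {name: {"residents": 0, "limited": 0} for name in stations}
    for e, n, pop, lim in centroids:
        name, dist = min(
            ((s, math.hypot(e - se, n - sn)) for s, (se, sn) in stations.items()),
            key=lambda t: t[1],
        )
        if dist <= buffer_m:
            totals[name]["residents"] += pop
            totals[name]["limited"] += lim
    return totals
```

Nearest-centroid allocation is exact for Voronoi cells generated from the same station points, which is why no explicit polygon intersection is needed in this sketch.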

The dynamics of these daily population movements are explored using Processing, into which the data and geospatial information are imported and animated. Note that in Figure 6, the starting display shows zero values for some central stations that have no allocated population.

After some hours, the situation changes, but mainly in the central areas.

The specific features of this code

This exploration attempts to integrate TFL data with “external” census and spatial data, to build a more complex and dynamic picture that goes beyond creating engaging visualisations with the city as a mere background.

The core Processing code derives from the initial Processing sketches and accompanying code developed by Gareth and Katerina, and TFL data preparation by Katerina and Stelios. Subsequent development specific to this sub-project included:

  1. Import and display shapefiles using the geoMap library for creating the “zero-night-time” situation with the initial allocation of population from census data and updating the data as soon as people tap in or out of a station. A table holding all values for disabled and non-disabled people for every catchment area is created in memory and updated in real time.
  2. Create a diverging choropleth map to represent areas with ratios higher or lower than the median (since the distribution at time = 0 is roughly Gaussian).
  3. Highlight the existence of areas where the number of commuters taking the tube exceeds the resident population, giving rise to negative numbers.
  4. An attempt to optimise the memory use of the Processing code by deleting used ArrayLists and HashMaps.
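The in-memory table and the diverging classification described in points 1 and 2 can be sketched as follows. This is an illustrative stand-in for the Processing code, not the code itself; area names and starting counts are invented.

```python
# In-memory catchment table; one row per catchment area (illustrative values).
catchment = {
    "AreaNorth": {"disabled": 120, "non_disabled": 4800},
    "AreaSouth": {"disabled": 45, "non_disabled": 2100},
}

def tap(area, group, direction):
    """A tap-in removes a person from the catchment area (they enter the tube);
    a tap-out adds them where they alight. Negative totals can occur in
    central areas, as noted in point 3."""
    catchment[area][group] += 1 if direction == "out" else -1

def diverging_class(values):
    """Classify each area above or below the median, for the diverging choropleth."""
    ordered = sorted(values.values())
    mid = len(ordered) // 2
    median = ordered[mid] if len(ordered) % 2 else (ordered[mid - 1] + ordered[mid]) / 2
    return {k: ("above" if v > median else "below") for k, v in values.items()}
```

Updating the table in real time as taps stream in, and re-classifying against the median, yields the dynamic choropleth described above.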

Methodological issues

  • Large amounts of data require an experienced and skilled programmer for optimisation. Architects and planners can only reach a certain level of coding proficiency; dealing with large amounts of data requires specific education.
  • It became evident that Processing is a very supportive and well-documented programming language. The teaching style was also challenging, but stimulating, and encouraged me to attempt my own programming, with some support from other members of the group.
  • A smarter way to calculate catchment areas could come from a CA implementation over the transportation network and land-use maps. It is difficult to model the behaviour of citizens living in central areas, because they have access to multiple stations within walking distance.

Proposed theme issues

  • The visualisation showed that negative values occurred only in the central areas, while the other areas, with increasing numbers of residents, showed only very small changes.
  • The dynamic visualisation of the ratio between people without limitations and disabled people is not really engaging from a dynamic point of view.


geoMap library from the giCentre, City University London.

Image with accessible logo from


Identified Flying Objects

The question behind the third part of our portfolio is: “Is there an effective way to visualise the relationship between the (underground) spatial movement of people with disabilities and the (above-ground) built environment?”. By “effective” we mean an engaging way to communicate the desired message to the public in an easily comprehensible form. In this case, the message is the information that our data encapsulates, which is revealed through an exploration of the key points that influence the nature of our visualisation methodologies.

The challenge is therefore twofold: understanding and revealing the nature of the data; and developing visualisation methodologies that are the most capable of releasing this information.

To begin with, our data capture the temporal duration and spatial distance of each trip between its start and end points. These points are located within the fabric of the urban environment, but the movement between them occurs in a void of spatial context. We therefore seek a way to visualise this movement that permits a greater comprehension of temporal duration and spatial distance, and which can thus be perceived in a more engaging way.

There are many ways to depict the city. The potential variance between what is depicted and what the observer perceives narrows in the case of a picture, or better still, a video of the city. The concept thus lends itself to representing the underground movements of people above ground instead, where the animated patterns of movement contrast with the stationary built environment while simultaneously making the connection between underground and above-ground locations. In a sense, we can perceive these movements as an engine that shapes and empowers the city over time.

For brevity, we compress a day’s movements of disabled Freedom Pass users into approximately one and a half minutes, which also aids a better understanding of density, scale, and temporal and spatial patterns.

A short analysis

At the bottom of the display there is a line with three attributes: the time, the mean straight-line distance that disabled Freedom Pass card users travel per day, and the mean straight-line distance that other card holders travel per day. The interesting result is that general trips cover a radius of roughly eight kilometres, whereas disabled Freedom Pass card users travel an average of seven kilometres per day. This observation, combined with the fact that the difference is constant throughout the day regardless of rush hours, can lead to several hypotheses about how the London Underground is used by disabled people, though we will leave this open-ended rather than making overly general or speculative assumptions.
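The running statistic is simply the mean of the straight-line (great-circle) distances between each trip's start and end points. A minimal sketch of that calculation, with trips given as coordinate pairs (the actual computation was done in a Processing sketch):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two (lat, lon) points."""
    r = 6371.0
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
         * math.sin(dl / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def mean_trip_distance(trips):
    """Mean straight-line start-to-end distance over a list of trips,
    each given as ((lat1, lon1), (lat2, lon2))."""
    total = sum(haversine_km(a[0], a[1], b[0], b[1]) for a, b in trips)
    return total / len(trips)
```

Running this separately over the DFPH and other-card-holder trip sets gives the two figures shown on the display line.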

The method

The project Identified Flying Objects was built in Blender v2.70, an open-source animation and rendering package. It is worth mentioning that the whole procedure was entirely new to the main contributor (Stelios) as well as to the rest of the group: from Blender and the Adobe Premiere video editor to visual effects (VFX) processes (Zwerman & Okun, 2010). VFX are display techniques that combine live-action footage with generated imagery to compose a realistic environment (Brinkmann, 2008). The choice of the VFX method for this visualisation was based on three factors. First, as mentioned above, the wish to convey the nature of the data in an engaging and familiar contextual environment. Second, the opportunity to explore novel visualisation techniques that are not widely used in scientific analysis. Finally, using real-life video rather than building a detailed digital 3D model of London allowed for a realistic outcome and an efficient workflow.

The workflow

The VFX process consists of three distinct sections.

1)     The first section begins with the video recording, which was performed with an iPhone 5 from the 33rd floor of the Shard (51.5045°N, 0.0865°W). Blender can only recognise two movement planes for a camera’s path; it cannot handle a combination of movement in 3D space and rotation around a point. The next step is to set up the digital camera in the Blender environment. This camera recognises the path of the recorded video by tracking random points across all of the video frames. In this way, the digital camera performs exactly the same motion as the physical camera, so the rendered digital output is accurately aligned.



Importantly, the digital camera must be set up with the same technical specifications as the video camera, including factors such as focal length, angle of view, and lens distortion.

2)     The second step is the combination of the video with the information derived from the dataset. The data was imported into Blender using code scripted for Blender’s Python “bpy” API (contributed by Gareth). The script consists of the following steps: importing all data and creating trip data “objects”, which are stored in a trip “dictionary”; creating a default Blender material for trip objects; setting up a starting point for the Blender scene, including a base plane and lighting; optional smoke trails (not used for the final rendition); and iterating through the trip-object dictionary to create an object for each trip. Whilst iterating, each trip object is key-framed to invisible, then visible at the time the trip commences, then at the starting point, then at the ending point, and then invisible again. Note that because the objects are not instanced and destroyed “on the fly”, there are practical limitations on the number of objects that can be imported. The plane and the objects constitute the 3D digital environment of our model, which is overlaid with the video of London.


Due to the setup in step 1, the start and end points of the flying objects are correlated with the video. Unfortunately, Blender cannot automatically place the digital camera in the correct position according to its real coordinates, even though the software already recognises its correct height from the first section. This can be overcome manually by matching points within the 3D model with key points in the video. This is by no means perfect and is considered a limitation of Blender.

3)     The third part of the VFX process is the modification of the final output into a realistic, illustrative visualisation. This includes lighting and texture settings. Adobe Premiere played a significant role in generating the final edit of the video, and was used for effects such as blur, adding sound, and shortening the clip to fit the needs of the presentation. A Processing sketch (contributed by Katerina) sums the distances between points and divides by the number of trips at each time step, in order to calculate the mean trip distance of each social group. The Processing rendering was exported and overlaid on the Blender output in Adobe Premiere.
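The key-framing sequence described in step 2 can be expressed as a simple visibility schedule per trip. This sketch illustrates the logic only, not the actual bpy calls; the one-frame offsets and state labels are an illustrative choice.

```python
def trip_keyframes(start_frame, end_frame):
    """Visibility schedule for one trip object: hidden until departure,
    shown at the start point, moved to the end point, hidden after arrival."""
    return [
        (start_frame - 1, "hidden"),
        (start_frame, "visible_at_start_point"),
        (end_frame, "visible_at_end_point"),
        (end_frame + 1, "hidden"),
    ]
```

In Blender, each of these (frame, state) pairs would become an inserted keyframe on the object's visibility and location channels.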


Brinkmann, R. (2008) The Art and Science of Digital Compositing: Techniques for Visual Effects, Animation and Motion Graphics. 2nd edition. Amsterdam; Boston, Morgan Kaufmann.

Zwerman, S. & Okun, J.A. (2010) The VES Handbook of Visual Effects: Industry Standard VFX Practices and Procedures. 1st edition. Burlington, MA, Focal Press.

Visualising Disabled Freedom Pass Card trips on the London Underground

This video was generated in Unity3d to visualise the trips made by disabled Freedom Pass holders on the London Underground network. The bright orange streaks represent disabled Freedom Pass users, whereas the smaller white trails represent other users. The data is sourced from Transport for London and is drawn from a 5% sample from November 2009.

Data courtesy and copyright of Transport for London.

Created by Gareth Simons as part of the Activity Beyond Barriers group presentation with Gianfranco, Stelios, & Katerina at CASA UCL.

Adding Interactivity with Unity 3D

The ins-and-outs of different visualisation software.

Each of the visualisation strategies employed by our group reveals a different natural fit for the exploration, visualisation, and interaction with data. Unity occupies a unique position because it offers a degree of ‘real-time’ interaction and performance not matched by the other approaches. As a game engine, it is designed from the ground up for this purpose, thereby offering a unique range of benefits:

1)     It offers a modular approach towards “assets” and “resources”, allowing for the flexible arrangement and combination of data, 3D models, settings, and scripts;

2)     It separates ‘real time’ from the frame rate, which means that the frame rate is constantly optimised for a device’s computational performance, without leading to wildly fluctuating time-step changes;

3)     It is capable of handling a large quantity of objects in real time. Testing with minimal rendering requirements indicated the ability to manipulate well upwards of 3,500 objects, depending on computational power and the time scale;

4)     It is further capable of offering sufficiently high-quality graphics rendered in real time, distinguishing itself from traditional rendering and animation engines, which can be notoriously slow at rendering, albeit with increased realism;

5)     Due to these and other reasons, it is inherently well-suited to the creation of dynamic and interactive visualisations that actively respond to user inputs.

Unity’s capabilities stand in contrast to our experimentation with Blender (per the video developed by Stelios), which favoured a more traditional approach to modelling and animation. Blender does not offer a naturally interactive format, and its scripting API is difficult to understand as well as weakly documented. We also experimented with Rhino, though we did not pursue it beyond initial experimentation, in which we imported CSV data to animate; we found its capabilities more suited to generative and form-based modelling and less to the creation of rich and dynamic scenes.

Unity 3d - Underground View

Data Preparation

The data preparation for Unity was done in Python and took three inputs:

1)     The tube lines with each of their stations. This was developed from work done by Katerina, in which she identified all neighbouring stations on each tube line;

2)     The station names with their coordinates;

3)     The data file prepared by Katerina with assistance from Stelios, consisting of all trips.

The script creates a station index and a station coordinates list, which it then uses to build a weighted adjacency matrix of all stations on the network. The scipy.csgraph package is then used to return a solved shortest-path array. Subsequently, the data file is imported, and the start and end locations of each trip are resolved against the shortest-path array, with the resulting waypoints for each trip written to a new CSV file. A further CSV file is also generated, containing each station’s name and coordinates.
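The routing step can be illustrated with a pure-Python stand-in for the scipy.csgraph call: a small Dijkstra search over a weighted adjacency structure, returning the waypoint list for one trip. Station names and weights below are illustrative, not the actual network data.

```python
import heapq

def shortest_path(adj, start, goal):
    """Dijkstra over a weighted adjacency dict {node: {neighbour: weight}},
    returning the waypoint list from start to goal."""
    dist = {start: 0.0}
    prev = {}
    heap = [(0.0, start)]
    seen = set()
    while heap:
        d, node = heapq.heappop(heap)
        if node in seen:
            continue
        seen.add(node)
        if node == goal:
            break
        for nbr, w in adj.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                prev[nbr] = node
                heapq.heappush(heap, (nd, nbr))
    # Walk the predecessor chain back from the goal to recover the waypoints.
    path = [goal]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Toy network: neighbouring stations with illustrative inter-station weights.
network = {
    "Brixton": {"Stockwell": 2},
    "Stockwell": {"Brixton": 2, "Oval": 1, "Vauxhall": 2},
    "Oval": {"Stockwell": 1, "Kennington": 1},
    "Vauxhall": {"Stockwell": 2, "Kennington": 3},
    "Kennington": {"Oval": 1, "Vauxhall": 3},
}
```

In the actual script, scipy solves all station pairs at once, which is why it scales to resolving every trip in the sample against a single precomputed shortest-path array.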

This approach proved more computationally feasible than earlier approaches using Nav Meshes, as explored and tested by Gianfranco.

Unity Implementation.

The Unity implementation consists of various components:

1)     The 3D models for the London landmarks were sourced from the SketchUp 3D Warehouse. Their materials were removed and they were imported into Unity in FBX format;

2)     The outline for Greater London was prepared as a shapefile by Gianfranco, who subsequently exported it to FBX via CityEngine;

3)     An empty game object is assigned with the main “Controller” script that provides a springboard to other scripts and regulates the timescale and object instancing throughout the visualisation. This script allows numerous variables to be set via the inspector panel, including the maximum and minimum time scales, the maximum number of non-disabled trip objects permitted at one time (to allow performance fine-tuning), a dynamic time scaling parameter, and the assignment of object prefabs for the default disabled and non-disabled trip objects. Further options include a movie-mode with preset camera paths and a demo of station selections;

4)     One of the challenges in the creation of the visualisation was the need to handle time scaling dynamically: to reduce computational bottlenecks during rush hours, and to speed up the visualisation between midnight and morning so as to shorten periods of low activity. The Controller script is therefore written to dynamically manage the time scale;

5)     The Controller script relies on the “FileReader” script to load the CSV files. The stations CSV file is used to instance new station symbols at setup time, each of which in turn contains a “LondonTransport” script that spins the station symbols. It also sets up behaviour so that when a station is clicked, the station name is instanced above the station (“stationText” script), and only trips to and from that station are displayed via the Controller script. The FileReader script also reads the main trip-data CSV file and loads all trips at setup time into a dictionary of trip-data objects, each including the start and end stations as well as the waypoint path generated by the Python script. The trip-data objects are then sorted into a “minute” dictionary that keeps track of which trips are instanced at each point in time. The minute dictionary is in turn used by the Controller script for instancing trip objects.

6)     The “Passenger” and “SelectedPassenger” objects and their accompanying scripts govern the appearance and behaviour of each trip instance. Since thousands of these scripts can be active at any one time, they are kept as simple as possible, effectively containing only the information needed to set up the trip interpolation, based on Bob Berkebile’s free and open-source iTween for Unity. iTween provides easing and spline-path parameters, thereby reducing the complexity required for advanced interpolation. The trip-instance scripts also destroy the object once it arrives at its destination.

7)     Other script files are responsible for managing the cameras, camera navigation settings, motion paths for the movie mode camera, rotation of the London Eye model, and for setting up the GUI.
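The time-scaling and minute-dictionary mechanisms from points 4 and 5 can be sketched as follows. This is a Python illustration of the logic only (the actual scripts are Unity C#), and all tuning numbers are invented.

```python
from collections import defaultdict

def dynamic_time_scale(active_trips, min_scale=30, max_scale=600, target=1500):
    """Run the simulated clock fast when few trips are active (e.g. overnight)
    and slow it towards min_scale as the active-trip count approaches a cap.
    All numbers here are illustrative tuning values."""
    load = min(active_trips / target, 1.0)
    return max_scale - (max_scale - min_scale) * load

def build_minute_index(trips):
    """Bucket trips by departure minute (minutes since midnight) so the
    Controller can look up which trips to instance at each simulated minute."""
    index = defaultdict(list)
    for trip in trips:
        index[trip["start_minute"]].append(trip)
    return index
```

At each simulated minute, the Controller would instance every trip in that minute's bucket and feed the current active-trip count back into the time-scale function.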

Unity 3d

Visual and Interaction Design

It was decided to keep the London context minimal with only selected iconic landmarks included for the purpose of providing orientation, and a day-night lighting cycle to give a sense of time. Disabled Freedom Pass journeys consist of a prefab object with a noticeable bright orange trail and particle emitter, in contrast to other trips which consist of simple prefab objects with a thin white trail renderer and no unnecessarily complex shaders or shadows due to the large quantities of these objects. The trip objects are randomly spaced across four different heights, giving a more accurate depiction of the busyness of a route, as well as a more three-dimensional representation of the flows.

Interactivity is encouraged through the use of keyboard navigation controls for the cameras, as well as a mouse “look around” functionality, switchable cameras, and the ability to take screenshots. When an individual station is clicked, only trips to or from that station are displayed.

Unity in real space: exploring a workflow from GIS.

We explored the possibility of taking advantage of “nav meshes” instead of creating a specific routing algorithm for solving trips in Unity. Physical barriers (and the consequent pathways for nav meshes) were used to suggest pathways for the movement of agents, based on volumetric objects extracted from GIS layers.

As a first experiment, a sample area of the tube network was extracted, and agents were set up to seek targets while constrained to these pathways. It proved quite difficult, however, to extract the geometric data from shapefiles into Unity. After some days of difficulties and computer crashes, a fully automated chain of software and commands was developed to transfer the “canyon” files from ArcGIS into SketchUp, and then into Unity. Other software attempted included Autodesk AutoCAD and 3D Studio Max.

However, the need to create 3D elements covering the entire TFL network made broadening this exploration very difficult. Only CityEngine aided the process, because it can import shapefiles, simplify them to a chosen accuracy, and export to FBX, which is readily usable by Unity.

The concurrent exploration by Gareth of a Python-based routing algorithm, which removed the need to process GIS data, stopped this line of development.



The city is its people. It is the people it houses and the people it bears on its streets and infrastructure. People move within the city and its public spaces, venues of social interaction and economic exchange, which are the predominant activities that constitute cities. The city must be able to host and accept all of this movement and exchange within and around its public spaces, so that it can offer people the opportunity to be more active and socially engaged. It must truly offer the possibility of limitless exploration of public space and the opportunities it affords. For this to be so, the city has to be equipped with adequate infrastructure for people to use and to move around its clusters of social and retail spaces. The ever-increasing size of cities also means that transportation infrastructure is increasingly essential for the mobility of citizens. It is, therefore, a fundamental role of the city to provide its residents with sufficient and equitable access to its streets, public spaces, and transportation network.


If public space is not available to all, then it is no longer truly public; it is limited to those who have access to it. What defines a space as public is its accessibility: the term refers to a space being, and feeling, open and accessible to all. Accessibility issues arise when a space is not accessible to all. Many spaces seem to be designed for the stereotypical white (Western) successful male in his prime, and the more a person deviates from this “ideal”, the more inaccessible a space might be or seem. Such deviations can relate to race (Asian or African), gender (women or homosexual people), or ease of mobility (disabled, blind, deaf, or elderly people, pregnant women, and new parents with buggies).


For our project, we chose to focus on the work that has been done to make public transport available to people with limited physical mobility, such as wheelchair users. Observing London, we can see, seemingly everywhere, a significant attempt to make spaces, buildings, and public transport accessible to this group of the population. Our goal is to explore to what extent these are used relative to the general population and the general use of these spaces. It is an attempt, in a sense, to gauge how effective the efforts towards improving accessibility have been.

We believe that a city can only be truly active and engaged when all of its citizens are afforded the same opportunities to fully engage with it across space and time. We recognise that London is developing mechanisms and techniques to get everyone involved in its daily activities, to help it breathe and grow. We therefore want to detect whether these techniques work effectively enough, and how they contribute to making London active. How successful is London in getting everyone involved in its somewhat frenetic pace?


As mentioned before, in order to answer some of our questions, we are focusing our exploration and research on the use of the public transport network by people with disabilities. We will try to show, through the data we have available, to what extent the public transport network is used by Disabled Freedom Pass holders. We hope that this approach and visualisation will illustrate a number of potential issues concerning the accessibility of public transport.


We recognise that we are setting off with quite a goal, and that what we have at hand can hardly prove anything conclusively. What we will try to do, however, is visualise and represent the information we can extract from the data available to us. Many assumptions are made, and many numbers are generalised, to make the results communicate as much as they can. We know that our sample is limited, because not all London residents with a disability request a Freedom Pass, and not all Disabled Freedom Pass holders use public transport. We keep the process open to the reader/viewer, so that they can choose which conclusions to draw from this exploration.

We have obtained a 5% sample of Oyster card trips made in November 2009 on London’s public transport network, with data on the types of users using each mode of transport. Initially, we extract the data related to disabled pass holders and present it in contrast with the rest of the users. The results are yet to be seen.