AccessLondon – an Android app

 

Logo of the app

Based on initial visualisations, we observed that many accessible tube stations were used less than we had anticipated. We therefore thought it could be beneficial to create an app that points the user towards the closest accessible stations, along with pertinent information about each station and its context. The idea continued to evolve, and nearest bus stop information was added, both because of the popularity of buses and because all London buses offer disabled access. Many applications focus on trip planning; this one is different in that it focuses on finding the nearest accessible stations and bus stops for disabled users. The process is fast and simple, requiring just two clicks (taps).
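At its core, the two-tap flow is a nearest-neighbour search over the set of accessible stations. A minimal sketch in Python (the station names and coordinates below are purely illustrative, not the app's actual data):

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two WGS84 points."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371.0 * asin(sqrt(a))

# Hypothetical sample of step-free stations: (name, lat, lon).
STATIONS = [
    ("Westminster", 51.5010, -0.1254),
    ("Stratford", 51.5416, -0.0042),
    ("Brixton", 51.4627, -0.1145),
]

def nearest_accessible(lat, lon, stations=STATIONS):
    """Return the accessible station closest to the user's position."""
    return min(stations, key=lambda s: haversine_km(lat, lon, s[1], s[2]))
```

The same search, applied to a list of bus stops, covers the bus-stop feature as well.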

Screenshot of app home page

 

To add to this concept, the user also receives live updates from the Twitter accounts of @TfLAccess and @FreedomPassLDN, two channels that deal with the accessibility of public transport in London, where users can post queries that are answered frequently. As an extra informative feature, the app gives users data on the popularity of each mode of transport.

 

The source code can be found here and the application can be downloaded from here.

The active “city” beyond barriers. Commuters and citizens.

Figure 1 Project logo

We live above the ground. It is from above that we start and end our trips, and what is above that drives our decisions. This part of the project focuses on what is above ground: the distribution and composition of the population. The distribution and ratio of disabled to non-disabled residents is geographically visualised to explore the dynamic spatial and temporal patterns that emerge. This part of the project can be summarised by the accompanying logo displayed in Figure 1. Above the ground we have the living, active city, which depends for its function on a below-ground transportation network. In using it, people constantly move from place to place, changing the composition of the population in space and time in proximity to stations. We establish a link between above-ground and below-ground data by joining census data with TFL data.

Disability in census 2011

TFL offers two versions of the Freedom Pass, one for the elderly and the other for the disabled. For our exploratory analysis, we assume an overlap between those who are eligible for the Disabled Freedom Pass and those who report their day-to-day activities as “limited a lot” or “limited a little”. These categorisations require a long-term health problem or disability lasting more than 12 months, so they do not directly include elderly people unless they have a specific disability or a significant, long-lasting age-related ailment. The fact that Freedom Passes granted for old age are easier to obtain than those granted for disability creates a possible misalignment between the two datasets. A further distinction is that the TFL data includes trips made by non-residents[1]. In general terms, our exploration assumes that:

Assumption 1: (Day-to-day activities limited a lot + Day-to-day activities limited a little) = Eligible for freedom pass. 

We investigate at the MSOA[2] level the spatial relationship amongst populations with different percentages of people reporting disabilities. In Figure 2, some patterns can already be seen: the more central areas, which have better transport access, are characterised by a lower proportion of the population reporting limitations in their day-to-day activities.

Figure 2 Spatial patterns of disabilities

The construction of the geodata

The analysis proceeds by creating catchment areas around all tube stations, in order to explore workflows, gain insight from merging the two datasets, and investigate opportunities for visualisation. A buffer of 2 km around the stations identifies the approximate area served by the network (Figure 3).

Figure 3 Stations (red dots) and buffered area

Then:

Assumption 2: Every station has an exclusive “catchment area”

The space around the stations is partitioned and assigned to each station using a Voronoi tessellation (Figure 4).

Figure 4 Voronoi from stations

The Voronoi cells and the buffer are then linked with census data using the centroids of the Output Areas (Figure 5), the smallest spatial units available in the census.
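Assigning each OA centroid to its nearest station is exactly the Voronoi allocation, and the 2 km buffer becomes a simple distance cutoff. A minimal Python sketch of this linkage, assuming projected coordinates in metres (the actual processing used GIS tools; all names here are illustrative):

```python
def assign_to_catchments(stations, centroids, buffer_m=2000.0):
    """Nearest-station (Voronoi) allocation of OA centroids, limited to
    a 2 km buffer around the network.  Coordinates are assumed to be
    projected (e.g. OSGB36 eastings/northings in metres)."""
    allocation = {}
    for oa, (x, y) in centroids.items():
        # The nearest station defines the Voronoi cell the centroid falls in.
        name, d = min(
            ((s, ((x - sx) ** 2 + (y - sy) ** 2) ** 0.5)
             for s, (sx, sy) in stations.items()),
            key=lambda t: t[1],
        )
        if d <= buffer_m:          # inside the buffered service area
            allocation[oa] = name  # Assumption 2: one exclusive catchment
    return allocation
```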

Figure 5 Voronoi and OA centroids (blue dots)

For every catchment area, the census data for the resident population, and for residents with long-term health problems or disabilities limiting their activities a lot or a little, are summarised. This is used as a starting point for exploring how the resident population may vary through the course of the day based on tube trip data.
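The per-catchment summary can be sketched as a simple aggregation (Python for illustration; the field names are assumptions, not the actual census column names):

```python
def summarise_catchments(allocation, census):
    """Sum resident population and activity-limited residents per catchment.
    `census` maps OA code -> (all_residents, limited_a_lot, limited_a_little)."""
    totals = {}
    for oa, station in allocation.items():
        residents, lot, little = census[oa]
        t = totals.setdefault(station, {"residents": 0, "limited": 0})
        t["residents"] += residents
        t["limited"] += lot + little  # Assumption 1: eligible for a Freedom Pass
    return totals
```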

The dynamics of these daily population movements are explored using Processing, into which the data and geospatial information are imported and animated. Note that in Figure 6 the starting display shows zero values for some central stations that have no allocated population.

After some hours, the situation changes, but mainly in the central areas.

The specific features of this code

This exploration attempts to integrate TFL data with “external” census and spatial data to gain a more complex and dynamic conception that goes beyond solely creating engaging visualisations with the city as a background.

The core Processing code derives from the initial Processing sketches and accompanying code developed by Gareth and Katerina, and TFL data preparation by Katerina and Stelios. Subsequent development specific to this sub-project included:

  1. Import and display shapefiles using the geoMap library, creating the “zero-night-time” situation with the initial allocation of population from census data, and updating the data as soon as people tap in or out of a station. A table holding the values for disabled and non-disabled people in every catchment area is created in memory and updated in real time.
  2. Create a diverging choropleth map to represent areas with ratios higher or lower than the median (since the distribution at time=0 is roughly Gaussian).
  3. Highlight the existence of areas where the number of commuters taking the tube exceeds the resident population, giving rise to negative numbers.
  4. Attempt to optimise the memory use of the Processing code by deleting used ArrayLists and HashMaps.
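The in-memory table described in steps 1 and 3 can be sketched as follows; the original is a Processing sketch, so this Python version is purely illustrative:

```python
class CatchmentTable:
    """In-memory table of the population present in each catchment area,
    updated as passengers tap in (leave the area) and tap out (arrive in
    it).  Mirrors the logic of the Processing sketch; names are illustrative."""

    def __init__(self, night_time_counts):
        # "Zero-night-time" allocation from the census data.
        self.counts = dict(night_time_counts)

    def tap_in(self, station):
        # A departure: one person leaves the catchment.  Central stations
        # start at zero, so departures there push the count negative,
        # exposing areas where commuters exceed the resident population.
        self.counts[station] -= 1

    def tap_out(self, station):
        # An arrival: one person enters the catchment.
        self.counts[station] += 1
```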

Methodological issues

  • Large amounts of data require an experienced and skilled programmer for optimisation. Architects and planners can only take their coding proficiency so far; dealing with large amounts of data requires specific training.
  • It became evident that Processing is a very supportive and well-documented programming language. The teaching style was challenging but stimulating, and encouraged me to attempt my own programming, with some support from other members of the group.
  • A smarter way to calculate catchment areas could come from a cellular automata (CA) implementation over the transportation network and land-use maps. It is difficult to model the behaviour of citizens living in central areas because they have access to multiple stations within walking distance.

Proposed theme issues

  • The visualisation showed that negative values occurred only in the central areas, while the other areas, whose resident populations increase, show only very small changes.
  • The dynamic visualisation of the ratio between people without limitations and disabled people is not especially engaging from a dynamic point of view.

References:

geoMap library (http://www.gicentre.net/geomap/) from the giCentre, City University London.

Image with accessible logo from http://www.waag.co.uk/awards.asp

 

Identified Flying Objects

The question behind the third part of our portfolio is: “Is there an effective way to visualise the relationship between the (underground) spatial movement of people with disabilities and the (above-ground) built environment?”. What we mean by “effective” is an engaging way to communicate the desired message to the public in an easily comprehensible form. In this case, the message is the information that our data encapsulates, which is revealed through an exploration of the key points that influence the nature of our visualisation methodologies.

The challenge is therefore twofold: understanding and revealing the nature of the data; and developing visualisation methodologies that are the most capable of releasing this information.

To begin with, the nature of our data implies the temporal duration and spatial distance of each trip from its starting to ending points. These points are located within the fabric of the urban environment, but the movement between these points occurs in a void of spatial context. We are therefore seeking a way to visualise this movement in a manner that permits a greater comprehension of temporal duration and spatial distance, and which can therefore be perceived in a more engaging way.

There are many ways to depict the city. The potential variances between what is depicted and what the observer perceives is narrowed in the case of a picture, or even better, a video of the city. The concept thus lends itself to representing the underground movements of people above-ground instead, where the animated patterns of movement contrast with the stationary built environment, while simultaneously making the connection between underground and above-ground locations. In a sense, here we can perceive these movements as an engine that shapes and empowers the city over time.

For brevity, we compress a day’s movements of disabled Freedom Pass card users into approximately one and a half minutes, which also aids a better understanding of density, scale, and temporal and spatial patterns.

A short analysis

At the bottom of the display there is a line with three attributes: the time, the mean straight-line distance that disabled Freedom Pass card users travel per day, and the mean straight-line distance that other card holders travel per day. The interesting result is that general trips cover a radius of roughly eight kilometres, whereas disabled Freedom Pass card users travel an average of seven kilometres per day. This difference is constant throughout the day, regardless of rush hours, which can lead to several hypotheses about how the London Underground is used by disabled people; we leave this open-ended rather than making overly general or speculative assumptions.
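The mean straight-line distance shown in the display line is a simple average over trips. A Python sketch, assuming projected start and end coordinates in metres (the original calculation runs inside a Processing sketch):

```python
from math import hypot

def mean_trip_distance(trips):
    """Mean straight-line (as-the-crow-flies) distance over a list of
    trips, each given as ((x1, y1), (x2, y2)) in projected metres."""
    if not trips:
        return 0.0
    total = sum(hypot(x2 - x1, y2 - y1) for (x1, y1), (x2, y2) in trips)
    return total / len(trips)
```

Running this separately over the disabled-pass trips and all other trips yields the two figures shown alongside the clock.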

The method

The project Identified Flying Objects was built in the open-source animation and rendering software Blender v2.70. It is worth mentioning that the whole procedure was entirely new for the main contributor (Stelios) as well as for the rest of the group: from Blender and the Adobe Premiere video editor to visual effects (VFX) processes (Zwerman & Okun, 2010). VFX are display techniques that combine live-action scenes with generated imagery to compose a realistic environment (Brinkmann, 2008). The choice of the VFX method for this project was based on three factors. First, as mentioned above, the wish to convey the nature of the data in an engaging and familiar contextual environment. Secondly, the opportunity to explore novel visualisation techniques that are not widely used in scientific analysis. Finally, using real-life video rather than building a detailed digital 3D model of London allowed for a realistic outcome and an efficient workflow.

The workflow

The VFX process consists of three distinct sections.

1)     The first section begins with the video recording, which was performed with an iPhone 5. The video was taken from the 33rd floor of the Shard (51.5045°N, 0.0865°W). Blender can only recognise two movement planes for a camera’s path; it cannot handle a combination of movement in 3D space and rotation around a point. The next step is to set up the digital camera in the Blender environment. This camera recognises the path in the recorded video by tracking random points across all video frames. In this way, the digital camera performs exactly the same motion as the physical camera, so the rendered digital output is accurately aligned.

 

Figure 3

Importantly, the digital camera must be set up with the same technical specifications as the video camera, including factors such as focal length, angle of view, and lens distortion.

2)     The second step is the combination of the video with the information derived from the dataset. The data was imported into Blender using code scripted for Blender’s Python “bpy” API (contributed by Gareth). The script consists of the following steps: importing all data and creating trip data “objects”, which are stored in a trip “dictionary”; creating a default Blender material for trip objects; setting up a starting point for the Blender scene, including a base plane and lighting; optional (not used for the final rendition) smoke trails; and iterating through the trip-object dictionary to create an object for each trip. Whilst iterating, each trip object is key-framed as invisible, then visible at its starting point at the time the trip commences, then at its ending point, and then invisible again. Note that because the objects are not instanced and destroyed “on the fly”, there are practical limitations on the number of objects that can be imported. The plane and the objects constitute the 3D digital environment of our model, which is overlaid with the video of London.
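The keyframing sequence described above can be sketched as a plain-Python schedule builder. Inside Blender, each tuple would translate into keyframe_insert() calls on the object’s location and visibility; the one-frame offsets used here are illustrative, not the script’s actual values:

```python
def trip_keyframes(start_frame, end_frame, origin, destination):
    """Keyframe schedule for one trip object, mirroring the bpy script:
    invisible -> visible at the origin when the trip starts -> at the
    destination when it ends -> invisible again.  Each tuple is
    (frame, location, visible)."""
    return [
        (start_frame - 1, origin,      False),  # hidden before departure
        (start_frame,     origin,      True),   # appear at the origin
        (end_frame,       destination, True),   # arrive at the destination
        (end_frame + 1,   destination, False),  # hidden after arrival
    ]
```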

Figure 9

Due to the setup in step 1, the start and end points of the flying objects are correlated to the video. Unfortunately, Blender is not capable of automatically placing the digital camera in the correct position according to its real coordinates, even though the software already recovers the correct height during the first section. This can be overcome manually by matching points in the 3D model with key points in the video. This is by no means perfect and is considered a limitation of Blender.

3)     The third part of the VFX process is the modification of the final output into a realistic and illustrative visualisation, including lighting and texture settings. A significant role was played by Adobe Premiere in generating the final edit of the video; it was used for visual effects such as blur, for adding sound, and for reducing the length of the clip to fit the needs of the presentation. A Processing sketch (contributed by Katerina) sums the distances between points and divides by the number of trips at each time step, in order to calculate the mean trip distance for each social group. The Processing rendering was exported and overlaid on the Blender output in Adobe Premiere.

References

Brinkmann, R. (2008) The Art and Science of Digital Compositing: Techniques for Visual Effects, Animation and Motion Graphics. 2nd edition. Amsterdam; Boston, Morgan Kaufmann.

Zwerman, S. & Okun, J.A. (2010) The VES Handbook of Visual Effects: Industry Standard VFX Practices and Procedures. 1st edition. Burlington, MA, Focal Press.

Visualising Disabled Freedom Pass Card trips on the London Underground

This video was generated in Unity3D to visualise the trips made by disabled Freedom Pass holders on the London Underground network. The bright orange streaks represent disabled Freedom Pass users, whereas the smaller white trails represent other users. The data is sourced from Transport for London and is drawn from a 5% sample for the month of November 2009.

Data courtesy and copyright of Transport for London.

Created by Gareth Simons as part of the Activity Beyond Barriers group presentation with Gianfranco, Stelios, & Katerina at CASA UCL.

Adding Interactivity with Unity 3D

The ins-and-outs of different visualisation software.

Each of the visualisation strategies employed by our group reveals a different natural fit for the exploration, visualisation, and interaction with data. Unity occupies a unique position because it offers a degree of ‘real-time’ interaction and performance not matched by the other approaches. As a game engine, it is designed from the ground-up for this purpose, thereby offering a unique range of benefits:

1)     It offers a modular approach towards “assets” and “resources”, allowing for the flexible arrangement and combination of data, 3D models, settings, and scripts;

2)     It separates ‘real time’ from the frame rate, which means that the frame rate is constantly optimized for a device’s computational performance, without leading to wildly fluctuating time-step changes;

3)     It is capable of handling a large quantity of objects in real time. Testing with minimal rendering requirements indicated the ability to manipulate well upwards of 3,500 objects, depending on computational power and the time scale;

4)     It is further capable of offering sufficiently high-quality graphics rendered in real time, thereby distinguishing itself from traditional rendering and animation engines, which can be notoriously slow at rendering, albeit with increased realism.

5)     Due to these and other reasons, it is inherently well-suited to the creation of dynamic and interactive visualisations that actively respond to user inputs.

Unity’s capabilities stand in contrast to our experimentation with Blender (per the video developed by Stelios), which favoured a more traditional approach to modelling and animation. Blender does not offer a naturally interactive format, and its scripting API is difficult to understand as well as weakly documented. We also experimented with Rhino, though we did not pursue it beyond initial experimentation, in which we imported CSV data to animate; we found its abilities more suited to generative and form-based modelling than to the creation of rich and dynamic scenes.

Unity 3d - Underground View

Data Preparation

The data preparation for Unity was done in Python and took three inputs:

1)     The tube lines with each of their stations. This was developed from work done by Katerina, in which she identified all neighbouring stations on each tube line;

2)     The station names with the station coordinates;

3)     The data file prepared by Katerina with assistance from Stelios, consisting of all trips.

The script creates a station index and a station coordinates list, which it then uses to build a weighted adjacency matrix of all stations on the network. The scipy.csgraph package is then used to return a solved shortest-path array. Subsequently, the data file is imported and the start and end location of each trip is resolved against the shortest-path array, with the resulting waypoints for each trip written to a new CSV file. A further CSV file is also generated, containing each station’s name and coordinates.
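The routing step can be illustrated with a standard-library Dijkstra that recovers the waypoint list from a predecessor map. The actual pipeline used scipy.sparse.csgraph over a full adjacency matrix; this is only an equivalent sketch for a single origin-destination pair, with a toy adjacency dict in place of the real network:

```python
import heapq

def shortest_path(adj, start, end):
    """Waypoint list between two stations on a weighted adjacency dict,
    e.g. adj = {"A": {"B": 2.0}, ...}.  Classic Dijkstra with a
    predecessor map, walked backwards to recover the route."""
    dist, prev = {start: 0.0}, {}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in adj.get(node, {}).items():
            nd = d + w
            if nd < dist.get(nbr, float("inf")):
                dist[nbr], prev[nbr] = nd, node
                heapq.heappush(heap, (nd, nbr))
    # Walk predecessors back from the destination to list the waypoints.
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return list(reversed(path))
```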

This approach proved more computationally feasible than earlier approaches using Nav Meshes, as explored and tested by Gianfranco.

Unity Implementation

The Unity implementation consists of various components:

1)     The 3D models for the London landmarks were found in the SketchUp 3D Warehouse. Their materials were removed and they were imported into Unity in FBX format;

2)     The outline for Greater London was prepared as a shapefile by Gianfranco, which he subsequently exported to FBX via CityEngine;

3)     An empty game object is assigned with the main “Controller” script that provides a springboard to other scripts and regulates the timescale and object instancing throughout the visualisation. This script allows numerous variables to be set via the inspector panel, including the maximum and minimum time scales, the maximum number of non-disabled trip objects permitted at one time (to allow performance fine-tuning), a dynamic time scaling parameter, and the assignment of object prefabs for the default disabled and non-disabled trip objects. Further options include a movie-mode with preset camera paths and a demo of station selections;

4)     One of the challenges in the creation of the visualisation was the need to develop a method for handling time scaling dynamically to reduce computational bottlenecks during rush-hours, and also to speed up the visualisation for the hours between midnight and morning to reduce the duration of periods of low activity. The Controller script is therefore written to dynamically manage the time scale;

5)     The Controller script relies on the “FileReader” script to load the CSV files. The stations CSV file is used to instance new station symbols at setup time, each of which, in turn, contains a “LondonTransport” script file, with the purpose of spinning the station symbols. It also sets up behaviour so that when a station is clicked, the station name is instanced (“stationText” script) above the station, and only trips to and from that station are displayed via the Controller script. The FileReader script also reads the main trip data CSV file, and loads all trips at setup time into a dictionary of trip data objects that include the starting and ending stations, as well as the waypoint path generated by the Python script. The trip data objects are then sorted into a “minute” dictionary that keeps track of which trips are instanced at each point in time. The minute dictionary is in turn used by the Controller script for instancing trip objects.

6)     The “Passenger” and “SelectedPassenger” objects and accompanying script files are responsible for governing the appearance and behaviour of each trip instance. Since thousands of these scripts can be active at any one point in time, they are kept as simple as possible, effectively containing only the information for setting up the trip interpolation based on Bob Berkebile’s free and open-source iTween for Unity. iTween is equipped with easing and spline path parameters, thereby reducing the complexity required for advanced interpolation. The trip instance scripts also destroy the object once it arrives at its destination.

7)     Other script files are responsible for managing the cameras, camera navigation settings, motion paths for the movie mode camera, rotation of the London Eye model, and for setting up the GUI.
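The dynamic time-scaling idea from point 4 can be sketched as a simple inverse relationship between the number of live trip objects and the simulation speed. The Controller script itself is C# within Unity, so this Python sketch is only illustrative, and all parameter names and constants are assumptions rather than the script’s actual settings:

```python
def dynamic_time_scale(active_trips, min_scale=30.0, max_scale=600.0,
                       comfortable_load=1000):
    """Simulated seconds per real second, scaled inversely with activity:
    slow down during rush hour (many live trip objects, avoiding a
    computational bottleneck) and speed up through the quiet small hours."""
    if active_trips <= 0:
        return max_scale  # nothing moving: run at full speed
    scale = max_scale * comfortable_load / (comfortable_load + active_trips)
    return max(min_scale, min(max_scale, scale))
```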

Unity 3d

Visual and Interaction Design

It was decided to keep the London context minimal with only selected iconic landmarks included for the purpose of providing orientation, and a day-night lighting cycle to give a sense of time. Disabled Freedom Pass journeys consist of a prefab object with a noticeable bright orange trail and particle emitter, in contrast to other trips which consist of simple prefab objects with a thin white trail renderer and no unnecessarily complex shaders or shadows due to the large quantities of these objects. The trip objects are randomly spaced across four different heights, giving a more accurate depiction of the busyness of a route, as well as a more three-dimensional representation of the flows.

Interactivity is encouraged through keyboard navigation controls for the cameras, a mouse “look around” function, switchable cameras, and the ability to take screenshots. When an individual station is clicked, only trips to or from that station are displayed.

Unity in real space: exploring a workflow from GIS.

The possibility of using “nav meshes” instead of creating a specific routing algorithm for solving trips in Unity was explored. Physical barriers (and the consequent pathways for nav meshes) were used to suggest pathways for the movement of agents, based on volumetric objects extracted from GIS layers.

As a first experiment, a sample area of the tube network was extracted and agents were set up to seek targets along these constrained pathways. However, it proved quite difficult to extract the geometric data from shapefiles into Unity. After some days of difficulties and computer crashes, a fully automated chain of software and commands was developed to transfer the “canyon” files from ArcGIS into SketchUp, and then into Unity. Other software attempted included Autodesk AutoCAD and 3D Studio Max.

However, the need to create 3D elements covering the entire TFL network made broadening this exploration very difficult. Only CityEngine aided the process, because it can import shapefiles, simplify them to a chosen accuracy, and export to FBX, which is readily usable by Unity.

Gareth’s concurrent exploration of a Python-based routing algorithm, which overcame the need to process GIS data, stopped this line of development.