
Big data to plan for council tax reform

In early September, the third annual Data for Policy international conference took place here in London. The objective of these conferences is simple: to share learning and knowledge about using data in the public sector.

I went along as the representative for the Intelligence Unit at the Greater London Authority (GLA). I presented a paper on using innovative big data approaches to understand the distributional impact of potential council tax reforms in London. An important component of the GLA’s approach to devolution, this analysis essentially involved creating a model that contained every household in London, with information about each one’s current property value, council tax band and location. In the absence of a complete database containing this information, several individual datasets were combined into a model containing over 1.3 million data points. You can read more about our analysis here.
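For readers curious about what this kind of data linkage looks like in practice, here is a minimal sketch in Python using pandas. The file names, column names (including the shared property identifier) and the closing summary are illustrative assumptions, not a description of the actual GLA model.

```python
import pandas as pd

# Hypothetical inputs: a property valuation dataset, a council tax band
# lookup and a geography lookup, each keyed by a shared property identifier.
prices = pd.read_csv("household_prices.csv")        # columns: uprn, price
bands = pd.read_csv("council_tax_bands.csv")        # columns: uprn, band
locations = pd.read_csv("household_locations.csv")  # columns: uprn, borough

# Join the datasets on the shared identifier so that each household record
# carries its price, council tax band and location side by side.
households = (
    prices
    .merge(bands, on="uprn", how="inner")
    .merge(locations, on="uprn", how="inner")
)

# With the combined model, the distributional impact of a reform can be
# explored, for example by comparing typical prices within each band and area.
print(households.groupby(["borough", "band"])["price"].median())
```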

Government by algorithm?

There were also representatives from government departments and agencies, academia, research organisations and businesses from across the world. Here is just a small glimpse of the things we discussed:

• Creating new definitions that reflect modern industries
We discussed ways of creating new definitions that relate to 21st century sectors. These can be used to illustrate the importance of emerging sectors and to track their performance over time. For example, Nesta is working on identifying and then classifying ‘innovative’ companies using data scraping and unsupervised learning (a rough sketch of what that kind of clustering can look like follows this list). And the Office for National Statistics (ONS) is using administrative data in addition to surveys to better classify specific FinTech activities – something limited by the current Standard Industrial Classification (SIC) codes.

• New ways of collecting data
Organisations are using novel data collection techniques to create new indicators. For example, the Local Data Company, in partnership with University College London and the Consumer Data Research Centre, is using sensors to measure and map live footfall in shopping areas. Similarly, the ONS is working with Zoopla to create estimates of private rents at low-level geographies.

• New policy insights
We also discussed ways that data can be used to provide new policy insights. The OECD is using Google Street View data to rank global cities in terms of their accessibility to services such as shops, transport, hospitals and parks. This can then be used by city planners to determine the ‘right’ level of accessibility to such services in their cities. Similarly, UNICEF has used data to match demand and supply of relief efforts and target those most in need. And Sandtable has used agent-based models to simulate the long-term effects of inherited wealth on social equality.
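Picking up the unsupervised learning example from the first bullet above, the sketch below shows, in broad strokes, how free-text company descriptions can be clustered without relying on predefined SIC codes. The toy descriptions, the choice of TF-IDF features and k-means, and the number of clusters are all assumptions for illustration; they are not Nesta’s actual method.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Toy company descriptions standing in for text scraped from company
# websites (entirely made up for illustration).
descriptions = [
    "machine learning platform for retail demand forecasting",
    "artisan bakery supplying local cafes",
    "peer-to-peer payments app with open banking integration",
    "traditional family-run plumbing and heating services",
    "satellite imagery analytics for precision agriculture",
    "independent bookshop and coffee house",
]

# Turn the free text into numeric features, then group similar companies
# together without using any predefined industry classification.
features = TfidfVectorizer(stop_words="english").fit_transform(descriptions)
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for label, text in zip(clusters, descriptions):
    print(label, text)
```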

The common feature – knowledge

What these projects have in common is using data to push the boundaries of our knowledge, whether by creating new indicators, new definitions or new insights. It all adds to our understanding of how things work and can be harnessed by the public sector to improve the efficiency of service delivery or to develop new policies. But to have this intended impact, we need to share our analysis and findings, as well as actively encourage civil servants to access and use this data.

Making data more accessible

That is why we also talked about ways to make data more accessible. This includes Eurostat, which is using stories and interesting facts (such as this one showing where your ice cream comes from) to get more people using data. At City Hall, we created the London Datastore as a central repository for city data. We also regularly present our analysis to policy colleagues so that they can embed it in their work, and we engage with them to fill any data gaps.

But there’s still work to be done to remove some of the perceived barriers to working with data. Some people think you need to be a maths expert to work with data, or to have specialist knowledge to understand the jargon, but that doesn’t need to be the case. For starters, we can give examples of how our analysis can be used by policymakers, minimise the use of technical language, use visuals more effectively, and promote our work to a wider audience.

There are already some projects that do just that. For example, the ONS Data Science Campus has developed a tool called Churchill Discover that visually shows trends in key metrics. This not only makes policymakers more aware of the data that is available but, because it does the analysis behind the scenes, it also doesn’t require any specialist know-how. It is currently being tested within government, but policymakers have already highlighted how useful it is for accessing data.

Going forwards

With so many organisations using data to support policymaking, it’s important that we all share our work to advance our collective knowledge. The Data for Policy conference is a good example of how we can do this: by talking to one another. For our part, we also make our work freely available on both the London Datastore and the City Hall website, and we have plans to disseminate it more widely. We can also make our work more accessible to policy colleagues by using plain English and illustrating its worth, among other things. These two steps – sharing data and making it accessible – are essential in moving towards an evidence-led public sector and ensuring data is ultimately used in policymaking.