This tutorial demonstrates how to merge data cubes using linked data frames.
For brevity, the code used to construct the input cubes is hidden. You can read the details in the source code for this vignette on GitHub.
There are several ways to combine data cubes: we can concatenate observations by row-binding tables with the same structure, or join cubes on their common dimensions by column-binding. We'll explore each in turn.
You can concatenate observations by “row binding” tables: adding the rows from one table onto the end of another, making a single, longer table. This is straightforward if the cubes have the same components, which in practice means the tables need to have the same set of columns.
The following two excerpts from population datasets on linked.nisra.gov.uk have the same structure.
One is for Local Government Districts:
The other for Parliamentary Constituencies:
These cubes are essentially the same; it's just that there are different types of geography in the area column. Because the URIs for the two geography types can't overlap (they're Uniform Resource Identifiers after all), we can concatenate the rows into a single table.
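Assuming both excerpts have been loaded as data frames with the same columns, the concatenation is a plain row bind. The URIs and counts below are made up purely for illustration (the real tables come from linked.nisra.gov.uk); the vignette's own hidden code uses vctrs::vec_rbind(), which behaves the same way for identical columns:

```r
# Hypothetical stand-ins for the two excerpts, sharing the columns
# area, year and population (values are illustrative only)
pop_lgd <- data.frame(
  area = c("http://example.org/lgd/N09000001", "http://example.org/lgd/N09000002"),
  year = 2020,
  population = c(143500, 216200)
)
pop_pc <- data.frame(
  area = c("http://example.org/pc/N06000001", "http://example.org/pc/N06000002"),
  year = 2020,
  population = c(102900, 98400)
)

# The geography URIs cannot collide, so row-binding gives one longer cube
pop_all <- rbind(pop_lgd, pop_pc)
nrow(pop_all)  # 4
```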
If the cubes have different structures, we will need to alter the tables of observations so that they have the same columns. We can remove surplus columns if we first filter the rows so that only a single value remains in that column. We can add missing columns by filling in values with a default.
These two excerpts from the Journey times to key services datasets demonstrate the problem.
These are the journey times from each neighbourhood to the town centre:
The employment centres table, by contrast, has one extra column: employment_centre_size. This breakdown wasn't applicable to town centres, but it's important for distinguishing employment centres.
| area | employment_centre_size | mode | year | value | unit |
|---|---|---|---|---|---|
| E01000001 | Employment centre with 100 to 499 jobs | PT/walk | 2017 | 20 | Minutes |
| E01000001 | Employment centre with 100 to 499 jobs | Cycle | 2017 | 12 | Minutes |
| E01000001 | Employment centre with 100 to 499 jobs | Car | 2017 | 13 | Minutes |
| E01000001 | Employment centre with 500 to 4999 jobs | PT/walk | 2017 | 4 | Minutes |
| E01000001 | Employment centre with 500 to 4999 jobs | Cycle | 2017 | 7 | Minutes |
| E01000001 | Employment centre with 500 to 4999 jobs | Car | 2017 | 7 | Minutes |
We could choose to ignore it, picking a single value from the breakdown. In some cases the codelist will be hierarchical, having e.g. a value for “Any” or “Total” at the top of the tree; choosing that value effectively ignores the breakdown.¹ In this case that's not possible, so we'll have to pick an employment centre size.
Let’s take moderately-sized employment centres. Once we’ve filtered to those rows, we can remove the redundant column:
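That filter-then-drop step might be sketched as follows, using a hypothetical jt_emp stand-in built from the excerpt above (the table name and values are illustrative, assuming the dplyr package):

```r
library(dplyr)

# Toy stand-in mirroring the employment centres excerpt above
jt_emp <- data.frame(
  area = "E01000001",
  employment_centre_size = rep(c("Employment centre with 100 to 499 jobs",
                                 "Employment centre with 500 to 4999 jobs"), each = 3),
  mode = rep(c("PT/walk", "Cycle", "Car"), 2),
  year = 2017,
  value = c(20, 12, 13, 4, 7, 7),
  unit = "Minutes"
)

# Filter to a single size band, then drop the now-constant column
jt_emp_mid <- jt_emp %>%
  filter(employment_centre_size == "Employment centre with 500 to 4999 jobs") %>%
  select(-employment_centre_size)
```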
We can then concatenate this with jt_town. If we do this now, however, there won't be a way to distinguish between the rows that come from each dataset. The destination of the service being travelled to is defined only in metadata (i.e. by the dataset's title) and isn't available in the data to let the observations distinguish themselves.² We need to add this column in ourselves:
| area | mode | year | value | unit | destination |
|---|---|---|---|---|---|
| E01000001 | PT/walk | 2017 | 4.00000 | Minutes | Employment Centre (500-4999 jobs) |
| E01000001 | Cycle | 2017 | 7.00000 | Minutes | Employment Centre (500-4999 jobs) |
| E01000001 | Car | 2017 | 7.00000 | Minutes | Employment Centre (500-4999 jobs) |
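Those two steps (tagging each table with its destination, then concatenating) might be sketched like this, using toy stand-ins for jt_town and the filtered employment-centre table; the values and column names are illustrative:

```r
library(dplyr)

# Minimal stand-ins; the real tables are built earlier in the vignette
jt_town <- data.frame(area = "E01000001", mode = c("PT/walk", "Cycle", "Car"),
                      year = 2017, value = c(9, 5, 6), unit = "Minutes")
jt_emp_mid <- data.frame(area = "E01000001", mode = c("PT/walk", "Cycle", "Car"),
                         year = 2017, value = c(4, 7, 7), unit = "Minutes")

# Record each observation's destination before concatenating
jt_both <- bind_rows(
  mutate(jt_town, destination = "Town Centre"),
  mutate(jt_emp_mid, destination = "Employment Centre (500-4999 jobs)")
)
```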
Alternatively, we can add the column to jt_town; vctrs::vec_rbind() will automatically set it to NA for us:
| area | mode | year | value | unit | destination | employment_centre_size |
|---|---|---|---|---|---|---|
| E01000001 | PT/walk | 2017 | 20.00000 | Minutes | Employment Centre | Employment centre with 100 to 499 jobs |
| E01000001 | Cycle | 2017 | 12.00000 | Minutes | Employment Centre | Employment centre with 100 to 499 jobs |
| E01000001 | Car | 2017 | 13.00000 | Minutes | Employment Centre | Employment centre with 100 to 499 jobs |
| E01000001 | PT/walk | 2017 | 4.00000 | Minutes | Employment Centre | Employment centre with 500 to 4999 jobs |
| E01000001 | Cycle | 2017 | 7.00000 | Minutes | Employment Centre | Employment centre with 500 to 4999 jobs |
| E01000001 | Car | 2017 | 7.00000 | Minutes | Employment Centre | Employment centre with 500 to 4999 jobs |
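The automatic NA fill can be seen with a tiny example (toy columns, assuming the vctrs package is available): vec_rbind() takes the union of the columns across its inputs, filling gaps with NA.

```r
library(vctrs)

town <- data.frame(area = "E01000001", value = 9, destination = "Town Centre")
emp  <- data.frame(area = "E01000001", value = 4, destination = "Employment Centre",
                   employment_centre_size = "Employment centre with 500 to 4999 jobs")

# The town-centre row has no employment_centre_size, so it is filled with NA
combined <- vec_rbind(town, emp)
is.na(combined$employment_centre_size[1])  # TRUE
```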
The components ought to have compatible values too. We can mix codes from different codelists (skos:Concept URIs from different skos:ConceptSchemes) in the same column, but it is generally nicer to work with a common set.
This is particularly true if the scope of the codelists overlaps. If you can derive a single codelist with a set of mutually exclusive and collectively exhaustive codes, then it's possible to aggregate the values without double-counting or gaps.
We can use correspondence tables to look up equivalent values from one codelist in another.
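As a sketch, a correspondence table is just a lookup that we join against the dimension we want to re-express. The codes and mapping below are purely illustrative, not a real SITC-to-CPA correspondence, and assume the dplyr package:

```r
library(dplyr)

# Hypothetical correspondence between two classification codelists
correspondence <- data.frame(
  sitc = c("0", "1", "2"),
  cpa  = c("CPA_A", "CPA_C", "CPA_B")  # illustrative codes only
)

trade <- data.frame(sitc = c("0", "1"), value = c(100, 250))

# Look up each code's equivalent and re-express the dimension
trade_cpa <- trade %>%
  left_join(correspondence, by = "sitc") %>%
  select(cpa, value)
```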
If two cubes use the same codelist (or have dimensions with common patterns for URIs like intervals or geographies) then it’s possible to join them on that basis.
If every dimension is common to both cubes (or takes only a single value), then it's possible to do a full join using all the dimensions. This will result in a combined cube with the same set of dimensions (which uniquely identify the rows) and two values, one from each of the original cubes.
If these values are distinguished by dimension values, then they will be on different rows. Otherwise, the values could be put into different columns on the same rows. It's possible to transform the data between these two representations using the tidyr package, as described in the Tabulating Data Cubes vignette. The longer (row-bound) form is nicer for analysis (e.g. modelling or charting) whereas the wider (column-bound) form is nicer for calculations that compare the values (e.g. denomination).
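A minimal sketch of moving between the two forms with tidyr, using toy data and hypothetical column names:

```r
library(tidyr)

# Illustrative wide cube: two values per country
wide <- data.frame(country = c("IE", "FR"),
                   employment = c(2200, 27000),
                   population = c(2400, 29500))

# Wider -> longer: one row per (country, variable) pair
long <- pivot_longer(wide, c(employment, population),
                     names_to = "variable", values_to = "people")

# Longer -> wider again
wide2 <- pivot_wider(long, names_from = "variable", values_from = "people")
```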
We’ll use two tables from Eurostat for this example. The first describes the total number of people employed in each country (in thousands):
The second describes the economically active population of each country (in thousands):
Both have a country dimension that uses the same codes. We can use them together to calculate the employment rate.
If we combine them by column-binding, we create a single, wider table.³
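A sketch of the column-bind, using toy stand-ins for the two Eurostat tables (the values are illustrative, and dplyr is assumed):

```r
library(dplyr)

# Toy stand-ins; the real tables describe people in thousands
employment <- data.frame(country = c("IE", "FR"), employment = c(2200, 27000))
population <- data.frame(country = c("IE", "FR"), population = c(2400, 29500))

# Join on the shared country dimension to get one wide row per country
emp_pop_wide <- inner_join(employment, population, by = "country")
```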
This form is convenient where we want to address the two variables with metadata.
For example, we can calculate the employment rate by referring to each variable:
```r
emp_pop_wide %>%
  mutate(employment_rate = employment / population) %>%
  arrange(-employment_rate) %>%
  slice_head(n = 5) %>%
  kable()
```
| country | employment | population | employment_rate |
|---|---|---|---|
| Germany (until 1990 former territory of the FRG) | 39955 | 41234 | 0.9689819 |
Alternatively, we can row-bind the tables to form a single, longer table. We have to add another dimension, variable, to the table to distinguish the rows from each source. This allows us to combine the employment and population measures into a single measure which simply counts people.
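A sketch of building the long form, with toy stand-ins and assuming the combined measure column is called people as in the plot below:

```r
library(dplyr)

# Toy stand-ins with the value column already renamed to the common measure
employment <- data.frame(country = c("IE", "FR"), people = c(2200, 27000))
population <- data.frame(country = c("IE", "FR"), people = c(2400, 29500))

# Tag each table with the variable it measures, then row-bind into a long cube
emp_pop_long <- bind_rows(
  mutate(employment, variable = "employment"),
  mutate(population, variable = "population")
)
```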
This form is convenient where we want to address the two variables as data.
For example, we can plot employment against population:
```r
library(ggplot2)

ggplot(filter(emp_pop_long, !country %in% c("EU27_2020", "EU28", "EU15", "EA19")),
       aes(label(country), people, fill = indicator)) +
  geom_col(position = "dodge") +
  scale_fill_brewer() +
  theme_minimal() +
  coord_flip() +
  labs(title = "Demographics of European Countries",
       x = "Country",
       y = "Count of People (Thousands)",
       fill = "Indicator")
```
¹ We're not really “ignoring” the breakdown, so much as marginalising it by taking a value which effectively represents the integral over that dimension. In the case of counts, for example, this will be a sum. Picking an arbitrary value from the dimension would instead condition the distribution upon it. Either way we're removing a dimension from the cube.↩︎
² Of course the observation URIs would still be distinct, but we didn't return those from the query so they aren't in the table. We could instead have had the query bind a dataset variable for the qb:dataSet property in each case (yielding each dataset's URI), but doing it explicitly like this is hopefully more instructive.↩︎
³ We use dplyr::inner_join() instead of base::merge() because this will ensure the resource descriptions also get joined (dplyr uses vctrs underneath). merge() will only retain the description of the left-hand data frame, so it's safe to use merge(all = F) for inner joins; for right or full joins you'll need to use dplyr instead, or rebuild the description yourself.↩︎