Talk at the Royal Statistical Society

Today I presented part of the work done for my PhD and the PURE project at the event “Big Data and Spatial Analytics”, organised by the Business and Industrial Section of the Royal Statistical Society. It was a great opportunity to meet people interested in Big Data and geospatial analytics.

My slides are available on SlideShare.

I also presented a demo; the Rmarkdown file that I used to generate the dynamic report is available as a GitHub Gist.

You are very welcome to use it and share it with others!





FUSE model and parameters information

A quick post to show how to find which model building decisions, options (name and ID number) and associated parameters correspond to a given FUSE model.

First of all, install/load the fuse package:

    if(!require(devtools)) install.packages("devtools")
    install_github("ICHydro/r_fuse", subdir = "fuse")
    library(fuse)

Load devtools and source the gist below (it contains a function called FUSEinfo)


Choose one of FUSE’s models:

    mid <- 60 # This is TOPMODEL

Run the function FUSEinfo using mid as input:

    FUSEinfo(mid)
The result of FUSEinfo is a data frame containing 32 columns:

      rferr arch1 arch2 qsurf qperc esoil qintf q_tdh rferr_add rferr_mlt maxwatr_1 maxwatr_2 fracten
    1    12    21    34    43    51    62    71    82     FALSE      TRUE      TRUE      TRUE    TRUE
      frchzne fprimqb rtfrac1 percrte percexp sacpmlt sacpexp percfrac iflwrte baserte qb_powr qb_prms
      qbrate_2a qbrate_2b sareamax axv_bexp loglamb tishape timedelay
    1     FALSE     FALSE    FALSE    FALSE    TRUE    TRUE      TRUE

The first 8 columns contain the model building decisions: rferr (rainfall error), arch1 (upper soil layer), arch2 (lower soil layer), qsurf (runoff mechanism), qperc (percolation), esoil (evaporation), qintf (interflow) and q_tdh (routing). See the table below for more information:

The remaining 24 columns list the parameters (see table below). If the value of a parameter is TRUE, that parameter is used by the model; if FALSE, it is not.
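As a quick example of how this output can be used (a sketch, assuming FUSEinfo() returns the 32-column data frame shown above), the names of the parameters used by a given model can be extracted as follows:

```r
# Sketch: list the parameters used by mid = 60 (TOPMODEL),
# assuming FUSEinfo() returns the data frame shown above
info <- FUSEinfo(60)

# the last 24 columns are logical flags, one per parameter
paramFlags <- unlist(info[1, 9:32])

# keep only the names of the parameters this structure actually uses
names(paramFlags)[paramFlags]
```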


Split long time series into (hydrological) years in R

I have been recently working on a rather basic task: splitting long time series into years. Although this might sound trivial for calendar years, I had to think a bit to find a relatively elegant solution for hydrological years. Below is what I came up with; however, if you are aware of a better way, please leave a comment!

For this exercise, we need to load only one library:

# Load library
library(xts)

Let’s generate a dummy time series:

# Generate dummy time series
from <- as.Date("1950-01-01")
to <- as.Date("1990-12-31")
myDates <- seq.Date(from=from,to=to,by="day")
myTS <- as.xts(runif(length(myDates)), order.by = myDates)

When working with standard calendar years (from Jan to Dec), splitting a time series into years is not too much of a problem:

# Split the time series into calendar years
myList <- tapply(myTS, format(myDates, "%Y"), c)

The result is a list of 41 time series, each spanning one calendar year.

Any time series can be accessed, as usual, via its index:

plot( myList[[1]] )


Things become more interesting with non-standard calendars, such as hydrological years (starting on the 1st October and ending on the following 30th September).

The first step is to calculate the number of hydrological years. This is the number of years in which we have records from January (month index = 0) to September (month index = 8), minus 1 (because the first year cannot be counted as a complete hydrological year).

# calculate the number of hydrological years
nHY <- length(split(myTS[.indexmon(myTS) %in% 0:8], f="years"))-1

Then we create an empty list and populate it with series in which we append (or bind) the records from October to December of a generic year “counter” to the records from January to September of year “counter + 1”.

# create an empty list, to be populated by a loop
myList <- list()

for ( counter in 1:nHY ){
  oct2dec <- split(myTS[.indexmon(myTS) %in% 9:11], f="years")[[counter]]
  jan2sep <- split(myTS[.indexmon(myTS) %in% 0:8], f="years")[[counter + 1]]
  myList[[counter]] <- rbind(oct2dec, jan2sep)
}

Again, any time series can be accessed via its index:

plot( myList[[1]] )
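For convenience, the steps above can be wrapped in a small helper function (a minimal sketch, assuming an xts series and an October-to-September hydrological year, as in the dummy series above):

```r
library(xts)

# Split an xts series into hydrological years (Oct-Sep) - minimal sketch
splitHydroYears <- function(ts){
  oct2dec <- split(ts[.indexmon(ts) %in% 9:11], f = "years")
  jan2sep <- split(ts[.indexmon(ts) %in% 0:8],  f = "years")
  nHY <- length(jan2sep) - 1
  myList <- list()
  for (counter in 1:nHY){
    # bind Oct-Dec of year "counter" to Jan-Sep of year "counter + 1"
    myList[[counter]] <- rbind(oct2dec[[counter]], jan2sep[[counter + 1]])
  }
  myList
}

myList <- splitHydroYears(myTS)
```

Note that this assumes the series starts within a January-September period, as the dummy series above does.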


That’s all! The code in this post is also available as a public gist.

The new “hddtools”, an R package for Hydrological Data Discovery

The R package hddtools is an open source project designed to facilitate access to on-line data sources that would otherwise require non-programmatic (manual) interaction. This typically implies the download of a metadata catalogue, selection of the information needed, a formal request for dataset(s), decompression, conversion, manual filtering and parsing. All those operations are made more efficient by re-usable functions.

Depending on the data licence, functions can provide offline and/or on-line modes. When redistribution is allowed, for instance, a copy of the dataset is cached within the package and updated twice a year. This is the fastest option and also allows offline use of the package’s functions. When redistribution is not allowed, only the on-line mode is provided.

The package hddtools can be installed via devtools:

library(devtools)
install_github("r_hddtools", username = "cvitolo", subdir = "hddtools")
library(hddtools)

Data sources and Functions

The Köppen Climate Classification map

The Köppen Climate Classification is the most widely used system for classifying the world’s climates. Its categories are based on the annual and monthly averages of temperature and precipitation. It was first updated by Rudolf Geiger in 1961, then by Kottek et al. (2006), Peel et al. (2007) and Rubel et al. (2010).

The package hddtools contains a function to identify the updated Köppen-Geiger climate zone, given a bounding box.

# Extract climate zones from Peel's map:

# Extract climate zones from Kottek's map:
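A sketch of these calls using the bounding box from the catalogue examples below; the function name KGClimateClass and its updatedBy argument reflect a later version of the package’s API, so treat them as an assumption here:

```r
library(hddtools)
library(raster)

# Bounding box (same as in the catalogue examples below)
areabox <- extent(-3.82, -3.63, 52.43, 52.52)

# Extract climate zones from Peel's map (function/argument names assumed
# from a later version of hddtools)
KGClimateClass(areabox, updatedBy = "Peel")

# Extract climate zones from Kottek's map
KGClimateClass(areabox, updatedBy = "Kottek")
```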

The Global Runoff Data Centre

The Global Runoff Data Centre (GRDC) is an international archive hosted by the Federal Institute of Hydrology (Bundesanstalt für Gewässerkunde or BfG) in Koblenz, Germany. The Centre operates under the auspices of the World Meteorological Organisation and retains services and datasets for all the major rivers in the world.

The catalogue, kml files and the product “Long-Term Mean Monthly Discharges” are open data and accessible via hddtools.

# 1. GRDC full catalogue
grdcCatalogue()

# 2. Filter GRDC catalogue based on a bounding box 
grdcCatalogue(BBlonMin = -3.82,
              BBlonMax = -3.63,
              BBlatMin = 52.43,
              BBlatMax = 52.52,
              mdDescription = TRUE) 

# 3. Monthly data extraction

The Data60UK dataset

In the decade 2003-2012, the IAHS Predictions in Ungauged Basins (PUB) international Top-Down modelling Working Group (TDWG) collated daily datasets of areal precipitation and streamflow discharge across 61 gauging sites in England and Wales. The database was prepared from source databases for research purposes, with the intention to make it re-usable. This is now available in the public domain free of charge.

hddtools contains two functions to interact with this database: one to retrieve the catalogue and another to retrieve time series of areal precipitation and streamflow discharge.

# 1a. Data60UK full catalogue
data60UKCatalogue()

# 1b. Filter Data60UK catalogue based on a bounding box
data60UKCatalogue(BBlonMin = -3.82, 
                  BBlonMax = -3.63,
                  BBlatMin = 52.43,
                  BBlatMax = 52.52) 

# 2. Extract time series 

NASA’s Tropical Rainfall Measuring Mission (TRMM)

The Tropical Rainfall Measuring Mission (TRMM) is a joint mission between NASA and the Japan Aerospace Exploration Agency (JAXA) that uses a research satellite to measure precipitation within the tropics in order to improve our understanding of climate and its variability.

The TRMM satellite has recorded global rainfall estimates in a gridded format since 1998, with a daily temporal resolution and a spatial resolution of 0.25 degrees. This information is openly available for educational purposes and downloadable from an FTP server.

hddtools provides a function, called trmm, to download and convert a selected portion of the TRMM dataset into a raster brick that can be opened in any GIS software. This function is a slight modification of the code published on Martin Brandt’s post (thanks Martin!).

# Generate multi-layer GeoTiff containing mean monthly precipitations from 3B43_V7 for 2012 (based on a bounding box)
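A hypothetical call might look like the sketch below; the argument names are purely illustrative, not the package’s documented API (check the trmm help page for the actual signature):

```r
library(hddtools)

# Hypothetical sketch (argument names are illustrative, not the actual API):
# mean monthly precipitation from product 3B43 version 7 for 2012,
# clipped to a bounding box and saved as a multi-layer GeoTiff
trmm(product = "3B43", version = 7, year = 2012,
     bblonMin = -3.82, bblonMax = -3.63,
     bblatMin = 52.43, bblatMax = 52.52)
```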

Please leave your feedback

I would greatly appreciate it if you could leave your feedback either via email or by taking a short survey.

Image credits to cilipmarketing.

The new FUSE implementation is now 145 times faster!

Four of my previous posts were about the FUSE implementation in RHydro. Since I published them I have received many emails and requests for more information, so the topic is clearly of interest to many. I thought I would post a short note on a new FUSE implementation, which is now available as a separate package called “fuse” on GitHub.

# install/load dependent libraries
if(!require(zoo)) install.packages("zoo")
if(!require(tgp)) install.packages("tgp")
if(!require(qualV)) install.packages("qualV")
if(!require(hydromad)) install.packages("hydromad",repos="")
if(!require(devtools)) install.packages("devtools")

# install the fuse package directly from GitHub
install_github("ICHydro/r_fuse", subdir = "fuse")

The functions are named as in RHydro; the only difference is that the list of model structures is now handled internally and no longer needs to be passed as input. The package is still compatible with hydromad, and below you can find a few lines to run a test (also available as a gist here).

# Load sample data
temp <- read.csv("dummyData.csv")
temp[,1] <- as.Date(temp[,1], format="%Y-%m-%d")
DATA <- read.zoo(temp)

# Set the parameter ranges
hydromad.options(fusesma = fusesma.ranges(),
                 fuserouting = fuserouting.ranges())

# Set model
modspec <- hydromad(DATA, sma = "fusesma", routing = "fuserouting", mid = 1:1248, deltim = 1)

# Randomly generate 1 parameter set 
myNewParameterSet <- parameterSets( coef(modspec, warn=FALSE), 1, method="random")

# Run a simulation using the parameter set generated above
modx <-  update(modspec, newpars = myNewParameterSet)

# Generate a summary of the result
summary(modx)

# Plot results 
hydromad:::xyplot.hydromad(modx, with.P=TRUE)

I thought a basic benchmark between the RHydro and fuse packages would be interesting (the gist is here).
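The comparison relies on the microbenchmark package; stripped to its essentials, the gist does something like the following, where f and g are assumed to wrap a full simulation (SMA + routing) with the RHydro and fuse implementations respectively:

```r
library(microbenchmark)

# f and g wrap the RHydro and fuse implementations respectively
# (placeholders here, defined in the gist)
compare <- microbenchmark(f(DATA, parameters),
                          g(DATA, parameters),
                          times = 10)
compare
```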

The result is that fuse’s functions seem to run over 145 times faster than the corresponding functions in RHydro.

That’s great news if you plan to do anything that requires hundreds or thousands of runs!

Plot of benchmark results

Here are the detailed results of the benchmark:
> compare
Unit: seconds
                expr        min         lq     median        uq        max neval
 f(DATA, parameters) 423.230827 433.070465 446.845983 451.28512 461.818262    10
 g(DATA, parameters)   2.893856   2.988898   3.076531   3.59736   3.713473    10
My session info:

> sessionInfo()
R version 3.1.1 (2014-07-10)
Platform: x86_64-pc-linux-gnu (64-bit)

locale:
 [1] LC_CTYPE=en_GB.UTF-8       LC_NUMERIC=C               LC_TIME=en_GB.UTF-8
 [4] LC_COLLATE=en_GB.UTF-8     LC_MONETARY=en_GB.UTF-8    LC_MESSAGES=en_GB.UTF-8
 [7] LC_PAPER=en_GB.UTF-8       LC_NAME=C                  LC_ADDRESS=C
[10] LC_TELEPHONE=C             LC_MEASUREMENT=en_GB.UTF-8 LC_IDENTIFICATION=C

attached base packages:
[1] stats     graphics  grDevices utils     datasets  methods   base

other attached packages:
 [1] ggplot2_1.0.0        microbenchmark_1.3-0 tgp_2.4-9            fuse_1.1.0           RHydro_2014-04.1
 [6] qualV_0.3            KernSmooth_2.23-12   XML_3.98-1.1         deSolve_1.10-9       lhs_0.10
[11] sp_1.0-15            xts_0.9-7            zoo_1.7-11

loaded via a namespace (and not attached):
 [1] colorspace_1.2-4 digest_0.6.4     grid_3.1.1       gtable_0.1.2     lattice_0.20-29  MASS_7.3-33
 [7] munsell_0.4.2    plyr_1.8.1       proto_0.3-10     Rcpp_0.11.2      reshape2_1.4     scales_0.2.4
[13] stringr_0.6.2    tools_3.1.1

Image credits to Nick Chill.

FUSE model in RHydro package (part 4: HydroMAD compatibility)

This is the fourth of a series of tutorials on the FUSE implementation within the RHydro package. The script for this tutorial is available here. If you are interested in following the discussion related to this post and seeing how it evolves, join the R4Hydrology community on Google+!

If you want to know what FUSE is, how to prepare your data and run a simple simulation, go to the first post of the series; for a basic calibration example (using 1 model structure) go to the second post, while for an example of multi-model calibration go to the third post.

RHydro-HydroMAD compatibility

HydroMAD is an excellent framework for hydrological modelling, optimization, sensitivity analysis and assessment of results. It contains a large set of soil moisture accounting modules and routing functions.

Thanks to Joseph Guillaume (hydromad’s maintainer), FUSE-RHydro is now compatible with HydroMAD, therefore using and calibrating FUSE becomes even easier! Joseph provided many of the examples below, many thanks for that too!

In this tutorial I will show how to:

  1. set up FUSE and its parameter ranges using the hydromad approach
  2. run a simulation
  3. calibrate FUSE using one of the hydromad’s algorithms

Recap from previous posts

Load the package and prepare your default list of models

library(RHydro)
data(modlist)
Read sample data in

temp <- read.csv("dummyData.csv") 

Convert the first column to Date, then convert the table to a zoo object

temp[,1] <- as.Date(temp[,1],format="%Y-%m-%d") 
DATA <- read.zoo(temp)

Step 1:  set up FUSE and its parameter ranges using the hydromad approach

HydroMAD allows you to specify a rainfall-runoff model of choice. This is achieved by using the hydromad() function and specifying the soil moisture accounting model and routing function to use.

Set the parameter ranges using hydromad.options

hydromad.options(fusesma = fusesma.ranges(),
                 fuserouting = fuserouting.ranges())
Set up the model

modspec <- hydromad(DATA,
                    sma = "fusesma",
                    routing = "fuserouting",
                    mid = 1:1248,
                    modlist = modlist)

# Randomly generate 1 parameter set
myNewParameterSet <- parameterSets( coef(modspec, warn=FALSE),
                                    1, method="random" )

Step 2: run a single simulation

Run a simulation using the parameter set generated above
modx <- update(modspec,
               newpars = myNewParameterSet)
Generate a summary of the result

summary(modx)

The instantaneous runoff is

U <- modx$U
The routed discharge is
Qrout <- modx$fitted.values
Plot the observed vs simulated values, adding the precipitation to the plot
hydromad:::xyplot.hydromad(modx, with.P=TRUE)

Step 3: calibrate FUSE using one of hydromad’s algorithms

Hydromad provides the “fitBy” method to calibrate using a specified algorithm. As an example, the Shuffled Complex Evolution method can be used as shown below. Please note that the procedure is likely to take a LONG time.

modfit <- fitBySCE(modspec)

Get a summary of the result

summary(modfit)

If you want to use the latest version of fuse, the above steps can be adapted as follows:


# Load data
temp <- read.csv("dummyData.csv") 
temp[,1] <- as.Date(temp[,1],format="%Y-%m-%d") 
DATA <- read.zoo(temp)

# Set the parameter ranges using hydromad.options
hydromad.options(fusesma = fusesma.ranges(),
                 fuserouting = fuserouting.ranges())

# Set up the model
modspec <- hydromad(DATA,
                    sma = "fusesma", 
                    routing = "fuserouting",
                    mid = 1:1248,
                    deltim = 1/24)

# Calibrate FUSE using one of hydromad’s algorithms
modfit <- fitBySCE(modspec)

# Get a summary of the result
summary(modfit)

What’s next?

This tutorial was just a brief introduction to a topic that can be explored in many different directions. From a technical point of view, it could be worthwhile to invest some effort in code optimisation and/or parallelisation. This would also have an impact on the scientific side, facilitating experiments on sensitivity analysis, regionalisation of catchment characteristics and model structure variability.

Some of those ideas are already on their way, some others are just random thoughts. Therefore…watch this space!

FUSE model in RHydro package (part 3: ensemble)

This is the third of a series of tutorials on the FUSE implementation within the RHydro package. The script for this tutorial is available here. If you are interested in following the discussion related to this post and seeing how it evolves, join the R4Hydrology community on Google+!

If you want to know what FUSE is, how to prepare your data and run a simple simulation go to the first post of the series, while for a basic calibration example (using 1 model structure) go to the second post.


Recap from previous posts

Load the package and prepare your data:

library(RHydro)
temp <- read.csv("dummyData.csv")
temp[,1] <- as.Date(temp[,1], format="%Y-%m-%d")
DATA <- zooreg(temp[,2:4], order.by = temp[,1])
myDELTIM <- 1


FUSE ensemble

Very often hydrologists decide to use a particular hydrological model based on code availability, familiarity and experience rather than based on hydrological suitability. The real advantage of using FUSE is the possibility to work with an ensemble of multiple models so that uncertainties related to the model structure can be quantified.

The input that defines the model structure is called mid (model identification number) and its value ranges between 1 and 1248. When the most suitable model structure(s) is not known a priori,  the mid can be added to the list of parameters and calibrated.

Adding the full mid range implies the need to increase the sampling space significantly. There are, however, 4 model structures (called parent models) from which all the other model combinations are derived. In this tutorial I will only consider those 4 model structures.

In this tutorial I will show how to:

A. define the updated sampling space (parameter + mid ranges)

B. run a multi-model calibration

C. compare results


Step A: define the parameter ranges + mid range

The parent models are as follows:

60 = TOPMODEL
230 = ARNOXVIC
342 = PRMS
426 = SACRAMENTO

Therefore mid can be one of those 4 values:

mids <- c(60, 230, 342, 426)

The parameter ranges are defined as in the previous post.

DefaultRanges <- data.frame(rbind(rferr_add = c(0,0),
                                  rferr_mlt = c(1,1), 
                                  maxwatr_1 = c(25,500), 
                                  maxwatr_2 = c(50,5000),
                                  fracten = c(0.05,0.95), 
                                  frchzne = c(0.05,0.95),
                                  fprimqb = c(0.05,0.95), 
                                  rtfrac1 = c(0.05,0.95), 
                                  percrte = c(0.01,1000), 
                                  percexp = c(1,20), 
                                  sacpmlt = c(1,250), 
                                  sacpexp = c(1,5), 
                                  percfrac = c(0.05,0.95), 
                                  iflwrte = c(0.01,1000), 
                                  baserte = c(0.001,1000), 
                                  qb_powr = c(1,10), 
                                  qb_prms = c(0.001,0.25), 
                                  qbrate_2a = c(0.001,0.25), 
                                  qbrate_2b = c(0.001,0.25), 
                                  sareamax = c(0.05,0.95), 
                                  axv_bexp = c(0.001,3), 
                                  loglamb = c(5,10), 
                                  tishape = c(2,5), 
                                  timedelay = c(0.01,5)))
names(DefaultRanges) <- c("Min","Max")
nRuns <- 100
parameters <- lhs( nRuns, as.matrix(DefaultRanges) )
parameters <- data.frame(parameters)
names(parameters) <- c("rferr_add","rferr_mlt","maxwatr_1","maxwatr_2","fracten","frchzne","fprimqb","rtfrac1","percrte","percexp","sacpmlt","sacpexp","percfrac","iflwrte","baserte","qb_powr","qb_prms","qbrate_2a","qbrate_2b","sareamax","axv_bexp","loglamb","tishape","timedelay")

Step B: run a multi-model calibration

Use the Nash-Sutcliffe efficiency as objective function and run the model 4*nRuns times (once for each combination of mid and parameter set).

indices <- rep(NA,4*nRuns)
discharges <- matrix(NA,ncol=4*nRuns,nrow=dim(DATA)[1])
kCounter <- 0

for (m in 1:4){
  myMID <- mids[m]

  for (pid in 1:nRuns){
    kCounter <- kCounter + 1
    ParameterSet <- as.list(parameters[pid,])

    # Run FUSE Soil Moisture Accounting module
    # (the sampled parameters are passed as named arguments)
    Qinst <- do.call(fusesma.sim,
                     c(list(DATA, mid=myMID, modlist=modlist,
                            deltim=myDELTIM,
                            states=FALSE, fluxes=FALSE, fracstate0=0.25),
                       ParameterSet))

    # Run FUSE Routing module
    Qrout <- fuserouting.sim(Qinst, mid=myMID, modlist=modlist,
                             deltim=myDELTIM,
                             timedelay=ParameterSet$timedelay)

    indices[kCounter] <- EF(DATA$Q, Qrout)
    discharges[,kCounter] <- Qrout
  }
}


Step C: compare results

Deterministically, the best simulation according to the Nash-Sutcliffe efficiency is the one with the maximum index value.

bestRun <- which(indices == max(indices))

This corresponds to the model ARNOXVIC, in fact:

bestModel <- function(runNumber){
  if (runNumber <= nRuns) myBestModel <- "TOPMODEL"
  if (runNumber > nRuns & runNumber <= 2*nRuns) myBestModel <- "ARNOXVIC"
  if (runNumber > 2*nRuns & runNumber <= 3*nRuns) myBestModel <- "PRMS"
  if (runNumber > 3*nRuns & runNumber <= 4*nRuns) myBestModel <- "SACRAMENTO"
  return(myBestModel)
}

bestModel(bestRun)

plot(coredata(DATA$Q), type="l", xlab="", ylab="Streamflow [mm/day]", lwd=0.5)
for (pid in 1:(4*nRuns)){
  lines(discharges[,pid], col="gray", lwd=3)
}
lines(coredata(DATA$Q), col="black", lwd=1)
lines(discharges[,bestRun], col="red", lwd=1)

The plot below shows the observed streamflow in black, all the simulated results in grey and the “best” simulated streamflow in red.

As you can see, using multiple model structures inflates the uncertainty.

FUSE simulations (4 models)



How do the best simulations of each model structure compare to each other?

bestRun0060 <- which(indices[1:nRuns] == max(indices[1:nRuns]))
bestRun0230 <- nRuns + which(indices[(nRuns+1):(2*nRuns)] == max(indices[(nRuns+1):(2*nRuns)]))
bestRun0342 <- 2*nRuns + which(indices[(2*nRuns+1):(3*nRuns)] == max(indices[(2*nRuns+1):(3*nRuns)]))
bestRun0426 <- 3*nRuns + which(indices[(3*nRuns+1):(4*nRuns)] == max(indices[(3*nRuns+1):(4*nRuns)]))

plot(coredata(DATA$Q), type="l", xlab="", ylab="Streamflow [mm/day]", lwd=1)
lines(discharges[,bestRun0060], col="green", lwd=1)
lines(discharges[,bestRun0230], col="blue", lwd=1)
lines(discharges[,bestRun0342], col="pink", lwd=1)
lines(discharges[,bestRun0426], col="orange", lwd=1)

legend("topright",
       legend = c("TOPMODEL", "ARNOXVIC", "PRMS", "SACRAMENTO"),
       col = c("green", "blue", "pink", "orange"),
       lty = c(1, 1, 1, 1))

Best simulation for each model structure


The plot above shows that TOPMODEL seems the least affected by the initial conditions, while PRMS is the most affected.

What’s next?

FUSE-RHydro is also compatible with HydroMAD, therefore model calibration and assessment becomes even easier (see an example here)!