This paper analyses the bivariate relationship between flood peaks and the corresponding flood event volumes, modelled by empirical and theoretical copulas in a regional context, with a focus on flood generation processes, their regional differentiation, and the effect of sample size on the reliable discrimination among models. A total of 72 catchments in north-western Austria are analysed for the period 1976-2007. From the hourly runoff data set, 25 697 flood events were isolated and assigned to one of three flood process types: synoptic floods (including long- and short-rain floods), flash floods or snowmelt floods (both rain-on-snow and snowmelt floods). The first step of the analysis examines, separately for each catchment, whether the empirical peak-volume copulas of the different flood process types are statistically distinguishable, and what role the sample size plays in the strength of these statements. The results indicate that the empirical copulas of flash floods tend to differ from those of the synoptic and snowmelt floods. The second step examines how similar the empirical flood peak-volume copulas of a given flood type are between catchments across the region. Empirical copulas of synoptic floods are the least similar between catchments; however, as the sample size decreases, the differences between the process types become small. The third step examines the goodness of fit of several commonly used copula types to data samples representing the annual maxima of flood peaks and the respective volumes, both regardless of the flood generation processes (the traditional engineering approach) and separately for the three process-based classes. Extreme value copulas (Galambos, Gumbel and Hüsler-Reiss) perform best for both synoptic and flash floods, while the Frank copula performs best for snowmelt floods.
It is concluded that there is merit in treating flood types separately when analysing and estimating the copulas of flood peak-volume dependence; however, even the enlarged data set gained by the process-based analysis in this study does not provide sufficient information for a reliable model choice in the multivariate statistical analysis of flood peaks and volumes.
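The copula fitting step described above can be illustrated with a minimal sketch. The snippet below fits a Gumbel (extreme value) copula to a synthetic peak-volume sample by inverting the Kendall's tau relation θ = 1/(1 − τ) on rank-based pseudo-observations. The data and the method-of-moments estimator are illustrative assumptions, not the study's actual procedure, which compared several copula families with formal goodness-of-fit tests.

```python
import numpy as np
from scipy.stats import kendalltau

def gumbel_theta_from_tau(tau):
    """Invert Kendall's tau = 1 - 1/theta for the Gumbel copula (theta >= 1)."""
    return 1.0 / (1.0 - tau)

def gumbel_copula_cdf(u, v, theta):
    """Gumbel (extreme value) copula CDF: C(u,v) = exp(-((-ln u)^t + (-ln v)^t)^(1/t))."""
    return np.exp(-(((-np.log(u)) ** theta + (-np.log(v)) ** theta) ** (1.0 / theta)))

# Hypothetical peak-volume sample (positively dependent, as for real flood events)
rng = np.random.default_rng(1)
peaks = rng.gumbel(size=200)
volumes = 0.8 * peaks + rng.normal(scale=0.5, size=200)

# Rank-transform to pseudo-observations on (0, 1)
n = len(peaks)
u = (np.argsort(np.argsort(peaks)) + 1) / (n + 1)
v = (np.argsort(np.argsort(volumes)) + 1) / (n + 1)

tau, _ = kendalltau(u, v)
theta = gumbel_theta_from_tau(tau)
print(f"Kendall's tau = {tau:.2f}, fitted Gumbel theta = {theta:.2f}")
print(f"C(0.9, 0.9) = {gumbel_copula_cdf(0.9, 0.9, theta):.3f}")
```

For positively dependent peaks and volumes, the fitted copula lies between independence (C(0.9, 0.9) = 0.81) and perfect dependence (C(0.9, 0.9) = 0.9), reflecting the upper-tail dependence that makes extreme value copulas attractive for synoptic and flash floods.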
Editors of several journals in the field of hydrology met during the General Assembly of the European Geosciences Union (EGU) in Vienna in April 2017. This event was a follow-up of similar meetings held in 2013 and 2015. These meetings enable the group of editors to review the current status of the journals and the publication process, and to share thoughts on future strategies. Journals were represented at the 2017 meeting by their editors, as shown in the list of authors. The main points on invigorating hydrological research through journal publications are communicated in this joint editorial, published in the above journals.
In this study, the value of proxy data was explored for calibrating a conceptual hydrologic model for small ungauged basins, i.e. basins ungauged in terms of runoff. The study site was the Hydrological Open Air Laboratory (HOAL), a 66 ha Austrian experimental catchment dominated by agricultural land use. The three modules of a conceptual, lumped hydrologic model (snow, soil moisture accounting and runoff generation) were calibrated step by step using only proxy data, with no runoff observations. With this stepwise approach, the relative runoff volume errors in the calibration and the first and second validation periods were –0.04, 0.19 and 0.17, and the monthly Pearson correlation coefficients were 0.88, 0.71 and 0.64, respectively. By using proxy data, the simulation of the state variables improved compared with calibrating the model in one step using only runoff data. When snow and soil moisture information was used for model calibration, the runoff model performance was comparable to that of the scenario in which the model was calibrated using only runoff data. While the runoff simulation performance using only proxy data thus did not improve considerably over calibration on runoff data, the more accurately simulated state variables imply that the process consistency improved.
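The stepwise, module-by-module calibration can be sketched as follows. This is a toy illustration with hypothetical one-parameter snow and soil modules and synthetic proxy series (the HOAL model structure and data are not reproduced here): each module is calibrated against its own proxy observations while the parameters of previously calibrated modules are held fixed, and runoff observations are never used.

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(42)
days = 365
temp = 10 * np.sin(2 * np.pi * np.arange(days) / 365) + rng.normal(0, 2, days)

def snow_module(degree_day_factor, temp):
    """Toy degree-day snowmelt: melt proportional to positive air temperature."""
    return degree_day_factor * np.maximum(temp, 0.0)

def soil_module(storage_coef, water_in):
    """Toy soil moisture accounting: exponential smoothing of incoming water."""
    s = np.zeros_like(water_in)
    for t in range(1, len(water_in)):
        s[t] = (1 - storage_coef) * s[t - 1] + storage_coef * water_in[t]
    return s

# Hypothetical proxy observations (synthetic truth plus noise; stand-ins for
# the snow and soil moisture data used for calibration in the study)
melt_obs = snow_module(2.5, temp) + rng.normal(0, 0.5, days)
soil_obs = soil_module(0.1, snow_module(2.5, temp)) + rng.normal(0, 0.2, days)

def rmse(sim, obs):
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

# Step 1: calibrate the snow module against the snow proxy only
ddf = minimize_scalar(lambda k: rmse(snow_module(k, temp), melt_obs),
                      bounds=(0.1, 10.0), method="bounded").x

# Step 2: calibrate the soil module against the soil moisture proxy,
# holding the snow parameter from step 1 fixed
sc = minimize_scalar(lambda c: rmse(soil_module(c, snow_module(ddf, temp)), soil_obs),
                     bounds=(0.01, 1.0), method="bounded").x

print(f"degree-day factor ≈ {ddf:.2f} (truth 2.5), storage coefficient ≈ {sc:.3f} (truth 0.1)")
```

Because each parameter is constrained by its own observation type, the state variables stay physically plausible, which is the process-consistency argument made in the abstract.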
Substantial evidence shows that the frequency of hydrological extremes has been changing and is likely to continue to change in the near future. Non-stationary models for flood frequency analysis are one way of accounting for these changes when estimating design values. The objective of the present study is to compare four models in terms of goodness of fit, their uncertainties, the parameter estimation methods and the implications for estimating flood quantiles. Stationary and non-stationary models using the GEV distribution were considered, with parameters dependent on time and on annual precipitation. Furthermore, in order to study the influence of the parameter estimation approach on the results, the maximum likelihood estimation (MLE) and Bayesian Markov chain Monte Carlo (MCMC) methods were compared. The methods were tested for two gauging stations in Slovenia that exhibit significantly increasing trends in their annual maximum (AM) discharge series. The comparison of the models suggests that, in recent years, the stationary model tends to underestimate flood quantiles relative to the non-stationary models. The model with annual precipitation as a covariate exhibits the best goodness-of-fit performance: for a 10% increase in annual precipitation, the 10-year flood increases by 8%. Use of the model for design purposes requires scenarios of future annual precipitation. It is argued that these may be obtained more reliably than scenarios of extreme event precipitation, which makes the proposed model more practically useful than the alternative models.
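A non-stationary GEV with the location parameter linear in annual precipitation, fitted by maximum likelihood, can be sketched as below. The data are synthetic and the single-covariate structure is a simplified stand-in for the four models compared in the study; note that SciPy's `genextreme` uses the shape convention c = −ξ.

```python
import numpy as np
from scipy.stats import genextreme
from scipy.optimize import minimize

rng = np.random.default_rng(7)

# Hypothetical data: 40 years of annual precipitation (mm) and AM discharge
# (m3/s), generated from a known non-stationary GEV so the fit can be checked
n_years = 40
precip = rng.normal(1200.0, 150.0, n_years)
q_am = genextreme.rvs(-0.1, loc=50.0 + 0.08 * precip, scale=25.0,
                      size=n_years, random_state=rng)

def neg_log_lik(params):
    """GEV with location linear in annual precipitation: mu_t = a + b * P_t."""
    a, b, log_sigma, c = params
    ll = genextreme.logpdf(q_am, c, loc=a + b * precip, scale=np.exp(log_sigma))
    return -np.sum(ll)

res = minimize(neg_log_lik,
               x0=[np.mean(q_am), 0.0, np.log(np.std(q_am)), 0.0],
               method="Nelder-Mead")
a, b, log_sigma, c = res.x

# 10-year flood (annual non-exceedance probability 0.9) for a reference year
# and for a year with 10% more annual precipitation
p_ref = 1200.0
q10 = genextreme.ppf(0.9, c, loc=a + b * p_ref, scale=np.exp(log_sigma))
q10_wet = genextreme.ppf(0.9, c, loc=a + b * 1.1 * p_ref, scale=np.exp(log_sigma))
print(f"10-year flood: {q10:.1f} -> {q10_wet:.1f} m3/s under +10% precipitation")
```

Because the quantile depends on the covariate, design estimates from such a model require scenarios of future annual precipitation, which is exactly the trade-off discussed in the abstract.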