Accurate estimation of precipitation in mountain catchments is challenging due to its high spatial variability and the scarcity of ground measurements. Weather radar can help provide precipitation estimates in such conditions. This study investigates the differences between measured and radar-estimated daily precipitation in the mountain catchment of the Jalovecký Creek (area 22 km², 6 rain gauges at altitudes 815–1900 m a.s.l.) in the years 2017–2020. Despite good correlations between measured and radar-based precipitation at individual sites (correlation coefficients 0.68–0.90), the radar-estimated precipitation was mostly substantially smaller than the measured precipitation. The underestimation was smaller at the lower altitude (on average –4% to –17% at 815 m a.s.l.) than at the higher altitudes (–35% to –59% at 1400–1900 m a.s.l.). Unlike the measured data, the radar-estimated precipitation did not capture the differences in precipitation amounts between lower and higher altitudes (altitudinal differences). The differences between the measured and radar-estimated precipitation were not related to synoptic weather situations. The obtained results can be useful in preparing more accurate precipitation estimates for small mountain catchments.
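To make the gauge-radar comparison concrete, the following minimal sketch (Python, with entirely hypothetical daily series in mm, not the study's data) computes the two statistics reported above for a single site: the Pearson correlation coefficient between measured and radar-estimated precipitation, and the relative bias of the radar estimate.

```python
import numpy as np

def compare_gauge_radar(measured, radar):
    """Pearson correlation and relative bias (%) of radar-estimated vs.
    gauge-measured daily precipitation at one site. Inputs are equal-length
    daily series in mm/day (hypothetical values here)."""
    measured = np.asarray(measured, dtype=float)
    radar = np.asarray(radar, dtype=float)
    r = np.corrcoef(measured, radar)[0, 1]                # correlation coefficient
    bias = (radar.sum() - measured.sum()) / measured.sum() * 100.0  # % relative bias
    return r, bias

# Made-up example: radar correlates well but underestimates by roughly a third
measured = [0.0, 5.2, 12.4, 0.8, 20.1, 3.3]
radar    = [0.0, 3.9,  7.6, 0.5, 13.0, 2.4]
r, bias = compare_gauge_radar(measured, radar)
print(f"r = {r:.2f}, relative bias = {bias:+.0f}%")
```

A negative bias computed this way corresponds to the radar underestimation reported above (e.g., –35% to –59% at the higher-altitude sites).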
The interception process in subalpine Norway spruce stands plays an important role in the distribution of throughfall. The natural mountain spruce forest where our measurements of throughfall and gross precipitation were carried out is located at the tree line at an elevation of 1,420 m a.s.l. in the Western Tatra Mountains (Slovakia, Central Europe). This paper presents an evaluation of the interception process in a natural mature spruce stand during the growing season from May to October in 2018–2020. We also analyzed the daily precipitation events within each growing season and assigned individual synoptic types to them. The amount and distribution of precipitation during the growing season play an important role in the precipitation-interception process, which is confirmed by the evaluation of individual synoptic situations. In comparison with the long-term record (1988–2017), precipitation in the monitored growing seasons was normal (2018), sub-normal (2019) and above-normal (2020). We recorded the highest precipitation in the normal and above-normal precipitation years during the north-eastern cyclonic synoptic situation (NEc). During these two periods, interception showed the lowest values in the dripping zone at the crown periphery, while in the sub-normal period (2019) the lowest interception occurred in the canopy gap. In the central crown zone near the stem, interception reached the highest value in each growing season. In the evaluated growing seasons, interception ranged from 19.6–24.1% of the gross precipitation total in the canopy gap, 8.3–22.2% in the dripping zone at the crown periphery and 45.7–51.6% in the central crown zone near the stem. These regimes are expected to change in the Western Tatra Mts., which have been affected by windstorms and insect outbreaks in recent decades. Under such disturbance regimes, changes in interception as well as in vegetation are, at least for some time, unavoidable.
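As an illustration of the interception calculation implied above, here is a minimal sketch in Python using entirely hypothetical seasonal totals (not the study's measurements): interception in each crown zone is the share of gross precipitation that does not reach the ground as throughfall.

```python
# Minimal sketch with made-up seasonal totals (mm): interception I is the
# fraction of gross precipitation withheld by the canopy, i.e.
# I = (P_gross - throughfall) / P_gross * 100, evaluated per crown zone.
gross_precipitation = 620.0  # hypothetical growing-season gross precipitation, mm

throughfall = {  # hypothetical seasonal throughfall totals per zone, mm
    "canopy gap": 486.0,
    "dripping zone (crown periphery)": 530.0,
    "central crown zone (near stem)": 310.0,
}

for zone, tf in throughfall.items():
    interception_pct = (gross_precipitation - tf) / gross_precipitation * 100.0
    print(f"{zone}: interception = {interception_pct:.1f}% of gross precipitation")
```

With these invented totals the three zones come out near 21.6%, 14.5% and 50.0%, i.e., within the ranges reported above, which illustrates the ordering of the zones rather than reproducing the study's values.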
In recent decades, an increase in the mean annual air temperature and precipitation has been observed in many Austrian catchments, but only a small change in the mean annual runoff. The main objectives of this paper are (1) to analyze alterations in the performance of a conceptual hydrological model when applied in changing climate conditions and (2) to assess the factors and model parameters that control these changes. A conceptual rainfall-runoff model (the TUW model) was calibrated and validated in 213 Austrian basins over the period 1981–2010. The changes in the runoff model efficiency were compared with changes in the mean annual precipitation and air temperature and stratified for basins with dominant snowmelt and soil moisture processes. The results indicate that while the model's efficiency in the calibration period has not changed over the decades, the values of the model's parameters and hence the model's performance (i.e., the volume error and the runoff model efficiency) in the validation period have changed. The changes in the model's performance are greater in basins with a dominant soil moisture regime. For these basins, the average volume error, which was not used in calibration, increased from 0% (in the calibration periods 1981–1990 or 2001–2010) to 9% (validation period 2001–2010) or –8% (validation period 1981–1990), respectively. In the snow-dominated basins, the model tends to slightly underestimate runoff volumes during calibration (average volume error = –4%), but the changes in the validation periods are very small (i.e., the changes in the volume error are typically less than 1–2%). A model calibrated in a colder decade (e.g., 1981–1990) tends to overestimate runoff in a warmer and wetter decade (e.g., 2001–2010), particularly in flatland basins. The opposite case (i.e., the use of parameters calibrated in a warmer decade for a colder, drier decade) shows a tendency to underestimate runoff. A multidimensional analysis by regression trees showed that the change in the simulated runoff volume is clearly related to the change in precipitation, but the relationship is not linear in flatland basins. The main controlling factor of changes in simulated runoff volumes is the magnitude of the change in precipitation for both groups of basins. For basins with a dominant snowmelt runoff regime, the controlling factors also include the wetness of the basins and the mean annual precipitation. For basins with a soil moisture regime, land cover (forest) plays an important role.
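The two performance measures named above, the volume error and the runoff model efficiency, can be sketched as follows. This assumes the efficiency is the Nash-Sutcliffe criterion commonly used with the TUW model, and the runoff series are hypothetical; it is an illustration, not the paper's evaluation code.

```python
import numpy as np

def volume_error(sim, obs):
    """Relative volume error in per cent (positive = overestimation)."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return (sim.sum() - obs.sum()) / obs.sum() * 100.0

def nse(sim, obs):
    """Nash-Sutcliffe efficiency (1 = perfect fit); assumed here to be the
    'runoff model efficiency' referred to above."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Hypothetical daily runoff (mm/day): parameters from a colder calibration
# decade applied to a wetter validation decade, giving volume overestimation.
obs = np.array([1.2, 0.9, 2.4, 3.1, 1.8, 1.1, 0.8])
sim = np.array([1.4, 1.0, 2.7, 3.4, 2.0, 1.2, 0.9])
print(f"volume error = {volume_error(sim, obs):+.1f}%, NSE = {nse(sim, obs):.2f}")
```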
In a previous study, topsoil and root-zone ASCAT satellite soil moisture data were implemented into three multi-objective calibration approaches of the TUW hydrological model in 209 Austrian catchments. This paper examines the model parametrization in those catchments where validation of the dual-layer conceptual semi-distributed model showed an improvement in runoff simulation efficiency compared to single-objective runoff calibration. The runoff simulation efficiency of each of the three multi-objective approaches was considered separately. Inferences were made about the location and physiographic properties of the catchments where the inclusion of ASCAT data proved beneficial. Improvements were primarily observed in watersheds with lower slopes (median catchment slope less than 15 per cent) and a higher proportion of agricultural land use (median above 20 per cent), as well as in catchments where the runoff is not significantly influenced by snowmelt and glacier runoff. Changes in the mean and variability of the field capacity parameter FC of the soil moisture regime were analysed. The values of FC decreased by 20 per cent on average. Consequently, the closure of the catchments' water balance generally improved through an increase in catchment evapotranspiration during the validation period. Improvements in model efficiency could be attributed to better runoff simulation in the spring and autumn months. The findings refine the recommendations on when hydrological modelling could benefit from adding satellite soil moisture data to runoff signatures in calibration.
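As a rough numerical sketch of the reported parameter shift (Python, with made-up values; FC is the TUW model's field capacity parameter in mm), the relative change in FC between the runoff-only and a multi-objective calibration can be computed as below, together with the simple long-term water balance (ET ≈ P − Q) that is assumed here as the basis for judging closure.

```python
import numpy as np

# Hypothetical FC values (mm) for a few catchments under the two calibrations;
# none of these numbers come from the study itself.
fc_runoff_only = np.array([280.0, 310.0, 190.0])   # single-objective calibration
fc_multi_obj   = np.array([220.0, 250.0, 155.0])   # with ASCAT soil moisture data

fc_change = (fc_multi_obj - fc_runoff_only) / fc_runoff_only * 100.0
print("FC change [%]:", np.round(fc_change, 1))    # roughly -20% on average

# Simple long-term water balance: lower FC leaves more water available for
# evapotranspiration, ET approximated as precipitation minus runoff.
precip = 900.0   # hypothetical mean annual precipitation, mm
runoff = 450.0   # hypothetical mean annual runoff, mm
print(f"implied ET = {precip - runoff:.0f} mm/yr")
```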