Joaquin Bedia
2017-07-03T09:30:00Z
Forecast to be verified
With forecast
Joaquin Bedia
2017-07-03T14:44:40Z
A property linking the forecast with the way it is represented
With forecast representation
Joaquin Bedia
2017-07-11T08:59:01Z
A property of the Verification determining the Quality Aspect addressed
With Quality Aspect
Joaquin Bedia
2017-07-03T09:31:28Z
Verifying reference (typically observations)
With reference
Joaquin Bedia
2017-07-03T09:33:21Z
Reference forecast used in skill score calculation
With reference forecast
Joaquin Bedia
2017-07-03T15:17:32Z
This data property specifies the absolute thresholds (i.e. in the units of the variable) used to convert the continuous forecasts into category forecasts.
Has absolute threshold
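As an illustration of the absolute-threshold conversion (the variable, units and values below are invented, not taken from the ontology), a 1 mm threshold could turn continuous precipitation forecasts into binary category forecasts:

```python
# Hypothetical example: an absolute threshold (in the variable's own units)
# converts continuous forecasts into binary category forecasts.
forecasts_mm = [0.0, 2.5, 12.0, 30.4, 0.8]  # e.g. daily precipitation (mm)
threshold_mm = 1.0                          # absolute threshold (mm)

# 1 if the event "precipitation >= 1 mm" is forecast, 0 otherwise
category_forecasts = [1 if f >= threshold_mm else 0 for f in forecasts_mm]
print(category_forecasts)  # [0, 1, 1, 1, 0]
```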
Joaquin Bedia
2017-07-03T15:14:46Z
This data property specifies the probability thresholds (i.e. in the range [0,1]) used to convert the continuous forecasts into category forecasts.
Has probability threshold
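A parallel sketch for the probability-threshold conversion (values invented for illustration), where the threshold lives in [0, 1] rather than in the variable's units:

```python
# Hypothetical example: a probability threshold in [0, 1] converts
# probabilistic forecasts (e.g. ensemble-derived event probabilities)
# into binary category forecasts.
event_probabilities = [0.05, 0.40, 0.75, 0.90]
prob_threshold = 0.5

category_forecasts = [1 if p >= prob_threshold else 0 for p in event_probabilities]
print(category_forecasts)  # [0, 0, 1, 1]
```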
Joaquin Bedia
2017-07-03T10:37:54Z
Accuracy is a measure of the average distance (e.g. squared difference/error) between forecasts and observations.
Accuracy
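One common accuracy measure of the kind described above is the mean squared error (the data here are invented for illustration):

```python
# Hypothetical example: accuracy measured as the mean squared error (MSE)
# between forecasts and observations.
forecasts    = [2.0, 3.5, 1.0, 4.0]
observations = [2.5, 3.0, 1.5, 5.0]

mse = sum((f - o) ** 2 for f, o in zip(forecasts, observations)) / len(forecasts)
print(mse)  # 0.4375 -- lower values indicate a more accurate forecast
```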
Joaquin Bedia
2017-07-03T10:37:54Z
Association is a measure of dependency between forecasts and observations (e.g. linear association measured using the product-moment correlation between the ensemble mean forecast and the corresponding observation).
Association
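The product-moment (Pearson) correlation mentioned above can be sketched with the standard library alone (the sample data are invented; a perfectly linear pair gives r = 1):

```python
from math import sqrt

def pearson(xs, ys):
    """Product-moment (Pearson) correlation between two samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical ensemble-mean forecasts and perfectly proportional observations:
r = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```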
Joaquin Bedia
2017-07-03T10:37:54Z
The bias is a feature of a forecasting system whereby the mean of the forecast values differs from the mean of the reference observations. This difference (either negative or positive) is the bias.
Bias
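The (mean) bias defined above reduces to an average of forecast-minus-observation differences; a sketch with invented data:

```python
# Hypothetical example: the mean bias is the average difference between
# the forecasts and the verifying observations.
forecasts    = [2.0, 3.5, 1.0, 4.0]
observations = [2.5, 3.0, 1.5, 5.0]

bias = sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)
print(bias)  # -0.375 -- negative: the system underforecasts on average
```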
Joaquin Bedia
2017-07-03T14:41:55Z
Joaquin Bedia
2017-07-03T10:37:54Z
Discrimination is a measure of how much forecasts vary for different observation values, e.g. for deterministic forecasts of binary observations, the difference between the hit rate H and the false alarm rate F (i.e. how far the ROC curve deviates from the line H = F).
Discrimination
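For the binary deterministic case mentioned above, H and F follow directly from the contingency-table counts (forecasts and observations below are invented for illustration):

```python
# Hypothetical binary forecasts and observations (1 = event, 0 = no event).
forecasts    = [1, 1, 0, 1, 0, 0, 1, 0]
observations = [1, 0, 0, 1, 1, 0, 1, 0]

pairs = list(zip(forecasts, observations))
hits         = sum(1 for f, o in pairs if f == 1 and o == 1)
misses       = sum(1 for f, o in pairs if f == 0 and o == 1)
false_alarms = sum(1 for f, o in pairs if f == 1 and o == 0)
corr_negs    = sum(1 for f, o in pairs if f == 0 and o == 0)

H = hits / (hits + misses)                     # hit rate
F = false_alarms / (false_alarms + corr_negs)  # false alarm rate
print(H - F)  # 0.5 -- distance of this (F, H) point from the H = F diagonal
```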
Joaquin Bedia
2017-07-03T14:39:31Z
The way in which the forecast predictions are represented (e.g. binary probabilities, multi-category probabilities...)
https://docs.google.com/spreadsheets/d/1RSnCmztS_TC54Wq-A6RMI_NDMY-p_z8q_u-Q7ougPj8/edit#gid=0
Forecast representation
Joaquin Bedia
2017-07-03T11:20:13Z
Forecast Quality is a multidimensional concept described by several scalar attributes that provide useful information about the performance of a forecasting system. Therefore, no single measure is sufficient for judging and comparing forecast quality. Several Quality Aspects are introduced as subclasses.
Forecast Quality Aspect
Joaquin Bedia
2017-07-03T10:37:54Z
Reliability is a measure of the conditional bias of the forecasts, i.e. the difference between the conditional expectation 'E(f|o)' of the forecasts 'f' for a given observation value 'o' and the observation value 'o' itself. In other words, the difference between the reliability curve 'E(f|o)' and the line 'f=o' when 'E(f|o)' is plotted against 'f' on a so-called reliability diagram.
Reliability
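One common form of the reliability diagram, for probability forecasts of a binary event, groups cases by issued probability and compares each probability with the observed frequency in its group (the data below are invented; a reliable system has observed frequency close to the issued probability):

```python
from collections import defaultdict

# Hypothetical probability forecasts of a binary event and their outcomes.
probs = [0.1, 0.1, 0.9, 0.9, 0.5, 0.5]
obs   = [0,   0,   1,   1,   1,   0]

# Observed frequency conditional on each issued probability.
groups = defaultdict(list)
for p, o in zip(probs, obs):
    groups[p].append(o)

reliability_curve = {p: sum(os) / len(os) for p, os in groups.items()}
print(reliability_curve)  # {0.1: 0.0, 0.9: 1.0, 0.5: 0.5} -- perfectly reliable here
```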
Joaquin Bedia
2017-07-03T10:37:54Z
The resolution measures how much the conditional probabilities of the observations, given the different forecasts, differ from the climatological average.
Resolution
Joaquin Bedia
2017-07-03T11:01:21Z
A skill score measures the performance of a forecast relative to some reference (benchmark) forecast.
Skill score
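For a negatively oriented score such as the MSE, a common skill score formulation is SS = 1 - S / S_ref (the numbers below are invented for illustration):

```python
# Hypothetical example: skill score of a forecast system relative to a
# reference forecast, for a negatively oriented score (lower is better).
mse_forecast  = 0.4  # score of the forecast system being verified
mse_reference = 1.0  # score of the reference forecast (e.g. climatology)

skill_score = 1 - mse_forecast / mse_reference
print(skill_score)  # 0.6 -- 1 is a perfect forecast, 0 matches the reference
```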
Joaquin Bedia
2017-07-03T09:14:42Z
Forecast verification is a subfield of the climate, atmospheric and ocean sciences dealing with validating, verifying and determining the predictive power of prognostic model forecasts. Because of the complexity of these models, forecast verification goes well beyond simple measures of statistical association or mean error calculations.
Verification / validation