Historical Background


Due to a decision taken by C.F. Gauss himself, the classical error calculus refers to random errors only. Remarkably enough, Gauss also discussed what he then called regular or constant errors – time-constant perturbations unknown in magnitude and sign. Today the latter are termed unknown systematic errors. As we know, Gauss dismissed such errors, arguing that it was up to the experimenter to remedy them. Later, the metrological community became aware that this assumption had proved to be wrong. In order to unify the quotation of measurement uncertainties, the Gaussian error calculus had to be recast.

The turning point came in the wake of a seminar held in February 1978 at the Physikalisch-Technische Bundesanstalt Braunschweig. Being one of the lecturers, I proposed, arguably for the first time, to consider unknown systematic errors as spawning biases, a view which would cause the conventional procedures hitherto used to evaluate measured data to collapse.

Unfortunately, the aftershocks of the seminar led to a bifurcation: while the world’s National Measurement Institutes pursued what they called the


Guide to the Expression of Uncertainty in Measurement,


GUM for short, I proposed a revision of the error calculus from scratch, to which I tentatively attributed the term


Generalized Gaussian Error Calculus


