Most of my life I have worked with William Volk’s *Applied Statistics for Engineers*. At present I work with his *1980 Reprint Edition*. I lost the *1969 Second Edition* while I was preaching sound sampling practices and applied statistics around the world. I’m hanging on to my tattered *1958 Original Edition*. Volk is the Dutch word for *“nation”* or *“people”*. That led me to believe William Volk and Jan Visman may share the same roots. Volk holds a 1959 master’s degree in mathematical statistics from Rutgers University and an undergraduate degree in chemical engineering from New York University. So he must have written much of his *Original Edition* before graduating from Rutgers. I took a real liking to *Chapter 7 Analysis of Variance*. What I like most of all is *Section 7.1.4 Variance of a general function*. For it was in this section that Volk proved that each function ought to have its own variance.

Volk’s grasp of the properties of variances shows how inspired he was by Fisher’s work. Probability theory had spawned applied statistics by the time R. A. Fisher was knighted in 1952. And it was the concept of degrees of freedom that empowered applied statistics and set it apart from probability theory. Fisher introduced the concept of degrees of freedom in 1922 to correct Pearson’s χ²-distribution for finite sets of measured values. It bridged the breach between probability theory and applied statistics. This is why applied statistics deals with finite samples selected from sampling units or sample spaces. What degrees of freedom also did at that time was fuel the legendary feud between those giants of statistics. Fisher was right because the F- and t-distributions both derive from the χ²-distribution once degrees of freedom are taken into account. Volk’s 1958 textbook is of lasting value because it links the χ²-, F-, and t-distributions in such a logical manner.

Volk’s symbols and terms are mostly clear and concise. I found Volk’s *“central tendency measures”* less intuitive than *“central values”* (of sets of measured values with either constant or variable weights). I avoid terms such as *“successive observations”* when discussing an ordered set of measured values of a stochastic variable in a sampling unit or a sample space. All it takes in my work is text and context to explain applied statistics and its symbols correctly.

Volk applied Fisher’s F-test to verify whether or not a pair of variances is statistically identical. He applied Bartlett’s χ²-test to verify whether or not a set of variances is homogeneous. He did not show how to apply Fisher’s F-test to verify spatial dependence between measured values in ordered sets. All it would have taken is to apply Fisher’s F-test to *var(x)*, the variance of a set of measured values, and *var₁(x)*, the first variance term of the ordered set. Volk, a chemical engineer, may well have worked with some ordered set of measured values in a sampling unit or a sample space of time, but he never showed how to derive a sampling variogram.
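A minimal sketch of that test in Python (function names and the example data are mine, not Volk’s): compute *var(x)*, the variance of the set, and *var₁(x)*, half the mean squared successive difference of the ordered set, and take their ratio as the observed F-value.

```python
# Sketch of the F-test for spatial dependence described above:
# compare var(x), the variance of the set, with var1(x), the first
# variance term of the ordered set.

def var_set(x):
    """Variance of the set, with df = n - 1 degrees of freedom."""
    n = len(x)
    mean = sum(x) / n
    return sum((v - mean) ** 2 for v in x) / (n - 1)

def var_first(x):
    """First variance term of the ordered set, with df_o = 2(n - 1)
    degrees of freedom: half the mean squared successive difference."""
    n = len(x)
    return sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

# An ordered set with a clear trend: successive values differ little,
# so var1(x) stays small relative to var(x) and F exceeds 1.
ordered = [float(i) for i in range(1, 21)]   # 1.0, 2.0, ..., 20.0
F = var_set(ordered) / var_first(ordered)
```

For the trending set above, var(x) = 35 and var₁(x) = 0.5, so the observed F = 70, far above any tabulated F-value at df = 19 and dfₒ = 38, which signals a significant degree of spatial dependence.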

John von Neumann was a brilliant mathematician at Princeton’s Institute for Advanced Study when he coauthored *Distribution of the Ratio of the Mean Square Successive Difference to the Variance*. He seemed unaware in 1941 that a set of **n** samples gives *df = n–1* degrees of freedom, and that an ordered set of **n** observations gives *dfₒ = 2(n–1)* degrees of freedom. Had he added all of the terms *x₁–x₂, …, xᵢ–xᵢ₊₁, …, xₙ₋₁–xₙ*, he would have gotten *x₁–xₙ*, the *n*th variance term of the ordered set. Had he counted degrees of freedom, he would have gotten the correct number for the ordered set. He may not have noticed that all but *x₁* and *xₙ* are used twice.

Von Neumann deemed working with random numbers a sin of sorts. It explains why he frowned upon heuristic proof. In those days, random numbers were listed in handbooks of statistical tables. That made the mean squared successive difference of a set about as tedious to derive as its variance. He was a pure mathematician, which may well be why his 1941 study did so little to advance mathematical statistics.
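The arithmetic in the 1941 discussion above, the telescoping sum and the dfₒ = 2(n–1) count, can be checked in a few lines (variable names are mine):

```python
# A quick numerical check: the successive differences of an ordered
# set telescope to x1 - xn, every value except x1 and xn appears in
# two differences, and counting those uses gives df_o = 2(n - 1).
x = [3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0]   # an arbitrary ordered set
n = len(x)

diffs = [x[i] - x[i + 1] for i in range(n - 1)]
telescoped = sum(diffs)                         # collapses to x[0] - x[-1]

# Each of the n - 1 differences uses two values, so the total number
# of uses, and hence the degrees of freedom of the ordered set, is:
df_o = 2 * (n - 1)

# x1 and xn are used once each; the n - 2 interior values twice.
assert df_o == 2 + 2 * (n - 2)
```

With n = 8 the count gives dfₒ = 14, and the telescoped sum equals x₁–xₙ exactly, as the passage argues.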

Anders Hald, a Professor of Statistics at the University of Copenhagen, pointed out that the correct number of degrees of freedom for the first variance term of an ordered set of **n** measured values is *dfₒ = 2(n–1)*. He did so in *Section 13.5 The Mean Square of Successive Differences* of his 1952 textbook *Statistical Theory with Engineering Applications*. Hald, too, studied the distribution of *r = var₁(x)/var(x)* rather than Fisher’s F-distribution. Otherwise, he would have noticed that a significant degree of spatial dependence between measured values in some ordered set gives an observed value of *F = var(x)/var₁(x) > 1*.

Textbooks on applied statistics such as Volk’s give a table with F-values at 0.05 and 0.01 probability for a matrix of degrees of freedom. Nowadays, Excel’s FINV makes it easy to get the correct F-value at any probability level and with any number of degrees of freedom for either variance. What’s more, Excel’s RAND makes it simple to prove that *Standard Uniform Random Numbers (SURNs)* and *Normally Distributed Random Numbers (NDRNs)* do not display a significant degree of spatial dependence. Visit **geostatscam.com** and find out about *SURNs* and *NDRNs* under **Sampling and statistics explained**.

Not all geoscientists know how to test for spatial dependence by applying Fisher’s F-test. In fact, geostatisticians would rather assume spatial dependence than test for it. They have also been taught that some functions do not have variances. The problem is that too many geoscientists know too little about sampling and statistics and too much about surreal geostatistics.
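The RAND exercise described above can be mirrored in Python; a sketch assuming the standard `random` module as a stand-in for Excel’s RAND and its `gauss()` for the normally distributed case (the critical F-value itself would still come from an F-table or Excel’s FINV):

```python
# Sketch: SURNs and NDRNs do not display spatial dependence, so the
# observed F = var(x) / var1(x) hovers near 1 for both ordered sets.
import random

def f_ratio(x):
    """Observed F = var(x) / var1(x) for an ordered set."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    var1 = sum((x[i + 1] - x[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))
    return var / var1

random.seed(42)                                       # repeatable run
surns = [random.random() for _ in range(1000)]        # standard uniform
ndrns = [random.gauss(0.0, 1.0) for _ in range(1000)] # normally distributed

F_surn = f_ratio(surns)
F_ndrn = f_ratio(ndrns)
# Both ratios stay near 1, well below the tabulated F-value at
# df = 999 and df_o = 1998, so neither set shows spatial dependence.
```

Rerunning with other seeds tells the same story: random numbers, uniform or normal, give observed F-values close to 1, whereas a trending ordered set drives F far above the tabulated value.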