Agterberg’s tribute

It’s high time to try and read Agterberg’s state of mind in his tribute to the life and times of Professor Dr George Matheron. It taught me so much more about his way of thinking than I had learned when we talked in the early 1990s. Neither could I have found out what I needed to know had the Centre de Géosciences (CG) not posted Matheron’s works on its website. When I looked at CG’s spiced up website for the first time I found out that he wrote his Note statistique No 1 in 1954. So, it seems safe to assume Matheron thought he was working with statistics. His thoughts are accessible again since CG’s website is back online.

Agterberg said in his tribute that Matheron “commenced work on regionalized random variables inspired by De Wijs and Krige.” Let’s take a look at Matheron’s very first paper and try to find out what he did in his Formule des Minerais Connexes. He tested for associative dependence between lead and silver grades in lead ore. He derived length-weighted average lead and silver grades of core samples that varied in length. What he didn’t do was derive the variances of those length-weighted average lead and silver grades. Neither did he test for spatial dependence between metal grades of ordered core samples. He didn’t give his primary data but scribbled a few stats in this 1954 paper. He didn’t refer to De Wijs or to Krige. In fact, Matheron rarely referred to the works of others.

Where’s the Central Limit Theorem?

Matheron was a master at working with symbols. Yet, he wouldn’t have made the grade in statistics because the Central Limit Theorem was beyond his grasp. The Founder of Spatial Statistics did indeed have a long way to go in 1954. So, he penned nothing but notes statistiques until 1959. That’s when he tucked Note géostatistique No 20 tightly behind Note statistique No 19. Why did he switch from stats to geostats? It took quite a while to explain but here’s what Matheron said in 1978. He did it because “geologists stress structure” and “statisticians stress randomness.” That sort of drivel stands the test of time in Matheron’s Foreword to Mining Geostatistics just as much as Journel’s mad zero kriging variance does in Section V.A. Theory of Kriging.

What did D G Krige do that so inspired young Matheron? In 1954 Krige had looked at, “A statistical approach to some mine valuation problems on the Witwatersrand.” It does read like real statistics, doesn’t it? In 1960 he had reflected, “On the departure of ore value distributions from the lognormal model in South African gold mines.” That’s the ugly reality at gold mines! So, Krige did indeed work with statistics in those days. He may since have had some epiphany because he cooked up in 1976, “A review of the development of geostatistics.” Surely, Krige was highly qualified to put a preface to David’s 1977 Geostatistical Ore Reserve Estimation with its infinite set of simulated values in Section 12.2 Conditional Simulations.

Why did H J De Wijs wind up in Agterberg’s tribute to Matheron? Agterberg had found out in 1958 that De Wijs worked with formulas that “differed drastically from those used by mathematical statisticians.” Agterberg himself preferred “the conventional method of serial correlation.” Why would Agterberg talk about mathematical statistics and serial correlation in 1958 when he was to strip the variance of his own distance-weighted average point grade in 1970 and in 1974? Agterberg ought to explain why in 2009!

De Wijs brought vector analysis without confidence limits to mining engineering at the Technical University of Delft in the Netherlands when he left Bolivia after the Second World War. Jan Visman worked in the Dutch coal mines and surfaced after the war with tuberculosis, an innovative sampling theory, and a huge set of test results determined in samples taken from heterogeneous sampling units of coal. Visman had so much information that he was encouraged to write his PhD thesis on this subject. And that’s exactly what he did! He continued to work as a mining engineer at the Dutch State Mines. When he found out that the Dutch Government was thinking of closing its coal mines he migrated to Canada in 1951. He worked briefly in Ottawa until 1955, and moved to Alberta where his formidable expertise was put to work in the coal industry.

Going, going, gone in geostatistics

Visman’s sampling experiment with pairs of small and large increments is described in ASTM D2234-Collection of a Gross Sample of Coal, Annex A1. Test Method for Determining the Variance Components of a Coal. Visman’s sampling theory has been quoted in a range of works. Listed below are the counts of references to Visman’s work in a few key texts, and the lack thereof after Gy’s work was widely accepted for no apparent reason.

Gy’s 1967 L’Échantillonnage des Minerais en Vrac, Tome 1: two
Gy’s 1973 L’Échantillonnage des Minerais en Vrac, Tome 2: eight
David’s 1977 Geostatistical Ore Reserve Estimation: two
Journel & Huijbregts’s 1978 Mining Geostatistics: zero
Clark’s 1979 Practical Geostatistics: zero
Gy’s 1979 Sampling of Particulate Materials, Theory and Practice: zero

Visman’s sampling theory is based on the additive property of variances. None of the above works deals with the additive property of variances in measurement hierarchies.
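Here is a minimal sketch, in Python with hypothetical numbers, of what the additive property of variances in a measurement hierarchy means in practice; the variance components and their values are assumptions for illustration only, not data from any of the works listed above.

```python
import math

# Hypothetical variance components in a measurement hierarchy (illustration only).
var_sampling    = 0.80   # between-increment (primary sampling) variance, (%)^2
var_preparation = 0.15   # subsampling and preparation variance, (%)^2
var_analysis    = 0.05   # analytical variance, (%)^2

# The additive property: variances in a hierarchy add up to the total variance.
var_total = var_sampling + var_preparation + var_analysis

print(f"total variance          : {var_total:.2f} (%)^2")
print(f"total standard deviation: {math.sqrt(var_total):.2f} %")
# Variances add; standard deviations do not. Cutting any one component
# lowers the total variance by exactly that amount.
```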

How to measure what we speak about

NASA satellites have been measuring lower troposphere global temperatures since 1979. At that time I went around the world at a snail’s pace. Lord Kelvin’s thoughts about how to measure what we speak about were much on my mind in those days. I thought a lot about metrology in general, and about sampling and statistics in particular. I was to visit all of Cominco’s operations around the world. My task was to assess the sampling and weighing of a wide range of materials. Of course, it couldn’t possibly have crossed my mind that I would look in 2008 at the statistics for 30 years of lower troposphere global temperatures.

My job with Cominco did have its perks. When I was at the Black Angel mine in Greenland, I saw Wegener’s sledge on a glacier above the Banana ore zone. I knew how geologists had struggled with Wegener’s continental drift, and how they slowed it down to plate tectonics.

Southeast Coast of Greenland

I knew geologists were struggling with Matheron’s new science of geostatistics. I travelled around the world with a bag of red and white beans, an HP41 calculator and a little printer to make the Central Limit Theorem come alive during workshops on sampling and statistics. I lost my bag of beans because it was confiscated at customs in Australia.

On-stream analyzers that measure metal grades of slurry flows at mineral processing plants ranked high on my list of tools to work with. The fact that the printed list of measured values was just peeled off the printer at the end of a shift rubbed me the wrong way. I got into the habit of asking who did what with measured values. It was not much at that time because on-stream analyzers were as rare as weather satellites. Daily sheets made up a monthly pile, and that was the end of it. I entered the odd set in my HP41 to derive the arithmetic mean and its confidence limits for a single shift. But that was too tedious a task. That’s why spreadsheet software ranked high on my list of stuff to work with.
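For readers who want to reproduce that shift calculation, here is a minimal sketch in Python rather than on an HP41 or in a spreadsheet; the analyzer readings are hypothetical and the 95% confidence level is an assumption for illustration, not a quote from any protocol of mine.

```python
import statistics
from scipy.stats import t

# Hypothetical on-stream analyzer readings for one shift (illustration only).
grades = [4.2, 4.5, 3.9, 4.1, 4.4, 4.0, 4.3, 4.6]   # metal grade, %

n    = len(grades)
mean = statistics.mean(grades)
s    = statistics.stdev(grades)          # sample standard deviation
sem  = s / n ** 0.5                      # standard error of the arithmetic mean
t95  = t.ppf(0.975, df=n - 1)            # two-sided 95% with n-1 degrees of freedom

print(f"arithmetic mean = {mean:.2f} %")
print(f"95% confidence limits = {mean - t95 * sem:.2f} % to {mean + t95 * sem:.2f} %")
```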

I met a metallurgist who tried to put to work Box and Jenkins’ 1976 Time Series Analysis. So, he did have a few questions. I explained what Visman’s sampling theory had taught me. First of all, the variance terms of an ordered set of measured values give a sampling variogram. Secondly, the lag of a sampling variogram shows where orderliness in a sample space or a sampling unit dissipates into randomness. The problem is that Time Series Analysis doesn’t work with sampling variograms. So, the metallurgist got rid of his Box and Jenkins, and I took his Time Series Analysis. Box and Jenkins referred to M S Bartlett, R A Fisher, A Hald, and J W Tukey but not to F P Agterberg or G Matheron. Box and Jenkins provide interesting data sets. I’ve got to look at the statistics for Wölfer’s Yearly Sunspot Numbers for the period from 1770 to 1869.
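The following is a minimal sketch, with hypothetical ordered grades, of what I mean by the variance terms of an ordered set giving a sampling variogram; the estimator shown here is one common way to compute such variance terms, not necessarily the exact formula used in those workshops.

```python
# Sampling variogram sketch for an ordered set of measured values.
def sampling_variogram(values, max_lag):
    """Variance term for each lag, computed from differences of ordered values."""
    n = len(values)
    terms = {}
    for lag in range(1, max_lag + 1):
        diffs = [values[i + lag] - values[i] for i in range(n - lag)]
        terms[lag] = sum(d * d for d in diffs) / (2 * (n - lag))
    return terms

# Hypothetical ordered grades (illustration only).
ordered_grades = [4.1, 4.3, 4.2, 4.6, 4.4, 4.9, 5.1, 4.8, 5.3, 5.0]

for lag, var_term in sampling_variogram(ordered_grades, 4).items():
    print(f"lag {lag}: variance term = {var_term:.3f}")

# Where the variance terms level off at the variance of the whole set,
# orderliness has dissipated into randomness; that lag marks the limit of
# spatial dependence in the sampling unit or sample space.
```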

Sunspots

Visman’s sampling theory did come alive while I was working with Cominco. So much so that I decided to put together Sampling and Weighing of Bulk Solids. The interleaved sampling protocol plays a key role in deriving confidence limits for the mass of metal contained in a concentrate shipment. So, I was pleased that ISO Technical Committee 183 approved ISO/DIS 13543–Determination of Mass of Contained Metal in the Lot. I was already thinking about measuring the mass of metal contained in an ore deposit! But CIM’s geostatistical thinkers had different thoughts. For example, CIM’s Geological Society rejected Precision Estimates for Ore Reserves. In contrast, CIM’s Metallurgical Society approved Simulation Models for Mineral Processing Plants.

In other words, testing for spatial dependence is acceptable when applied to an ordered set of metal grades in a slurry flow. Testing for spatial dependence is unacceptable when applied to metal grades of ordered rounds in a drift. So I talked to Dr W D Sinclair, Editor, CIM Bulletin. He was but one of a few who would listen to my objection against such ambiguity. In fact, I put together a technical brief and called it Abuse of Statistics. I mailed it on July 2, 1992, and asked that it be reviewed by a statistician. A few weeks later Sinclair called and said Dr F P Agterberg, his Associate Editor, was on the line with a question. What Agterberg wanted to know was when and where Wells had praised statistical thinking. That was all!

H G Wells

I didn’t know when or where Wells said it! I didn’t even know whether he said it or not! What I did know was that Darrell Huff thought he had said it. In fact, he did quote it in How to Lie with Statistics. I didn’t know much about Agterberg in 1992. What I did know then was that David in his 1977 Geostatistical Ore Reserve Estimation referred to Agterberg’s 1974 Geomathematics. And I found out that Agterberg didn’t trust statisticians when he reviewed Abuse of Statistics.

F P Agterberg

Agterberg, CIM Bulletin’s Associate Editor in 1992, was a leading scholar with the Geological Survey of Canada. Yet, he didn’t know that functions do have variances. It does explain why he fumbled the variance of his own distance-weighted average zero-dimensional point grade first in 1970, and again in 1974. He could have told me in 1992 that this variance was gone but chose not to. Agterberg was the President of the International Association for Mathematical Geology when it was recreated as the International Association for Mathematical Geosciences. He is presently IAMG’s Past President. He still denies that his zero-dimensional distance-weighted average point grade does have a variance. Agterberg was wrong in 1970, in 1974, and in 1992. And he is still wrong in 2009. That’s bad news for geoscientists!

A Case For Free Advice – It Helps a Customer as Well as Yourself

When I browse through this and other forums I am often fascinated at how much free advice is made available by the many experts. At first one might think that this is unpaid consulting at its best. I am a firm believer in helping people out with as much information as you can provide. Why? It reminds me of my own sour-grape experience as an adolescent trying to install a new cylinder and piston on my moped. I had bought the, I might add expensive, parts at my up-to-then favorite motorcycle dealer. Since I had been a gearhead all my life, the mechanical part of the job was no problem, with the exception of the ignition timing. The store owner would not provide me with that value, insisting that I let him do that at his shop, which was 10 miles away from home. The experience turned me off so badly that I took my chances, figured it out myself and never went back to the guy again. How many people do you think I told about the bad experience?

This is where suppliers and subject matter experts can make a huge difference in the lives of others. Anything affects everything, and who knows when the one whom you helped may be able to help you one day? If you are interested, read Joseph Jaworski’s book “Synchronicity”; it is a great book. While there is always a fine line between free advice and giving proprietary information away, what do you have to lose? There is always a great step between obtaining advice and successfully implementing the solutions. Usually the basic technical details are provided in the initial request for help, but I have rarely seen people provide either all the site details or the assumptions made. Kudos to anyone who can figure out his own issues based on the advice given to him. You as the advice giver will always glean something new from a request for help.

Energy consumption per ton of a pneumatic conveying system.

 

Pneumatic conveying installations suffer from the image of not being energy efficient.

Usually, this bad image is explained by the statement that a pneumatic conveying system is based on high velocities. High velocities are usually synonymous with high energy demand.

 

The definition of the efficiency of a pneumatic conveying system is (in the case of an electric drive):

 

Total efficiency = Electric energy / tons       (in kWh/ton)

 

This total efficiency can be divided into 4 partial efficiencies (a numerical sketch follows after the list):

1) Drive efficiency = Mechanical energy / Electric energy
2) Compressor efficiency = Thermal compressing energy / Mechanical energy
3) Thermodynamic conveying efficiency = Thermal compressing energy / Thermal expansion energy
4) Pneumatic conveying efficiency = Thermal expansion energy / tons
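A minimal sketch in Python, with hypothetical stage energies and tonnage, of how the total kWh/ton and the four partial ratios defined above could be computed; all numbers are assumptions for illustration only.

```python
# Hypothetical energies for one conveying batch (illustration only).
electric_energy   = 120.0   # kWh drawn from the grid
mechanical_energy = 110.0   # kWh delivered at the compressor shaft
thermal_compress  = 88.0    # kWh of thermal compressing energy in the air
thermal_expansion = 70.0    # kWh of thermal expansion energy in the pipeline
tons_conveyed     = 60.0    # tons of product moved

total_energy_per_ton = electric_energy / tons_conveyed           # kWh/ton
drive_efficiency     = mechanical_energy / electric_energy       # ratio 1)
compressor_eff       = thermal_compress / mechanical_energy      # ratio 2)
thermo_conveying     = thermal_compress / thermal_expansion      # ratio 3)
pneumatic_conveying  = thermal_expansion / tons_conveyed         # ratio 4), kWh/ton

print(f"total energy consumption   : {total_energy_per_ton:.2f} kWh/ton")
print(f"drive efficiency           : {drive_efficiency:.2f}")
print(f"compressor efficiency      : {compressor_eff:.2f}")
print(f"thermodynamic conveying    : {thermo_conveying:.2f}")
print(f"pneumatic conveying energy : {pneumatic_conveying:.2f} kWh/ton")
```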


How to work with real statistics

Lorne Gunter called on skeptics to unite. He did so in the National Post. His story was about scientists who don’t warm up to “the orthodoxy on global warming”. What a shame that but a few got this call, because it came on Monday, October 20, 2008. The timing couldn’t have been worse. It was another Monday when Wall Street and Bay Street watchers saw stock indices move straight south. Global warming isn’t of as much concern as are shrinking stock portfolios. It may explain why Lorne’s tale was told on a Monday. Sandra Rubin’s story, too, ran on a Monday. NP’s head honchos run their own stories mostly in weekend editions. NP’s very first edition was printed on October 27, 1998. At that time, it was Lord Black’s pride and joy. At this time, Lord Black is doing time and NP’s kingpins are still timing things their own way.

Lorne need not have urged skeptics to unite since they did so long ago. Skeptics do hold a dim view of pseudo scientists who play games with scientific integrity. I may well have been a born skeptic. I was taught more than I could grasp about heaven and hell from a pulpit in a Dutch village. Nowadays I teach how to test for spatial dependence in sampling units and sample spaces. Stanford’s Journel taught in 1992 that spatial dependence between measured values may be assumed. I never thought much of Journel’s thinking. Neither did JMG’s Editor. All I thought about at that time was to rid the world of Matheron’s junk statistics. Come hell or high water! And I still do!

The National Post brought to light on November 7th that President-Elect Barack Obama is set to “Stop global warming”. It brought back that off-the-wall “Stop continental drift” slogan. Geologists slowed down continental drift by calling it plate tectonics. Plates are still moving, and earthquakes, magma flows and tsunamis are tagging along. The National Post on November 10 claimed that climate change, too, is on some kind of yes-we-can list. Surely, geoscientists should study climate change. What the study of global warming has done so far is set the stage for a constant belief bias.

Lorne’s story about skeptics and global warming came about because of the work of Professor Dr John R Christy. More than 300,000 daily temperature readings around the globe with NASA’s eight weather satellites over 30 years gave Christy and his coauthor a massive data set to work with. It was marked “Lower Troposphere Global Temperature: 1979-2008.” The authors had drawn a trend line through a see-saw plot. It was the shape of this trend line that piqued my interest. What I wanted to do was test for spatial dependence between measured values and determine where orderliness in our own sample space of 30 years dissipates into randomness. So I asked Lorne and he did send me the whole set that underpins the plot in his story!

The first step in the statistical analysis is to verify spatial dependence between observed temperatures in this sample space of time by applying Fisher’s F-test to the variance of the set and the first variance of the ordered set.

The observed value of F = 6.27 exceeds the tabulated value of F(0.001; df; dfo) = 1.32 at 99.9% probability by a wide margin. Hence, monthly temperatures display an extraordinarily high degree of spatial dependence. The probability that this inference is false is much less than 0.1%.
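Here is a minimal sketch in Python of that first step, applied to a short hypothetical series rather than the 360 monthly temperatures; the degrees-of-freedom counts (n-1 for the set, 2(n-1) for the ordered set) are my reading of the text, not a quote from the original spreadsheet.

```python
import statistics
from scipy.stats import f

# Hypothetical ordered temperature anomalies (illustration only).
temps = [0.02, 0.05, 0.11, 0.08, 0.15, 0.21, 0.18, 0.26, 0.24, 0.31]

n       = len(temps)
var_set = statistics.variance(temps)                      # variance of the set
diffs   = [temps[i + 1] - temps[i] for i in range(n - 1)]
var_ord = sum(d * d for d in diffs) / (2 * (n - 1))       # first variance of the ordered set

F_observed  = var_set / var_ord
F_tabulated = f.ppf(0.999, dfn=n - 1, dfd=2 * (n - 1))    # tabulated F at 99.9% probability

print(f"F observed  = {F_observed:.2f}")
print(f"F tabulated = {F_tabulated:.2f}")
print("spatial dependence" if F_observed > F_tabulated else "randomness cannot be rejected")
```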

The second step is to verify whether or not the weighted average difference of 0.063 centigrade is statistically identical to zero. Since the first set and the last one have different degrees of freedom than intermediate sets, Student’s t-test is applied with a month-weighted average variance. Such weighted variances are called pooled variances in applied statistics.

The observed value of t = 4.245 exceeds the tabulated value of t(0.001; dfo) = 3.674. Hence, the probability is less than 0.1% that this weighted average difference of 0.063 centigrade is statistically identical to zero. Alternatively, this probability of 99.9% points to a statistically significant but small change of 0.063 centigrade during this 30-year period. Detection limits that take into account Type I risk only and the combined Type I and II risks are of critical importance in risk analysis and control. In this case, the Type I risk is ±0.031 centigrade, and the combined Type I risk and Type II risk is ±0.056 centigrade.
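As an illustration of the pooling step, here is a minimal sketch in Python with hypothetical subsets of differences; it shows a generic variance pooled over subsets weighted by their degrees of freedom and a one-sample t-test of the average difference against zero, not the exact month-weighted procedure applied to the temperature series.

```python
import statistics
from scipy.stats import t

# Hypothetical subsets of differences with unequal sizes (illustration only).
subsets = [
    [0.04, 0.07, 0.05, 0.09],
    [0.06, 0.08, 0.03, 0.07, 0.05],
    [0.05, 0.09, 0.06],
]

all_values = [x for s in subsets for x in s]
n          = len(all_values)
mean_diff  = statistics.mean(all_values)

# Pooled variance: each subset variance weighted by its degrees of freedom.
dfo        = sum(len(s) - 1 for s in subsets)
pooled_var = sum((len(s) - 1) * statistics.variance(s) for s in subsets) / dfo

t_observed  = mean_diff / (pooled_var / n) ** 0.5
t_tabulated = t.ppf(0.999, df=dfo)                 # tabulated t at 99.9% probability

print(f"t observed  = {t_observed:.3f}")
print(f"t tabulated = {t_tabulated:.3f}")
print("difference differs from zero" if t_observed > t_tabulated else "statistically identical to zero")
```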

The third step is to verify whether or not the variances of ordered temperatures in centigrade constitute a homogeneous set.

Bartlett’s chi-square test shows that the observed χ²-value of 22.979 falls between 42.557 at 5% probability and 17.708 at 95% probability. Hence, the set of variances for this 30-year period is homogeneous.
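Here is a minimal sketch of that third step in Python, using hypothetical yearly subsets; scipy's implementation of Bartlett's test stands in for the original spreadsheet, and the degrees of freedom follow from the number of subsets compared.

```python
from scipy.stats import bartlett, chi2

# Hypothetical yearly subsets of ordered values (illustration only).
yearly_subsets = [
    [0.02, 0.08, 0.05, 0.11, 0.07],
    [0.04, 0.09, 0.06, 0.10, 0.08],
    [0.03, 0.12, 0.07, 0.09, 0.06],
]

statistic, p_value = bartlett(*yearly_subsets)
k = len(yearly_subsets)                      # number of variances compared

upper = chi2.ppf(0.95, df=k - 1)             # bound at 5% probability
lower = chi2.ppf(0.05, df=k - 1)             # bound at 95% probability

print(f"observed chi-square = {statistic:.3f}")
print(f"bounds: {upper:.3f} at 5% probability, {lower:.3f} at 95% probability")
print("homogeneous set of variances" if lower <= statistic <= upper else "heterogeneous variances")
```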

Sir Ronald A Fisher was knighted in 1953 for his work with analysis of variance. Dr F P Agterberg fumbled the variance of his distance-weighted average point grade in 1970 and in 1974. NASA started to measure Lower Troposphere Temperatures in 1979. I showed how to test for spatial dependence between metal grades in ordered sets for the first time in 1985. So why would any geoscientist assume spatial dependence between measured values in ordered sets? Agterberg is the President of the International Association for Mathematical Geosciences. He should explain why his distance-weighted average point grade does not have a variance.

Reverse Auction for Process Equipment – Hopefully a Thing of The Past

Do you remember the times when some customers actually thought that reverse auctions were the thing to do for just about anything? It probably started out pretty well for items such as commodities. I used to be in utter disbelief when a few customers started to also include capital equipment. It ended up as a lot of unpaid consulting on our part since part if not all of the system design information would almost always end up on the web.

More confounding was trying to figure out what, if anything, the customer was supposed to get out of this deal. Sure, the thought of a vendor having to compete with his own bid must have been appealing to buyers and accountants. A paper towel is pretty much a paper towel, but a bulk conveying system is just too specialized to be able to obtain comparable quotes. Almost every system, even including primary air movers, is the culmination of years’ worth of detail-laden OEM manufacturer experience. Without proper communication the chance of getting such an endeavor successfully up and running was slim to none. I am glad that these auctions are mostly history. How about you?

How to lie with geostatistics

Here’s how, in a nutshell. The most brazen lie of all was to deny that weighted averages do have variances. The stage for this lie was set at the French Geological Survey in Algeria on November 25, 1954. It came about when a novice in geology with a knack for probability theory put together his very first research paper. The author had called his paper Formule des Minerais Connexes. He had set out to prove associative dependence between lead and silver in lead ore. He worked with symbols on the first four pages. Handwritten on page 5 are arithmetic mean grades of 0.45% lead and 100 g/t silver, variances of 1.82 for lead and 1.46 for silver, and a correlation coefficient of 0.85. He had worked with symbols until page 5 and did omit his set of primary data. Neither did he refer to any of his peers. Those peculiar practices would remain this author’s modus operandi for life.

The budding author was to be the renowned Professor Dr Georges Matheron, the founder of spatial statistics and the creator of geostatistics. What young Matheron had derived in his 1954 paper were arithmetic mean lead and silver grades of drill core samples. But he had not taken into account that his core samples varied in length. So he did derive length-weighted average lead and silver grades and appended a correction to his 1954 paper on January 13, 1955. What he had not done is derive the variances of his length-weighted average lead and silver grades. Neither did he test for, or even talk about, spatial dependence between metal grades of ordered core samples. Matheron’s first paper showed that testing for spatial dependence was beyond his grasp in 1954.

Why was Formule des Minerais Connexes marked Note statistique No 1? Matheron had not derived variances to compute confidence limits for arithmetic mean lead and silver grades but applied correlation-regression analysis. Statisticians do know that the central limit theorem underpins sampling theory and practice. So why didn’t young Matheron derive confidence limits? Surely, he was familiar with this theorem, wasn’t he? Or was it because he thought he was some sort of genius at probability theory? That would explain why he worked mostly with symbols and rarely with real data. Had he worked with real data, he would still have cooked up odd statistics because the variances of his central values went missing. That’s why he was but a self-made wizard of odd statistics. It was Matheron who called the weighted average a kriged estimate as a tribute to the first mining engineer who took to working with weighted averages. Matheron never bothered to differentiate area-, count-, density-, distance-, length-, mass- and volume-weighted averages. But then, neither did any of his disciples.

Matheron’s followers, unlike real statisticians, didn’t take to counting degrees of freedom. Statisticians do know why and when degrees of freedom should be counted. Geostatisticians don’t know much about degrees of freedom but they do know how to blame others when good grades go bad. They always blame mine planners, grade control engineers, or assayers whenever predicted grades fail to pan out. They claim over-smoothing causes kriging variances of kriged estimates to rise and fall. Kriging variances rise and fall because they are pseudo variances that have but squared dimensions in common with true variances. Of course, Matheron’s odd new science is never to blame for bad grades or bad statistics.

It is a fact that Matheron fumbled the variance of his length-weighted average in 1954. Several years before the Bre-X fraud I derived the variance of a length- and density-weighted average metal grade. The following example is based on core samples from an ore deposit in Canada. The mine itself is no longer as Canadian as it once was. The Excel template with the set of primary data and its derived statistics is posted on a popular but wicked website.
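Since the primary data and the Excel template are not reproduced here, the following is a minimal sketch in Python with hypothetical core samples; the weighted average uses length times density as weights, and the variance shown is one common estimator for the variance of a weighted average with fixed weights, not necessarily the exact formula in the template.

```python
# Hypothetical core samples (illustration only).
lengths   = [1.5, 2.0, 1.2, 3.0, 2.5]      # m
densities = [2.9, 3.1, 2.8, 3.0, 3.2]      # t/m3
grades    = [4.8, 5.6, 3.9, 6.2, 5.1]      # metal grade, %

weights = [l * d for l, d in zip(lengths, densities)]   # mass-proportional weights
total_w = sum(weights)
u       = [w / total_w for w in weights]                # normalized weights

weighted_mean = sum(ui * g for ui, g in zip(u, grades))

# Variance of the weighted average, treating the weights as fixed.
var_weighted_mean = sum(ui ** 2 * (g - weighted_mean) ** 2 for ui, g in zip(u, grades))

print(f"length- and density-weighted average grade: {weighted_mean:.2f} %")
print(f"variance of the weighted average          : {var_weighted_mean:.4f}")
```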


My website was set up early in the Millennium. I loved to send emails with links to my reviews of Matheron’s new science of geostatistics. The students at the Centre de Géostatistique (CDG) in Fontainebleau, France, ranked high on my list of those who ought to pass Statistics 101. I was pleased when PDF files of Matheron’s work were posted in CDG’s online library. But I was surprised to find out that Matheron’s first paper was no longer listed as Note statistique No 1 in the column marked Reference but as Note géostatistique No 1. Just the same, the PDF file of this paper and its appended correction are still marked Note statistique No 1. On October 27, 2008, five out of six of Matheron’s 1954 papers were still marked Note statistique Nrs 2 to 6.

What was going on? Was the birth date of Matheron’s new science of geostatistics under review? Who reviewed it? And why? Why not retype the whole paper? Why not add the variances of length-weighted average lead and silver grades? And how about testing for spatial dependence between metal grades of ordered core samples? Where have all of Matheron’s sets of primary data gone? And what has happened to his old Underwood typewriter? I have so many questions but hear nothing but silence!

Matheron himself moved from odd statistics to geostatistics in 1959 when he went without a glitch from Note statistique no 19 to Note géostatistique no 20. Check it out before geostat revisionists strike again. I admit to having paraphrased Darrell Huff’s How to lie with statistics. But I couldn’t have made up that this delightful little work was published for the first time in 1954. That’s precisely when young Matheron was setting the stage for his new science of geostatistics in North Africa. Matheron, the creator of geostatistics, never read Huff’s work. But then, Huff didn’t read Matheron’s first paper either. Thank goodness Darrell Huff’s How to lie with statistics is still in print!

Pneumatic conveying, turbo- or positive displacement air mover?

 

The choice between a turbo compressor and a positive displacement pump (blower or screw compressor with internal compression) as the air mover for a pneumatic conveying system can be evaluated by the influence of the pump characteristics on the pneumatic conveying parameters.

A positive displacement pump displaces a constant volume of air, irrespective of the pressure at the inlet.

A turbo compressor compresses the air adiabatically and is therefore best compared with a screw compressor with internal compression.

A turbo compressor transfers momentum to the air in its impeller.

The more mass flow of air, the more power is consumed.

Therefore, the turbo is kept at a constant mass flow of air by a regulated throttle at the inlet, which keeps the pressure ratio constant.

The application of controllable diffusers at the exit of the impeller makes it possible to control the airflow between approx. 50% and 100% without efficiency reduction.

A turbo compressor always operates at its design point and therefore always consumes a constant (full) power over the full pressure range of the system. 

The screw compressor with internal compression consumes less energy at partial pressure, but still requires energy for the internal compression.

The energy consumption of a blower is proportional to the system pressure drop. 

Among pneumatic conveying systems, pressure systems and vacuum systems are to be distinguished.

In a pressure system, a positive displacement pump and a turbo compressor deliver a constant mass flow of air, as the inlet conditions of the air are constant (atmospheric).

In a pneumatic conveying system with fluctuating pressure, the turbo compressor has the disadvantage of consuming high energy per ton at lower pressures as the power demand of the turbo is constant.

Applying a turbo compressor for sewage aeration with a constant pressure (water height) is a good choice.

In pneumatic vacuum conveying, the influence of the pump characteristics is much more complex.


Going green and gone nuts

Is our world going green? It may be a long while before we know. That’s because scores of geoscientists have gone nuts and work with junk statistics. In Canada, too, geoscientists would rather infer than test for spatial dependence in sampling units and sample spaces. The more so since it’s all in The Inspector’s Field Sampling Manual. Nobody should have to read it. Not even EC’s own inspectors. I had to in the early 2000s because Environment Canada had taken a client of mine to court. It was about my statistical analysis of test results determined in interleaved primary samples. So I worked my way through EC’s manual and found all sorts of sampling methods. What I didn’t find was the interleaved sampling method. I had put this method on my list of smart statistics long before global warming got hot.

 

Here’s what I did find out when I struggled with EC’s manual. Inspectors are taught, “Systematic samples taken at regular time intervals can be used for geostatistical data analysis, to produce site maps showing analyte locations and concentrations. Geostatistical data analysis is a repetitive process, showing how patterns of analytes change or remain stable over distances or time spans.”

 

Geostatistics already rubbed me the wrong way long before it converted Bre-X’s bogus grades and Busang’s barren rock into a massive phantom gold resource. In fact, Matheron’s new science of geostatistics has been a thorn in my side for some twenty years. That sort of junk statistics still runs rampant in the Journal of Mathematical Geosciences. Just the same, EC’s field inspectors read under Systematic (Stratified) Sampling, “1) shellfish samples taken at 1-km intervals along a shore, 2) water samples taken from varying depths in the water column.” Numerical examples are missing as much in A Sampling Manual and Reference Guide for Environment Canada Inspectors as they were throughout Matheron’s seminal work. Not all of EC’s geoscientists know as little about testing for spatial dependence in sampling units and sample spaces as do those who cooked up The Inspector’s Field Sampling Manual.

 

In his letter of October 15, 1992, to Dr R Ehrlich, Editor, Journal for Mathematical Geology, Stanford’s Professor Dr A G Journel claimed, “The very reason for geostatistics or spatial statistics in general is the acceptance (a decision rather) that spatially distributed data should be considered a priori as dependent one to another, unless proven otherwise.” He believed that my anger “arises fro [sic] a misreading of geostatistical theory, or a reading too encumbered by classical ‘Fischerian’ [sic] statistics.” JMG’s Editor advised me in his letter of October 26, 1992, “Your feeling that geostatistics is invalid might be correct.”

 

Each and every geoscientist on this planet ought to know how to test for spatial dependence and how to chart sampling variograms that show where spatial dependence in our own sample space of time dissipates into randomness. Following is an Excel spreadsheet template that shows how to apply Fisher’s F-test. Geoscientists should figure out why Excel’s FINV-function requires the number of degrees of freedom both for the set and for the ordered set.
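The Excel template itself is not reproduced here; the following is a minimal sketch in Python of the same F-test, where scipy's f.isf plays the role of Excel's FINV function, and the degrees-of-freedom count for the ordered set (2(n-1)) is my reading, not a quote from the template.

```python
import statistics
from scipy.stats import f

# Hypothetical ordered set of measured values (illustration only).
values = [12.1, 11.8, 12.6, 13.0, 12.4, 13.3, 13.1, 13.8]
n = len(values)

var_set = statistics.variance(values)
var_ord = sum((values[i + 1] - values[i]) ** 2 for i in range(n - 1)) / (2 * (n - 1))

df_set = n - 1           # degrees of freedom for the set
df_ord = 2 * (n - 1)     # degrees of freedom for the ordered set

F_observed  = var_set / var_ord
F_tabulated = f.isf(0.05, df_set, df_ord)   # analogue of =FINV(0.05, df_set, df_ord)

print(f"F observed = {F_observed:.2f}, F tabulated at 95% probability = {F_tabulated:.2f}")
print("spatial dependence" if F_observed > F_tabulated else "randomness cannot be rejected")
```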

 

Of course, it’s easy to become a geostatistically smart geoscientist. All it takes is to infer spatial dependence between measured values, interpolate by kriging, select the least biased subset of some infinite set of kriged estimates, smooth its kriging variance to perfection, and rig the rules of real statistics with impunity. All but a few of those who have gone nuts and work with junk statistics have written books about geostatistics!

Metrology in mining and metallurgy

A poster in my office reads, “Metrology, the Science of Measurement.” It’s a bit faded because I’ve had it for so long. Standards Council of Canada had it printed for educational purposes. I got my poster with a set of slides about international units of measure. Most of them have since been redefined. The famous platinum-iridium artifact that has so long defined the International Unit of Mass is about to bite the dust. A sphere of pure silicon will take its place. The famous Central Limit Theorem has stood the test of time since Abraham de Moivre (1667-1754) brought to the world The Doctrine of Chances. De Moivre’s work underpins sampling theory and sampling practice. His work is bound to stand the test of time until our planet runs out of it. 

The science of measurement has always played a key role in my work. That’s why I put together Sampling and Weighing of Bulk Solids after I had completed my assignment with Cominco Ltd. I was pleased to see it in print in 1985. What pleased me even more was that ISO Technical Committee 183–Copper, lead, zinc and nickel ores and concentrates approved an ISO standard method based on deriving confidence intervals and ranges for metal contents of concentrate and ore shipments.

Several years later I got a slim paperback the cover of which I didn’t recognize. What I did recognize inside of it were my own charts and graphs embedded between Chinese characters. A friend of mine told me it was a Mandarin translation printed on rice paper. My book is protected by copyright but I have yet to be paid a single yuan. Teaching innovative sampling practices and sound statistical methods ranks much higher on my list of things to do than becoming a small c capitalist.

Sampling and Weighing of Bulk Solids
Mandarin translation, November 1989

My son and I were pleased when Precision Estimates for Ore Reserves was praised by Erzmetall and published in its October 1991 issue. The more so since peer reviewers in Canada, the USA and Britain did reject that very paper. One of CIM Bulletin’s reviewers spotted a lack of references to geostatistical literature. The other was ticked off because we were not “…relying on the abundant geostatistical literature…” We had found out that geostatisticians do not explain how to derive confidence intervals and ranges for metal contents of in-situ ore. So we did in our paper and submitted it to CIM Bulletin on September 28, 1989.

Both of us had taken statistics courses at the same university but at different times. Ed leads the Eclipse Modeling Framework project and co-leads the Eclipse Modeling project. He is a coauthor of the authoritative book EMF: Eclipse Modeling Framework, which is nearing completion of a second edition. He is an elected member of the Eclipse Foundation Board of Directors and has been recognized by the Eclipse Community Awards as Top Ambassador and Top Committer. Ed is currently interested in all aspects of Eclipse modeling and its application and is well recognized for his dedication to the Eclipse community, posting literally thousands of newsgroup answers each year. He spent 16 years at IBM, achieving the level of Senior Technical Staff Member after completing his Ph.D. at Simon Fraser University. He has started his own small company, Macro Modeling, is a partner of itemis AG, and serves on Skyway Software’s Board of Advisors. His experience in modeling technology spans 25 years.

I was proud to have his pre-IBM credentials printed on the back cover of Part 1–Precision and Bias for Mass Measurement Techniques. I shall convert all Lotus 1-2-3 files into Excel files and post them on my website. Some time ago Dr W E Sharp, the Editor-in-Chief of what was recently renamed the Journal of Mathematical Geosciences, wanted Dr Ed Merks to review papers on computer applications. What Sharp also asked me was to write a paper on testing for spatial dependence by applying Fisher’s F-test. I did, but we couldn’t agree on degrees of freedom for ordered sets.

Metrology in Mining and Metallurgy
First part and also the last one

After Part 1 was completed in 1992 I went to work on Part 2– Precision and Bias for Ore Reserves. It was coming along nicely until Barrick Gold asked me in December 1996 to look at Bre-X’s test results for gold in crushed core and Lakefield’s test results for gold in library core. The hypothesis that 2.9 m crushed core and 0.1 m library core were once part of the same 3.0 m whole core proved to be highly improbable. CIM’s statistically dysfunctional but otherwise qualified persons were not at all keen to know how Bre-X’s salting scam could have been avoided altogether. Surely, life after Bre-X couldn’t have been any more bizarre. But that’s another story altogether!

The ISO copyright office in Geneva, Switzerland, suggests that it holds the copyright to ISO/FDIS 12745:2007(E)–Precision and bias of mass measurement techniques. Yet, this ISO standard is a verbatim copy of Part 1–Precision and bias for mass measurement techniques. Part 1 is supposed to be protected by Canadian copyright. So what gives? Didn’t ISO have to ask permission to reprint? What’s this world coming to when ISO violated Canadian copyright in 2007 just as much as China did in 1992?

What Ed and I have decided to do is put together a paper on Metrology in Mineral Exploration. I want to present it at APCOM 2009 in Vancouver, BC. Home sweet home! Maybe I’ll talk Ed into coming home for a while. I’ll have to post an abstract before the deadline. By the way, APCOM stands for Applications of Computers and Operations Research in the Mineral Industry. Acronym talk does make a lot of sense, doesn’t it?

