Several pneumatic conveying questions posed on the bulkonline forum used the descriptions dense phase conveying and dilute phase conveying.
It seemed then that there was no general understanding of the definition of the two conveying regimes.
After the discussion on the forum, it became clear that the definition is related to the so-called Zenz diagram.
The Zenz diagram is widely accepted as a description of pneumatic conveying with explanatory properties.
Since the calculation of a Zenz diagram is now possible with an extensive computer program, it is also possible to investigate how the diagram is formed.
The calculation approach is described in the Bulkblog article “Pneumatic Conveying, Performance and Calculations!”. By varying the air flow at constant capacity, the resulting partial pressure drops were calculated and combined into a table.
The summation of the partial pressure drops results in the total pressure drop of the system under the chosen conditions.
Dividing the calculated pressure drops by the total length gives the pressure drop per meter.
This procedure could also be applied to partial pressure drops over partial lengths.
It could then be checked whether one part of the conveying pipeline is, for instance, in dense phase while another part is in dilute phase. This was not done for this article.
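A minimal sketch of this summation, using the partial pressure drops that appear later in this article for the 1.00 m3/sec column of the cement installation (horizontal length 71 m plus vertical length 28 m gives 99 m); the variable names are mine, for illustration only:

```python
# Sum the partial pressure drops to the total pressure drop of the
# system, then divide by the conveying length to get the pressure
# drop per meter. Values are the 1.00 m3/sec column of the table
# later in this article; length 99 m = 71 m horizontal + 28 m vertical.
partials = {
    "intake": 0.10, "acceleration": 3.12, "product": 9.37,
    "elevation": 2.34, "suspension": 4.90, "gas": 0.37, "filter": 0.09,
}
total_dp = round(sum(partials.values()), 2)   # total pressure drop
dp_per_meter = total_dp / 99.0                # pressure drop per meter
```

The computed total (20,29) matches the tabulated "total dp" for that column.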
Zenz diagram
The curve in the Zenz diagram represents pneumatic conveying as the pressure drop per unit of length as a function of the air flow (or air velocity).
For this curve the solids flow rate and the pipeline are kept constant.
This curve was calculated for a cement conveying pipeline.
The calculated values are given below:
cement 200 ton/hr, pipeline 12″, … meter

| Pumpvolume (m3/sec) | pressure | pressure / meter | kWh/ton | SLR (mu) |
|---|---|---|---|---|
| 0,8 | 24745 | 134 | 0,86 | 55,68 |
| 0,9 | 20475 | 111 | 0,82 | 49,49 |
| 1,0 | 18577 | 100 | 0,83 | 44,54 |
| 1,1 | 17295 | 93 | 0,86 | 40,49 |
| 1,13 | 17048 | 92 | 0,87 | 39,53 |
| 1,2 | 16428 | 89 | 0,90 | 37,12 |
| 1,3 | 15794 | 85 | 0,95 | 34,26 |
| 1,4 | 15333 | 83 | 0,99 | 31,81 |
| 1,5 | 15040 | 81 | 1,05 | 29,69 |
| 1,6 | 14819 | 80 | 1,10 | 27,84 |
| 2,0 | 14612 | 79 | 1,37 | 22,27 |
| 2,1 | 14680 | 79 | 1,44 | 21,21 |
| 2,2 | 14750 | 80 | 1,51 | 20,25 |
| 2,3 | 14875 | 80 | 1,59 | 19,37 |
| 2,4 | 15013 | 81 | 1,67 | 18,56 |
| 2,5 | 15171 | 82 | 1,76 | 17,82 |
| 3,0 | 16175 | 87 | 2,22 | 14,85 |
| 3,5 | 17460 | 94 | 2,76 | 12,73 |
| 4,0 | 18844 | 102 | 3,37 | 11,14 |
| 4,5 | 20340 | 110 | 4,05 | 9,90 |
| 5,0 | 21900 | 118 | 4,81 | 8,91 |
| 5,5 | 23540 | 127 | 5,65 | 8,10 |
| 6,0 | 25260 | 137 | 6,57 | 7,42 |
From 0.8 m3/sec to 2.0 m3/sec, the pressure drop decreases.
This can be explained by the stronger influence of the decreasing loading ratio, which outweighs the weaker influence of the increasing velocity (which by itself would increase the pressure drop per meter).
In addition, the residence time of the particles becomes shorter with increasing velocity, and the pressure drop required to keep the particles in suspension decreases.
From 2.0 m3/sec to 6.0 m3/sec, the pressure drop increases.
Here the weaker influences of the decreasing loading ratio and the decreasing suspension pressure drop are outweighed by the stronger influence of the increasing velocity, which increases the pressure drop per meter.
The lowest pressure drop per meter occurs at 2.0 m3/sec.
Left of this point of lowest pressure drop per meter, the pneumatic conveying is considered dense phase; to the right of this point, it is considered dilute phase.
As can be read from the calculation table, the loading ratio (mu) is higher on the left part of the curve than on the right part of the curve.
Regarding the energy consumption per ton conveyed, the lowest value occurs at 0.9 m3/sec.
This can be explained as follows:
The energy consumption per ton depends on the power required for the air flow (the solids flow rate is kept constant).
This required power is a function of (pressure * flow).
It appears that the minimum in pressure drop does not coincide with the lowest power demand of the air flow.
The lowest energy consumption per conveyed ton is reached at the point where the gain from the decreasing air flow (lower power demand) is cancelled out by the increasing pressure drop.
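This balance can be illustrated with a simplified power index proportional to (pressure * flow), using (flow, pressure) pairs from the table above. Note that this is only an illustration of why the two minima do not coincide; real compressor power is not exactly linear in pressure:

```python
# Simplified power index proportional to (pressure * flow): the air
# flow with the lowest index is the most energy-efficient operating
# point, even though the pressure-drop minimum lies at 2.0 m3/sec.
# (flow m3/sec, pressure) pairs copied from the cement table.
table = [(0.8, 24745), (0.9, 20475), (1.0, 18577), (1.1, 17295),
         (1.2, 16428), (1.5, 15040), (2.0, 14612), (2.5, 15171),
         (3.0, 16175)]
best_flow, _ = min(table, key=lambda fp: fp[0] * fp[1])
```

The index is lowest at 0.9 m3/sec, in line with the kWh/ton column of the table.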
The calculation for an air flow of 0.8 m3/sec indicated the beginning of sedimentation in the pipeline, due to the velocities becoming too low.
From this calculation, it can be concluded that a pneumatic conveying design for the lowest possible energy demand is also a design using the lowest possible air flow (or velocity).
The lowest possible velocities are also favorable with respect to particle degradation and component wear.
Contribution of partial pressure drops to the total pressure drop
To investigate the physical background of the shape of the Zenz diagram, a cement pressure conveying installation is assumed and calculated, whereby the partial pressure drops are recorded.
The installation is described by:
Horizontal conveying length = 71 m
Vertical conveying length = 28 m
Number of bends = 2
Pipe diameter = 243 mm (10”)
Capacity basis for Zenz diagram = 200 tons/hr
The compressor airflow is varied from 0.5 m3/sec to 3.0 m3/sec
The calculation results are presented in the following table.
Pressure drop in mbar/meter:

| Compressor flow in m3/sec | 0,50 | 0,55 | 0,60 | 0,65 | 0,70 |
|---|---|---|---|---|---|
| intake | 0,10 | 0,10 | 0,10 | 0,10 | 0,10 |
| acceleration | 0,62 | 1,03 | 1,27 | 1,42 | 1,61 |
| product | 14,33 | 12,26 | 10,70 | 9,58 | 9,63 |
| elevation | 5,73 | 5,00 | 4,45 | 4,01 | 3,65 |
| suspension | 21,46 | 16,93 | 14,10 | 12,06 | 10,35 |
| gas | 0,12 | 0,13 | 0,13 | 0,14 | 0,17 |
| filter | 0,02 | 0,03 | 0,03 | 0,04 | 0,04 |
| total dp | 42,39 | 35,48 | 30,78 | 27,35 | 25,55 |
| kWh/ton | 0,90 | 0,86 | 0,84 | 0,83 | 0,85 |
| SLR | 97,90 | 87,60 | 79,30 | 72,60 | 67,00 |

Remarks: sedimentation; sub turbulent flow passing into turbulent flow.
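As a check on the tables, the partial pressure drops in any column should add up to the tabulated total. A sketch for the 0,70 m3/sec column above:

```python
# Verify that the partial pressure drops of the 0.70 m3/sec column
# add up to the tabulated total dp (values copied from the table).
column = {"intake": 0.10, "acceleration": 1.61, "product": 9.63,
          "elevation": 3.65, "suspension": 10.35, "gas": 0.17,
          "filter": 0.04}
total_dp = round(sum(column.values()), 2)   # tabulated value: 25,55
```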
Pressure drop in mbar/meter:

| Compressor flow in m3/sec | 0,75 | 0,80 | 0,85 | 0,90 | 0,95 |
|---|---|---|---|---|---|
| intake | 0,10 | 0,10 | 0,10 | 0,10 | 0,10 |
| acceleration | 2,65 | 2,76 | 2,86 | 2,95 | 3,04 |
| product | 8,72 | 8,98 | 9,16 | 9,27 | 9,34 |
| elevation | 3,35 | 3,09 | 2,86 | 2,67 | 2,49 |
| suspension | 8,95 | 7,80 | 6,89 | 6,14 | 5,52 |
| gas | 0,19 | 0,22 | 0,25 | 0,29 | 0,33 |
| filter | 0,02 | 0,06 | 0,06 | 0,07 | 0,08 |
| total dp | 23,98 | 23,01 | 22,18 | 21,49 | 20,90 |
| kWh/ton | 0,87 | 0,89 | 0,92 | 0,96 | 0,99 |
| SLR | 62,20 | 58,20 | 54,30 | 51,40 | 48,60 |

Remarks: no sedimentation; turbulent flow.
Pressure drop in mbar/meter:

| Compressor flow in m3/sec | 1,00 | 1,25 | 1,50 | 2,00 | 2,10 |
|---|---|---|---|---|---|
| intake | 0,10 | 0,10 | 0,10 | 0,10 | 0,10 |
| acceleration | 3,12 | 3,55 | 4,01 | 4,96 | 5,16 |
| product | 9,37 | 9,11 | 8,53 | 7,22 | 6,96 |
| elevation | 2,34 | 1,79 | 1,45 | 1,06 | 1,01 |
| suspension | 4,90 | 3,33 | 2,45 | 1,59 | 1,49 |
| gas | 0,37 | 0,61 | 0,91 | 1,66 | 1,84 |
| filter | 0,09 | 0,14 | 0,20 | 0,35 | 0,39 |
| total dp | 20,29 | 18,64 | 17,65 | 16,95 | 16,95 |
| kWh/ton | 1,02 | 1,20 | 1,39 | 1,80 | 1,89 |
| SLR | 46,10 | 36,60 | 30,30 | 22,60 | 21,50 |

Remarks: no sedimentation; turbulent flow.
Pressure drop in mbar/meter:

| Compressor flow in m3/sec | 2,20 | 2,30 | 2,40 | 2,50 | 2,60 |
|---|---|---|---|---|---|
| intake | 0,10 | 0,10 | 0,10 | 0,10 | 0,10 |
| acceleration | 5,35 | 5,55 | 5,75 | 5,94 | 6,14 |
| product | 6,72 | 6,48 | 6,25 | 6,03 | 5,82 |
| elevation | 0,96 | 0,92 | 0,88 | 0,85 | 0,82 |
| suspension | 1,40 | 1,33 | 1,26 | 1,20 | 1,14 |
| gas | 2,02 | 2,20 | 2,39 | 2,59 | 2,79 |
| filter | 0,43 | 0,46 | 0,50 | 0,55 | 0,59 |
| total dp | 16,98 | 17,05 | 17,14 | 17,26 | 17,40 |
| kWh/ton | 1,99 | 2,08 | 2,18 | 2,29 | 2,39 |
| SLR | 20,60 | 19,70 | 18,80 | 18,10 | 17,40 |

Remarks: no sedimentation; turbulent flow.
Pressure drop in mbar/meter:

| Compressor flow in m3/sec | 2,70 | 2,80 | 2,90 | 3,00 |
|---|---|---|---|---|
| intake | 0,10 | 0,10 | | |
Lord Kelvin cool to assumptions

Lord Kelvin (William Thomson, 1824-1907) was a brilliant scientist and an innovative engineer. His honorific name is forever linked to the absolute temperature of zero degrees Kelvin. His work often called for all sorts of variables to be measured. Here's what he once said: "…when you can measure what you are speaking about, and express it in numbers, you know something about it, but when you cannot express it in numbers your knowledge is of the meagre and unsatisfactory kind…" Lord Kelvin's view struck a chord with me because of the Dutch truism "Meten is weten." It translates into something like "To measure is to know." It may have messed up a perfect rhyme but didn't impact good sense. It's a leitmotif in my life!

Lord Kelvin knew all about degrees Kelvin and degrees Celsius. But he couldn't have been conversant with degrees of freedom, because Sir Ronald A Fisher (1890-1962) was hardly his contemporary. Lord Kelvin might have wondered why geoscientists would rather assume than measure spatial dependence. Sir Ronald A Fisher could have verified spatial dependence by applying his ubiquitous F-test to the variance of a set of measured values and the first variance term of the ordered set. He may not have had time to apply that variant of his F-test because of his conflict with Karl Pearson (1857-1936). It was Fisher who, in 1928, added degrees of freedom to Pearson's chi-square distribution. Not all students need to know as much about Fisher's F-test as do those who study geosciences. The question is why geostatistically gifted geoscientists would rather assume spatial dependence than measure it. How do they figure out where orderliness in our own sample space of time dissipates into randomness? Sampling variograms, unlike semi-variograms, cannot be derived without counting degrees of freedom. So much concern about climate change and global warming. So little concern about sound sampling practices and proven statistical methods!
I derived sampling variograms for the set that underpins A 2000-Year Global Temperature Reconstruction Based on Non-Tree Ring Proxies. I downloaded the data that covers Year 16 to Year 1980, and derived corrected and uncorrected sampling variograms. The corrected sampling variogram takes into account the loss of degrees of freedom during reiteration. I transmitted both to Dr Craig Loehle, the author of this fascinating study. Excel spreadsheet templates on my website show how to derive uncorrected and corrected sampling variograms.

Uncorrected sampling variogram

Spatial dependence in this uncorrected sampling variogram dissipates into randomness at a lag of 394 years. The variance of the set gives 95% CI = ±1 centigrade between consecutive years. The first variance term of the ordered set gives 95% CI = ±0.1 centigrade between consecutive years.

Corrected sampling variogram

Spatial dependence in the corrected sampling variogram dissipates into randomness at a lag of 294 years. It is possible to derive 95% confidence intervals anywhere within this lag.

Sampling variograms are part of my story about the junk statistics behind what was once called Matheron's new science of geostatistics. I want to explain its role in mineral reserve and resource estimation in the mining industry, but even more so in measuring climate change and global warming. Classical statistics turned into junk statistics under the guidance of Professor Dr Georges Matheron (1930-2000), a French probabilist who turned into a self-made wizard of odd statistics. A brief history of Matheronian geostatistics is posted on my blog. My 20-year campaign against the geostatocracy and its army of degrees of freedom fighters is chronicled on my website. Agterberg ranked Matheron on a par with giants of mathematical statistics such as Sir Ronald A Fisher (1890-1962) and Professor Dr J W Tukey (1915-2000). Agterberg was wrong!
Matheron fumbled the variance of the length-weighted average grade of core samples of variable lengths in 1954. Agterberg himself fumbled the variance of his own distance-weighted average point grade in his 1970 Autocorrelation Functions in Geology, and again in his 1974 Geomathematics. Agterberg seems to believe it's too late to reunite his distance-weighted average point grade and its long-lost variance. I disagree, because it's never too late to right a wrong. What he did do was change the International Association of Mathematical Geology into the International Association for Mathematical Geosciences. Of course, geoscientists do bring in more dollars and cents than did geologists alone. I'm trying to make a clear and concise case that sound sampling practices and proven statistical methods ought to be taught at all universities on this planet. Time will tell whether or not such institutions of higher learning agree that functions do have variances, and that Agterberg's distance-weighted average point grade is no exception!

Bacterial heating of cereals and meals
Reading the article “Wood Pellet Combustible Dust Incidents” by John Astad, I remembered the following.
All biological products are subject to deterioration. This deterioration is caused by micro-organisms (bacteria and micro-flora). To prevent bacterial deterioration, it is necessary to condition the circumstances in such a way that micro-organisms cannot grow. This can be done:
1) By killing the micro-organisms through sterilization, pasteurization or conservation. In transport, gassing with methyl bromide is also common, but not without danger.
2) By creating an environment in which micro-organisms cannot develop, for instance by adding acids, salt or sugar, or by drying and cooling.
In storing cereals, grains, seeds, and derivatives, drying is the most commonly used method to prevent bacterial heating. To prevent bacterial deterioration, those materials need to be DRY before storing.

To have or not to have variances

Not a word from CRIRSCO's Chairman. I just want to know whether or not functions do have variances at Rio Tinto's operations. Surely, Weatherstone wouldn't toss a coin to make up his mind, would he? My functions do have variances. I work with central values such as arithmetic means and all sorts of weighted averages. It would be off the wall if the variance were stripped off any of those functions. But that's exactly what had come to pass in Agterberg's work. I've tried to find out what fate befell the variance of the distance-weighted average. I did find out who lost what and when. And it was not pretty in the early 1990s! When Matheron's seminal work was posted on the web, it became bizarre. The geostatistocrats turned silent and resolved to protect their turf and evade the question. They do know what's true and what's false. And I know scientific truth will prevail in the end.

Agterberg talked about his distance-weighted average point grade for the first time during a geostatistics colloquium on campus at The University of Kansas in June 1970. He did so in his paper on Autocorrelation functions in geology. The caption under Figure 1 states: “Geologic prediction problem: values are known for five irregularly spaced points P_1–P_5. Value at P_0 is unknown and to be predicted from five unknown values.”

Agterberg's 1970 Figure 1 and 1974 Figure 64

Agterberg's 1970 sample space became Figure 64 in Chapter 10, Stationary Random Variables and Kriging, of his 1974 Geomathematics. Now his caption states, “Typical kriging problem, values are known at five points. Problem is to estimate value at point P_0 from the known values at P_1–P_5”.
Agterberg seemed to imply that his 1970 geologic prediction problem and his 1974 typical kriging problem differ in some way. Yet he applied the same function to derive his predicted value as well as his estimated value. His symbols suggest a matrix notation in both his paper and his textbook. The following function sums the products of weighting factors and measured values to obtain Agterberg's distance-weighted average point grade.

Agterberg's distance-weighted average
Agterberg's distance-weighted average point grade is a function of his set of measured values. That's why the central value of this set of measured values does have a variance in classical statistics. Agterberg did work with the Central Limit Theorem in a few chapters of his 1974 Geomathematics. Why then is this theorem nowhere to be found in Chapter 10, Stationary Random Variables and Kriging? All the more so because this theorem can be traced back to the work of Abraham de Moivre (1667-1754). David mentioned the “famous” Central Limit Theorem in his 1977 Geostatistical Ore Reserve Estimation. He didn't deem it quite famous enough to either work with it or to list it in his Index. Neither did he grasp why the central limit theorem is the quintessence of sampling theory and practice. Agterberg may have fumbled the variance of the distance-weighted average point grade because he fell in with the self-made masters of junk statistics. What a pity he didn't talk with Dr Jan Visman before completing his 1974 opus.

The next function gives the variance of Agterberg's distance-weighted average point grade. As such, it defines the Central Limit Theorem as it applies to Agterberg's central value. I should point out that this central value is in fact the zero-dimensional point grade for Agterberg's selected position P_0.

Agterberg's long-lost variance

Agterberg worked with symbols rather than measured values. Otherwise, Fisher's F-test could have been applied to test for spatial dependence in the sample space defined by his set. This test verifies whether var(x), the variance of a set, and var1(x), the first variance term of the ordered set, are statistically identical or differ significantly. The above function shows the first variance term of the ordered set. In Section 12.2, Conditional Simulation, of his 1977 work, David brought up some infinite set of simulated values. What he talked about was Agterberg's infinite set of zero-dimensional, distance-weighted average point grades.
I do miss some ISO Standard on Mineral Reserve and Resource Estimation where a word means what it says, and where text, context and symbols make for an unambiguous read. But I digress, as we tend to do in our family. Do CRIRSCO's Chairman and his Crirsconians know that our sun will have bloated to a red giant and scorched Van Gogh's Sunflowers to a crisp long before Agterberg's infinite set of zero-dimensional point grades is tallied? And I don't want to get going on the immeasurable odds of selecting the least biased subset of some infinite set. Weatherstone should contact the International Association of Mathematical Geosciences and request IAMG's President to bring back together his distance-weighted average and its long-lost variance. That's all. At least for now!

Fighting factoids with facts

Niall Weatherstone of Rio Tinto and Larry Smith of Vale Inco have been asked to study a geostatistical factoid and a statistical fact. I asked them to do so by email on July 8, 2008. Next time they chat, I want them to discuss whether or not geostatistics is an invalid variant of classical statistics. I've asked Weatherstone to transmit my question to all members of his team. CRIRSCO's Chairman has yet to confirm whether he did or not. I just want to bring to the attention of his Crirsconians my ironclad case against the junk science of geostatistics. Not all Crirsconians assume, krige, and smooth quite as much as do Parker and Rendu. The problem is that nobody grasps how to derive unbiased confidence intervals and ranges for contents and grades of reserves and resources. Otherwise, Weatherstone would have blown his horn when he talked to Smith. A few geostatistical authors referred by chance to statistical facts. Nobody has responded to my questions about geostatistical factoids. The great debate between Shurtz and Parker got nowhere because the question of why kriging variances “drop off” was never raised. So I'll take my turn at explaining the rise and fall of kriging variances.
In the 1990s I didn't speak geostat quite as well as did those who assume, krige and smooth. I did assume Matheron knew what he was writing about, but he didn't. Bre-X proved it makes no sense to infer gold mineralization between salted boreholes. The Bre-X fraud taught me more about assuming, kriging, and smoothing than I wanted to know. And I wasn't taught to blather with confidence about confidence without limits. It reminds me of another story I'll have to blog about some other day. It's easy to take off on a tangent, because I have so many factoids and facts to pick and choose from. Functions have variances is a statistical fact I've quoted to Weatherstone and Smith. Not all functions have variances I cited as a geostatistical factoid. Factoid and fact are mutually exclusive but not equiprobable. One-to-one correspondence between functions and variances is a conditio sine qua non in classical statistics. Therefore, factoid and fact have as much in common as do a stuffed dodo and a soaring eagle. My opinion on the role of classical statistics in reserve and resource estimation is necessarily biased. The very function that should never have been stripped of its variance is the distance-weighted average. For this central value is in fact a zero-dimensional point grade. All the same, its variance was stripped off twice on Agterberg's watch. David did refer to “the famous central limit theorem.” What he didn't mention is that the central limit theorem defines not only the variance of the arithmetic mean of a set of measured values with equal weights, but also the variance of the weighted average of a set of measured values with variable weights. It doesn't matter that a weighted average is called an honorific kriged estimate. What does matter is that the kriged estimate has been stripped of its variance. Two or more test results for samples taken at positions with different coordinates in a finite sample space give an infinite set of distance-weighted average point grades.
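The statistical fact invoked here can be written out. For independent measured values with common variance sigma² and weights that sum to one, the variance of the weighted average is sigma² times the sum of squared weights; equal weights recover the familiar sigma²/n of the arithmetic mean. A minimal sketch with hypothetical numbers (the function name is mine):

```python
# Variance of a weighted average under the central limit theorem:
# var(sum(w_i * x_i)) = sigma^2 * sum(w_i^2) for independent x_i with
# common variance sigma^2 and weights w_i summing to 1.
def weighted_average_variance(weights, sigma2):
    assert abs(sum(weights) - 1.0) < 1e-9   # weights must sum to one
    return sigma2 * sum(w * w for w in weights)

# Equal weights over n = 5 values reduce to sigma^2 / n:
v_equal = weighted_average_variance([0.2] * 5, sigma2=4.0)   # ~ 4/5
# Variable (distance-based) weights still yield a variance:
v_weighted = weighted_average_variance([0.4, 0.3, 0.15, 0.1, 0.05], 4.0)
```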
The catch is that not a single distance-weighted average point grade in an infinite set has its own variance. So Matheron's disciples had no choice but to contrive the surreal kriging variance of some subset of an infinite set of kriged estimates. That set the stage for a mad scramble to write the very first textbook on a fatally flawed variant of classical statistics. Step-out drilling at Busang's South East Zone produced nine (9) salted holes on SEZ-44 and eleven (11) salted holes on SEZ-49. Interpolation by kriging gave three (3) lines with nine (9) kriged holes each. Following is the Y-X plot for Bre-X's salted and kriged holes.
Fisher's F-test is applied to verify spatial dependence. The test compares the observed F-value between the variance of a set and the first variance term of the ordered set with tabulated F-values at different probability levels and with the applicable degrees of freedom. Neither set of salted holes displays a significant degree of spatial dependence. By contrast, the observed F-values for the sets of kriged holes seem to imply a high degree of spatial dependence. If I didn't know kriged holes were functions of salted holes, then I would infer a high degree of spatial dependence between kriged holes but randomness between salted holes. Surely, it's divine to create order where chaos rules! But do Crirsconians ever wonder about Excel functions such as CHIINV, FINV, and TINV? Wouldn't Weatherstone want to have a metallurgist with a good grasp of classical statistics on his team?
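The test described here can be sketched in a few lines. The ordered grades below are made up for illustration, and the final comparison against tabulated F-values at a chosen probability level is left out:

```python
# Spatial-dependence test: compare var(x), the variance of the set,
# with var1(x), the first variance term of the ordered set (half the
# mean squared difference between consecutive values). An observed
# F = var(x) / var1(x) well above the tabulated F-value implies
# significant spatial dependence.
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def first_variance_term(ordered):
    diffs = [(b - a) ** 2 for a, b in zip(ordered, ordered[1:])]
    return sum(diffs) / (2 * len(diffs))

ordered = [1.0, 1.1, 1.3, 1.2, 1.5, 1.6, 1.8]   # made-up ordered grades
f_observed = variance(ordered) / first_variance_term(ordered)
```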
High variances give low degrees of precision. I like to work with confidence intervals in relative percentages because it is easy to compare precision estimates at a glance. SEZ-44 gives 95% CI = ±23.5%rel, whereas SEZ-49 gives 95% CI = ±26.4%rel. By contrast, low variances give high degrees of precision. Three (3) lines of kriged holes give confidence intervals of 95% CI = ±0.8%rel to 95% CI = ±1.6%rel. Crirsconians should know not only how to verify spatial dependence by applying Fisher's F-test, but also how to count degrees of freedom. Kriging variances cannot help but go up and down like yo-yos!

Going GIGO with CRIRSCO

Snappy acronyms add spice to the way we blog and talk. GIGO has been tagging along with computing science without losing its punch. CRIRSCO is but one tongue-twisting tour de force, for Combined Reserves International Reporting Standards Committee. Its Chairman is Niall Weatherstone of Rio Tinto. Larry Smith of Vale Inco asked Weatherstone about Setting International Standards. Weatherstone said CRIRSCO was set up in 1993, but its website says it was 1994. CRIRSCO's website makes a tough read because of its dreadfully long lines. So what have Weatherstone and his Crirsconians been doing during all those years? Smith should have but didn't ask what CRIRSCO has accomplished. It would seem some sort of semi-international reporting template has been set up. The problem is that the Russian Federation has a code of its own, and China's is sort of similar. As it stands, Crirsconians have yet to develop valuation codes for mineral properties. At the present pace, valuation codes that give unbiased confidence limits for contents and grades of reserves and resources might be ready in 2020, the year of perfect vision. They had better be based on classical statistics! Here's what was happening in my life when CRIRSCO came about, either in 1993 or in 1994. I talked to CIM Members in Vancouver, BC, about the use and abuse of statistics in ore reserve estimation.
Bre-X Minerals raised money to acquire the Busang property. Clark wanted me to go from Zero to Kriging in 30 Hours at the Mackay School of Mines. I didn't go because her semi-variograms are rubbish. The international forum on Geostatistics for the Next Century at McGill University didn't want to hear about The Properties of Variances. David S Robertson, PhD, PEng, CIM President, failed to “… find support for your desire to debate.” What irked me was Jean-Michel Rendu's 1994 Jackling Lecture on Mining geostatistics – Forty years passed. What lies ahead? He rambled on about “…an endless list of other ‘kriging’ methods…” and prophesied that geostatistics “… is here to stay with all its strengths and weaknesses.” At that time, Rendu knew about infinite sets of kriged estimates and zero kriging variances. Rendu's lecture stood in sharp contrast to A Geostatistical Monograph of The Mining and Metallurgical Society of America. Robert Shurtz, a mining engineer and a friend of mine, wrote The Geostatistics Machine and the Drill Core Paradox. Harry Parker, a Stanford-bred geostat sage, was to find fault in Shurtz's work. This great debate got nowhere because neither grasped the properties of variances. Otherwise, both of them could have put in plain words why kriging variances drop off. A few of Parker's geostat pals had already found out why in 1989. Figure 2 is rather odd in the sense that, “The kriging variance rises up to a maximum and then drops off.” That's precisely what Armstrong and Champigny wrote in A Study of Kriging Small Blocks, published in CIM Bulletin of March 1989. What I saw kriging variances do is what real variances never do. Armstrong and Champigny alleged kriging variances drop off because mine planners oversmooth small blocks. More research brought to light that kriged block estimates and actual grades were “uncorrelated.” That would make a random number generator of sorts for kriged block grades.
It was David himself who approved that blatant nonsense for publication in CIM Bulletin.
Figure 2 gives kriging variances as a function of variogram ranges. As such, it was more telling than Parker’s. Neither Shurtz nor Parker scrutinized Armstrong and Champigny’s 1989 A Study of Kriging Small Blocks. Otherwise, Shurtz might have pointed out Parker’s kriging variances looked a touch oversmoothed. Neither did Parker confess he does oversmooth the odd time.
Corrected and uncorrected sampling variograms for Bre-X's bonanza-grade borehole BSSE198 show where spatial dependence between bogus gold grades of crushed, salted and ordered core samples from this borehole dissipates into randomness. The adjective “corrected” implies that the variance of selecting a test portion of a crushed and salted core sample, and the variance of analyzing such a test portion, are extraneous to the in situ variance of gold in Bre-X's Busang resource. Subtracting the sum of extraneous variances gives an unbiased estimate for the intrinsic variance of bogus gold in Busang's phantom gold resource. Fisher's F-test proved this intrinsic variance to be statistically identical to zero.
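The correction described here amounts to a plain subtraction of variances; a sketch with hypothetical numbers (the function name and values are mine, for illustration only):

```python
# Subtract the extraneous variances (test-portion selection and
# analysis) from the total variance to estimate the intrinsic
# (in situ) variance. All numbers are hypothetical.
def intrinsic_variance(total_var, extraneous_vars):
    return total_var - sum(extraneous_vars)

v = intrinsic_variance(0.25, [0.12, 0.11])
# if v is statistically identical to zero (by the F-test), the ordered
# grades carry no intrinsic spatial signal
```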
Harry Parker and Jean-Michel Rendu appear to speak for the Society for Mining, Metallurgy and Exploration (SME) in the USA. What it takes to cook up ballpark reserves and resources are soothsayers who know how to failingly infer mineralization between boreholes, hard-core krigers and cocksure smoothers. What CRIRSCO ought to have done after the Bre-X fraud is set up an ISO Technical Committee on reserve and resource estimation. It's never too late to do it! GIGO may be a bit dated, but Garbage In does stand the test of time. Nowadays, Good Graphics Bad Statistics Out is a much more likely outcome. What a pity that GIGGBSO lacks GIGO's punch!

Pneumatic Unloaders: Problems to Avoid

Terminals and factories receiving their (raw) materials by ship operate unloaders. One category of unloaders is the pneumatic unloader. Although the unloading does not belong to the core business of the company, it can be considered an umbilical cord to the company's process or trade. Without incoming materials there will be no end product and no sales. A stevedoring company will even cease to perform immediately. Owners of such installations should be aware of the possible impact on their day-to-day operations and the possible risk in case of failures, and should therefore evaluate the offers for their installations with great care. Purchasing under-quality or under-designed and -built units will create unpleasant problems (and costs) later on. In those cases where a pneumatic unloader does not fulfill the specified expectations, the following causes are possible:
Ad 1) The installation does not reach the design specifications. In case the capacity is not reached, this could be influenced by:
In case the specified energy consumption is not met, this could be influenced by:
Operational influences affecting the performance could be caused by:
Ad 2) Frequent breakdowns
Design specifications: The design specifications are the values against which the performance of a pneumatic ship unloader has to be compared. The design specifications are the result of a set of considerations in terms of:

Economic basis:
Environmental basis
Technical basis
Capacity / energy consumption

Pneumatic design
