Dense-phase or dilute-phase pneumatic conveying

On the bulk-online forum, several pneumatic conveying questions have been posed using the descriptions dense-phase conveying and dilute-phase conveying.
It appeared that there was no general understanding of the definition of the two conveying regimes.
After the discussion on the forum, it became clear that the definition is related to the so-called Zenz diagram.
The Zenz diagram is widely accepted as a description of pneumatic conveying with explanatory properties.
Since a Zenz diagram can now be calculated by an extensive computer program, it is also possible to investigate how the diagram is formed.

The calculation approach is described in the Bulkblog article “Pneumatic Conveying, Performance and Calculations!”. By varying the air flow at constant capacity, the resulting partial pressure drops were calculated and combined into a table.
The summation of the partial pressure drops results in the total pressure drop of the system under the chosen conditions.
Dividing the calculated pressure drops by the total length, the pressure drop per meter is derived.

This procedure could also be refined to partial pressure drops over partial lengths.
Then it could be checked whether one part of the conveying pipeline is, for instance, in dense phase while another part is in dilute phase. This was not done for this article.

Zenz diagram 

The curve in the Zenz diagram represents pneumatic conveying as the pressure drop per unit of length as a function of the air flow (or air velocity).

For this curve the solids flow rate and the pipeline are kept constant.

This curve was calculated for a cement conveying pipeline.

The calculation results are given below:

cement: 200 ton/hr
pipeline: 12″, … meter

Pumpvolume      pressure        pressure/meter  kWh/ton         mu (SLR)
(m3/sec)
0,8             24745           134             0,86            55,68
0,9             20475           111             0,82            49,49
1,0             18577           100             0,83            44,54
1,1             17295            93             0,86            40,49
1,13            17048            92             0,87            39,53
1,2             16428            89             0,90            37,12
1,3             15794            85             0,95            34,26
1,4             15333            83             0,99            31,81
1,5             15040            81             1,05            29,69
1,6             14819            80             1,10            27,84
2,0             14612            79             1,37            22,27
2,1             14680            79             1,44            21,21
2,2             14750            80             1,51            20,25
2,3             14875            80             1,59            19,37
2,4             15013            81             1,67            18,56
2,5             15171            82             1,76            17,82
3,0             16175            87             2,22            14,85
3,5             17460            94             2,76            12,73
4,0             18844           102             3,37            11,14
4,5             20340           110             4,05             9,90
5,0             21900           118             4,81             8,91
5,5             23540           127             5,65             8,10
6,0             25260           137             6,57             7,42

 

From 0.8 m3/sec to 2.0 m3/sec, the pressure drop decreases.

This can be explained as the stronger influence of the decreasing loading ratio, as opposed to the weaker influence of the increasing velocity, which would increase the pressure drop per meter.

In addition, the residence time of the particles becomes shorter with increasing velocity and the required pressure drop for keeping the particles in suspension decreases.

From 2.0 m3/sec to 6.0 m3/sec, the pressure drop increases.

This can be explained as the weaker influence of the decreasing loading ratio and the decreasing pressure drop for keeping the particles in suspension, as opposed to the stronger influence of the increasing velocity, which increases the pressure drop per meter.

The lowest pressure drop per meter occurs at 2.0 m3/sec.

Left of this point of lowest pressure drop per meter, the pneumatic conveying is considered dense phase; to the right of this point, the pneumatic conveying is considered dilute phase.

As can be read from the calculation table, the loading ratio (mu) is higher on the left part of the curve than on the right part of the curve.

Regarding the energy consumption per ton conveyed, the lowest value occurs at 0.9 m3/sec.

This can be explained as follows:

The energy consumption per ton depends on the required power for the air flow
(the solids flow rate is kept constant).

This required power is determined as a function of (pressure * flow ).

It appears that the minimum in pressure drop does not coincide with the lowest power demand of the air flow.

As soon as the decreasing airflow (causing lower power demand) is compensated by the increasing pressure drop, the lowest energy consumption per conveyed ton is reached. 
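
To make the two optima explicit, here is a minimal sketch in Python that locates both minima in the calculation table above. Only a few rows are repeated for brevity, and the decimal commas of the table are rendered as points:

```python
# Each tuple is (pump volume m3/sec, pressure, pressure/meter, kWh/ton, SLR);
# values are copied from the calculation table above (subset only).
rows = [
    (0.8, 24745, 134, 0.86, 55.68),
    (0.9, 20475, 111, 0.82, 49.49),
    (1.0, 18577, 100, 0.83, 44.54),
    (2.0, 14612,  79, 1.37, 22.27),
    (2.1, 14680,  79, 1.44, 21.21),
    (6.0, 25260, 137, 6.57,  7.42),
]

zenz_minimum   = min(rows, key=lambda r: r[2])  # lowest pressure drop per meter
energy_minimum = min(rows, key=lambda r: r[3])  # lowest kWh per conveyed ton

print(zenz_minimum[0])    # 2.0 m3/sec -> dense/dilute boundary
print(energy_minimum[0])  # 0.9 m3/sec -> lowest energy consumption per ton
```

The two minima fall at different air flows, which is exactly the point made above: required power follows (pressure * flow), so the energy optimum sits left of the Zenz minimum.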

The calculation for an air flow of  0.8 m3/sec indicated the beginning of sedimentation in the pipeline, due to the velocities becoming too low.

From this calculation, it can be concluded that a pneumatic conveying design for the lowest possible energy demand is also a design using the lowest possible air flow (or velocity).

The lowest possible velocities are also favorable for limiting particle degradation and component wear.

 

Contribution of partial pressure drops to the total pressure drop

To investigate the physical background of the shape of the Zenz diagram, a cement pressure conveying installation is assumed and calculated, whereby the partial pressure drops are recorded.

The installation is described by:

Horizontal conveying length              =         71        m

Vertical conveying length                  =         28        m

Number of bends                               =         2

Pipe diameter                                     =         243      mm (10”)

Capacity basis for Zenz diagram        =         200      tons/hr

The compressor airflow is varied from 0.5 m3/sec to 3.0 m3/sec.

The calculation results are presented in the following table.

Compressor flow in m3/sec        0,50     0,55     0,60     0,65     0,70

Pressure drop mbar/meter:
intake                           0,10     0,10     0,10     0,10     0,10
acceleration                     0,62     1,03     1,27     1,42     1,61
product                         14,33    12,26    10,70     9,58     9,63
elevation                        5,73     5,00     4,45     4,01     3,65
suspension                      21,46    16,93    14,10    12,06    10,35
gas                              0,12     0,13     0,13     0,14     0,17
filter                           0,02     0,03     0,03     0,04     0,04

total dp                        42,39    35,48    30,78    27,35    25,55
kWh/ton                          0,90     0,86     0,84     0,83     0,85
SLR                             97,90    87,60    79,30    72,60    67,00

Remarks: Sedimentation; Sub turbulent flow; Turbulent flow

 

Compressor flow in m3/sec        0,75     0,80     0,85     0,90     0,95

Pressure drop mbar/meter:
intake                           0,10     0,10     0,10     0,10     0,10
acceleration                     2,65     2,76     2,86     2,95     3,04
product                          8,72     8,98     9,16     9,27     9,34
elevation                        3,35     3,09     2,86     2,67     2,49
suspension                       8,95     7,80     6,89     6,14     5,52
gas                              0,19     0,22     0,25     0,29     0,33
filter                           0,02     0,06     0,06     0,07     0,08

total dp                        23,98    23,01    22,18    21,49    20,90
kWh/ton                          0,87     0,89     0,92     0,96     0,99
SLR                             62,20    58,20    54,30    51,40    48,60

Remarks: No sedimentation; Turbulent flow

 

 

 

 

Compressor flow in m3/sec        1,00     1,25     1,50     2,00     2,10

Pressure drop mbar/meter:
intake                           0,10     0,10     0,10     0,10     0,10
acceleration                     3,12     3,55     4,01     4,96     5,16
product                          9,37     9,11     8,53     7,22     6,96
elevation                        2,34     1,79     1,45     1,06     1,01
suspension                       4,90     3,33     2,45     1,59     1,49
gas                              0,37     0,61     0,91     1,66     1,84
filter                           0,09     0,14     0,20     0,35     0,39

total dp                        20,29    18,64    17,65    16,95    16,95
kWh/ton                          1,02     1,20     1,39     1,80     1,89
SLR                             46,10    36,60    30,30    22,60    21,50

Remarks: No sedimentation; Turbulent flow

 

 

 

 

Compressor flow in m3/sec        2,20     2,30     2,40     2,50     2,60

Pressure drop mbar/meter:
intake                           0,10     0,10     0,10     0,10     0,10
acceleration                     5,35     5,55     5,75     5,94     6,14
product                          6,72     6,48     6,25     6,03     5,82
elevation                        0,96     0,92     0,88     0,85     0,82
suspension                       1,40     1,33     1,26     1,20     1,14
gas                              2,02     2,20     2,39     2,59     2,79
filter                           0,43     0,46     0,50     0,55     0,59

total dp                        16,98    17,05    17,14    17,26    17,40
kWh/ton                          1,99     2,08     2,18     2,29     2,39
SLR                             20,60    19,70    18,80    18,10    17,40

Remarks: No sedimentation; Turbulent flow

 

 

 

 

Compressor flow in m3/sec        2,70     2,80     2,90     3,00

Pressure drop mbar/meter:
intake                           0,10     0,10
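
As a quick consistency check on these tables, the partial pressure drops of any column should sum to the listed total. A minimal sketch for the 0,70 m3/sec column of the first table (decimal commas rendered as points):

```python
# Sum the partial pressure drops for the 0,70 m3/sec column and compare
# with the tabulated total; all values are copied from the table above.
partial = {
    "intake": 0.10, "acceleration": 1.61, "product": 9.63,
    "elevation": 3.65, "suspension": 10.35, "gas": 0.17, "filter": 0.04,
}
total_dp = sum(partial.values())
assert abs(total_dp - 25.55) < 0.01   # matches the listed total dp
```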

Lord Kelvin cool to assumptions

Lord Kelvin (William Thomson, 1824-1907) was a brilliant scientist and an innovative engineer. His honorific name is forever linked to the absolute temperature of zero degrees Kelvin. His work often called for all sorts of variables to be measured. Here’s what he once said, “…when you can measure what you are speaking about, and express it in numbers, you know something about it, but when you cannot express it in numbers your knowledge is of the meagre and unsatisfactory kind…” Lord Kelvin’s view struck a chord with me because of the Dutch truism, “Meten is weten.” It translates into something like, “To measure is to know.” It may have messed up a perfect rhyme but didn’t impact good sense. It’s a leitmotif in my life!

Lord Kelvin knew all about degrees Kelvin and degrees Celsius. But he couldn’t have been conversant with degrees of freedom because Sir Ronald A Fisher (1890-1962) was hardly his contemporary. Lord Kelvin might have wondered why geoscientists would rather assume than measure spatial dependence. Sir Ronald A Fisher could have verified spatial dependence by applying his ubiquitous F-test to the variance of a set of measured values and the first variance term of the ordered set. He may not have had time to apply that variant of his F-test because of his conflict with Karl Pearson (1857-1936). It was Fisher in 1928 who added degrees of freedom to Pearson’s chi-square distribution.

Not all students need to know as much about Fisher’s F-test as do those who study geosciences. The question is why geostatistically gifted geoscientists would rather assume spatial dependence than measure it. How do they figure out where orderliness in our own sample space of time dissipates into randomness? Sampling variograms, unlike semi-variograms, cannot be derived without counting degrees of freedom. So much concern about climate change and global warming. So little concern about sound sampling practices and proven statistical methods!

I derived sampling variograms for the set that underpins A 2000-Year Global Temperature Reconstruction based on Non-Tree Ring Proxies. I downloaded the data that covers Year 16 to Year 1980, and derived corrected and uncorrected sampling variograms. The corrected sampling variogram takes into account the loss of degrees of freedom during reiteration. I transmitted both to Dr Craig Loehle, the author of this fascinating study. Excel spreadsheet templates on my website show how to derive uncorrected and corrected sampling variograms.

Uncorrected sampling variogram

Spatial dependence in this uncorrected sampling variogram dissipates into randomness at a lag of 394 years. The variance of the set gives 95% CI = +/-1 centigrade between consecutive years. The first variance term of the ordered set gives 95% CI = +/-0.1 centigrade between consecutive years.

Corrected sampling variogram

Spatial dependence in the corrected sampling variogram dissipates into randomness at a lag of 294 years. It is possible to derive 95% confidence intervals anywhere within this lag.

Sampling variograms are part of my story about the junk statistics behind what was once called Matheron’s new science of geostatistics. I want to explain its role in mineral reserve and resource estimation in the mining industry but even more so in measuring climate change and global warming. Classical statistics turned into junk statistics under the guidance of Professor Dr Georges Matheron (1930-2000), a French probabilist who turned into a self-made wizard of odd statistics. A brief history of Matheronian geostatistics is posted on my blog. My 20-year campaign against the geostatocracy and its army of degrees of freedom fighters is chronicled on my website. Agterberg ranked Matheron on a par with giants of mathematical statistics such as Sir Ronald A Fisher (1890-1962) and Professor Dr J W Tukey (1915-2000). Agterberg was wrong! Matheron fumbled the variance of the length-weighted average grade of core samples of variable lengths in 1954. Agterberg himself fumbled the variance of his own distance-weighted average point grade in his 1970 Autocorrelation Functions in Geology and again in his 1974 Geomathematics.

Agterberg seems to believe it’s too late to reunite his distance-weighted average point grade and its long-lost variance. I disagree because it’s never too late to right a wrong. What he did do was change the International Association of Mathematical Geology into the International Association for Mathematical Geosciences. Of course, geoscientists do bring in more dollars and cents than did geologists alone. I’m trying to make a clear and concise case that sound sampling practices and proven statistical methods ought to be taught at all universities on this planet. Time will tell whether or not such institutions of higher learning agree that functions do have variances, and that Agterberg’s distance-weighted average point grade is no exception!

Bacterial heating of cereals and meals

 

Reading the article “Wood Pellet Combustible Dust Incidents” by John Astad, I remembered the following.

 

All biological products are subject to deterioration.

This deterioration is caused by micro-organisms (bacteria and micro-flora).

To prevent bacterial deterioration, it is necessary to condition the circumstances in such a way that micro-organisms cannot grow.

 

1)      By killing the micro-organisms through sterilization, pasteurization or conservation. For transport, gassing with methyl bromide is also common, but not without danger.

2)      By creating an environment in which micro-organisms cannot develop, for instance by adding acids, salt or sugar, or by drying and cooling.

 

In storing cereals, grains, seeds, and derivatives, drying is the most-used method to prevent bacterial heating.

To prevent bacterial deterioration, those materials need to be DRY before storing.


To have or not to have variances

Not a word from CRIRSCO’s Chairman. I just want to know whether or not functions do have variances at Rio Tinto’s operations. Surely, Weatherstone wouldn’t toss a coin to make up his mind, would he? My functions do have variances. I work with central values such as arithmetic means and all sorts of weighted averages. It would be off the wall if the variance were stripped of any of those functions. But that’s exactly what had come to pass in Agterberg’s work. I’ve tried to find out what fate befell the variance of the distance-weighted average. I did find out who lost what and when. And it was not pretty in the early 1990s! When Matheron’s seminal work was posted on the web it became bizarre. The geostatistocrats turned silent and resolved to protect their turf and evade the question. They do know what’s true and what’s false. And I know scientific truth will prevail in the end.

Agterberg talked about his distance-weighted average point grade for the first time during a geostatistics colloquium on campus at The University of Kansas in June 1970. He did so in his paper on Autocorrelation functions in geology. The caption under Figure 1 states: “Geologic prediction problem: values are known for five irregularly spaced Points P1 –P5. Value at P0 is unknown and to be predicted from five unknown values.”

Agterberg’s 1970 Figure 1 and 1974 Figure 64

Agterberg’s 1970 sample space became Figure 64 in Chapter 10, Stationary Random Variables and Kriging, of his 1974 Geomathematics. Now his caption states, “Typical kriging problem, values are known at five points. Problem is to estimate value at point P0 from the known values at P1 –P5”. Agterberg seemed to imply that his 1970 geologic prediction problem and his 1974 typical kriging problem differ in some way. Yet, he applied the same function to derive his predicted value as well as his estimated value. His symbols suggest a matrix notation in both his paper and textbook.

The following function sums the products of weighting factors and measured values to obtain Agterberg’s distance-weighted average point grade.

Agterberg’s distance-weighted average
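
In conventional notation (a reconstruction, since the original figure is not reproduced here), with weights $w_i$ that sum to unity and decrease with the distance from $P_i$ to $P_0$:

$$\hat{z}(P_0) = \sum_{i=1}^{5} w_i \, z(P_i), \qquad \sum_{i=1}^{5} w_i = 1$$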

Agterberg’s distance-weighted average point grade is a function of his set of measured values. That’s why the central value of this set of measured values does have a variance in classical statistics. Agterberg did work with the Central Limit Theorem in a few chapters of his 1974 Geomathematics. Why then is this theorem nowhere to be found in Chapter 10 Stationary Random Variables and Kriging? All the more so because this theorem can be brought back to the work of Abraham de Moivre (1667-1754).

David mentioned the “famous” Central Limit Theorem in his 1977 Geostatistical Ore Reserve Estimation. He didn’t deem it quite famous enough to either work with it or to list it in his Index. Neither did he grasp why the central limit theorem is the quintessence of sampling theory and practice. Agterberg may have fumbled the variance of the distance-weighted average point grade because he fell in with the self-made masters of junk statistics. What a pity he didn’t talk with Dr Jan Visman before completing his 1974 opus.

The next function gives the variance of Agterberg’s distance-weighted average point grade. As such it defines the Central Limit Theorem as it applies to Agterberg’s central value. I should point out that this central value is in fact the zero-dimensional point grade for Agterberg’s selected position P0.

Agterberg’s long-lost variance
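
Written out under the Central Limit Theorem for a weighted average of measured values (again a reconstruction consistent with the surrounding text), with var(z) the variance of the set of measured values:

$$\operatorname{var}\!\big(\hat{z}(P_0)\big) = \sum_{i=1}^{5} w_i^{2} \, \operatorname{var}(z)$$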

Agterberg worked with symbols rather than measured values. Otherwise, Fisher’s F-test could have been applied to test for spatial dependence in the sample space defined by his set. This test verifies whether var(x), the variance of a set, and var1(x), the first variance term of the ordered set, are statistically identical or differ significantly. The above function shows the first variance term of the ordered set. In Section 12.2 Conditional Simulation of his 1977 work, David brought up some infinite set of simulated values. What he talked about was Agterberg’s infinite set of zero-dimensional, distance-weighted average point grades. I do miss some ISO Standard on Mineral Reserve and Resource Estimation where a word means what it says, and where text, context and symbols make for an unambiguous read.

But I digress as we tend to do in our family. Do CRIRSCO’s Chairman and his Crirsconians know that our sun will have bloated to a red giant and scorched Van Gogh’s Sunflowers to a crisp long before Agterberg’s infinite set of zero-dimensional point grades is tallied? And I don’t want to get going on the immeasurable odds of selecting the least biased subset of some infinite set. Weatherstone should contact the International Association of Mathematical Geosciences and request IAMG’s President to bring back together his distance-weighted average and its long-lost variance. That’s all. At least for now!

Fighting factoids with facts

Niall Weatherstone of Rio Tinto and Larry Smith of Vale Inco have been asked to study a geostatistical factoid and a statistical fact. I asked them to do so by email on July 8, 2008. Next time they chat I want them to discuss whether or not geostatistics is an invalid variant of classical statistics. I’ve asked Weatherstone to transmit my question to all members of his team. CRIRSCO’s Chairman has yet to confirm whether he did or not. I just want to bring to the attention of his Crirsconians my ironclad case against the junk science of geostatistics.

Not all Crirsconians assume, krige, and smooth quite as much as do Parker and Rendu. The problem is nobody grasps how to derive unbiased confidence intervals and ranges for contents and grades of reserves and resources. Otherwise, Weatherstone would have blown his horn when he talked to Smith. A few geostatistical authors referred by chance to statistical facts. Nobody has responded to my questions about geostatistical factoids. The great debate between Shurtz and Parker got nowhere because the question of why kriging variances “drop off” was never raised. So I’ll take my turn at explaining the rise and fall of kriging variances.

In the 1990s I didn’t geostat speak quite as well as did those who assume, krige and smooth. I did assume Matheron knew what he was writing about, but he didn’t. Bre-X proved it makes no sense to infer gold mineralization between salted boreholes. The Bre-X fraud taught me more about assuming, kriging, and smoothing than I wanted to know. And I wasn’t taught to blather with confidence about confidence without limits. It reminds me of another story I’ll have to blog about some other day. It’s easy to take off on a tangent because I have so many factoids and facts to pick and choose from.

Functions have variances is a statistical fact I’ve quoted to Weatherstone and Smith. Not all functions have variances I cited as a geostatistical factoid. Factoid and fact are mutually exclusive but not equiprobable. One-to-one correspondence between functions and variances is a condition sine qua non in classical statistics. Therefore, factoid and fact have as much in common as do a stuffed dodo and a soaring eagle. My opinion on the role of classical statistics in reserve and resource estimation is necessarily biased.

The very function that should never have been stripped of its variance is the distance-weighted average. For this central value is in fact a zero-dimensional point grade. All the same, its variance was stripped off twice on Agterberg’s watch. David did refer to “the famous central limit theorem.” What he didn’t mention is that the central limit theorem defines not only the variance of the arithmetic mean of a set of measured values with equal weights but also the variance of the weighted average of a set of measured values with variable weights. It doesn’t matter that a weighted average is called an honorific kriged estimate. What does matter is that the kriged estimate had been stripped of its variance.

Two or more test results for samples taken at positions with different coordinates in a finite sample space give an infinite set of distance-weighted average point grades. The catch is that not a single distance-weighted average point grade in an infinite set has its own variance. So, Matheron’s disciples had no choice but to contrive the surreal kriging variance of some subset of an infinite set of kriged estimates. That set the stage for a mad scramble to write the very first textbook on a fatally flawed variant of classical statistics.

Step-out drilling at Busang’s South East Zone produced nine (9) salted holes on SEZ-44 and eleven (11) salted holes on SEZ-49. Interpolation by kriging gave three (3) lines with nine (9) kriged holes each. Following is the YX plot for Bre-X’s salted and kriged holes.

 

 

Fisher’s F-test is applied to verify spatial dependence. The test is based on comparing the observed F-value between the variance of a set and the first variance of the ordered set with tabulated F-values at different probability levels and with applicable degrees of freedom. Neither set of salted holes displays a significant degree of spatial dependence. By contrast, the observed F-values for sets of kriged holes seem to imply a high degree of spatial dependence.
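
A minimal sketch of this test in Python; the formula for var1(x), the first variance term of the ordered set, is an assumption based on the description above (half the mean squared difference between consecutive values of the ordered set):

```python
from statistics import variance

def first_variance_term(ordered):
    """var1(x): first variance term of the ordered set, computed here from
    squared differences between consecutive values (an assumption based on
    the description in the text)."""
    n = len(ordered)
    return sum((b - a) ** 2 for a, b in zip(ordered, ordered[1:])) / (2 * (n - 1))

def observed_f(values_in_order):
    """Ratio of var(x) to var1(x); compare with tabulated F-values at the
    applicable degrees of freedom to test for spatial dependence."""
    return variance(values_in_order) / first_variance_term(values_in_order)
```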

If I didn’t know kriged holes were functions of salted holes, then I would infer a high degree of spatial dependence between kriged holes but randomness between salted holes. Surely, it’s divine to create order where chaos rules! But do Crirsconians ever wonder about Excel functions such as CHIINV, FINV, and TINV? Wouldn’t Weatherstone want to have a metallurgist with a good grasp of classical statistics on his team?

 

 

High variances give low degrees of precision. I like to work with confidence intervals in relative percentages because it is easy to compare precision estimates at a glance. SEZ-44 gives 95% CI = ±23.5%rel whereas SEZ-49 gives 95% CI = ±26.4%rel. By contrast, low variances give high degrees of precision. Three (3) lines of kriged holes give confidence intervals of 95% CI = ±0.8%rel to 95% CI = ±1.6%rel. Crirsconians should know not only how to verify spatial dependence by applying Fisher’s F-test but also how to count degrees of freedom. Kriging variances cannot help going up and down like yoyos!

Going GIGO with CRIRSCO

Snappy acronyms add spice to the way we blog and talk. GIGO has been tagging along with computing science without losing its punch. CRIRSCO is but one tongue-twisting tour de force for Combined Reserves International Reporting Standards Committee. Its Chairman is Niall Weatherstone of Rio Tinto. Larry Smith of Vale Inco asked Weatherstone about Setting International Standards. Weatherstone said CRIRSCO was set up in 1993 but its website says it was 1994. CRIRSCO’s website makes a tough read because of its dreadfully long lines. So what have Weatherstone and his Crirsconians been doing during all those years?

Smith should have but didn’t ask what CRIRSCO has accomplished. It would seem some sort of semi-international reporting template has been set up. The problem is the Russian Federation has a code of its own, and China’s is sort of similar. As it stands, Crirsconians have yet to develop valuation codes for mineral properties. At the present pace, valuation codes that give unbiased confidence limits for contents and grades of reserves and resources might be ready in 2020, the year of perfect vision. It had better be based on classical statistics!

Here’s what was happening in my life when CRIRSCO came about either in 1993 or in 1994. I talked to CIM Members in Vancouver, BC, about the use and abuse of statistics in ore reserve estimation. Bre-X Minerals raised money to acquire the Busang property. Clark wanted me to go from Zero to Kriging in 30 Hours at the Mackay School of Mines. I didn’t go because her semi-variograms are rubbish. The international forum on Geostatistics for the Next Century at McGill University didn’t want to hear about The Properties of Variances. David S Robertson, PhD, PEng, CIM President, failed to, “… find support for your desire to debate.” What irked me was Jean-Michel Rendu’s 1994 Jackling Lecture on Mining geostatistics – Forty years passed. What lies ahead? He rambled on about, “…an endless list of other ‘kriging’ methods…” and prophesied geostatistics, “… is here to stay with all its strengths and weaknesses.” At that time, Rendu knew about infinite sets of kriged estimates and zero kriging variances.

Rendu’s lecture stood in sharp contrast to A Geostatistical Monograph of The Mining and Metallurgical Society of America. Robert Shurtz, a mining engineer and a friend of mine, wrote The Geostatistics Machine and the Drill Core Paradox. Harry Parker, a Stanford-bred geostat sage, was to find fault in Shurtz’s work. This great debate got nowhere because neither grasped the properties of variances. Otherwise, both of them could have put in plain words why kriging variances drop off. A few of Parker’s geostat pals had already found out why in 1989.

Figure 2 is rather odd in the sense that, “The kriging variance rises up to a maximum and then drops off.” That’s precisely what Armstrong and Champigny wrote in A Study of Kriging Small Blocks published in CIM Bulletin of March 1989. What I saw kriging variances do is what real variances never do. Armstrong and Champigny alleged kriging variances drop off because mine planners over-smooth small blocks. More research brought to light that kriged block estimates and actual grades were “uncorrelated.” That would make a random number generator of sorts for kriged block grades. It was David himself who approved that blatant nonsense for publication in CIM Bulletin.

 

Figure 2 gives kriging variances as a function of variogram ranges. As such, it was more telling than Parker’s. Neither Shurtz nor Parker scrutinized Armstrong and Champigny’s 1989 A Study of Kriging Small Blocks. Otherwise, Shurtz might have pointed out Parker’s kriging variances looked a touch over-smoothed. Neither did Parker confess he does over-smooth the odd time.

 

Corrected and uncorrected sampling variograms for Bre-X’s bonanza grade borehole BSSE198 show where spatial dependence between bogus gold grades of crushed, salted and ordered core samples from this borehole dissipates into randomness. The adjective “corrected” implies that the variance of selecting a test portion of a crushed and salted core sample, and the variance of analyzing such a test portion, are extraneous to the in situ variance of gold in Bre-X’s Busang resource. Subtracting the sum of extraneous variances gives an unbiased estimate for the intrinsic variance of bogus gold in Busang’s phantom gold resource. Fisher’s F-test proved this intrinsic variance to be statistically identical to zero.

 

Harry Parker and Jean-Michel Rendu appear to speak for the Society for Mining, Metallurgy and Exploration (SME) in the USA. What it takes to cook up ballpark reserves and resources are soothsayers who know how to failingly infer mineralization between boreholes, hardcore krigers and cocksure smoothers. What CRIRSCO ought to have done after the Bre-X fraud is set up an ISO Technical Committee on reserve and resource estimation. It’s never too late to do it! GIGO may be a bit dated but Garbage In does stand the test of time. Nowadays, Good Graphics Bad Statistics Out is a much more likely outcome. What a pity that GIGGBSO lacks GIGO’s punch!

Pneumatic Unloaders: Problems to Avoid

Terminals and factories, receiving their (raw) materials by ship, operate unloaders.

One category of unloaders is the pneumatic unloader.

Although the unloading does not belong to the core business of the company, it can be considered an umbilical cord to the company’s process or trade.

Without incoming materials there will be no end product, nor sales.

A stevedoring company would even cease to operate immediately.

Owners of such installations should be aware of the possible impact on their day-to-day operations and the possible risks in case of failures, and should therefore evaluate the offers for their installations with great care.

Purchasing under-quality or under-designed and -built units will create unpleasant problems (and costs) later on.

In those cases where a pneumatic unloader does not fulfill the specified expectations, the following causes are possible:

  1. Installation does not reach the design specifications
  2. Frequent breakdowns

Ad 1) Installation does not reach the design specifications

In case the capacity is not reached, this could be influenced by:

  • Pneumatic design
  • Product properties
  • Pipe size and configuration
  • Air volume
  • Pressure / vacuum
  • Back pressure silo or flat storage
  • Kettle outlet configuration
  • Nozzle type

In case the specified energy consumption is not met, this could be influenced by:

  • Pneumatic design
  • Type of vacuum pump or/and type of compressor
  • Drive system (electric, diesel or diesel electric)
  • Average rate of product feeding
  • Skill of operators
  • Down time

Operational influences affecting the performance could be caused by:

  • Type of ship
  • Reach of arm
  • Maneuverability of arm
  • Use of auxiliary equipment for nozzle feeding
  • Ship unloading procedure (shifting holds)
  • In case of floating equipment: stability of pontoon
  • Maintainability in case of breakdowns
  • Availability of spare parts
  • Noise level (work place, environment)

Ad 2) Frequent breakdowns

  • Equipment failures
  • Arm failures

Design specifications :

The design specifications are the values against which the performance of a pneumatic shipunloader has to be compared.

The design specifications are the result of a set of considerations in terms of:

Economic basis:

  • Types of commodities
  • Expected annual throughput
  • Future lifetime
  • ship size of import/export
  • Unloader operational properties

Environmental basis

  • Dust generation
  • Noise limits (day or night)
  • Labor conditions

Technical basis

  • Quality
  • Strength
  • Lifetime
  • Dock loads
  • Stability

Capacity / energy consumption

Pneumatic design

Based on the required capacity and pipe routing, a pneumatic design is made to determine:

  • Air volumes
  • Pipe sizes
  • Filter sizes
  • Capacity at different pressures
  • Energy consumption at different pressures
  • Bend forces
  • etc.

These calculated data are then used for additional calculations:

  • System capacity at different pressures
  • System energy consumption at different pressures
  • Expected maximum and average capacity at maximum and average pressure
  • Expected maximum and average energy consumption at maximum and average pressure
  • Strength calculations of pipe supports
  • PLC program
  • etc.

Before a definitive pneumatic design is accepted, several alternative pneumatic designs can be made.

The parameters (capacity, air volume, pipe size, pressure) can be combined in many ways, with many different overall results.

An installation can be designed for maximum capacity or minimum energy consumption.

As these two objectives cannot be met at the same time, a choice has to be made between them, or for a combination of the two.

The consequence of a choice is in most cases:

Maximum capacity design –> lower investment cost –> higher energy consumption per ton

Low energy consumption design –> higher investment cost –> low energy consumption per ton

Investment costs are fixed costs and will be shared by every handled ton during the lifetime of the installation.

Energy costs are variable costs, which will be imposed on every handled ton during the lifetime of the installation.
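
A hypothetical comparison of the two design choices, spreading the fixed investment over the lifetime tonnage and adding the variable energy cost per ton; all numbers below are made-up placeholders:

```python
# Fixed cost is shared by every ton over the lifetime; energy cost is
# imposed on every ton. All inputs are illustrative placeholders.
def cost_per_ton(investment, lifetime_tons, kwh_per_ton, price_per_kwh):
    return investment / lifetime_tons + kwh_per_ton * price_per_kwh

max_capacity_design = cost_per_ton(2.0e6, 1.0e7, 1.4, 0.10)  # -> 0.34 per ton
low_energy_design   = cost_per_ton(2.6e6, 1.0e7, 0.9, 0.10)  # -> 0.35 per ton
```

With these placeholder inputs the two designs end up close together, which is why the pneumatic design calculations deserve the careful evaluation described below.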

As the pneumatic design calculations are the basis for the further design of the installation and for the economics of the operation, it is imperative that they are executed and evaluated thoroughly and extensively.

The physical design of the installation is important in order to achieve the designed performance of the unloader.

The most important component, in this respect, is the pressure kettle outlet configuration where the product is mixed with the conveying air in the right loading ratio.

If the mixing capability is too low (not enough product is mixed into the air), the maximum loading ratio is not reached and therefore the maximum designed capacity is not reached.

This will result in a lower pressure (or vacuum) than designed.

The application of extra active fluidization of the kettle cone can improve this situation.

Also the loading ratio control of the kettle outlet must be fast and accurate in order to maintain maximum loading ratio and prevent blockages of the pipeline at the same time.

Energy consumption

A given pneumatic design determines the energy consumption per ton conveyed.

There are various types of air compressors, using different compression principles. Screw compressors with internal compression require less energy than, for instance, a water-ring compressor.

The way of power generation also influences the energy consumption per conveyed ton.

For instance, a diesel direct drive is more energy efficient than a diesel-electric drive.

Operational circumstances such as nozzle feeding, operator skill and down time influence the energy consumption, mostly by increasing the time that no-load power is demanded while no product is conveyed.

Operational

The type of ship to be unloaded influences the time needed to unload.

Box-type ships with straight and vertical walls are easier to unload than, for instance, a bulk carrier with narrow hatches, many holds and open frames.

The unloading arm is designed for a ship size (in dwt) of average dimensions.

In reality, the ship to be unloaded will have different main dimensions, such as width, depth, hatch size, ballast draft, etc.

The reach of the arm will therefore not always be optimal, causing delay in the unloading by increasing the amount of cleaning up in the hold.

This also occurs when a ship is chartered that is bigger than the original design was meant for.

A considerable part of the unloading time is spent on the cleaning up of the holds.

The clean-up equipment used determines the rate of cleaning up and thereby the time used.

Bigger equipment will speed up the cleaning operation and will save time and energy.

In case a breakdown of equipment occurs, the time needed for correction depends on the possibility of easy repair.

The accessibility of the equipment is then very important.

Also the skill of the maintenance engineer and the availability of spare parts are crucial to minimize the down time for repairs.

For the diagnosis of the malfunction, it is necessary to have extensive and clear diagrams, drawings and manuals available.

Also the PLC program should have an extensive alarm diagnosis function.

Other operational parameters s.a. noise levels and vibrations are to be coped with by the proper application of normal standard technology.

Frequent breakdowns

Equipment failures

Equipment failures should not occur frequently.

If equipment failures do occur, then they can be caused by:

  • Not designed for existing ambient operating circumstances
  • Inferior technical quality of used components
  • Accuracy of assembly not sufficient (f.i. alignment, bad welds)
  • Vibrations
  • Lack of maintenance (oil change not at prescribed intervals)
  • Improper use of components (f.i. operating at higher pressure than designed for)

Arm failures

Damage to the arm by external causes does not happen often and is mostly caused by improper operation, misuse or accidents.

Unloader arm failures often occur during operation due to improper design with respect to fatigue.

The load on the arm during operation, and thereby the material stresses, are to a great extent alternating and subject the material to fatigue.

If the design does not incorporate fatigue calculations, and the construction of the arm is such that there are points where stress concentrations can occur, the arm will fatigue and eventually crack and break.

This phenomenon is, when unnoticed, very dangerous, and visual checks should therefore be executed at regular intervals (f.i. between each unloading).

As soon as fatigue cracks are discovered, corrective action must be undertaken.

Not only should the crack be repaired and strengthened, but also the affected area should be redesigned (if possible) and changed to prevent future fatiguing.

Further causes of arm failures are:

  • Hydraulic hoses, cables, hydraulic valves, etc. in vulnerable places
  • Insufficient control properties
  • Inferior technical quality of used components
  • Lack of maintenance (lack of greasing)
  • Improper use of components (f.i. operating at higher hydraulic pressure than designed for)

To avoid or even prevent the above scenarios, the technical departments of the operating company should be given the assignment to write an extensive specification with clear descriptions of the required performance and quality of the unloader.

The appropriate responsibilities, in case the requested specification is not fulfilled, must also be clearly defined.

It is also very important that the operators of existing unloaders are consulted for their experience.

Teus Tuinenburg

July 2008

Hooked on junk statistics

Our parents told us not to put all our eggs in one basket. This lesson has passed the test of time ever since the Easter Bunny got to working with real eggs. The world’s mining industry put all its eggs in a basket full of junk statistics and got egg on its façade. Junk statistics does not give unbiased confidence limits for grades and contents of mineral reserves and resources. Annual reports, unlike opinion polls, do not sport 95% confidence intervals and ranges as a measure for the risks mining investors encounter. Many years ago I put classical statistics in my own basket. I thought I couldn’t go wrong because Sir Ronald A Fisher was knighted in 1953. But was I wrong? Matheron, who is often called the creator of geostatistics, knew very little about variances, and even less about the properties of variances.

Matheron deserved some credit because he didn’t put all core samples of a single borehole in one basket. He would have lost all his degrees of freedom but wouldn’t have missed them anyway. He did derive the length-weighted average grade of a set of grades determined in core samples of variable lengths. What he didn’t derive was the variance of this length-weighted average. Matheron wrote a Synopsis for Gy’s 1967 Minerals sampling. Gy, in turn, referred to Visman’s 1947 thesis on the sampling of coal, and to his 1962 Towards a common basis for the sampling of materials. Visman bridged the gap between sampling theory with its homogeneous populations and sampling practice with its heterogeneous sampling units and sample spaces. Matheron never knew there was a gap.

Should a set of primary increments be put in one basket? Or should it be partitioned into a pair of subsets? Gy proposed in his 1977 Sampling of Particulate Matter a set of primary increments be treated as a single primary sample. He claimed the variance of a primary sample mass derives from the average mass and number of primary increments in a set, the properties of the binomial distribution, and some kind of sampling constant. I explained in Sampling in Mineral Processing why Gy’s sampling theory and his sampling constant should be consumed with a few grains of salt.

When I met G G Gould for the first time at the Port of Rotterdam in the mid 1960s, he told me how Visman’s sampling theory impacted his work on ASTM D2234-Collection of a Gross Sample of Coal. Visman’s sampling experiment is described in this ASTM Standard Method. Visman’s 1947 thesis taught me that the sampling variance is the sum of the composition variance and the distribution variance. I got to know Jan Visman in person here in Canada. I treasure my copy of his thesis. I enjoyed his sense of humor when we were griping about those who try to play games with the rules of classical statistics.

On-stream data for slurries and solids taught me all I needed to grasp about spatial dependence in sampling units and sample spaces. Fisher’s F-test is applied to test for spatial dependence, to chart a sampling variogram, and to optimize a sampling protocol.

Selecting interleaved primary samples by partitioning the set of primary increments into odd- and even-numbered subsets is described in several ISO standards. A pair of A- and B-primary samples gives a single degree of freedom, but putting all primary increments in one basket gives none. Shipments of bulk solids are often divided into sets of lots so that lower t-values than t0.05;1 = 12.706 apply. Monthly production data give ample degrees of freedom for reliable precision estimates. Those who do not respect degrees of freedom as much as statisticians do may cling to the notion that the cost of preparing and testing a second test sample is too high a price for some invisible degree of freedom. They just don’t grasp why confidence limits and degrees of freedom belong together as much as do ducks and eggs.
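
A minimal sketch of the interleaved partitioning in Python; the increment values are made-up placeholders:

```python
# Partition the set of primary increments into odd- and even-numbered
# subsets, forming an A- and a B-primary sample; the A/B pair gives a
# single degree of freedom. Values below are illustrative only.
increments = [4.2, 3.9, 4.4, 4.1, 3.8, 4.0, 4.3, 4.2]

sample_a = increments[0::2]   # odd-numbered increments (1st, 3rd, ...)
sample_b = increments[1::2]   # even-numbered increments (2nd, 4th, ...)

mean_a = sum(sample_a) / len(sample_a)
mean_b = sum(sample_b) / len(sample_b)

# With one degree of freedom, t0.05;1 = 12.706 applies when turning this
# single A-B difference into a confidence interval.
difference = mean_a - mean_b
```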

The interleaved sampling protocol gives a reliable estimate for the total variance at the lowest possible cost. It takes into account var2(x), the second variance term of the ordered set. It makes sense to take interleaved bulk samples in mineral exploration because they give realistic estimates for intrinsic variances in sample spaces. Both Visman and Volk, the author of Applied Statistics for Engineers, were conversant with classical statistics. Geostatistically gifted gurus made up some new rules, fumbled a few others, got hooked on a basket of junk statistics, and are doomed to end up with egg on their faces. What a waste of human and capital resources!

Pneumatic Conveying, Performance and Calculations!

In many industrial processes and transport, materials have to be stored and moved from one location to another location. For long distances, e.g. from one country to another country (or continents), modalities are used e.g. ships, aircraft, trains, trucks, etc.

Where changes are made in the transport (or storage) modality, various technologies are used to move the material from one modality to the other modality.

The basic applied technologies are :

  • Mechanical systems:
      • grabs
      • screws
      • belt conveyors
      • buckets
      • etc.
  • Carrying-medium systems:
      • hydraulic systems, using liquids as the carrying medium
      • pneumatic systems, using gas as the carrying medium

The bulk handling sector worldwide is a key economic player, as it handles all kinds of commodities such as cereals, seeds, derivatives, cement, ore, coal, etc., which are processed by industry into other commodities, which have to be transported and handled again.

A whole industry exists just to manufacture all the necessary equipment for bulk handling. The magnitude of financial investment is tremendous, as is the operating cost involved.

The importance of economic handling is not only a matter for the handlers, but also for third parties such as the transport sector.

The technology of bulk handling equipment is crucial to all the involved parties, and therefore it is of the utmost importance that the bulk handling industry employs the best engineers and operators, who design, develop, build, calculate and operate the installations, do research, and document their achieved knowledge and experience.

One sector of bulk handling is the pneumatic unloading and conveying of cereals, seeds, derivatives and powdery products such as cement, fly ash, bentonite, etc.

The first pneumatic unloaders were built around 1900. In 1975, there were still steam-driven, floating grain unloaders operating in the ports of Rotterdam and Antwerp; unloaders which even dated back to 1904.

How these installations were calculated is not really known as the manufacturers did not reveal their knowledge publicly for obvious (commercial) reasons. Trial and error must have played a significant role in the beginning of this industry.

Calculating a pneumatic system was, before computers were introduced, done by applying practice parameters, based on field data from built machines.


Example 1975

Calculation grain unloader anno 1975

Set capacity grain                          440                  tons/hr

Bulk density grain                                0,75                tons/m3

Suction height (elevation)                     30                    m

Air displacement pump                        500                  m3/min

Vacuum air pump                                0,4                   bar

Absolute pressure vacuum pump          0,6                   bar

air density                                            1,2                   kg/m3

pressure drop nozzle                            0,16                 bar

Nozzle diameter                                  0,45                 m

Cross section nozzle                            0,1590             m2

Grain volume                                       9,778               m3/min
(Capacity/grain density/60)

Air volume at nozzle                            357,1               m3/min
(Air displ pump * abs press pump /(1-press drop nozzle))

Transport volume after nozzle               367                  m3/min
(Grain volume + Airvolume at nozzle)

Grain mass                                          7333                kg/min (capacity *1000/60)

Air mass                                              360                  kg/min
(Air displ pump * abs press pump * 1,2)

Transport mass after nozzle                  7693                kg/min (Grain mass + Air mass)

Specific density mixture                       20,97               kg/m3 (Transport mass after nozzle / Transport volume after nozzle)

Mean velocity of
mixture after nozzle                              2307                m/min
38,45              m/sec
(Transport volume after nozzle / Cross section nozzle)

Pressure drop nozzle                           1577                mmWC
11,98              cmHg
(specific density mixture * mean velocity^2 / (2 * 9,83))

pressure drop miscellaneous                  5                      cmHg

pressure drop vacuum pump                30,4                 cmHg
(Vacuum air pump * 76)

Available pressure drop elevation          13,416             cmHg
(press. drop vacuum pump – press. drop nozzle – press. drop misc.)

Elevation per available press. drop        2,2361             m/cmHg (elevation / available pressure drop elevation)

Loading factor from diagram                24,45               kg/m3 (Loading factor = function (elevation per available pressure drop))

Calculated Capacity                            440,1               tons/hr
(60 * loading factor from diagram * air displ vacuum pump * (1- vacuum))

Table: loading factor (kg/m3) = function (elevation per available pressure drop)

Loading factor (kg/m3)    Elevation per available pressure drop (m/cmHg)
16                        3,60
17                        3,44
18                        3,28
19                        3,16
20                        3,00
21                        2,84
22                        2,64
23                        2,48
24                        2,26
25                        2,04
26                        1,74
27                        1,35
By changing the figures in this calculation, an iteration process is executed until the set capacity equals the calculated capacity. The calculation can be started by assuming a pressure drop over the nozzle of 0.15 bar.

If a parameter is not known, assume this parameter and vary until an optimum is found.
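
The whole 1975 example can be reproduced in a few lines of Python. This is a sketch using the numbers and formulas above; the loading-factor diagram is replaced by linear interpolation in the table, so the result differs slightly from the published 440,1 tons/hr:

```python
LOADING_TABLE = [  # (loading factor kg/m3, elevation per cmHg), from the table
    (16, 3.60), (17, 3.44), (18, 3.28), (19, 3.16), (20, 3.00), (21, 2.84),
    (22, 2.64), (23, 2.48), (24, 2.26), (25, 2.04), (26, 1.74), (27, 1.35),
]

def loading_factor(elev_per_cmhg):
    """Invert the table: elevation per available pressure drop -> kg/m3."""
    for (lf1, e1), (lf2, e2) in zip(LOADING_TABLE, LOADING_TABLE[1:]):
        if e2 <= elev_per_cmhg <= e1:  # e decreases as the factor increases
            return lf1 + (lf2 - lf1) * (e1 - elev_per_cmhg) / (e1 - e2)
    raise ValueError("outside table range")

def calculated_capacity(set_capacity):
    rho_grain, elevation = 0.75, 30.0        # tons/m3, m
    q_pump, vacuum, p_abs = 500.0, 0.4, 0.6  # m3/min, bar, bar (absolute)
    a_nozzle, rho_air = 0.1590, 1.2          # m2, kg/m3
    dp_nozzle_bar, dp_misc = 0.16, 5.0       # bar (start assumption), cmHg

    grain_vol = set_capacity / rho_grain / 60.0           # m3/min
    air_vol = q_pump * p_abs / (1.0 - dp_nozzle_bar)      # m3/min at nozzle
    volume = grain_vol + air_vol                          # m3/min
    mass = set_capacity * 1000.0 / 60.0 + q_pump * p_abs * rho_air  # kg/min
    rho_mix = mass / volume                               # kg/m3
    velocity = volume / a_nozzle / 60.0                   # m/sec
    dp_nozzle = rho_mix * velocity**2 / (2 * 9.83) / 131.6  # mmWC -> cmHg,
    # using the same conversion as the example above
    dp_available = vacuum * 76.0 - dp_nozzle - dp_misc      # cmHg
    lf = loading_factor(elevation / dp_available)
    return 60.0 * lf * q_pump * (1.0 - vacuum) / 1000.0     # tons/hr

# Iterate until the set capacity equals the calculated capacity:
cap = 440.0
for _ in range(20):
    cap = calculated_capacity(cap)
print(round(cap, 1))  # ~435 with table interpolation; the diagram gave 440,1
```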

It is clear that this method is not really accurate, nor does it give scientific insight into how the physics of pneumatic conveying works.

Calculation of Pneumatic Systems using Gas as Carrying Medium

Since computers became available, it has been possible to build an algorithm that executes the calculations in the time domain, whereby the conveying length is divided into differential pipe lengths, which are derived from the elapsed time increment.

The physical principle of this technology is:
A gas flow in a pipeline will induce a force on a particle that is present in the gas flow. This force (if of sufficient value) will accelerate and/or move that particle in the direction of the flow (impulse of the air is transferred to the particles). The particle is moved from location 1 to location 2.

Between pipe location 1 and pipe location 2, impulse is transferred from the gas to the particles and lost to friction.

This transferred impulse is used for:

  • acceleration of particles
  • collisions between particles and from the particles to the wall
  • elevation of  the particles
  • keeping the particles in suspension
  • air friction

Bends are calculated only for product kinetic energy losses by friction against the outer wall and for air friction pressure drop. The calculation of the velocity losses in a bend depends on the orientation of the bend in relation to the product flow.

There are 5 bend orientations to be considered:

  • vertical upwards to horizontal       (type 1)
  • horizontal to vertical downwards   (type 2)
  • vertical downwards to horizontal   (type 3)
  • horizontal to vertical upwards       (type 4)
  • horizontal to horizontal                (type 5)

All these energy transfers result in a change in the gas conditions (p,V,T) and changing velocities of the carrying gas and the particles.

All these energies, velocity changes and gas conditions can be calculated and combined into a calculation algorithm.

This algorithm calculates in the time domain (dt = 0,01 sec).

The physical laws involved in this algorithm are:

  • Newton laws
  • Bernoulli laws
  • Law of conservation of energy
  • Thermo dynamic laws

From the original (start) conditions, the changes in those conditions are calculated for a time period of dt. Using the average velocity over the period dt, the covered length dLn can be calculated. At the end of this calculation, the energy acquired by the particles can be calculated.

By adding those changes to the begin conditions at location 1, the conditions at location 2 can be calculated for the particles as well as for the gas.

From there, the calculation is repeated for the next interval of time dt (and length dLn+1), covering the distance from location 2 to location 3.
The output of section dLn is used as the input for section dLn+1.
This procedure is executed until the end of the whole installation is reached.
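
A minimal sketch of this marching loop in Python. The force model (a quadratic slip-velocity drag term, plus gravity when the pipe section is vertical) is a placeholder assumption; the updates of the gas conditions (p, V, T) and the calibrated product loss factors of the real program are omitted, and all numeric values are illustrative:

```python
DT = 0.01   # sec, time increment as stated in the text

def step(x, v_particle, v_gas, k_drag=0.5, g=9.81, vertical=False):
    """Advance the particle state over one increment dt; returns (x, v)."""
    slip = v_gas - v_particle
    accel = k_drag * slip * abs(slip) - (g if vertical else 0.0)
    v_new = v_particle + accel * DT
    # average velocity over dt gives the covered differential length dLn
    x_new = x + 0.5 * (v_particle + v_new) * DT
    return x_new, v_new

# March from the intake (where all conditions are known) to the pipe end;
# the output of section dLn is the input of section dLn+1:
x, v = 0.0, 0.0
length, v_gas = 100.0, 30.0          # m, m/sec (placeholder values)
while x < length:
    x, v = step(x, v, v_gas)
```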

All the conditions at the intake of a pneumatic conveying system are known. Therefore the intake is chosen as the start of the calculation.

In vacuum- and pressure pneumatic conveying calculations, the used product properties are identical. The only difference is the mass flow, generated by a compressor in vacuum mode or pressure mode.

The calculation result should be the capacity at a certain pressure drop.

However, both these values are not known. To calculate the capacity, the pressure drop must be set and the capacity must be iterated from a guessed value. The calculated pressure drop from a “wrong” guess will differ from the set pressure drop. Therefore the capacity guess is renewed in such a way that the new, yet to be calculated, pressure drop approaches the set pressure drop. This iteration ends when the calculated pressure drop equals the set pressure drop. The capacity that resulted in this pressure drop equality is the wanted value (input and output are consistent). (Notice the similarity of this iteration process with the 1975 example.)

This iteration can also be executed, whereby the capacity is set and the pressure drop is iterated.
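
A sketch of that outer iteration as a bisection on capacity, assuming the calculated pressure drop increases monotonically with capacity; calc_pressure_drop() stands for the full time-domain calculation described above:

```python
def find_capacity(dp_set, calc_pressure_drop, lo=0.0, hi=1000.0, tol=1e-3):
    """Bisect the capacity (tons/hr) until calculated dp matches dp_set."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if calc_pressure_drop(mid) < dp_set:
            lo = mid   # guessed capacity too low -> calculated dp too low
        else:
            hi = mid
    return 0.5 * (lo + hi)
```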

Example of a computer calculation 2007



Example of a modern computer calculation 2008



The computer program was originally written in Q-basic under DOS and still operates, although some features are now lost under Windows.

By changing the program from Q-basic to VisualBasic, the screens appear in a Windows form and more Windows features can be applied, but the program algorithm stays the same.

A very important feature of this algorithm is that performance data from existing installations can be used to determine the product loss factors for certain products. That opens the opportunity to build a database of various products that can be conveyed pneumatically and be calculated. As the used physics are basic, the calculations work in pressure mode as well as in vacuum mode with the same formulas, product parameters and product loss factors. (Adaptations are made for the different behavior of the gas pumps in pressure mode and vacuum mode.)

As the pneumatic conveying calculation is basic, the calculation program can be extended with many other features such as booster application, rotary locks, high back pressure at the end of the conveying pipeline, heat exchange along the conveying pipeline, energy consumption per conveyed ton, Δp-filter control, double kettle performance, sedimentation detection, two pipelines feeding one pipeline, etc. It also becomes possible to evaluate product pneumatic conveying properties from field data and tests, and to investigate operating machines for proper functioning. (Defects were found just by calculating the actual situation.)

Based on the properties of pneumatic conveying, derived from the above-described theory, the technology to be used is chosen. The technology and operational procedures used also depend on the type of application and product.

The above only describes the calculation of pneumatic conveying based on physics. The connection between theory and practice is made by measured and calculated parameters from field installations. In addition to this theory, there are many technological issues to be addressed, ranging from compressor technology to the structural integrity of a complete unloader, as well as PLC controls, hydraulics, pneumatics, electric drive motors, diesel engines, filter technology, ship technology, soil mechanics (product flow), maintenance, methods of operation, etc.

The mathematical approach with the field verification (resulting in many corrections and extra features), the documented description and the creation of the computational software was a matter of many years of persistent labor, but worthwhile. This approach also resulted in a better and still growing understanding of pneumatic conveying technology. The influence of the various parameters and their effects (sometimes hidden by counteraction) was revealed step by step.

July, 2008
Teus Tuinenburg
The Netherlands

Welcome!

Good day to all,
 
I just joined the Bulk-Blog and take the liberty to introduce myself to you.
 
Living in The Netherlands and having reached the age of 65 years, I am entitled to retire. So I did.
 
My working career always took place in ports and on the water.
During and after my studies (Electrical Engineering, Mechanics and Shipbuilding), I was a Rhine barge sailor, dredging equipment designer, dredging equipment surveyor, shipyard draftsman, project manager at a stevedoring company in Rotterdam, project manager at a pneumatic unloader manufacturer, and technical manager of a stevedoring company again.
 
In those jobs, I travelled to various regions in the world.
 
During the last 30 years, I was involved in pneumatic unloading and spent a lot of thinking, evening hours and spare time on figuring out how pneumatic conveying functions mathematically, how to calculate it and how to design it.
 
In the Bulk-Blogs to come, I will try to share experience and knowledge and to bring some logic into the general perception of pneumatic conveying.
 
Any suggestions or remarks from your side (even now already) are very welcome.

Expecting fair, sophisticated, strong and to-the-point communication.

Take care,
Teus Tuinenburg
