Let’s be honest, to most of us in the ground engineering community, chemistry is something of a black art. It’s a subject we never properly understood at school and certainly not one we intended to revisit in our professional capacities. We can muddle through the uncertainties of soil mechanics and a few of us claim a vague understanding of finite element analysis. Imagine our horror therefore when chemistry abruptly re-entered our world in the form of contaminated land. Our inattention and tomfoolery at the back of class has suddenly come back to bite us. Hazy schoolday recollections of sodium fizzing around in the sink or the exploding magic green fountain aren’t going to get us out of this one.
And it gets worse! Chemistry is no longer even just a disagreeable side issue for many of us and on many developments it sits, gloating, athwart our critical path, knowing full well that not only do we not know the answer, we are often unsure of the right questions to ask as well.
So where is our white knight? To whom can we turn for help and enlightenment? In the past we might have turned to our laboratory. However, over the last 15 years or so, there has been a complete rationalisation of the chemical testing market. Laboratories have tended to become bigger and more automated, offering cost-effective analysis but consequently less consultancy support. Intense competition amongst the key players means that margins are so tight there is little room in modern production-line chemistry for added-value services. Testing has become a numbers game.
There are now several degrees of separation between the engineer and the chemist. Yet we are in fact very similar in one key respect and it is here that we close the circle. They don’t understand what we do and we don’t understand what they do.
Laboratories must adopt operating practices which enable them to make a profit under conditions of intense price competition. Their choices fundamentally affect the quality and reliability of the data they produce. We don’t even know what questions to ask and many laboratories in turn are less than forthcoming in disclosing the limitations of their data. We work together in blissful ignorance even though our interaction (or lack of it) has a critical influence on the quality of the data they produce and we then use.
Accreditation schemes such as UKAS, compliance schemes (e.g. Contest, WASP) and the recently developed MCERTS scheme championed by the Environment Agency are all designed to address quality issues in laboratory testing. However all have significant limitations which are not readily apparent and are certainly not advertised to a largely ignorant consumer.
So what’s the problem? Well, the example in Box 1 below illustrates it nicely. In it we have simulated total soil cadmium data from two simulated laboratories. One is a reputable, highly regarded outfit with excellent quality control, and in the case of our simulated sample it has in fact got an answer which approximates to the true value. The only downside is that quality costs, and it charges £1 to undertake the analysis. The other laboratory has a less robust quality system, and the cost savings allow it to charge just 50p for a cadmium determination. However, its reported total cadmium concentration is in this case woefully inaccurate. See if you can spot the wrong answer.
Box 1
Laboratory    | A              | B
Total Cadmium | 617.2 mg/kg Cd | 617.2 mg/kg Cd
The problem is of course that you can’t tell by looking whether data is reliable or not. Because buyers of chemistry are largely ignorant of chemistry, and the product we buy does not readily reveal its quality, the key differentiator becomes price. Whilst there is an industry bottom line which we might say is policed by accreditation schemes such as UKAS, we should not be naïve enough to believe that this is in any way a guarantee of a right answer. Now the punch line, and you fanatically precise engineers are not going to like this: we should recognise that even good data is not ‘correct’ in the right-and-wrong sense, and that some degree of uncertainty is inherent in every result. Sometimes this uncertainty is very large indeed. 617.2 mg/kg could actually mean anything from about 80 to 1,000 mg/kg, and that should give us all some food for thought.
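To put a number on this, here is a minimal sketch in Python showing how a relative expanded uncertainty turns a single reported value into a range. The ±70% figure is an assumption for illustration only; real uncertainties depend on the analyte, the matrix and the method.

```python
def uncertainty_interval(result_mg_kg: float, rel_uncertainty: float) -> tuple[float, float]:
    """Return the (low, high) range implied by a relative expanded uncertainty."""
    return result_mg_kg * (1 - rel_uncertainty), result_mg_kg * (1 + rel_uncertainty)

# Assumed +/-70% expanded uncertainty -- illustrative, not a quoted laboratory figure.
low, high = uncertainty_interval(617.2, 0.70)
print(f"Reported 617.2 mg/kg Cd could plausibly mean {low:.0f}-{high:.0f} mg/kg")
```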
So what are the questions we should ask? Well, here are some important ones for starters.
Basis, basis, basis
The basis on which you send your sample to the laboratory could be as follows. It is a cold and wet day. The wind is making life difficult and you are worried about getting caught in the traffic if you don’t get off site soon. You shovel a couple of kilograms of rubble into a bag and leave it by the gate for the laboratory to pick up sometime later in the week. In a couple of weeks the laboratory (UKAS accredited, as the contract specified) reports back to you, and you are relieved to see the thiocyanate content is 24.7 mg/kg, just below your limit of 25 mg/kg. You’re in the clear, you can sign the site off – or can you? Have you considered these questions?
How was the sample prepared, and by whom?
What is the precision and bias of the method used?
On what basis are method precision and bias measured?
On what basis is the data reported?
On what basis is your acceptance criterion calculated?
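As a taste of why these questions matter, here is a minimal sketch assuming the 24.7 mg/kg thiocyanate result was reported as received (wet) while the 25 mg/kg criterion is defined on a dry-weight basis. The 20% moisture content is an assumed figure for illustration.

```python
def wet_to_dry_basis(result_wet_mg_kg: float, moisture_fraction: float) -> float:
    """Convert an as-received (wet) result to a dry-weight basis.

    The same mass of analyte is divided over the dry solids only,
    so the dry-basis concentration is always the higher of the two.
    """
    return result_wet_mg_kg / (1.0 - moisture_fraction)

# Assumed for illustration: 24.7 mg/kg as received, 20% moisture content.
dry = wet_to_dry_basis(24.7, 0.20)
print(f"24.7 mg/kg wet basis = {dry:.1f} mg/kg dry basis")  # 30.9 mg/kg
print("Below the 25 mg/kg limit?", dry < 25.0)              # False
```

Suddenly the site you were about to sign off is over the limit.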
Sample Preparation
We can’t emphasise enough how important initial sample preparation is. If it is not right then everything that comes after is wrong. Unfortunately good sample preparation is expensive, labour intensive and very repetitive – it is simply not fashionable and therefore often neglected. You will almost always find the least qualified staff in a laboratory carrying out the most important function – sample preparation.
Accreditation schemes accredit results and sample preparation does not produce a result. It is debatable therefore whether sample preparation falls within the scope of accreditation. Imagine that – you use a UKAS accredited laboratory and their single most important operation is not actually capable of being accredited and is carried out by the least qualified personnel in the company.
The Quality Control Con
Laboratory quality control focuses on the instrumental side of the analysis. QC data is usually generated from a point after samples have been prepared for analysis. Prepared QC samples or certified reference materials are finely ground, dry, inherently homogenous materials. Real samples are sun drenched, windswept, dirty, heterogeneous lumps. Don’t believe, therefore, that quoted QC data will necessarily bear any resemblance to that achievable in your samples.
For example, the QC data for samples which are normally analysed wet (like cyanide) may in fact be determined on dry, ground reference soils which are spiked immediately before analysis. Doing it this way ensures excellent QC data, but doesn’t really relate all that well to the true bias and recovery one might get from a mixed, wet, contaminated soil.
Precision, bias, repeatability & uncertainty
Each measurement a laboratory makes is subject to any number of errors. Good laboratories minimise the impact of such errors through sound methodology and quality control procedures. You cannot, however, eliminate uncertainty altogether, and a knowledge of uncertainty could be critical to your remediation scheme.
For example, if you have a clean-up criterion of 2,500 mg/kg of mineral oil on a scheme and your sample shows a concentration of 2,000 mg/kg, you might be forgiven for breathing a sigh of relief. If you knew that the true precision of the analytical method is more like ±100%, you might have cause to re-appraise your hasty signing off of the site.
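A minimal sketch of that logic, using the figures above: a result only ‘passes’ with confidence if the whole uncertainty band sits below the criterion.

```python
def confident_pass(result: float, criterion: float, rel_precision: float) -> bool:
    """True only if the worst case of the uncertainty band is below the criterion."""
    worst_case = result * (1.0 + rel_precision)
    return worst_case < criterion

# Mineral oil figures from the text: 2,000 mg/kg against a 2,500 mg/kg criterion.
print(confident_pass(2000.0, 2500.0, 1.00))  # False: true value could be up to 4,000 mg/kg
print(confident_pass(2000.0, 2500.0, 0.10))  # True: worst case is 2,200 mg/kg
```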
Bias, or recovery, is, in simple terms, a measure of how much you get out compared with what you originally put in. For example, a laboratory quotes a UKAS accredited method recovery for DRO (diesel range organics) as 95%. Fine, you think: a 95% recovery is very good, and the method is UKAS accredited. What you don’t appreciate is that the recovery is quoted on a reference sample that has been dried and finely ground. In other words, the recovery quoted does not account for any volatiles lost during drying and grinding.
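To see what a recovery figure does to a result, here is a minimal sketch. The 950 mg/kg measured value and the 60% ‘real world’ recovery are assumptions for illustration; only the 95% quoted recovery comes from the example above.

```python
def recovery_corrected(measured_mg_kg: float, recovery_fraction: float) -> float:
    """Back-calculate the concentration implied by a given method recovery."""
    return measured_mg_kg / recovery_fraction

measured = 950.0  # hypothetical DRO result in mg/kg

# If the quoted 95% recovery really applied to your wet, lumpy sample:
print(f"{recovery_corrected(measured, 0.95):.0f} mg/kg")  # 1000 mg/kg

# If volatiles lost in drying and grinding cut the true recovery to an assumed 60%:
print(f"{recovery_corrected(measured, 0.60):.0f} mg/kg")  # 1583 mg/kg
```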
We can however welcome the (relatively) new MCERTS scheme championed by the Environment Agency in so far as precision and bias data should now accompany all results of analysis.
How is data actually reported?
Understanding the basis for reporting is really, really critical. Some samples are analysed wet, some dry, some with stones removed, some without. Some data is reported dry and some wet, some whole and some just on the fines. Data on the same sample may be reported on a different basis. Do you know on what basis your samples are analysed and reported? Do you know on what basis the acceptance criteria you use (CLEA, Dutch) are generated?
The example below illustrates this point, demonstrating the range of total mercury values you can get depending upon how you choose to express the data, or how the laboratory chooses to prepare your sample.
A 100 g sample of a contaminated clay is submitted for total mercury analysis. It contains 500 µg of mercury and is composed of the fractions set out below. For this example we assume (fairly reasonably) that all of the mercury is present in the fines. Our acceptance criteria are the CLEA Soil Guideline Values of 8 mg/kg Hg for residential use with plants and 15 mg/kg Hg for residential use without plants.
Fraction                              | Mass
Soil fines (less than 2 mm diameter)  | 30 grams
Stones (2-10 mm diameter)             | 20 grams
Stones (greater than 10 mm diameter)  | 25 grams
Water                                 | 25 grams
Data reported on | Result (mg/kg Hg) | CLEA 8 SGV | CLEA 15 SGV
Whole sample     | 5.0               | pass       | pass
Fines, dry       | 16.7              | fail       | fail
Whole, dry       | 6.7               | pass       | pass
<10mm, dry       | 10.0              | fail       | pass
<10mm, wet       | 6.7               | pass       | pass
The example illustrates a huge variation in ‘right’ answers which completely spans the selected acceptance criteria. It also reveals that the same (or very similar) answers can be obtained using completely different assumptions – a whole dry basis being very similar to a <10mm wet basis in this example. What is more worrying is that the fines dry result (arguably the most common way of determining mercury in soil) is over three times higher than the result expressed on the whole sample (arguably the true result).
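For anyone who wants to check the arithmetic, this minimal Python sketch reproduces the table above from the stated masses; the only assumption is the one already made, that all 500 µg of mercury sits in the fines.

```python
# Reproduce the mercury example: 500 ug Hg in a 100 g sample, all in the fines.
mercury_ug = 500.0
fines_g, stones_2_10_g, stones_over_10_g, water_g = 30.0, 20.0, 25.0, 25.0

bases_g = {
    "Whole sample": fines_g + stones_2_10_g + stones_over_10_g + water_g,  # 100 g
    "Fines, dry":   fines_g,                                               # 30 g
    "Whole, dry":   fines_g + stones_2_10_g + stones_over_10_g,            # 75 g
    "<10mm, dry":   fines_g + stones_2_10_g,                               # 50 g
    "<10mm, wet":   fines_g + stones_2_10_g + water_g,                     # 75 g
}

for basis, mass_g in bases_g.items():
    conc = mercury_ug / mass_g  # ug/g is numerically equal to mg/kg
    sgv8 = "fail" if conc > 8 else "pass"
    sgv15 = "fail" if conc > 15 else "pass"
    print(f"{basis:>12}: {conc:5.1f} mg/kg  SGV 8: {sgv8}  SGV 15: {sgv15}")
```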
Sample Homogeneity
We all know that reliable data depends upon the sample from which it was extracted. We all also know how difficult it can be to take representative samples from very mixed fill and contaminated ground. The apparent precision of laboratory data can be very misleading. In reality a result of 645.37 mg/kg lead (as Pb) doesn’t actually mean that the horizon we sampled contains 645.37 mg/kg of lead. The problem is we don’t know what it means, because we haven’t estimated the variability of the sampled horizon and we haven’t a clue about the limitations of the techniques the laboratory uses to prepare, extract, analyse and then correct the raw data to produce the reported result. Engineers who are used to dealing with relative certainties would be horrified to learn that the true precision of chemical data is very, very poor. In many cases the best you could expect might be agreement to within an order of magnitude.
Conclusions – The Key Questions
There is no doubt that the reliability of the analytical data our industry routinely uses is seriously limited. There is also no doubt that many (perhaps most) practitioners don’t realise this. This is not because the laboratories are producing poor quality work. Rather, it is a combination of the uncertainty inherent in sampling and analysis coupled with the limitations a price-driven market places upon laboratories. Factor in a lack of understanding on both sides of the effect (or even existence) of such limitations, and it is easy to see how we can find ourselves skating on thin ice without even realising it.
What are the answers then? Well, the answers are really a series of questions we should routinely ask ourselves when assessing our methods, our laboratories and their data.
1. Laboratories broadly use the same analytical equipment, so what gives to allow some laboratories to be so much cheaper than others?
2. What are the limitations of the selected analytical method? There are always limitations. Do they matter in this case?
3. Absolutely critical. What is the basis on which my data is reported? Does it match the basis on which my acceptance criteria are calculated?
4. Is the laboratory QC data realistic, or has it been generated under ideal conditions using ideal samples which are unlikely to represent the conditions on my site?
If you can make a reasonable attempt at answering these questions, you will be a long way towards understanding the basis on which your data has been generated. In turn, you will be able to interpret your data with reasonable confidence and take due account of the uncertainty it contains.
If you can’t immediately answer these questions then you don’t know the basis on which you are interpreting your data/running your computer model/applying your acceptance criteria/remediating your site/providing your client with a collateral warranty. And it’s as fundamental as that.
Richard Puttock Partner
Michael Dinsdale Associate
Peter Brett Associates