WEF Discussion Forums
Laboratory Management and Technical Issues
BOD Testing & Meter Calibration
Is the Winkler Method necessary when running a BOD test? When calibrating the DO meter, is it acceptable to use barometric pressure to standardize O2?
Is it necessary to run seeded samples if there is no question about the effluent makeup? And is it necessary to run GGA (glucose-glutamic acid) checks on a constant basis?
There seems to be a problem with our interpretation of Standard Methods on conducting weekly BODs, and I would like to resolve it for the sake of standardizing the SOP and for the sake of legitimacy and reporting.
There is no universal requirement to check calibration of the DO meter with a Winkler titration as far as I know, but most regulatory/accreditation programs require that you check it "periodically." If you are having problems with blanks, however, and can't narrow down the cause, it could be something about your lab environment (e.g., temperature or pressure fluctuations) that would affect the saturated air calibration, but not the Winkler. In that case it would be worthwhile checking calibration with the Winkler often to see if such fluctuations might be affecting readings.
As an example, I evaluated a lab in Chicago that had an entire room devoted to BODs. During temperate weather they would leave the door to the room open, but close it during particularly warm weather to keep things cool, especially when their constantly running hood was drawing out a lot of cool air. With the door closed, the hood ALSO lowered the air pressure in the room, so if they calibrated in the morning while it was cool, and later closed the door, their blanks would be "out".
You will want to check with the regulatory agency using your data about what they require. Calibration in saturated air is used by the majority of environmental labs. How's that for a complex answer to a short question!
When air calibrating, be sure to use actual temperature, and actual pressure (not a constant pressure based on your elevation). The table in Standard Methods does not include pressure as a variable. If you need such a table, you can snatch one from...
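To illustrate how actual temperature and actual pressure both feed into the saturation value, here is a rough sketch. It uses the Benson-Krause fit for fresh water at 760 mmHg with a simple P/760 pressure correction (that correction ignores the vapor-pressure term, which matters little near sea level); the function name and structure are mine, not from Standard Methods:

```python
import math

def do_saturation(temp_c, pressure_mmhg=760.0):
    """Approximate DO saturation (mg/L) in fresh water.

    Benson-Krause fit for saturation at 760 mmHg, then a simplified
    P/760 pressure correction (vapor-pressure term omitted).
    """
    t = temp_c + 273.15  # Kelvin
    ln_do = (-139.34411
             + 1.575701e5 / t
             - 6.642308e7 / t**2
             + 1.243800e10 / t**3
             - 8.621949e11 / t**4)
    do_760 = math.exp(ln_do)          # ~9.09 mg/L at 20 C, 760 mmHg
    return do_760 * pressure_mmhg / 760.0
```

Note how the same 20 C calibration gives a noticeably lower saturation value at, say, 700 mmHg, which is exactly why a constant elevation-based pressure can put your blanks "out".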
You must seed all samples where you are not sure they have a sufficient population of viable, hungry bacteria. For some labs, especially those with industrial waste in their influent, that includes seeding influent samples. Effluents before disinfection may or may not have a viable bacterial population, and effluents after disinfection MUST be seeded (it's in the method).
Thanks for confirming my take on it, Perry; and it's nice to see your name again
You are very welcome. I just realized I didn't address your question on testing the GGA standard. It's a good question...Standard Methods 5210B is rather wishy-washy on the subject.
The method says the purpose of the GGA test is to check the effectiveness of the seed, and it finally admits in the 21st Edition that it is also used to check overall performance for the test. It checks bias when you take your average value over several batches (20 is good; 25 is better, and what NELAC/INELA requires) and compare it to the 198 mg/L mentioned in the method. It also checks precision, the other component of accuracy, by comparing the standard deviation of those same results to the 10 mg/L goal implied (but very well disguised) in the method. The method says the 30.5 mg/L cited as the standard deviation in a study of several labs throughout the U.S. should be considered a control limit. Since control limits are normally set at three standard deviations, one standard deviation would be 30.5/3, or ~10 mg/L.
Although a standard deviation of 10 mg/L is achievable, I consider such a low goal (with "low" being good) unnecessary to assure adequate performance, and suggest 15 mg/L as the standard deviation goal. EPA seems to agree: in their "interpretation" document of QC for the CBOD test, they said the relative standard deviation (the standard deviation divided by the average) should not exceed 7.5%, and 7.5% of 198 mg/L is approximately 15 mg/L.
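Those two checks boil down to a few lines of arithmetic. A sketch, using only the numbers from the discussion above (the function name and the dictionary layout are mine; the 198 mg/L target, 30.5 mg/L control limit, and 15 mg/L SD goal come from the text):

```python
from statistics import mean, stdev

def gga_check(results, target=198.0, control_limit=30.5, sd_goal=15.0):
    """Check a run of GGA results (ideally 20-25 batches) for bias and precision."""
    avg = mean(results)
    sd = stdev(results)
    return {
        "mean": avg,
        "sd": sd,
        # bias: average vs. 198 mg/L, judged against the 30.5 mg/L control limit
        "bias_ok": abs(avg - target) <= control_limit,
        # precision: SD vs. the suggested 15 mg/L goal (method implies ~30.5/3 = 10)
        "precision_ok": sd <= sd_goal,
    }
```

Feeding it 20 or more recent GGA results gives you both the bias and precision verdicts in one place.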
So why did I explain all that rather than simply saying "you should run the GGA in every batch"? To convince you that you should, in my opinion, WANT TO run a GGA in every batch...it's your only real way of keeping track of your performance for the BOD/CBOD test. By monitoring the average and standard deviation of the GGA test, and the values for blanks, you can tell if you are doing a good enough job and take corrective action if you aren't.
If you prefer a simple answer: Standard Methods falls short of saying you must run a GGA. Most regulatory agencies and lab accreditation programs require it, but many do not specify how often it must be run. I suggest at least one GGA bottle be run in every batch of BOD or CBOD, and more bottles if you are having precision problems, so you can see whether within-batch imprecision or between-batch imprecision is causing the problem. Some regulators/accreditation agencies require two or three bottles, but recent guidance from the Standard Methods BOD committee says one bottle is OK for well-characterized samples, and GGA is certainly one of those.
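If you do run multiple bottles per batch, one rough way to separate within-batch from between-batch scatter is to pool the per-batch standard deviations and compare them with the spread of the batch means. A sketch (a simple one-way-ANOVA-style look, not a formal test; the function is my own illustration, not from any method):

```python
from statistics import mean, pstdev

def partition_imprecision(batches):
    """batches: list of lists, one inner list of GGA bottle results per batch.

    Returns a rough within-batch SD (average of per-batch SDs) and a
    between-batch SD (spread of the batch means) for comparison.
    """
    within = [pstdev(b) for b in batches if len(b) > 1]
    batch_means = [mean(b) for b in batches]
    return {
        "within_batch_sd": mean(within) if within else None,
        "between_batch_sd": pstdev(batch_means),
    }
```

If the between-batch number dominates, look at things that change between setups (dilution water, seed, calibration); if the within-batch number dominates, look at bottle-to-bottle technique.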