Quality control and method validation for beginners


8 posts Page 1 of 1
Hello all,

This is a slightly strange one.

I need to convince a community of practice (field biologists) that they need to validate their tools and methods before they use them to generate results that are used as the basis for important management decisions.

While QC and validation have penetrated to the benches in biochemistry and microbiology labs, they are completely foreign to field biologists, for reasons that I will not burden you with here.

What I need is a basic introduction to QC and validation that is not specific to the hard sciences (physics and chemistry), and that sets out the fundamental reasons why knowing that a tool and a method work properly are essential steps in doing good science and generating results that can be relied on.

I have googled all the combinations of validation, QC, fit for purpose, etc. that I can think of, and everything I find is too specific to particular applications - I need something that sets out the basic need for validation in general. Peer-reviewed or a standard textbook is preferred, but anything clearly written will be a help.


Peter Apps
Hi Peter,

The idea is not strange to me. Please see what you think of this suggestion--it may be too much math, but it also conveys the spirit of quality control well, in my opinion:

Statistical Method from the Viewpoint of Quality Control, by Walter A. Shewhart, ISBN 9780486170879, Dover, 1986
Thanks Matt, but I need something a lot more basic than that; not much above the level of "if your tools don't work you can't produce good data, if your data is no good your results are not valid, if your results are not valid your conclusions are suspect"

Peter Apps
I agree with you; however, many biologists are using unvalidated methods! While a microscope may have an IOQ, a fluorescence antigen method may not be validated. Would you accept a 50% recovery?

The FDA has written a draft method validation guidance for biopharmaceuticals. It's essentially the same as the guidance for pharmaceuticals (I don't see why they bothered).
I would be perfectly happy with a 50% recovery (or even 5%) as long as it was repeatable and reproducible.
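To put a number on "repeatable": what matters is not the recovery itself but how little it varies between replicates, which is usually expressed as the relative standard deviation (RSD). A minimal sketch in Python, using invented recovery values purely for illustration:

```python
# Hypothetical spike-recovery data (%) from repeated analyses of the
# same sample; the values are made up for illustration only.
recoveries = [49.2, 51.0, 48.7, 50.4, 49.9, 50.8]

n = len(recoveries)
mean = sum(recoveries) / n
# Sample variance (n - 1 in the denominator), then %RSD.
variance = sum((x - mean) ** 2 for x in recoveries) / (n - 1)
rsd = (variance ** 0.5) / mean * 100

print(f"mean recovery = {mean:.1f}%, RSD = {rsd:.1f}%")
```

A 50% recovery with an RSD of a couple of percent is a usable, correctable method; a 90% recovery that wanders between 60% and 120% is not.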

ISO 17025 has some clear definitions, but it relates specifically to laboratory methods and my target audience is field biologists, whose approach to everything is to use large numbers of replicates (20 to 30 is not unusual) or simulated replicates and then model the hell out of the data with sophisticated statistics.
Peter Apps
maybe the search terms are more on the side of "process validation" than pure quality control method validation?
Or even on "Good Agricultural Practice"

Haven't read this, but maybe this goes in your direction:
Quality Assurance for Research and Development and Non-routine Analysis (1998)
https://www.eurachem.org/index.php/publ ... uides/qard

On the Eurachem page there are other documents in this section, as well as a "reading list" with further references:
https://www.eurachem.org/index.php/publ ... /mnu-rdlst

Just some more Google search results, no particular evaluation or "quality control" done
Four Steps To Ensure Measurement Data Quality
- http://rube.asq.org/quality-progress/20 ... ality.html (part one)
- http://rube.asq.org/quality-progress/20 ... ality.html (part two)

- https://www.easterbrook.ca/steve/2010/1 ... alidation/

- https://moz.com/blog/how-do-you-know-if ... s-accurate

- https://www.matrix.edu.au/validity-reliability-accuracy

- http://www.bbc.co.uk/guides/z974jty
Thank you; the QP links are very good. The BBC link is appropriate to most field biologists' understanding of validation, but I doubt that I should use it in its present form!

Peter Apps
Well, hearing "what were they thinking??" anecdotes about terrible lab practices or "wow what a weird way to be fooled by your own data" stories is both fun and educational. Reading about the reproducibility crisis (not to mention p-hacking) in social science and even in areas like cancer research helped me update my prior assumptions about how many ways there are for things to go wrong.

I suspect it's easy to get most people to agree that reliable science depends on knowing that a tool and method work properly, but much harder to convince someone who believes they already have that knowledge ("of course my methods are reliable! I've been doing this 20 years!") that they don't - at least, not with as much confidence as they think.

Personally, I'm lazy and QC isn't particularly exciting, but I'm happy to do it when I can see how the particular requirements that my work is subject to actually matter for the project I'm working on. Nobody ever explained to me why I should make control charts for my LCS and spikes; it was just "this is what auditors look for" or unrealistic what-ifs. I had to get bored enough to look up "control charts" on Wikipedia to really get it. I have a feeling I'm not the only one in that sort of boat.
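For anyone in the same boat: the arithmetic behind a basic Shewhart individuals chart is simple enough to sketch in a few lines of Python. The recovery numbers below are invented, not from any real method; the point is just that the control limits come from the method's own historical scatter, not from an arbitrary spec:

```python
# Sketch of Shewhart individuals control limits for spike recoveries (%).
# Data are illustrative only.
import statistics

recoveries = [98.1, 101.3, 99.5, 100.2, 97.8, 102.0, 99.1, 100.7,
              98.9, 101.1, 99.8, 100.4, 98.5, 101.6, 99.3]

center = statistics.mean(recoveries)
sigma = statistics.stdev(recoveries)   # sample standard deviation
ucl = center + 3 * sigma               # upper control limit
lcl = center - 3 * sigma               # lower control limit

# Any point outside the 3-sigma limits signals the method has changed.
out_of_control = [x for x in recoveries if not lcl <= x <= ucl]
print(f"center = {center:.2f}%, limits = [{lcl:.2f}, {ucl:.2f}]")
print("points outside limits:", out_of_control or "none")
```

Once the limits are on a chart, "is the method still working?" becomes a question you can answer by glancing at the last point, which is a lot more motivating than "this is what auditors look for."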