by lmh » Mon May 23, 2011 1:25 pm
I did enjoy it! He concentrates on only one facet of the problem. True, half of our profession is "too professional" in that they're so tied up in chains of QC that they're no longer allowed to think and innovate. But many others lack training, or have fallen into the belief that their 20-year-old training is sufficient for the rest of their professional lives, and are skating on analytical thin ice. Just as regulated labs have out-of-date methods that they can't afford to update and revalidate, academic labs have out-of-date (and frequently never-validated) methods that have been handed down from person to person, photocopied until they're fuzzy, reapplied to different scenarios, and never thought about with any degree of care.
Personally, I'm one of those who totally lack any proper professional training, and it's very frustrating. Objectively, I'm scared by the dubious lack of QC in some parts of academia, but I'm also scared by the lack of ability to think, reason and explain found even in the better parts of professional analysis. I try to understand how things should be done, but when I can't, and ask experts who do have the right training, their answers often boil down to "it's like that because it's like that and always was, and if you don't know that, it's because you haven't got enough experience to understand".
I feel that neither camp wants me (or anyone) to think. The first camp don't want to rock the boat. They don't want to hear that their method may not be valid. They don't understand what validation entails, they don't understand concepts such as selectivity, and they're happy with their method because "it looks right" and because the first time it was used, it confirmed their professor's belief about what the samples should contain. They don't really want an understanding, because understanding analysis is just another job in their already busy lives. It's a thing for people who worry too much...
The second camp don't want me to think either, and are equally challenged by it. They've forgotten that understanding and knowledge are two different things. If I ask "why is it wrong to smooth data?" (as I have!), they see no problem with answers such as "because it's not allowed by the ABC regulations" (knowledge) or "because it is a manipulation of the raw data" (undefined ethics). What I want to know is why this particular operation runs the risk of giving me answers that are no longer robust/accurate/precise/selective or that otherwise fall short of what an answer ought to be. I don't care tuppence about the ABC regulations, since I'm not working in that environment, but presumably whoever wrote them had some reasoning behind their writings. What was it? I don't want a philosophical debate about ethics. I want to know about statistics and metrology: the numerical basis of how we go from a fluctuating electronic signal to a belief about how much of something was in a glass tube.
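To make the smoothing question concrete, here's the kind of answer I'm fishing for, as a minimal sketch in Python (assuming numpy is available; the peak positions, widths and noise level are invented purely for illustration). A wide moving-average smooth lowers apparent peak heights and fills in the valley between two partly resolved peaks, so height-based quantitation drifts and selectivity quietly degrades:

```python
# Minimal sketch: how over-smoothing can bias quantitation.
# Two partially resolved Gaussian peaks plus noise; widening a
# boxcar (moving-average) filter shrinks the apparent peak height
# and fills in the valley between the peaks.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 10, 1000)          # "retention time" axis

def gauss(t, height, centre, sigma):
    return height * np.exp(-0.5 * ((t - centre) / sigma) ** 2)

clean = gauss(t, 100.0, 4.5, 0.15) + gauss(t, 80.0, 5.0, 0.15)
noisy = clean + rng.normal(0.0, 2.0, t.size)

def boxcar(y, width):
    # simple moving average; width in data points
    kernel = np.ones(width) / width
    return np.convolve(y, kernel, mode="same")

for width in (5, 25, 75):
    smoothed = boxcar(noisy, width)
    valley = smoothed[(t > 4.6) & (t < 4.9)].min()
    print(f"width={width:3d}  max height={smoothed.max():6.1f}  valley={valley:6.1f}")

# As the filter widens, the reported peak height falls and the
# valley rises: a height-based calibration goes non-robust, and
# two analytes start to look like one (lost selectivity), even
# though the trace "looks cleaner".
```

That, to me, is a real answer: not "the regulations forbid it", but a numerical mechanism by which the operation damages the result.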
Another example that has me in giggles is the question "I want to measure something over a range of 1-100 nmol; how many calibration points do I need?" (see the sketch after the list of answers below).
I'm sure you can guess the range of answers:
PhD student in academia: "What's a calibration curve?"
Post-doc in academia: "Oh, do you need a calibration curve? I just use peak areas to compare"
Professor: "Use however many Jim B uses, I met him at a conference last week and he knows everything there is to know about analysing things. He runs this super facility in X (I know the facility is super because he told me so over a beer, so it must be, mustn't it?)"
RA in academia: "I use the protocol from Y, which I photocopied off some bloke who gave it to my boss at a conference 5 years ago. They got it from a student who used to work at Z, I think, but I'm not really sure. They make up a standard from this thing bought from A, but we can't get it any more, so we buy a different salt from B (but still follow the same recipe despite the fact that it's now got 27 water molecules associated with its crystal structure, and its molecular weight is now twice as big). We use 5 points, but often one of them doesn't work, so we miss out the bad point. Oh, you want to know what I mean by 'doesn't work'? I mean, one point sometimes doesn't sit on the line, so we miss it out."
Expert A: "5" "Why?" "Because that's the right number."
Expert B: "Use however many points it says in the protocol. If you change it, your method is no longer valid."
Expert C: "If you're using Chemstation you could download this macro"
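For completeness, here's the sort of reply I'd have loved: a minimal sketch in Python (the concentrations and responses are invented, and this is nobody's validated protocol) of why the number of points matters. Fit the line, then look at the residuals: with 5 points and 2 fitted parameters there are only 3 residual degrees of freedom, so a single point sitting a couple of residual SDs off the line is very weak evidence of anything, and quietly "missing out the bad point" is just guessing:

```python
# Minimal sketch: a straight-line calibration over 1-100 nmol.
# The point is to inspect the residuals rather than silently
# dropping the standard that "doesn't sit on the line".
import numpy as np

conc = np.array([1.0, 10.0, 25.0, 50.0, 100.0])   # nmol (invented)
resp = np.array([0.11, 1.02, 2.48, 5.10, 9.60])   # detector response (invented)

# ordinary least squares: resp = slope * conc + intercept
slope, intercept = np.polyfit(conc, resp, 1)
fitted = slope * conc + intercept
residuals = resp - fitted
s = residuals.std(ddof=2)   # residual SD; only n - 2 = 3 degrees of freedom

print(f"slope = {slope:.4f}, intercept = {intercept:+.4f}")
for c, r in zip(conc, residuals):
    print(f"  {c:6.1f} nmol   residual = {r:+.3f}   ({r / s:+.1f} residual SDs)")

# More points (or replicate standards) buy the degrees of freedom
# needed to tell curvature or a genuine outlier apart from plain
# noise; "5" is only the right answer if you can say what it buys you.
```

So the honest answer is "it depends on what you need to detect": enough points (and replicates) to estimate the line and still have residual degrees of freedom left over to test it.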
Sorry to rant!