Posted: Mon Nov 23, 2009 9:47 am
by danko
OK, nobody wants to answer Tom’s and Hans’s question, nor have my comments been addressed. So, I’ll try to illuminate the “missing link” …
Posted: Mon Nov 23, 2009 11:08 am
by lmh
Right, here goes. I might be completely wrong, but I think all the "inferring accuracy" bit means is that if you are 100% sure your standard is perfect, then all errors are either random variations from the measurement mean (i.e. precision), or systematic errors due to deviation from your model calibration curve (an accuracy issue, but an avoidable one if you could only improve your calibration curve model).
This is quite an important issue, because someone, somewhere, "inferred" the accuracy of the USP standard. Someone, somewhere, synthesised something and declared it to have a known composition on some basis that didn't involve comparison to another standard. In a sense, when we check our method against a bought standard, we're merely shunting the accuracy problem into someone else's lab, the lab who sold the standard.
The only other interpretation I've ever been able to come up with for that dubious paragraph in ICH Q2 is that a set of people sat round a table discussing how you check accuracy. Most agreed on standard accepted methods such as comparison to another proven method, or checking a proven standard. But some old codger stood in a corner saying he "Knew" how accurate his methods are, and that he could, by miraculous means, work out his accuracy. Being a very important person, no one dared tell him he was talking rubbish, and as a result we have a meaningless sentence in ICH Q2 for ever more... but perhaps I'm cynical.
Posted: Mon Nov 23, 2009 11:32 am
by danko
lmh,
If you are cynical, that makes two of us! But I think people who assume things on behalf of other (trusting) people (or should I say people telling “funny” …
Posted: Mon Nov 23, 2009 8:15 pm
by Bruce Hamilton
I think I'll go and find a dead horse to flog.
If you find the c) sentence meaningless, fine. But I don't.
For all testing, Q2 specifies " Well-characterized reference materials, with documented purity, should be used throughout the validation study. The degree of purity necessary depends on the intended use."
From my perspective, acceptable results from performing the defined Specificity, Precision and Linearity tests can be used to infer Accuracy.
If you do not have acceptable results from all of those tests, you can't infer accuracy, and have to determine it using a) and/or b).
Specificity is a critical component, because it is defined as:-
" Specificity is the ability to assess unequivocally the analyte in the presence of components which may be expected to be present. Typically these might include impurities, degradants, matrix, etc.
Lack of specificity of an individual analytical procedure may be compensated by other supporting analytical procedure(s).
This definition has the following implications:
Identification: to ensure the identity of an analyte.
Purity Tests: to ensure that all the analytical procedures performed allow an accurate statement of the content of impurities of an analyte, i.e. related substances test, heavy metals, residual solvents content, etc.
Assay (content or potency): to provide an exact result which allows an accurate statement on the content or potency of the analyte in a sample. "
Note the use of "accurate".
Why the provision is in there, I've no idea, as I wasn't present when the guidance was drafted. I can think of several economic reasons why some firms might want to curtail unnecessary testing.
Found one! Now the flogging can begin, and I won't pollute this thread again.
Bruce Hamilton
Posted: Mon Nov 23, 2009 9:13 pm
by danko
For all testing, Q2 specifies " Well-characterized reference materials, with documented purity, should be used throughout the validation study. The degree of purity necessary depends on the intended use."
That sounds more like the prerequisite for “method b)” …
Posted: Tue Nov 24, 2009 6:09 am
by mohan_2008
This is a standard procedure for validation/accuracy determination.
1. Linearity: Prepare five standard concentrations at 50-150% of your nominal. Determine the linearity.
2. Accuracy: Prepare accuracy solutions at 80, 100 and 120% of nominal from a stock solution in matrix. (Remember: these are not standard solutions, but sample solutions, i.e. with the matrix.)
Evaluate the accuracy of these sample solutions using the linearity curve.
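To make the arithmetic concrete, here is a minimal Python sketch of that linearity-plus-recovery calculation. All peak areas and the nominal concentration (1.0 mg/mL) are invented for illustration only.
[code]
# Minimal sketch of the procedure above: fit the linearity curve, then
# back-calculate the spiked (matrix) accuracy samples and report % recovery.
# All concentrations and peak areas below are hypothetical.
import numpy as np

# 1. Linearity: five standard levels, 50-150 % of a nominal 1.0 mg/mL
std_conc = np.array([0.50, 0.75, 1.00, 1.25, 1.50])          # mg/mL
std_area = np.array([5020., 7490., 10010., 12530., 14980.])  # detector response

slope, intercept = np.polyfit(std_conc, std_area, 1)          # least-squares line
r = np.corrcoef(std_conc, std_area)[0, 1]
print(f"slope = {slope:.1f}, intercept = {intercept:.1f}, r^2 = {r**2:.4f}")

# 2. Accuracy: spiked sample (matrix) solutions at 80, 100 and 120 % of nominal,
#    evaluated against the linearity curve and reported as % recovery.
spike_conc = np.array([0.80, 1.00, 1.20])                     # known spike levels, mg/mL
sample_area = np.array([8060., 9950., 12070.])                # measured in matrix

found = (sample_area - intercept) / slope
recovery = 100.0 * found / spike_conc
for level, rec in zip((80, 100, 120), recovery):
    print(f"{level:>3d} % level: recovery = {rec:.1f} %")
[/code]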
Posted: Tue Nov 24, 2009 7:52 am
by Peter Apps
Quick thought experiment:
You run precision, linearity and specificity using a fictional 100 % pure specimen of your target analyte as a standard, either in clean solutions ("standards") or spiked into matrix. Then you repeat the exercise with 10% of an impurity that is invisible to chromatographic detectors (e.g. water) in your otherwise pure specimen, in other words with a 90% purity for your standard. Precision will remain the same, linearity will remain the same (although the line will be shifted downwards by 10%) and specificity will remain the same. BUT, there will be a 10% bias, in other words a 10% inaccuracy, unless you know, and correct for, the purity of the standard. In this scenario you cannot infer accuracy from linearity, precision and specificity alone; you also have to know the purity of whatever you use to calibrate, AND whatever you use to spike the matrix. As Bruce points out: "well-characterized reference materials, with documented purity etc."
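A rough numerical version of that thought experiment (all numbers invented, with a 1 % RSD standing in for precision) makes the point: the calibration is still beautifully linear and precise, yet every result carries a fixed bias.
[code]
# Rough numerical sketch of the thought experiment above: the standard is
# assumed 100 % pure but actually contains 10 % of a detector-invisible
# impurity. All response values are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)
true_response = 10000.0   # detector counts per mg/mL of the pure analyte
purity = 0.90             # actual purity of the "100 %" standard
noise_rsd = 0.01          # 1 % relative standard deviation (precision)

# Calibration standards nominally 0.5-1.5 mg/mL, but only 90 % is analyte
nominal = np.array([0.5, 0.75, 1.0, 1.25, 1.5])
areas = true_response * purity * nominal * (1 + noise_rsd * rng.standard_normal(5))
slope, intercept = np.polyfit(nominal, areas, 1)
print(f"calibration slope = {slope:.0f} (10 % low, but the fit is still excellent)")

# A sample that genuinely contains 1.00 mg/mL, measured six times:
# precision looks fine, but every back-calculated result is high by a
# factor of about 1/0.9, i.e. roughly +11 %.
sample_areas = true_response * 1.00 * (1 + noise_rsd * rng.standard_normal(6))
results = (sample_areas - intercept) / slope
print(f"mean = {results.mean():.3f} mg/mL, "
      f"RSD = {100 * results.std(ddof=1) / results.mean():.1f} %")
[/code]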
A certified reference material (whether it be a high purity single compound or a sample in matrix) will have been characterised by a battery of independent techniques such as Karl Fischer for moisture, elemental analysis, insolubles, wet chemistry, NMR etc as well as chromatography. Many of these are primary methods - they do not need a standard of the substance being determined. The primary method most familiar to chemists is weighing; the "standard" is a chunk of metal.
Chromatography does not measure anything, it just separates things so that they can be measured by detectors. Depending on the detector, its response per unit quantity is not the same for all compounds; therefore it has to be calibrated using the target analyte, hence the need to have the purity of reference materials determined by methods that do not use chromatography detectors (or at least not the same detector as would be used in the method being validated). A very few detectors (e.g. FID) are uniform enough in their responses to different compounds that their response to known quantities of one compound can be used to calibrate their responses to another compound. A drawback with this approach is that such detectors are non-selective, so for trace analytes in complex samples it may not be possible to separate the signal for the analyte from interferences. As far as I know the only analytical method that exploits both selectivity and uniformity of response is isotope dilution mass spectrometry.
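To put illustrative numbers on the response-factor point (both response factors below are invented): quantifying compound B against a calibration built with compound A only works if their responses happen to match.
[code]
# Invented response factors: quantifying compound B against a calibration
# built with compound A gives a biased result on a non-uniform detector.
rf_A = 12000.0   # area counts per mg/mL for compound A (used to calibrate)
rf_B = 9000.0    # true response per mg/mL for compound B on the same detector

area_B = 9000.0  # measured area for a sample that really contains 1.00 mg/mL of B

print(area_B / rf_A)   # 0.75 mg/mL -> 25 % low, using A's calibration
print(area_B / rf_B)   # 1.00 mg/mL -> correct, calibrated with B itself
[/code]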
Peter
Posted: Tue Nov 24, 2009 9:05 am
by danko
Peter Apps’ thought experiment etc. illustrates the issue just fine.
So, wouldn’t we agree on the fact that “method c)” …
Posted: Tue Nov 24, 2009 9:44 am
by HW Mueller
But what is "accuracy solutions from a stock solution matrix"?
Posted: Tue Nov 24, 2009 12:18 pm
by lmh
It's nice to see we're all saying much the same thing (I think??). Bruce, I didn't mean that I think "c" is meaningless. I just think that the only useful meaning I can find for it is the same as yours: given a 100% guaranteed absolutely pure standard (without any water etc.), or at least a perfectly-characterised impure standard, then I know my method is accurate if I can also show it is precise and specific. That's right, isn't it?
To my mind, if you have any doubts whatsoever about your calibration standards (e.g. they're from an imperfect source, or you don't believe yourself to be 100% fault-free when preparing dilutions - and no one should believe themselves perfect), then to me it makes strong sense to use a completely different source of standard, prepared separately, for the accuracy check. I wish, though, that I always worked as thoroughly as I ought.
Posted: Tue Nov 24, 2009 6:07 pm
by HW Mueller
Way too many "ifs" here.
I still would appreciate knowing what companies support doing the calibration and check with the same standard.
Just saw in another forum that several people experienced trouble when they were switched to a cheap (generic) medication.
Posted: Tue Nov 24, 2009 6:48 pm
by grzesiek
I am amazed at how some people here misuse terminology that is clearly defined; I find it hilarious at times, and at times it is sad.
And so, with the above in mind, this is my last post in this thread, as there is no way we can get to any conclusion if people don't read and try to understand; sorry, but that won't work.
I will just finish answering one question posted here
"So, wouldn’t we agree on the fact that “method c)â€
Posted: Thu Nov 26, 2009 10:34 am
by HW Mueller
The people who introduced and maintain Ringversuche (in English apparently "round robin" trials) are all idiots?
Anybody who thinks that QC regulations have solved accuracy problems should look at the results of round robin tests.