
Method Validation, Accuracy, and LOQ

Discussions about HPLC, CE, TLC, SFC, and other "liquid phase" separation techniques.

I realize that ICH Q2 is a general guideline, not a complete set of requirements, but I find it strange that there is no mention of evaluating accuracy at the limit of quantitation. How can one quantify a solution at the LOQ if accuracy has not been evaluated at that level? Or is this covered by the caveat that "accuracy may be inferred once precision, linearity and specificity have been established" under the Assay Section 4.1?

A little background: I work with inhalation products and therefore our chromatography methods often evaluate dilute solutions from Aerodynamic Particle Size Distribution (APSD) testing.
Or is this covered by the caveat that "accuracy may be inferred once precision, linearity and specificity have been established" under the Assay Section 4.1?
That's it.
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+1 (925) 297-5374
Logically, any method of determining an LOQ actually implies the accuracy that will be achieved at that level. For example, if you use the method of taking the standard deviation of the calibration curve and dividing by the slope, the multiplying factor (10 for LOQ, 3.3 for LOD) implies a relative standard deviation on the actual measurement at the LOQ (I think it means the RSD of a measurement at the LOQ is about 10%, doesn't it? My maths is bad, so please, someone, correct me and delete this post if I'm wrong...)
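(To make the arithmetic concrete, here's a rough Python sketch of the calibration-curve approach; the concentrations and responses are invented for illustration, not real data.)

```python
import numpy as np

conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])        # hypothetical ug/mL
resp = np.array([10.2, 19.8, 40.5, 79.9, 160.3])  # hypothetical peak areas

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
# residual SD of the regression, with n-2 degrees of freedom
sigma = np.sqrt(np.sum(residuals**2) / (len(conc) - 2))

lod = 3.3 * sigma / slope   # ICH Q2 calibration-curve approach
loq = 10.0 * sigma / slope
print(f"LOD ~ {lod:.3f} ug/mL, LOQ ~ {loq:.3f} ug/mL")
# The factor of 10 puts the signal at the LOQ at ~10x sigma, so the
# implied RSD of a single measurement at that level is roughly 10%.
```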
any method of determining a LOQ actually implies the accuracy
Actually, it implies the *precision*, not the accuracy.

Accuracy (essentially, getting the correct result) is a much more difficult thing to determine than precision (essentially, getting the same result every time). Analyzing a known standard sample, or comparing against a known-accurate alternate method, would be the usual approaches, but both get "iffy" when you're at the LLOQ, which is why ICH left the loophole.
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+1 (925) 297-5374
What you actually want to verify is the working range of the method, and ICH Q2 states that you need to confirm that the method "provides an acceptable degree of linearity, accuracy and precision when applied to samples containing amounts of analyte within or at the extremes of the specified range of the analytical procedure."
So you actually need to evaluate accuracy at the lower end of your working range, which is not necessarily the LOQ. I doubt it makes much sense to check accuracy at the LOQ itself, since the LOQ is NOT an inherent feature of the method but of the whole analytical system: next week the lamp in your detector will be older (more noise) and the column will have seen more samples (probably more peak broadening), so next week's LOQ will be different from this week's.
Consider a method for evaluating the purity of a pharmaceutical product. Let's say you've established an LOQ of 0.02% (relative to 100% API). Fine. But usually you'll have a reporting threshold, e.g. 0.1%, which means you're simply not interested in anything below 0.1%. Your LOQ being below the reporting threshold is a basic prerequisite for the method to be valid. But would you evaluate accuracy at 0.02%? No. You should evaluate it at 0.1%, because that's the lower end of your working range.
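(For illustration, a trivial sketch of what an accuracy check at the reporting threshold might look like as a spike-recovery calculation; the numbers and the 80-120% acceptance window are hypothetical, not from ICH.)

```python
nominal_pct = 0.10    # impurity spiked at 0.1% of the API level
measured_pct = 0.097  # hypothetical result from the method

recovery = 100.0 * measured_pct / nominal_pct
print(f"Recovery at reporting threshold: {recovery:.1f}%")
# A lab's own validation protocol might accept e.g. 80-120% recovery
# at this level (an assumed criterion, not ICH text).
```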
Oops, I misread in a hurry and got side-tracked into precision (sorry!).
Actually, while I agree with HPLCaddict about accuracy, I'd go further and say I don't think that even the lowest end of your range is necessarily the best place to assess it, depending on the source of the inaccuracy.
For example, suppose you've done the typical thing and fitted a straight line through a set of calibration points that have a definite tendency to curve off (as they often do, though no one can admit it, given the sacred importance assigned to linearity). Your fitted line then starts above the data points, passes through them, runs underneath them in the middle region, passes through them again as the data points curve down, and continues upwards with ever-decreasing accuracy. Over the range of the calibration curve, the places with the biggest inaccuracies induced by the bad fit are the middle and the two ends. If you're really unlucky, the region you actually use sits in the middle, between the two points where the line passes through the data: your typical measurements then have the worst possible accuracy, while an assessment at the low end of the working range shows the best possible accuracy (oops!).
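(If you want to see this bias pattern for yourself, here's a quick Python sketch with an invented, mildly saturating response; the model and numbers are purely illustrative.)

```python
import numpy as np

conc = np.array([1, 2, 4, 8, 16, 32], dtype=float)
resp = 100 * conc - 0.4 * conc**2   # made-up response that curves off

slope, intercept = np.polyfit(conc, resp, 1)
fit = slope * conc + intercept
for c, r, y in zip(conc, resp, fit):
    sign = "line above data" if y > r else "line below data"
    print(f"conc {c:5.1f}: residual {r - y:+8.2f} ({sign})")
# Typical output: the line sits above the data at both ends and below it
# in the middle -- it crosses the data twice, and the bias is worst at
# the extremes and mid-range, exactly as described.
```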
If you've done the evil thing (which I do more often than I care to admit) of forcing your curve through the origin when something weird actually happened at low concentration, then yes, your accuracy will be dreadful at low concentrations, so the lowest working concentration is a very sensible place to assess it.
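(Again purely illustrative: a sketch of how a zero-forced fit goes wrong when there's a constant offset at low concentration; the offset and numbers are invented.)

```python
import numpy as np

conc = np.array([0.5, 1, 2, 5, 10, 20], dtype=float)
resp = 50 * conc + 8   # constant offset, e.g. a co-eluting blank artefact

slope_free, icept_free = np.polyfit(conc, resp, 1)      # free intercept
slope_zero = np.sum(conc * resp) / np.sum(conc**2)      # forced through zero

for c in (0.5, 10.0):
    signal = 50 * c + 8
    back_free = (signal - icept_free) / slope_free  # back-calculated conc
    back_zero = signal / slope_zero
    print(f"at conc {c:4.1f}: free fit -> {back_free:.2f}, "
          f"forced zero -> {back_zero:.2f}")
# The forced-zero result is ~30% off at 0.5 but nearly exact at 10:
# accuracy is dreadful at low concentration, as described above.
```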
If the biggest sources of inaccuracy are the quality of the available standard and its dilutions, then you can assess anywhere.