
Unknown samples calculation

Discussions about GC-MS, LC-MS, LC-FTIR, and other "coupled" analytical techniques.

Hi everyone,
I want to discuss this case with everyone who has experience with this subject.
In BE studies, assume I have two batches, Batch 1 and Batch 2, and each batch consists of one calibration set plus unknown samples. If the Batch 1 calibration is not accepted because it does not meet the acceptance criteria, is it possible to calculate the unknown samples in Batch 1 using the Batch 2 calibration (which did meet the acceptance criteria), given that both batches were injected within a short period of time, without exceeding the autosampler stability limits?

If one calibration is out and another is in within a short period of time, you have no more standing to say that the runs are good and may be calibrated with the second calibration than you have to say that the second calibration was in error and both runs are bad! (At least not without further information.)

Do you have any indication of something changing in the instrument or sample, sample prep, or sample handling during this time?

So, the next question is: Does your SOP or validation study for the method give you any guidance? (Is this a validated method with an SOP?)

The failure of the first calibration is telling you something. There is a source of variability that you may not want in your data. It suggests a difference between batch results - or a trend through them. Depending on what you do with the data (and I don't know what a BE study is - at least by that term), small variances may not make a difference. But extra variance will obscure small differences between samples or treatments, resulting in a failure to detect real differences.
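
As a rough illustration of that last point (purely hypothetical numbers, not from any real study), here is a small simulation: the same true difference between two treatments is easy to detect with a tight method and easy to miss with a sloppy one.

# Hypothetical illustration: extra analytical variance hides a real
# difference between two treatments.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_diff = 5.0          # real difference between treatments (arbitrary units)
n = 12                   # measurements per treatment

for analytical_sd in (2.0, 10.0):      # "tight" vs "sloppy" method
    a = rng.normal(100.0, analytical_sd, n)
    b = rng.normal(100.0 + true_diff, analytical_sd, n)
    t_stat, p_value = stats.ttest_ind(a, b)
    print(f"analytical SD = {analytical_sd:4.1f}  ->  p = {p_value:.3f}")

# With the small SD the 5-unit difference should show up as significant;
# with the large SD it is likely to be lost in the noise (p > 0.05).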

Thank you, Don_Hilton.
BE means bioequivalence studies.
I am talking about a calibration that fails by a small margin. When I use the second calibration to calculate the unknown samples in Batch 1, I also have quality controls run among the unknown samples, and those QCs would still have to pass when calculated against the Batch 2 calibration.
My method is validated, but there is no clear guidance or SOP covering this situation, and I am asking this question to help me write a new, clear SOP.

Don was trying to be polite and thorough. The short answer to your original question is "NO".

As I interpret your post, the method as written amounts to:
1. Run calibrators
2. Run samples

If the calibration fails system suitability for whatever reason, data from the following runs are invalid. You *can* re-run the Batch 1 samples after the Batch 2 calibration, but you cannot retrospectively use a different calibration curve. As Don pointed out, you have no way of knowing at what point the condition(s) that caused the initial failure were remedied.
-- Tom Jupille
LC Resources / Separation Science Associates
tjupille@lcresources.com
+ 1 (925) 297-5374

BTW I prefer to run some kind of check before and after a batch. If either fails, the batch has a problem. You can send out tens of thousands of results that are OK and nobody notices much. Screw up once and...

"I talk about calibration failing in small variance" - maybe the limits are too strict? and how did you come up with the limits? what are they? maybe give us some example of your data?

You check system suitability before the batch, not after it.

And another thing: the method is validated, so there had to be an SOP for the method first, right? The document that says how the method is performed and how the results are calculated? You validated the method - sample preparation, analysis, and results calculation - so you must calculate your results the way the method was validated, i.e. the way it was written in your method SOP.

In a recent thread, someone gave a helpful link to the US DoD quality systems manual for environmental labs. The thing you're suggesting is actually listed specifically as an example of an "improper, unethical or illegal action".

Put it this way: your calibration curve in batch 1 is effectively a set of known samples, no different to a QC sample. If it fails, it's exactly equivalent to a QC sample failing, and it undermines certainty that the samples are correctly measured.

Of course the most likely explanation is that someone prepared a standard incorrectly, but the safest approach is to repeat the entire batch again with a new calibration. I sympathise, because all my clients want their results yesterday, but it's back to the old question: the correct answer or the instant answer?
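
To make the "calibrators are just known samples" point concrete, here is a minimal sketch of the kind of acceptance check being discussed, using made-up numbers and the commonly used bioanalytical limits (back-calculated calibrators within +/-15% of nominal, +/-20% at the LLOQ, with at least 75% passing); substitute whatever your validated SOP actually specifies.

# Hypothetical calibration-acceptance sketch, not a prescribed procedure.
import numpy as np

nominal  = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0])    # ng/mL (made up)
response = np.array([0.10, 0.20, 0.52, 0.98, 5.10, 10.30]) # instrument response

slope, intercept = np.polyfit(nominal, response, 1)    # simple unweighted linear fit
back_calc = (response - intercept) / slope             # back-calculated concentrations
dev_pct = 100.0 * (back_calc - nominal) / nominal

limits = np.where(nominal == nominal.min(), 20.0, 15.0)   # wider limit at the LLOQ
passing = np.abs(dev_pct) <= limits

for conc, dev, ok in zip(nominal, dev_pct, passing):
    print(f"{conc:6.1f} ng/mL   deviation {dev:+6.1f}%   {'pass' if ok else 'FAIL'}")

print("calibration accepted" if passing.mean() >= 0.75 else "calibration rejected")

If the calibration is rejected by a check like this, the unknowns and QCs run against it are rejected with it; quietly recalculating them against a later, passing calibration is exactly what the thread is warning against.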

Thanks a lot...
I think the answer is now more than clear :)

Yeah, repetition is the key to memorizing, I guess :)

Noting the comment above that limits may be too strict: Warning. I've watched as manufacturing required that QC limits be widened because they kept failing QC tests. And the QC lab agreed. This may have contributed to the fact that the particular company is no longer in business.

Limits should be set according to good statistical practice so you will know how much variance to expect from the analytical method. If there is too much variance in the analytical method, small variations between samples will go unnoticed. The sloppier the analytical method, the larger the difference between samples, or between a sample and a target value, will have to be before the difference can be demonstrated.

I hate having to work through the seemingly double-negative logic of the null hypothesis, and I never remember which is a type I and which is a type II error. But I appreciate that when it is all set up correctly, I am able to demonstrate real differences - or the lack thereof - at known tolerances and with known confidence.

If you can get a statistician involved in method validation - wonderful. If not, there are some good statistics books that will help you along the way.
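
For what it's worth, one common, statistically grounded starting point is to derive limits from replicate QC results collected during validation. The sketch below uses made-up numbers and classic Shewhart-style mean +/- 3*SD control limits; it is not a prescription for any particular SOP.

# Hypothetical example of setting control limits from validation data.
import numpy as np

# Replicate results for one QC level across validation runs (ng/mL, made up)
qc = np.array([49.8, 50.6, 48.9, 51.2, 50.1, 49.5, 50.8, 49.9,
               51.0, 50.3, 49.2, 50.5, 49.7, 50.9, 50.0])

mean = qc.mean()
sd = qc.std(ddof=1)                      # sample standard deviation
lower, upper = mean - 3 * sd, mean + 3 * sd

print(f"mean = {mean:.2f} ng/mL, SD = {sd:.2f} ng/mL")
print(f"control limits: {lower:.2f} to {upper:.2f} ng/mL")

# A routine QC result outside these limits signals more variability than the
# validated method showed, which is the point about knowing how much variance
# to expect before deciding whether a batch is acceptable.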

I'm afraid the limits could have been pulled out of a hat, as there seem to be no SOPs there covering validation or results quantitation. I'm assuming that from the question posted; with no data there is, as always, not much to comment on (or too much :) ) or to help with, really.

Of course I'm not advising this: "well, we can't meet the limits, so let's change them" :)

[a rant!]

Don_Hilton, thank you, thank you so much for confessing to getting confused between type I and type II errors.

Personally, I think the statisticians who came up with that ridiculous, pointless terminology should have been keel-hauled under a muddy rowing boat while generations of students cheered. As a subject, statistics already lacks popular appeal, and to add an "I'm cleverer than you" orientated taxonomy of errors is utterly unhelpful.

Let's banish these wretched terms and replace them with "false positive" and "false negative", which require no more effort to type, and which are considerably more meaningful than "Type I error". This also stops the inclination to make up definitions for "Type III" errors, of which many have been proposed, with total inconsistency.

Yeah, the statisticians should follow the lead of chemists in nomenclature... (Where did "amyl" - as in amyl acetate - come from, anyhow?)

I am afraid that every group has its own language and dance. You just learn the steps as best you can, smile, and go on.