Empower Custom Fields - Average of Two Samples

Hello everyone. I have a (hopefully) quick question about creating a custom field in Empower to take an average of two sample results.

In my lab, we prepare duplicate preparations of each sample and then average the results. The challenge that I'm facing is that I want to include our detection/quantitation thresholds. I understand how to do this for individual samples, displaying ND or <LOQ for certain impurities, but that transforms results into strings and makes it impossible to do an average in the Empower Report.

I think I'll need to use intersample calculations to achieve this, but to be honest I'm a little unsure of how they work. I'll lay out my goal for each scenario and my best guess at how to get there, and hopefully someone can help me figure out how to accomplish it in Empower.

  • If at least one result is ND and neither result is >ND, print ND
  • If at least one result is <LOQ and neither result is >LOQ, print <LOQ
  • If one result is >LOQ and the other is not, print the result that is >LOQ
  • If both results are >LOQ, print the average of the results

Points 3 and 4 could possibly be combined into one if I could do something like the following:
AvgResult = (Rep1Result*Boolean1 + Rep2Result*Boolean2)/(Boolean1 + Boolean2)
Where the Boolean is testing whether the result is >0.05.
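(For anyone following along, here is a small Python sketch of the rules above and of that boolean-weighted average, just to pin down the intended logic. It is not Empower syntax; the LOD/LOQ values of 0.03/0.05, the boundary handling, and the function names are only illustrative.)

```python
# Illustrative Python sketch of the reporting rules above; NOT Empower
# custom-field syntax. Thresholds are hypothetical (LOD = 0.03, LOQ = 0.05).
LOD = 0.03
LOQ = 0.05

def report_pair(rep1: float, rep2: float) -> str:
    reps = (rep1, rep2)
    quantifiable = [r for r in reps if r >= LOQ]   # ">= vs >" at the boundary is a lab decision
    if len(quantifiable) == 2:                     # both >LOQ -> average them
        return f"{sum(quantifiable) / 2:.3f}"
    if len(quantifiable) == 1:                     # only one >LOQ -> report that one
        return f"{quantifiable[0]:.3f}"
    if any(r >= LOD for r in reps):                # detected but below quantitation
        return "<LOQ"
    return "ND"                                    # neither replicate detected

# The boolean-weighted average idea, for cases where at least one rep is >LOQ:
def weighted_average(rep1: float, rep2: float) -> float:
    b1, b2 = int(rep1 >= LOQ), int(rep2 >= LOQ)
    return (rep1 * b1 + rep2 * b2) / (b1 + b2)     # divides by zero if neither rep reaches LOQ

print(report_pair(0.04, 0.06))       # -> "0.060"
print(weighted_average(0.04, 0.06))  # -> 0.06
```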

So the thing I'm having trouble with is figuring out how to make this sort of comparison. Like I said, I think I have to utilize intersample calculations somehow.

Any thoughts?
Label each pair of samples the same in the Label Reference of the sample set, e.g. samples 1 and 2 of one batch as A and A, samples 1 and 2 of a second batch as B and B, and so on. Then write a custom field of the Peak, Real type with the formula (SAME.%..AVE(Amount)+0) and tick the Missing Peak box. Call this CF Average_Amount (I presume you are averaging Amount? If not, substitute Area, Height, etc.). This will average the Amount for peaks in samples with the same label and give a 0 for any missing or invalid peaks.

Next, write another Peak, Real, Calculated custom field to transform CConst1 into your LOQ: call the custom field LOQ and just set CConst1 as the formula. Any value you set in CConst1 of the processing method against your components will then act as the LOQ value; make sure this value is populated for all components.

Then write a Peak, Enumerated custom field called Final_Reported_Value with Use As set to Field and tick the Missing Peak box. The formula will be ENUM(EQ(Average_Amount,0),LT(Average_Amount,LOQ),GTE(Average_Amount,LOQ)). Set the translations in the translation definition as follows: 0 = N/A, 1 = Below LOQ, 2 = Average_Amount (fc). Make sure to put a space and then (fc) after the name of the custom field to specify that it should return the value of Average_Amount; fc stands for Field Code.

Finally, add the function Summarize Custom Fields as the last line in the sample set where you run these samples. You need to process sample sets to get result sets to activate this custom field. When you do, you will get a result for each averaged peak over the two samples: either 0, which translates to N/A; a value below the LOQ, which translates to Below LOQ; or a value greater than or equal to the LOQ, which returns the actual value.
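In rough Python terms (again just a sketch, not Empower syntax; the helper names are made up), the chain of custom fields described above behaves like this:

```python
# Sketch of the suggested CF chain: Average_Amount = SAME.%..AVE(Amount)+0 with
# missing peaks as 0, LOQ taken from CConst1, and the ENUM translations
# 0 -> "N/A", 1 -> "Below LOQ", 2 -> the Average_Amount value itself.

def average_amount(amounts):
    """Average the amounts across same-labelled samples; missing peaks count as 0."""
    cleaned = [a if a is not None else 0.0 for a in amounts]
    return sum(cleaned) / len(cleaned)

def final_reported_value(avg, loq):
    if avg == 0:            # every replicate missing/invalid -> translation 0
        return "N/A"
    if avg < loq:           # averaged amount below the LOQ    -> translation 1
        return "Below LOQ"
    return f"{avg:.3f}"     # at or above the LOQ              -> translation 2 (the value)

loq = 0.05   # hypothetical CConst1 value
print(final_reported_value(average_amount([0.04, 0.06]), loq))   # -> "0.050"
```

Note that a plain average like this is exactly what the next reply pushes back on: 0.04 and 0.06 average to 0.05, rather than reporting the quantifiable 0.06.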
Thank you for the help! I think that your answer is definitely on the right track, but the extra wrinkle in my problem is that the samples won't be averaged if one of the results is in the category above the other one.

For instance, if one peak is 0.04 and the other is 0.06, and our LOQ is 0.05, then the result for SPL-1a is <0.05, SPL-1b is 0.06, and the final result is 0.06. So I somehow need to filter out certain replicates prior to taking the average.

You need to process sample sets to get result sets to activate this custom field.


I have a follow up question about this in particular. It's something I've read elsewhere when I've been trying to figure this out and there's one thing I'm not sure about. If I batch process my results, then go in and adjust some of the integration for a sample here or there (add a drop down, fix the baseline, delete a blank peak, etc), how will that affect the average? Does it just take the latest result into consideration, or just what came from processing the whole sample set?

Thanks!
Hi again, I see what you are asking. I'm not sure how that could be done, because you would need to know in advance what the result for each sample would be when writing the custom field, which isn't possible. The only thing I can suggest is to write an overall CF which accounts for each scenario and then average THAT CF. So write a CF (call it Final_Result): Peak, Real, Calculated, Result Set Only, and tick the All Or Nothing box and the Missing Peak box. The formula is: EQ(REPLACE(Amount,0),0)*-1*60000+LT(REPLACE(Amount,0),LOQ)*-1*60008+GTE(REPLACE(Amount,0),LOQ)*Amount.

Then create a second CF: Peak, Real, Calculated, tick the Missing Peak box, Result Set Only again (or else you will get results from every set of data in the whole project), and the formula is SAME.%..AVE(Final_Result). That should give you only the result you want; for example, if the result was N/A in sample 1 but 0.07 in sample 2, it should return 0.07. I haven't tested this, though, so I'm going on presumption; Empower might choke if it sees a text string and a number to average and simply give up and return a blank, so you will have to test it. As for your other question, if you fiddle around manually with results, you need to save those results and then process the result set again using Quantitate Only for the custom field to output the most recent results. Using Result Set Only keeps all your results in one result set.
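A quick Python sketch of that sentinel-value approach (illustrative only; the -60000/-60008 values mirror the codes in the formula above, and the function names are made up):

```python
# Final_Result encodes each replicate as a single number:
#   -60000 if the peak is missing (REPLACE(Amount,0) gives 0)
#   -60008 if the amount is below the LOQ
#   the amount itself if it is at or above the LOQ
# The second CF then averages Final_Result across same-labelled samples.

def final_result(amount, loq):
    amt = amount if amount is not None else 0.0    # REPLACE(Amount, 0)
    if amt == 0:
        return -60000.0
    if amt < loq:
        return -60008.0
    return amt

def averaged_final_result(amounts, loq):
    encoded = [final_result(a, loq) for a in amounts]
    return sum(encoded) / len(encoded)             # SAME.%..AVE(Final_Result)

# This reproduces the problem reported in the next post:
print(averaged_final_result([0.06, 0.04], loq=0.05))   # -> -30003.97
```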
Thanks again for thinking about this.

So I did the first formula for the reportable peak, and then I just did the average in the report method to see what would happen instead of doing the intersample calculation (I figured the two would probably amount to the same thing). It ended up returning -30003.97 (the average of 0.06 and -60008).

So here's my current plan. I'd appreciate it if you could look it over and see if it's doable.

I have an enum for the peaks, called Type_of_peak, that puts them into categories: ND, <LOQ, and >LOQ. It would return Position. (The way I think that works is that it would show the text in a summary table, but any later custom field that references Type_of_peak would see its position: 0, 1, or 2. Is that right?)

Then, if I could find a way to compare the duplicate reps, I could use an enum for my intersample calculation. Something like:

ENUM(EQ(SAME.%.(Type_of_peak), 0), EQ(SAME.%.(Type_of_peak), 1), EQ(SAME.%.(Type_of_peak), 2))

The returned values of those would be ND, <LOQ, SAME.%..AVE(Result).

Or at least something approaching that, once the kinks get worked out.

To be honest, I'm not entirely sure how the system would call for Type_of_peak in that instance, especially if the two samples fall in different categories.

My other thought is that I could label the results A and B, then insert a line between every pair of results in the sample set and use "Summarize Custom Fields Incrementally." That would allow me to compare the results separately in my intersample calculation.

The problem with that solution is that I'm not sure how summarize custom fields incrementally works. If it works the way I hope, it would compare the results in between lines, then erase that value and compare results between the next set of lines. My worry is that it would actually continuously add the results together, making more of a rolling average instead.
R_Peterson_12 wrote:
Thanks again for thinking about this.

So I did the first formula for the reportable peak, and then I just did the average in the report method to see what would happen instead of doing the intersample calculation (I figured the two would probably amount to the same thing). It ended up returning -30003.97 (the average of 0.06 and -60008).



I suspected this might happen: when there are two data types output from the same CF, for example text and numbers, Empower can't seem to select only the numbers to add up and just ignore the text. It's the same reason you can't add up values from an ENUM CF set to Use As Field if the non-field translation is a text string. Quite annoying from a reporting point of view.

R_Peterson_12 wrote:
So here's my current plan. I'd appreciate it if you could look it over and see if it's doable.

I have an enum for the peaks, called Type_of_peak, that puts them into categories: ND, <LOQ, and >LOQ. It would return Position. (The way I think that works is that it would show the text in a summary table, but any later custom field that references Type_of_peak would see its position: 0, 1, or 2. Is that right?)


How is this CF written, though, in terms of the formula, and what is Use As set to? If Use As is Text then yes, this will be displayed in a report, but what criteria are you using to ask Empower to identify peaks? Is it along the lines of ENUM(EQ(Peak Type,"Missing"),LT(Amount,LOQ),GTE(Amount, LOQ))?

R_Peterson_12 wrote:
Then, if I could find a way to compare the duplicate reps, I could use an enum for my intersample calculation. Something like:

ENUM(EQ(SAME.%.(Type_of_peak), 0), EQ(SAME.%.(Type_of_peak), 1), EQ(SAME.%.(Type_of_peak), 2))

The returned values of those would be ND, <LOQ, SAME.%..AVE(Result).


This won't work the way you are thinking. Empower would scan the CF from left to right for a matching condition and knock each situation off the list. The first translation you are asking for is that, for all samples where the label is the same, every single value of Type_of_peak is 0; if not, move on. The second translation is only true when every single value of Type_of_peak is 1 in samples labelled the same, and 2 for the third condition. This makes no sense because such conditions will not exist, and even if they did, it wouldn't return the output you want for labelling your peaks.


So although a peak enum CF to identify where exactly a peak lies (N/A, less than LOQ, Missing, etc.) would work OK, it won't work in subsequent calculations. Is there no leeway in the method, so that you don't have to report only the higher value when one result falls in a category above the other? Can you post an Excel spreadsheet of these results to give a better feel for exactly what you are looking for?

R_Peterson_12 wrote:
My other thought is that I could label the results A and B, then insert a line between every pair of results in the sample set and use "Summarize Custom Fields Incrementally." That would allow me to compare the results separately in my intersample calculation.

The problem with that solution is that I'm not sure how summarize custom fields incrementally works. If it works the way I hope, it would compare the results in between lines, then erase that value and compare results between the next set of lines. My worry is that it would actually continuously add the results together, making more of a rolling average instead.


Summarize Custom Fields Incrementally will calculate the results for summary custom fields for the lines ABOVE the line where the function is inserted but BELOW the previous Summarize Custom Fields line. So if you ran a standard 10 times throughout the run and wanted Empower to capture the %RSD at each injection, you would stick an SCFI function just below the first standard, again just below the second, the third, and so on; each stage will give a different result, and the final result will give the same %RSD as if you had just used Summarize Custom Fields on its own.
Unfortunately this won't work with the labels: if you have more than one instance of A or B and you call for a summary such as A.%..AVE(Area), Empower will ignore all the SCFI lines and proceed to average the values for ALL samples in the sample set labelled A, not just the ones between SCFI functions!
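To make the pitfall concrete, here is a toy Python illustration (purely hypothetical data and helper names, not Empower internals) of the difference between summarising per block between summarize lines, which is what was hoped for, and pooling every same-labelled row in the sample set, which is what the label-based summary reportedly does:

```python
# Hoped-for behaviour: average only the rows between summarize lines.
# Actual label-based behaviour: average every row in the sample set with that label.
from statistics import mean

rows = [
    ("A", 0.04), ("A", 0.06),
    ("SUMMARIZE", None),          # stand-in for a Summarize Custom Fields Incrementally line
    ("A", 0.10), ("A", 0.12),
    ("SUMMARIZE", None),
]

# Per-block averaging (what the incremental lines were hoped to give):
block, per_block = [], []
for label, amount in rows:
    if label == "SUMMARIZE":
        per_block.append(mean(block))
        block = []
    else:
        block.append(amount)
print(per_block)                                      # -> [0.05, 0.11]

# Label-based averaging (what A.%..AVE(Amount) reportedly does):
print(mean(a for label, a in rows if label == "A"))   # -> 0.08
```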
I think if you send on a spreadsheet of your final results report it would help matters.
Again, thank you for taking the time to address my question. I feel like I've been banging my head against a wall trying to get the software to do something it's just not meant to do.

How is this CF written, though, in terms of the formula, and what is Use As set to? If Use As is Text then yes, this will be displayed in a report, but what criteria are you using to ask Empower to identify peaks? Is it along the lines of ENUM(EQ(Peak Type,"Missing"),LT(Amount,LOQ),GTE(Amount, LOQ))?


You're basically right. It would be a peak enum with Use As set to Position.

ENUM(LT(Result, LOD), RANGE(Result, 0.03, LOQ), GT(Result, LOQ))
0 - ND
1 - <LOQ
2 - >LOQ
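(In plain terms, with Use As set to Position that ENUM would hand later calculations a 0, 1, or 2 per peak, roughly like this Python sketch; the thresholds come from the table below, and whether RANGE treats its bounds as inclusive would need to be confirmed.)

```python
# Sketch of the Type_of_peak ENUM with "Use As" = Position (illustrative only).
def type_of_peak(result: float, lod: float = 0.03, loq: float = 0.05) -> int:
    if result < lod:
        return 0            # ND
    if result <= loq:       # RANGE(Result, 0.03, LOQ), assumed inclusive of its bounds
        return 1            # <LOQ
    return 2                # >LOQ

print([type_of_peak(r) for r in (0.01, 0.04, 0.07)])   # -> [0, 1, 2]
```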

I can't figure out how to attach a spreadsheet, so in the meantime here's a (probably poorly formatted) table of what I'm intending to do.

Actual Results
          IMP-1   IMP-2   IMP-3   IMP-4   IMP-5
SPL-1a    0.01    0.01    0.03    0.03    0.05
SPL-1b    0.01    0.03    0.03    0.06    0.07
Average   0.01    0.02    0.03    0.045   0.06

Reportable Results
          IMP-1   IMP-2   IMP-3   IMP-4   IMP-5
SPL-1a    ND      ND      <0.05   <0.05   0.05
SPL-1b    ND      <0.05   <0.05   0.06    0.07
Average   ND      <0.05   <0.05   0.06    0.06

Category   Level   Reported As
LOD        0.03    ND
LOQ        0.05    <0.05
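For what it's worth, the Average row of the Reportable Results table follows mechanically from the rules in the first post. A self-contained Python check (treating results at exactly the LOQ as quantifiable, which is what the table implies; the helper name is hypothetical):

```python
# Reproduce the "Average" row of the Reportable Results table (illustrative only).
LOD, LOQ = 0.03, 0.05

def reportable_average(rep1, rep2):
    quantifiable = [r for r in (rep1, rep2) if r >= LOQ]
    if len(quantifiable) == 2:
        return f"{sum(quantifiable) / 2:.2f}"    # both quantifiable -> average
    if len(quantifiable) == 1:
        return f"{quantifiable[0]:.2f}"          # only one quantifiable -> report it
    if rep1 >= LOD or rep2 >= LOD:
        return "<0.05"                           # detected but below the LOQ
    return "ND"

table = {"IMP-1": (0.01, 0.01), "IMP-2": (0.01, 0.03), "IMP-3": (0.03, 0.03),
         "IMP-4": (0.03, 0.06), "IMP-5": (0.05, 0.07)}
print({imp: reportable_average(a, b) for imp, (a, b) in table.items()})
# -> {'IMP-1': 'ND', 'IMP-2': '<0.05', 'IMP-3': '<0.05', 'IMP-4': '0.06', 'IMP-5': '0.06'}
```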
Sometimes Empower simply won't do what you are asking, and a formula that is completely valid in theory just won't return a result. That's why it's best to test custom fields in a test project first and build up from there, making sure each stage works as expected. Always remember that you may not be doing anything wrong; the software does have natural limitations.

Certainly for the first part of your results table, you can use the CF SAME.%..AVE(Amount) and include it as a field in your report table. For the reported results, can I suggest a conditional Boolean CF. First set up the Average_Amount CF as above, then set up LOQ as CConst1 and LOD as CConst2 in your processing method. Transform both into Peak, Real, Calculated custom fields by having just CConst1/CConst2 as the formula, and name them LOQ and LOD respectively. Next create another Peak, Real, Calculated custom field, call it Final_Reported_Result, and make sure the All Or Nothing box is ticked.

The formula for Final_Reported_Result is GT(Average_Amount,LOD)|GT(Average_Amount,LOQ)*Average_Amount+LT(Average_Amount,LOD)*-1*60001+LT(Average_Amount,LOQ)*-1*60008


This formula would first calculate Average_Amount as per SAME.%..AVE(Amount) and then test that value against the conditions. If the value is greater than the values assigned for LOD and LOQ via CConst2 and CConst1, then the value itself is returned: for example, if the average of the two preparations is 0.06 and 0.06 is greater than the LOQ, then 0.06 is returned. If the value of Average_Amount is less than CConst2 (LOD), then the text string N.D. (or -60001) is returned, and if it is less than CConst1 (LOQ) but at or above the LOD, then the text string Below Peak Quantitation Limit (or -60008) is returned. I know these strings are slightly different from what you have in the spreadsheet, but it is the same thing in essence. Test this first to see if it works. If not, you may have to live with typing that Final Reported Result into a spreadsheet manually, but you can still certainly use Average_Amount to get you halfway there. Good luck!
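Reading that explanation, the intended decision logic is something like the following Python sketch (the intent only, not the literal Empower expression; as noted above, the actual CF needs to be verified in a test project first, and the helper name is made up):

```python
# Intended behaviour of Final_Reported_Result (illustrative only).
# LOQ comes from CConst1 and LOD from CConst2; -60001 and -60008 stand in for
# Empower's N.D. and Below Quantitation Limit codes.

def final_reported_result(average_amount: float, lod: float, loq: float) -> float:
    if average_amount < lod:       # below the detection limit -> N.D.
        return -60001.0
    if average_amount < loq:       # detected but below the quantitation limit
        return -60008.0
    return average_amount          # quantifiable -> report the averaged amount

print(final_reported_result(0.06, lod=0.03, loq=0.05))   # -> 0.06
print(final_reported_result(0.02, lod=0.03, loq=0.05))   # -> -60001.0
```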