So many published protocols have unpublished problems?

Off-topic conversations and chit-chat.

24 posts Page 2 of 2
MSCHemist wrote:
Well, if I'm fortunate enough to have several papers on a given method, my way of developing a method is usually to look for consensus, pick out the bits and pieces of each that I like, and combine them with ideas of my own. However, it would be a revelation if I ever downloaded a method that was well designed and actually worked as published.


I do the same thing when developing methods to analyze nutraceuticals. I had a client who wanted to analyze quercetin in onion skins; we had to take several methods, combine them, and then use trial and error to finally come up with a reproducible, accurate method.

I tried to follow another method to analyze for glucoraphanin in broccoli seeds; the paper and method I was given came from a very prestigious medical university that I won't name. My results were always 2x the results they got for the same tests. After tearing the method apart, we found that while they stated how they prepared their samples, their calculation was off: they had used a different sample size in the example calculations, and then carried that example's sample-size-versus-prep-volume correction factor into the final results instead of using the actual sample size and prep volume in the final calculation.
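A minimal sketch of that correction-factor mistake. All the numbers here are invented (the post doesn't give the actual concentrations or volumes); the point is simply that hard-coding the worked example's volume/mass factor doubles the result whenever the real sample is twice the example's size.

```python
# Hypothetical numbers illustrating the glucoraphanin story above.

extract_ug_per_ml = 2.0   # measured concentration in the extract (assumed)
prep_volume_ml = 50.0     # final prep volume (assumed)
example_mass_g = 0.5      # sample mass used in the method's worked example
actual_mass_g = 1.0       # sample mass actually weighed out

# Correct: use the actual sample mass in the final calculation
right_ug_per_g = extract_ug_per_ml * prep_volume_ml / actual_mass_g

# Wrong: carry over the fixed factor from the worked example
example_factor = prep_volume_ml / example_mass_g
wrong_ug_per_g = extract_ug_per_ml * example_factor

print(right_ug_per_g, wrong_ug_per_g)  # the wrong answer is exactly 2x high
```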

Another client sends us their methods to use; one was for chloride by IC. The method calls for a calibration curve at 0.1 ppm, 0.5 ppm and 1.0 ppm, then calls for a check standard at 5.0 ppm :shock: On top of that, the prep instructions for the 5.0 ppm check standard actually calculate out to a final concentration of 10 ppm. Yet when you talk to their chemist, he says they always pass when following the method as written!! We followed it, and sure enough the check standard passes, because the curve is not linear and it quantitates the 10 ppm standard as 5 ppm :lol: Makes me wonder where some of these chemists got their degrees.
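A toy illustration of why that can happen. The saturating detector-response function below is invented purely for the sketch; with a curve calibrated only over 0.1-1.0 ppm, a standard actually prepared at 10 ppm reads back at a fraction of its true value.

```python
# Hypothetical chloride-by-IC scenario from the story above.

def response(conc_ppm):
    """Made-up saturating detector response (arbitrary area units)."""
    return conc_ppm / (1.0 + 0.1 * conc_ppm)

# Calibration levels from the method as written
cal_concs = [0.1, 0.5, 1.0]
cal_areas = [response(c) for c in cal_concs]

# Ordinary least-squares line through the calibration points
n = len(cal_concs)
mx = sum(cal_concs) / n
my = sum(cal_areas) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(cal_concs, cal_areas))
         / sum((x - mx) ** 2 for x in cal_concs))
intercept = my - slope * mx

# The "5.0 ppm" check standard, actually prepared at 10 ppm
check_area = response(10.0)
readback = (check_area - intercept) / slope
print(f"10 ppm standard reads back as {readback:.1f} ppm")
```

With this particular (assumed) response curve the readback lands near 5.5 ppm, which a generous check-standard tolerance would happily pass.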
The past is there to guide us into the future, not to dwell in.
James_Ball, while on glucosinolates: there are whole families of papers that say they were analysed according to A who did it according to B who did it according to C... etc., and when you get back right to the original, it's an application note published by the organisation that employed some of the early workers, and that is no longer available even from the library of that organisation. The reality is that the method is passed from father to son, and mother to daughter, on a photocopied bit of paper. Grrrr!

I love your story of the correction factor. Years ago, I was trying to show a post-doc how to work out the concentration of their favourite analyte from its UV absorbance. She wasn't strong on maths, so I went through the whole of her experiment, drawing out little tubes and showing how the sample was diluted 2-fold here, 3-fold there, 2.73-fold somewhere else, and we included the extinction coefficient of 6.22 per mM, the fact she'd extracted 2.58 g of plant material, etc. etc., and by complete miraculous chance, the final result in mmoles per g fresh weight was exactly 10 times the value displayed on the UV/vis spectrometer.

Post-doc said "Oh good, I don't need to worry about all this then" and drew a big line through all my calculations. Wherever she is in the world now, she is probably still multiplying the displayed value by 10. Never mind if the amount of material is different, and the dilutions different. Ten is the answer....
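The bookkeeping in that story can be sketched as below. The extinction coefficient (6.22 per mM per cm), dilution factors and sample mass come from the post; the absorbance and extract volume are invented for the sketch. The point is that every factor in the chain has to be carried every time, because the overall multiplier changes with sample mass and dilutions.

```python
# Dilution-chain arithmetic for a UV assay, hypothetical numbers.

absorbance = 0.311            # displayed A (example value, assumed)
epsilon = 6.22                # mM^-1 cm^-1, 1 cm path
dilutions = [2.0, 3.0, 2.73]  # fold-dilutions between extract and cuvette
extract_ml = 10.0             # assumed extraction volume, mL
sample_g = 2.58               # g fresh weight extracted

# Beer-Lambert: concentration in the cuvette, in mM
cuvette_mM = absorbance / epsilon

# Undo each dilution to get back to the original extract
extract_mM = cuvette_mM
for d in dilutions:
    extract_mM *= d

# mM * mL = umol; then per gram of tissue
umol_per_g = extract_mM * extract_ml / sample_g
print(f"{umol_per_g:.3f} umol per g fresh weight")
```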

My own confession is that I've got some published pyrophosphate measurements that are out by a factor of 2 because I forgot a stoichiometry in a coupled enzyme assay. But I'm not admitting where. It was a very long time ago, pyrophosphate is hell to measure in plant material anyway (so I'd be quite surprised if the answers were correct to within a factor of 2 even without my error), and the work was only looking at differences on treatment, so the fundamental message is still correct...
There are many quirks in how editors and referees look at methods. In my previous life as a field biologist I needed to know how many cats there were on an island. Since they all looked different I just went out and counted them. This was nowhere near clever enough for the referees when I tried to publish - they wanted a population estimate by one of the fancy methods that proper biologists use (and which under rigorous scrutiny usually give standard deviations at least three times the mean, which is why you see standard errors in biology papers). I was stuck, because there was no way to undo just counting cats, or to go back and estimate cats properly. A desperate trawl of the literature (in the days long long before internet searches) turned up an obscure paper by one Boguslavsky, who had worked out a way of deciding how many was enough when counting things. So I changed "exhaustive counts of recognisable individuals" to "a modification of Boguslavsky's method", and everyone was happy. :roll:

Peter
Peter Apps
lmh wrote:
so the fundamental message is still correct...


I have used this excuse quite a few times to avoid correcting my own mistakes.
When I was relatively new in this business, a PhD had developed a GC procedure (packed column back then, mid-1970s) for assaying a component in bar soap. Some internal standard was added, then acetone added and then the whole shebang blended, then filtered. Then she took like a 5 ml aliquot of that and diluted that to 100 ml with acetone for GC injection. The problem? The first peak was on the acetone solvent tail too much, and prevented decent peak area measurement with the equipment of the day.

My solution then: skip that dilution and just inject the filtrate as is (with the RF standard made correspondingly). The first peak would then be significantly larger relative to the acetone solvent, so one could inject a smaller volume of solution and get a smaller acetone peak; the analyte and internal standard peaks became tons easier to measure, being larger compared to the now-smaller acetone peak.

These days, if we had such an assay, we'd use an autosampler, an external standard, capillary columns, and automated calculations. Times have changed.
CPG, that's a classic case of breaking out of a development spiral. How often does it happen that signal:noise is too low, so you take a bigger sample to make the signal higher, but that needs more solvent and a larger SPE cartridge, so the noise goes up with the signal, and you take a bigger sample, and so on and so on. At whatever generous sample size finally squeezes out a decent signal:noise ratio, the method gets written up and published.

Peter
Peter Apps
Consumer Products Guy wrote:
The first peak was on the acetone solvent tail too much, and prevented decent peak area measurement with the equipment of the day.



Well, at least even that beat cut-and-weigh "integration". :)

The young ones in the lab here look at me strangely if I make a manual injection when testing new setups. Or when I can get passing results making standards while measuring 1 ul with a 50 ul syringe. My high school physics teacher taught us interpolation when making measurements; not many people I meet now ever had to use it on a slide rule for a high school class.
The past is there to guide us into the future, not to dwell in.
Peter Apps wrote:
There are many quirks in how editors and referees look at methods. In my previous life as a field biologist I needed to know how many cats there were on an island. Since they all looked different I just went out and counted them. This was nowhere near clever enough for the referees when I tried to publish - they wanted a population estimate by one of the fancy methods that proper biologists use (and which under rigorous scrutiny usually give standard deviations at least three times the mean, which is why you see standard errors in biology papers). I was stuck, because there was no way to undo just counting cats, or to go back and estimate cats properly. A desperate trawl of the literature (in the days long long before internet searches) turned up an obscure paper by one Boguslavsky, who had worked out a way of deciding how many was enough when counting things. So I changed "exhaustive counts of recognisable individuals" to "a modification of Boguslavsky's method", and everyone was happy. :roll:

Peter


PeterApps, that is amazing: it's made me realise that an internal standard quantification is actually just a variant on a mark and recapture population measurement... the only difference is that we're releasing pre-marked "animals" into the population. Oooh!
lmh wrote:

PeterApps, that is amazing: it's made me realise that an internal standard quantification is actually just a variant on a mark and recapture population measurement... the only difference is that we're releasing pre-marked "animals" into the population. Oooh!


Hi lmh

Oooh indeed! :idea: I think you are certainly right for isotope-labelled standards, where the change in isotope ratio is used to calculate amount, but I'm not sure for ordinary internal standards, which are a different "species" of animal from the analyte. There are population estimates based on knowing how many ducks there are, and then measuring the ratio of ducks to geese to find out how many geese there are - that's more like an unlabelled internal standard, and the strengths, weaknesses and assumptions (e.g. that geese behave the same as ducks during sampling) are remarkably close.

It's a long time since I did any of this. For fun I tried a bottom-up determination of the uncertainty of a strip-count population estimate, but I gave up when the relative uncertainty was about five times the mean before I had included even half the steps. Strip counts are the field biology equivalent of the old-fashioned standard methods, in that the results can be compared as long as all the assumptions are met and they are done in exactly the same way every time - but there is no calibration, no QC samples (!), and they need quite a bit of faith.

Can I be a co-author?!!!
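The arithmetic of the two estimates really is the same ratio trick. A toy sketch, with all numbers invented: the Lincoln-Petersen mark-recapture estimate (N ≈ marked × caught / recaptured) next to single-point internal-standard quantification, where the relative response factor plays the role of the "geese behave like ducks" assumption.

```python
# Mark-recapture vs. internal-standard quantification: same ratio arithmetic.

def lincoln_petersen(marked, caught, recaptured):
    """Population estimate: N ~= M * C / R."""
    return marked * caught / recaptured

def internal_standard(amount_is, area_analyte, area_is, rrf=1.0):
    """Analyte amount from the analyte/IS peak-area ratio.

    rrf is the relative response factor; rrf = 1.0 assumes the
    analyte responds exactly like the internal standard."""
    return amount_is * (area_analyte / area_is) / rrf

# 100 marked animals released; 60 caught later, 12 of them marked
print(lincoln_petersen(100, 60, 12))

# 100 ng of IS spiked; analyte peak 60 area units, IS peak 12
print(internal_standard(100.0, 60.0, 12.0))
```

Deliberately run with the same numbers, the two functions return the same estimate, which is the whole analogy in one line of algebra.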

Peter
Peter Apps