The original is probably not a true LOQ, which is why I prefer the statistical calculation of the LOQ based on the standard deviation of the residuals. Factors that influence the LOQ are injection volume, detector, flow, interferences...
Right, so if I have my LOQ calculated as LOQ = blank response + 10 * std dev of a set of blank samples, then that value should not change when I dilute a sample, right?
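To make the arithmetic concrete, here is a minimal sketch of that blank-based LOQ calculation. The blank responses are made-up numbers purely for illustration, not real data:

```python
import statistics

# Hypothetical blank responses (ng/ul) from replicate blank injections
blanks = [0.12, 0.15, 0.11, 0.14, 0.13, 0.12, 0.16]

# LOQ = mean blank response + 10 * standard deviation of the blanks
loq = statistics.mean(blanks) + 10 * statistics.stdev(blanks)
print(f"LOQ = {loq:.3f} ng/ul")
```

Note this is the LOQ of the measurement system as operated; it says nothing yet about any dilution or prep applied to the sample.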
If you normally see 10ng/ul as your LOQ, then if no prep is involved the LOQ of the sample would be 10ng/ul. If you dilute the sample ten fold, you still see an LOQ of 10ng/ul on the instrument, but that is now 10ng per 0.1ul of original sample, or 100ng/ul. The LOQ of the instrument doesn't change when you dilute, but you no longer have the same sensitivity relative to the actual sample.
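That dilution correction is just a multiplication by the dilution factor. A quick sketch (function name and numbers are mine, for illustration):

```python
def sample_loq(instrument_loq, dilution_factor):
    """Effective LOQ relative to the original sample after dilution.

    The instrument still quantifies down to instrument_loq in whatever
    it is injected with, but each unit of injected solution contains
    only 1/dilution_factor of original sample.
    """
    return instrument_loq * dilution_factor

# Instrument LOQ of 10 ng/ul with a ten-fold dilution: each ul injected
# holds only 0.1 ul of original sample, so the sample LOQ is 100 ng/ul.
print(sample_loq(10, 10))
```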
A rule of thumb I have always used: if you inject the sample at full strength and get a result above the calibration range, then dilute to bring the result within the calibration curve, report the original LOQ, since you actually have data at a dilution factor of 1X. If instead you have to dilute because an interference prevents you from getting a positive confirmation of detection in the undiluted sample, then you have to report the higher LOQ, taking the dilution factor into account.
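That reporting rule can be sketched as a small decision function. The name and signature are hypothetical, just encoding the logic of the rule of thumb above:

```python
def reported_loq(instrument_loq, dilution_factor, diluted_for_interference):
    """Which LOQ to report under the rule of thumb.

    - Diluted only to bring an over-range result onto the curve:
      report the original (1X) LOQ, since the undiluted run gave
      usable data down to that level.
    - Diluted because an interference blocked confirmation at 1X:
      report the LOQ raised by the dilution factor.
    """
    if diluted_for_interference:
        return instrument_loq * dilution_factor
    return instrument_loq

# Over-range sample diluted 10x to fit the curve: still report 10.
print(reported_loq(10, 10, diluted_for_interference=False))
# Interference forced the 10x dilution: report 100.
print(reported_loq(10, 10, diluted_for_interference=True))
```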
LOQ for a sample has to take into account all of the preparation steps, so if you change anything it must be accounted for in the final result.