For any type of regulated quantitative work (such as environmental or clinical), overall stability is extremely important.
The equation for Total Analytical Error, which many clinical laboratories use in one form or another, is:
TE = %BIAS + (1.96 * %CV)
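As a rough illustration of that equation (the numbers and helper function below are made up, not from any particular method), a quick calculation shows how the precision term dominates once the CV is comparable to the bias:

```python
# Hypothetical sketch of the Total Analytical Error calculation above.
def total_error(pct_bias, pct_cv, z=1.96):
    """TE = |%BIAS| + z * %CV (z = 1.96 for ~95% coverage)."""
    return abs(pct_bias) + z * pct_cv

# Example numbers (invented): 3% bias, 4% CV
te = total_error(3.0, 4.0)
print(f"TE = {te:.1f}%")  # 3.0 + 1.96*4.0 = 10.8%
# The precision term (~7.8%) contributes far more than the bias (3.0%).
```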
In the environmental world, many of the EPA-regulated methods use the "Half Range for the Prediction Interval of Results" (HRPIR), where HRPIR = 3.963 * S and S is the standard deviation over 7 replicates.
In addition, US EPA methodology uses the standard deviation of a series of low-level spikes to directly estimate the method detection limit (MDL).
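A sketch of that MDL estimate, assuming the classic EPA-style calculation (MDL = Student's t at 99% confidence times the standard deviation of the spike replicates); the spike results here are invented:

```python
from statistics import stdev
from scipy.stats import t

# Seven hypothetical low-level spike results at the same spike concentration
spikes = [0.52, 0.48, 0.55, 0.50, 0.47, 0.53, 0.49]

n = len(spikes)
s = stdev(spikes)
t_99 = t.ppf(0.99, df=n - 1)  # one-sided 99% Student's t, ~3.143 for n = 7
mdl = t_99 * s
print(f"S = {s:.4f}, t = {t_99:.3f}, MDL = {mdl:.4f}")
```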
Please note that in each of these cases, the variation (%CV or %RSD) is multiplied by a constant significantly greater than one; therefore, precision is weighted more heavily than bias in the error calculations.
Regardless of where the error comes from (chromatography, autosampler, MS spray, MS electronics, etc.), overall precision is very important. In my experience, the source settings that yield the highest sensitivity are close to, but not always the same as, the optimal settings for RSD. Once we get the sensitivity near maximum using flow injection or single injections, we then tweak the final source settings (temperature, voltages, and gas flows, not just spray position) for the lowest RSD, with 7 injections at each setting, as in the sketch below.
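A toy sketch of that last step, assuming you already have peak areas for 7 injections at each candidate source setting (the setting names and areas below are made up):

```python
from statistics import mean, stdev

# Hypothetical peak areas from 7 replicate injections at each candidate setting
areas_by_setting = {
    "temp 350 C": [1.02e6, 1.05e6, 0.99e6, 1.03e6, 1.01e6, 1.04e6, 1.00e6],
    "temp 400 C": [1.10e6, 1.01e6, 1.15e6, 0.98e6, 1.12e6, 1.05e6, 0.96e6],
}

def pct_rsd(values):
    """%RSD = 100 * sample standard deviation / mean."""
    return 100.0 * stdev(values) / mean(values)

# Pick the setting with the lowest %RSD, not simply the highest response
best = min(areas_by_setting, key=lambda k: pct_rsd(areas_by_setting[k]))
for setting, areas in areas_by_setting.items():
    print(f"{setting}: mean = {mean(areas):.3e}, %RSD = {pct_rsd(areas):.2f}")
print("Lowest-RSD setting:", best)
```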
This may be excessive, but when a method has to run for years and QC failures are costly in terms of CAPA and batch re-runs, optimizing precision pays off.