I'm doing test method development for a new API in a fragranced consumer product, using a UV detector. I'm running gradient elution, so I set both a signal wavelength and a reference wavelength. Using the reference wavelength smooths out the sloping baseline, as expected. But it also makes a small matrix peak (maybe 2:1 S/N) that elutes at a retention time close to my API's disappear completely. My supervisor and QA think this amounts to cheating under cGMP; I think it's good science. They would rather I skip the reference wavelength and instead dilute the sample so that matrix peak stays under 3:1 S/N, but that leaves the baseline even more visibly angled once you zoom in far enough to see a peak that small.
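To make concrete what the reference wavelength is doing, here's a toy numerical sketch (all numbers, spectra, and peak shapes are made up for illustration, not from my actual method). The reported trace is just A(signal wavelength) minus A(reference wavelength), so anything that absorbs about equally at both wavelengths, like the gradient drift or a matrix component with a flat UV spectrum, subtracts out, while the API survives because it absorbs mostly at the signal wavelength:

```python
# Toy sketch of a DAD-style reference wavelength: reported signal = A(signal) - A(reference).
# Synthetic data with hypothetical spectra and peak shapes, just to show the arithmetic.
import numpy as np

t = np.linspace(0, 10, 2001)                         # run time, min

def gauss(center, height, sigma=0.04):
    """Gaussian stand-in for a chromatographic peak."""
    return height * np.exp(-0.5 * ((t - center) / sigma) ** 2)

drift = 0.002 * t                                    # gradient baseline drift (AU), same at both wavelengths
noise = np.random.default_rng(0).normal(0, 2e-5, t.size)

# API: strong at the signal wavelength, weak at the reference wavelength.
api_sig, api_ref = gauss(5.00, 0.050), gauss(5.00, 0.002)
# Matrix component: broad, flat spectrum, so nearly equal absorbance at both wavelengths.
mx_sig, mx_ref = gauss(5.30, 0.0006), gauss(5.30, 0.00055)

a_signal  = drift + api_sig + mx_sig + noise
a_ref     = drift + api_ref + mx_ref
corrected = a_signal - a_ref                         # what the data system plots with the reference wavelength on

def apex_height(trace, center, halfwin):
    """Peak apex height above a straight baseline drawn between the window edges."""
    m = (t > center - halfwin) & (t < center + halfwin)
    base = np.linspace(trace[m][0], trace[m][-1], m.sum())
    return (trace[m] - base).max()

for name, trace in [("signal wavelength only", a_signal), ("signal minus reference", corrected)]:
    print(f"{name}: API ~ {apex_height(trace, 5.00, 0.15):.4f} AU, "
          f"matrix ~ {apex_height(trace, 5.30, 0.10):.5f} AU")
```

With these made-up numbers the matrix peak drops by roughly an order of magnitude, down into the noise, once the subtraction is applied, while the API peak is essentially unchanged. That's exactly the behavior I'm seeing on the instrument.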
The question: is using the reference wavelength legitimate science, or is it cheating? I would of course run linearity, accuracy, precision, etc. with the reference wavelength in place. Our acceptance criterion is 95%–105% recovery on spiked samples, and the reference wavelength won't affect that at all. If I had used a reference wavelength from the first injection, I would never even have known that tiny matrix peak existed.
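For reference, the 3:1 figure they're pointing to is the usual pharmacopeial-style signal-to-noise estimate, S/N = 2H/h, where H is the peak height above baseline and h is the peak-to-peak noise in a blank region near the peak. A trivial sketch of that calculation (placeholder numbers, not measurements from my method):

```python
# EP/USP-style signal-to-noise check: S/N = 2H / h.
# H = peak height above the extrapolated baseline, h = peak-to-peak noise nearby.
def signal_to_noise(peak_height_au: float, pp_noise_au: float) -> float:
    return 2 * peak_height_au / pp_noise_au

# Hypothetical values: a 0.0006 AU matrix peak over 0.0005 AU peak-to-peak noise.
print(signal_to_noise(0.0006, 0.0005))   # ~2.4:1, i.e. below the 3:1 threshold
```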