You're right, it's a "can of worms".
My take on it from a scientific point of view (not regulatory) is that the best approach involves the following:
1. Specify what level of repeatability constitutes LOQ and LOD. This is a more contentious issue with LOQ than with LOD, but your values of 8% and 25% seem reasonable to me.
2. Run replicate injections at various levels and generate a "CV vs A" plot (usually log-log, plotting the CV as a function of the amount of analyte). Such a plot will usually be flat over most of the range (errors dominated by things like sample prep and injection) and turn upward near the bottom (errors dominated by integration and noise). Identify where the plot crosses your target CVs for LOQ and LOD; a rough sketch of this evaluation follows the list.
3. Run replicates at those levels to verify that the CVs meet the desired criteria.
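For step 2, here is a minimal sketch of how the "CV vs A" evaluation might be set up. The replicate areas, amount levels, and the 8%/25% targets below are made-up placeholders for illustration, not data from any real method:

```python
import numpy as np
import matplotlib.pyplot as plt

# Replicate peak areas at each spiked amount (hypothetical data for illustration).
# Keys are analyte amounts; values are replicate injection areas.
replicates = {
    0.5:   [118, 72, 155, 61, 140, 95],
    1.0:   [235, 210, 258, 190, 247, 222],
    5.0:   [1180, 1215, 1150, 1242, 1198, 1176],
    25.0:  [5930, 6010, 5885, 5960, 6045, 5902],
    100.0: [23800, 24100, 23650, 23990, 24220, 23710],
}

amounts = np.array(sorted(replicates))
cv = np.array([100 * np.std(replicates[a], ddof=1) / np.mean(replicates[a])
               for a in amounts])

# Interpolate (in log-log space) the amount where the CV curve crosses each target.
def crossing(target_cv):
    return 10 ** np.interp(np.log10(target_cv),
                           np.log10(cv[::-1]),        # CV falls as amount rises
                           np.log10(amounts[::-1]))

loq_est = crossing(8.0)    # 8% CV target for LOQ
lod_est = crossing(25.0)   # 25% CV target for LOD
print(f"Estimated LOQ ~ {loq_est:.2g}, LOD ~ {lod_est:.2g} (same units as amounts)")

# Log-log "CV vs A" plot: flat at high amounts, turning upward near the bottom.
plt.loglog(amounts, cv, "o-")
plt.axhline(8, ls="--", label="LOQ target (8% CV)")
plt.axhline(25, ls=":", label="LOD target (25% CV)")
plt.xlabel("Amount of analyte")
plt.ylabel("CV (%)")
plt.legend()
plt.show()
```

The interpolation just picks the first crossing point; in practice you would also eyeball the plot, since the upturn near the bottom is rarely perfectly smooth.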
From a regulatory perspective, you can use target S/N levels of 10 and 3 (and I agree that these should be viewed as estimates, not as sacred values!) or, better, 10 and 3.3 times the standard error of the y-intercept of the calibration line. In effect, this uses the standard error of the y-intercept as a surrogate for "noise", but a surrogate that is (hopefully) less biased.
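For the regression route, a rough sketch of the ICH Q2-style arithmetic, taking the standard error of the y-intercept as the "noise" surrogate and dividing by the slope to get back to amount units (the calibration numbers are invented for illustration):

```python
import numpy as np

# Low-level calibration data near the expected LOD/LOQ (hypothetical values).
conc = np.array([0.5, 1.0, 2.0, 4.0, 8.0])   # analyte amount
area = np.array([120, 231, 455, 902, 1805])  # peak area response

# Ordinary least-squares fit: area = slope * conc + intercept
n = len(conc)
slope, intercept = np.polyfit(conc, area, 1)
resid = area - (slope * conc + intercept)
s_yx = np.sqrt(np.sum(resid**2) / (n - 2))   # residual standard deviation

# Standard error of the y-intercept
se_intercept = s_yx * np.sqrt(np.sum(conc**2) / (n * np.sum((conc - conc.mean())**2)))

# ICH-style estimates: sigma taken as the SE of the intercept, divided by the slope
lod = 3.3 * se_intercept / slope
loq = 10.0 * se_intercept / slope
print(f"slope = {slope:.1f}, SE(intercept) = {se_intercept:.2f}")
print(f"LOD ~ {lod:.2f}, LOQ ~ {loq:.2f} (same units as conc)")
```

ICH also allows the residual standard deviation of the regression in place of the intercept's standard error; the arithmetic is otherwise the same.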
Finally, LOQ and LOD are not fixed characteristics of a method (although we tend to treat them that way). As you pointed out, they are dynamic and can vary tremendously from day to day, based on the noise characteristics of our system and how the data system deals with it. That's why verification by running standards at those levels is important.