As you might surmise from all of this, "LOQ" is a very large "can of worms". 
Most people will cite "signal/noise = 10", but this gets *very* ambiguous (one naive way of computing it is sketched after this list):
- where do you measure the noise? 
- over how long a time do you measure the noise?
- what if there are other peaks present?
- do you measure peak height from the midpoint of the baseline noise or from the top?
- how do you account for noise at the top of the peak?
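To make those choices concrete, here's a naive Python sketch of *one* way to compute S/N; the trace, the windows, and the peak-to-peak noise definition are all my own assumptions, not something any guideline prescribes:

```python
import numpy as np

def signal_to_noise(trace, peak_region, baseline_region):
    """One of many possible S/N definitions: peak height above the *mean*
    of a blank baseline window, divided by half the peak-to-peak noise in
    that window. Every choice here (which windows, mean vs. midpoint,
    peak-to-peak vs. RMS noise) is exactly the kind of decision the
    questions above are about."""
    baseline = trace[baseline_region]
    noise_pp = baseline.max() - baseline.min()            # peak-to-peak baseline noise
    height = trace[peak_region].max() - baseline.mean()   # height above mean baseline
    return height / (noise_pp / 2.0)

# Made-up trace: a Gaussian peak on a noisy baseline, purely for illustration
rng = np.random.default_rng(0)
t = np.arange(1000)
trace = 5.0 * np.exp(-0.5 * ((t - 600) / 10.0) ** 2) + rng.normal(0.0, 0.25, t.size)
print(f"S/N under these choices: {signal_to_noise(trace, slice(550, 650), slice(0, 400)):.1f}")
```

Change any one of those choices and you get a different number for the "same" peak.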
FDA / USP / ICH guidelines allow (or at least hint at) statistics-based estimation from the standard error of the calibration line. I personally think this is more justifiable, but it has its own issues, several of which have been touched on in this thread (e.g., ordinary least squares is really inappropriate for wide-range chromatographic data).
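If you want to see what that calibration-line estimate looks like in practice, here's a rough Python sketch of the usual LOQ = 10*sigma/slope convention, where sigma is the residual standard deviation of an ordinary least-squares fit (with all the OLS caveats just mentioned); the calibration data are invented purely for illustration:

```python
import numpy as np

def loq_from_calibration(conc, response, k=10.0):
    """Estimate LOQ as k * sigma / slope, where sigma is the residual
    standard deviation of an ordinary least-squares calibration line
    and slope is its fitted slope (k = 10 is the usual convention)."""
    conc = np.asarray(conc, dtype=float)
    response = np.asarray(response, dtype=float)
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    sigma = np.sqrt(np.sum(residuals**2) / (conc.size - 2))  # n - 2 dof for a 2-parameter fit
    return k * sigma / slope

# Invented calibration data, purely to show the arithmetic
conc = np.array([0.5, 1, 2, 5, 10, 20, 50])
resp = np.array([0.9, 2.1, 4.0, 10.3, 19.8, 40.5, 99.0])
print(f"estimated LOQ ~ {loq_from_calibration(conc, resp):.2f} concentration units")
```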
In fact, any specific value for limit-of-quantitation ignores a relatively large elephant in the room: how precisely do you need to quantitate? I can virtually guarantee that you will not get 1% RSD from a peak that is 10x the noise.
To my mind, the most reasonable approach (unfortunately, also the most work!) is to use a "CV vs. A" plot. CV is the coefficient of variation (%RSD), and A is the "amount" of analyte. What you do is run replicates at each level (minimum of 3 replicates; 5 is better) and plot log(CV) vs. log(A). As long as you are well within the linear range, the CV will be fairly constant, because the largest contribution to error is likely to be sample workup and injection, and that will be the same regardless of the mass of analyte. That means that the right-hand part of the plot will be fairly flat.
As the analyte level decreases, however, the error contribution from baseline noise increases, and the curve drifts upward. Define the required CV for your purposes, look at where the curve crosses that CV, drop down to find the corresponding amount of analyte, and "Bob's yer uncle".
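Here's a rough Python sketch of that CV vs. A procedure. The replicate data are simulated (a constant ~2% proportional error plus a fixed baseline-noise term, which is exactly what produces the flat right-hand side and the low-end upturn), and the 5% target CV is just a placeholder for whatever precision you actually need:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical amounts and simulated replicate peak areas (5 replicates per level)
amounts = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0])
rng = np.random.default_rng(1)
# ~2% proportional error (workup/injection) plus a fixed baseline-noise term:
replicates = (amounts[:, None] * (1.0 + rng.normal(0.0, 0.02, (amounts.size, 5)))
              + rng.normal(0.0, 0.05, (amounts.size, 5)))

cv = 100.0 * replicates.std(axis=1, ddof=1) / replicates.mean(axis=1)  # %RSD per level

required_cv = 5.0  # define the CV you actually need for your purposes
meets_spec = amounts[cv <= required_cv]
print(f"lowest level meeting {required_cv}% CV: {meets_spec.min() if meets_spec.size else 'none'}")

# log(CV) vs. log(A): flat on the right, drifting upward as the amount decreases
plt.loglog(amounts, cv, "o-")
plt.axhline(required_cv, linestyle="--")
plt.xlabel("amount of analyte, A")
plt.ylabel("CV (%RSD)")
plt.show()
```

In real work you would read the crossing point off the plot rather than just taking the lowest passing level, but the shape of the curve is the whole point.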