Inject a diluted sample so the resulting peak is barely detected.
Expand the chromatogram's scale so that the baseline noise (the peak-to-peak width of the baseline) can be measured along with the height of the peak. Divide the peak height by the noise and you have your answer.
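The measurement above can be sketched in a few lines of NumPy. This is just an illustration, not part of the original answer: the synthetic trace, the window indices, and the noise level are all made-up assumptions, and real software would pick the baseline and peak windows from the actual chromatogram.

```python
import numpy as np

# Hypothetical synthetic chromatogram: flat baseline noise plus one small peak.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
noise = 0.02 * rng.standard_normal(t.size)        # baseline noise (assumed level)
peak = 0.5 * np.exp(-((t - 5.0) ** 2) / 0.02)     # small Gaussian "eluting peak"
trace = noise + peak

# Measure the noise in a region known to contain no peak (first 300 points here,
# an illustrative choice), as the peak-to-peak width of the baseline.
baseline = trace[:300]
noise_pp = baseline.max() - baseline.min()

# Measure the peak height above the mean baseline.
peak_height = trace[300:].max() - baseline.mean()

snr = peak_height / noise_pp
print(f"S/N = {snr:.1f}")
```

With this toy data the ratio comes out comfortably above 2, so the peak would count as unambiguously detected by the criterion quoted below from the link.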
Ten seconds of searching on the internet gave this answer to your question.
(See how easy it is to find your own answer!)
http://www.chromatography-online.org/topics/signal-to-noise/ratio.html
Signal-to-Noise Ratio

In order to define the sensitivity of a detector it is necessary to define the minimum concentration (for a concentration-sensitive detector) that can unambiguously be identified when eluted from the column. Peaks are obscured when their height becomes very similar in magnitude to the noise of the detector system.

Detector noise is defined as any perturbation on the detector output that is not related to an eluting solute. The source of the noise can arise from the chromatographic system, the sensor, or the associated electronics. Thus, the signal from the peak must be sufficiently greater than the noise to unambiguously identify the peak. The ratio of the signal size to that of the noise is termed the signal-to-noise ratio.

The choice of the numerical value for the signal-to-noise ratio is somewhat arbitrary and has been borrowed from the signal-to-noise ratio used in electronics, i.e. 2. Thus, for the unambiguous identification of an eluted peak in chromatography the signal-to-noise ratio must be 2. That is, the peak height must be twice that of the noise.
best wishes,
Rod