The starting point is the allowed variation in your process or product (if your QA people don't understand *that*, they are in the wrong line of work!). That will determine the amount of uncertainty you can have in your measurement. There was a brief discussion early last year:
viewtopic.php?f=1&t=12109
Once you have the uncertainty, you have to define the required confidence level (usually 95% or 99%). Those, in turn, will set the allowed recovery range (which will depend on the number of replicates!).
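As a rough illustration of how the allowed recovery window depends on the number of replicates, here is a minimal sketch (assuming normally distributed measurement error; the 1% RSD is a hypothetical placeholder, not a recommendation):

[code]
# Sketch: acceptance window for mean recovery vs. number of replicates.
# Assumes normally distributed error; the RSD value is a placeholder.
from scipy import stats

def recovery_window(rsd_pct: float, confidence: float, n: int) -> tuple[float, float]:
    """Two-sided acceptance window (% recovery) for the mean of n replicates."""
    t_crit = stats.t.ppf(1 - (1 - confidence) / 2, df=n - 1)  # two-sided t value
    half_width = t_crit * rsd_pct / n ** 0.5                  # t * s / sqrt(n)
    return 100 - half_width, 100 + half_width

for n in (3, 6):
    lo, hi = recovery_window(rsd_pct=1.0, confidence=0.95, n=n)
    print(f"n = {n}: {lo:.1f} - {hi:.1f}% recovery")
[/code]

With a 1% RSD, the 95% window comes out to roughly 97.5 - 102.5% for triplicates but tightens to about 99 - 101% for six replicates, which is the point: the same method justifies different limits depending on how many replicates the procedure specifies.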
Validation is defined as "demonstrating that the method does what it purports to do".
One of the parameters to be determined is robustness: how gracefully the method tolerates variations in conditions. The robustness data are what allow you to set reasonable system suitability parameters.
If your robustness data show that you can attain acceptable accuracy and precision with Rs = 1.1, but not when Rs drops to 1.0, then you are justified in setting Rs = 1.1 as a system suitability parameter. That said, running that close to the edge would be hard to do in practice. Rs = 1.5 basically provides 1% overlap of two identical symmetrical peaks. The US FDA in their "Guidance" *suggest* that Rs should be > 2; that is *not* a requirement, merely a suggestion. In practice, you can get away with less in some cases, and in others (e.g., quantitating trace impurities, or when significant tailing is present) you will require substantially more.
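If you want to check the overlap figure yourself, it is a two-line calculation; a minimal sketch, assuming two identical Gaussian peaks of equal width and area (the exact percentage depends on how you define "overlap"):

[code]
# Sketch: overlap of two identical Gaussian peaks as a function of Rs.
# For equal-width peaks the center-to-center separation is 4*sigma*Rs,
# so the area of each peak crossing the valley midpoint is Q(2*Rs).
from scipy.stats import norm

def overlap_fraction(rs: float) -> float:
    """Fraction of one peak's area lying beyond the valley midpoint."""
    return norm.sf(2 * rs)  # survival function: area beyond 2*Rs sigmas

for rs in (1.0, 1.1, 1.5, 2.0):
    print(f"Rs = {rs}: {overlap_fraction(rs):.2%} of each peak crosses the valley")
[/code]

Real peaks tail, of course, so the idealized Gaussian numbers are optimistic; that is exactly why robustness data beat textbook values.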
A parallel comment applies to tailing. The FDA suggests that TF should be < 2 (they seem to have a fixation on the number "2"). As with Rs, you should set system suitability limits based on what your robustness data justify. If you can demonstrate acceptable quantitation with TF = 2.5 but not with TF = 2.8, then set TF <= 2.5 as a system suitability limit. In practice, quantitation starts to degrade when the tailing factor exceeds 1.5, and most chromatographers would consider 2.0 to be generous.
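For reference, the usual USP/EP tailing factor is measured at 5% of peak height: TF = W0.05 / (2f), where W0.05 is the full peak width at 5% height and f is the front half-width at that height. A minimal sketch of the calculation on a digitized peak (the peak shape here is hypothetical):

[code]
import numpy as np

def tailing_factor(t: np.ndarray, y: np.ndarray) -> float:
    """USP tailing factor: TF = W0.05 / (2*f), measured at 5% of peak height."""
    apex = int(np.argmax(y))
    target = 0.05 * y[apex]
    # Interpolate the 5%-height crossing on each side of the apex.
    front = np.interp(target, y[:apex + 1], t[:apex + 1])      # rising edge
    back = np.interp(target, y[apex:][::-1], t[apex:][::-1])   # falling edge
    return (back - front) / (2 * (t[apex] - front))

# Hypothetical tailing peak (a main Gaussian plus a delayed shoulder):
t = np.linspace(0, 10, 2001)
y = np.exp(-0.5 * ((t - 4) / 0.3) ** 2) + 0.4 * np.exp(-0.5 * ((t - 4.4) / 0.5) ** 2)
print(f"TF = {tailing_factor(t, y):.2f}")
[/code]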
In our Advanced HPLC Method Development course, I suggest that attendees add whatever other criteria can be useful in avoiding bogus results: things like baseline noise (for trace analysis), retention time (or better, k') windows, and so on.
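A check like that is easy to automate; a minimal sketch (every limit and criterion name below is a hypothetical placeholder, to be replaced by whatever your robustness data justify):

[code]
# Sketch: automated system suitability check. All limits are placeholders;
# derive the real ones from your robustness data, not from this example.
SUITABILITY_LIMITS = {
    "resolution_min": 1.5,        # Rs for the critical pair
    "tailing_max": 2.0,           # USP tailing factor
    "rsd_max_pct": 2.0,           # %RSD of replicate injections
    "k_prime_window": (2.0, 8.0), # retention (k') window for the analyte
    "noise_max": 0.05,            # baseline noise (detector units), trace work
}

def system_suitable(results: dict) -> list[str]:
    """Return the list of failed criteria (an empty list means suitable)."""
    failures = []
    if results["resolution"] < SUITABILITY_LIMITS["resolution_min"]:
        failures.append("resolution")
    if results["tailing"] > SUITABILITY_LIMITS["tailing_max"]:
        failures.append("tailing")
    if results["rsd_pct"] > SUITABILITY_LIMITS["rsd_max_pct"]:
        failures.append("precision")
    lo, hi = SUITABILITY_LIMITS["k_prime_window"]
    if not lo <= results["k_prime"] <= hi:
        failures.append("retention (k')")
    if results["noise"] > SUITABILITY_LIMITS["noise_max"]:
        failures.append("baseline noise")
    return failures

print(system_suitable({"resolution": 1.8, "tailing": 1.4,
                       "rsd_pct": 0.9, "k_prime": 3.2, "noise": 0.02}))
[/code]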
Whatever you do, it must be based on good science. If the SOP is *not* based on good science, it should be modified.
<2% RSD, Rs = 1.5, peak tailing < 2.0, recovery of 98 - 102%
are fairly typical criteria, but they may be gross overkill in some situations and grossly inadequate in others. The flaw is in assuming that "one size fits all".