In past jobs, I have:
1. Determined a Relative Response Factor (RRF) at the specification level of each identified impurity: RRF = (Impurity Response / Impurity Concentration) / (Main Analyte Response / Main Analyte Concentration), which gives a decimal number. This can be used in method calculations to convert an observed impurity response to a concentration, using only the main analyte standard's response and concentration, without an external impurity standard (see the sketch after this list).
2. If an Area Percent specification must be used, an RRF determination can still be made during validation, e.g., determine what impurity concentration produces the upper Area% specification. That concentration can then be carried through the various validation requirements (accuracy, linearity, etc.). For example, accuracy for an impurity is generally determined at three levels spanning 50% - 120% of the specification; since we know what concentration corresponds to 100% of the specification, we can formulate the accuracy solutions (see the second sketch below).
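For concreteness, here is a minimal Python sketch of the RRF arithmetic described in point 1. All numeric values and the function names (rrf, imp_conc_from_response) are illustrative assumptions, not values or names from any real method:

```python
def rrf(imp_response: float, imp_conc: float,
        main_response: float, main_conc: float) -> float:
    """RRF = (impurity response factor) / (main analyte response factor),
    where each response factor is response divided by concentration."""
    return (imp_response / imp_conc) / (main_response / main_conc)

def imp_conc_from_response(imp_response: float, rrf_value: float,
                           std_response: float, std_conc: float) -> float:
    """Convert an observed impurity response to a concentration using the
    main analyte standard's response/concentration and a validated RRF."""
    return (imp_response / rrf_value) * (std_conc / std_response)

# Assumed example numbers: impurity standard at 0.5 ug/mL gives area 1200;
# main analyte standard at 0.5 ug/mL gives area 1500.
f = rrf(1200, 0.5, 1500, 0.5)                      # -> 0.8
# A test sample shows an impurity area of 900; quantitate it against the
# same main analyte standard, with no impurity standard on the sequence:
print(imp_conc_from_response(900, f, 1500, 0.5))   # -> 0.375 ug/mL
```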
For the impurity validation, we always included the Main Analyte peak at 100% of test concentration in each experiment (accuracy, linearity, etc.). This is important, since the Main Analyte will always be present when the product is tested for impurities.
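And a small sketch of formulating the accuracy solutions once the 100%-of-specification concentration is known, including the Main Analyte at 100% of test concentration in every solution. The concentrations and the 50%/100%/120% levels are assumptions for illustration, not method values:

```python
spec_conc_ug_ml = 0.40        # impurity conc. giving 100% of spec (assumed)
main_test_conc_ug_ml = 500.0  # main analyte at 100% of test conc. (assumed)

accuracy_levels = [0.50, 1.00, 1.20]  # assumed 50%, 100%, 120% of spec

for level in accuracy_levels:
    imp_conc = spec_conc_ug_ml * level
    # Each accuracy solution is spiked with impurity at the given level
    # and also contains the main analyte at 100% of test concentration,
    # since it is always present when the product is tested.
    print(f"{level:>5.0%}: impurity {imp_conc:.3f} ug/mL "
          f"+ main analyte {main_test_conc_ug_ml:.0f} ug/mL")
```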
Generally, we also performed all of the validation tests required for the Main Analyte, except for forced degradation studies.
Hope this helps.