Actually, if there is a sensitivity problem, internal standardization can make things worse.
Look at it this way: imagine you have two quantities that are subject to errors, and those errors are correlated (i.e., an error in one quantity implies a proportional error in the other). If you take the ratio of those quantities, the errors will cancel, and the percentage error in the ratio will be less than the percentage errors in the individual quantities. (If the errors were perfectly correlated, they would cancel exactly; unfortunately, real life is never that neat.)
On the other hand, if the errors are uncorrelated (orthogonal), the errors accumulate when you take the ratio, and the percentage error in the ratio will be greater than the percentage error in either individual quantity. (If the errors are perfectly orthogonal, the percentage error in the ratio is the square root of the sum of the squares of the individual percentage errors; but again, real life is never that neat.)
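In standard first-order error-propagation terms, both cases fall out of one formula for the ratio $R = A/B$, where $\rho$ is the correlation between the two relative errors:

$$\left(\frac{\sigma_R}{R}\right)^2 \approx \left(\frac{\sigma_A}{A}\right)^2 + \left(\frac{\sigma_B}{B}\right)^2 - 2\rho\,\frac{\sigma_A}{A}\,\frac{\sigma_B}{B}$$

Set $\rho = 1$ with equal relative errors and the right-hand side collapses to zero (perfect cancellation); set $\rho = 0$ and you get the root-sum-of-squares behavior described above.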
So what kinds of problems will affect the analyte and the IS the same way (correlated errors, remember)? Things like dilution, injection volume, extraction yield (assuming the IS and analyte have similar properties), and ionization efficiency in the MS interface (assuming the IS and analyte elute close together and have similar properties). Those are the situations where internal standardization can help.
What kinds of problems will affect the analyte and the IS differently (and generate uncorrelated errors)? Things like integration parameters, peak shape, and excessive baseline noise (or a low S/N ratio generally). Those are the situations where internal standardization can make things worse.
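If you want to convince yourself numerically, here is a quick Monte Carlo sketch (Python with NumPy; the 5% relative error and the unit signal levels are made-up numbers, purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
rel_err = 0.05  # assumed 5% relative error on each raw signal (illustrative)

# Correlated case: one shared error term hits both signals, e.g. an
# injection-volume error that scales analyte and IS peak areas alike.
shared = rng.normal(0.0, rel_err, n)
analyte_corr = 1.0 + shared
istd_corr = 1.0 + shared

# Uncorrelated case: independent error terms, e.g. integration noise
# on two separate peaks.
analyte_unc = 1.0 + rng.normal(0.0, rel_err, n)
istd_unc = 1.0 + rng.normal(0.0, rel_err, n)

for label, a, s in [("correlated", analyte_corr, istd_corr),
                    ("uncorrelated", analyte_unc, istd_unc)]:
    ratio = a / s
    print(f"{label:>12}: %RSD of analyte alone = {100 * a.std() / a.mean():5.2f}, "
          f"%RSD of analyte/IS ratio = {100 * ratio.std() / ratio.mean():5.2f}")
```

The correlated ratio comes out at essentially 0% RSD (exact cancellation, because the simulation makes the correlation perfect), while the uncorrelated ratio lands near the quadrature prediction of √2 × 5% ≈ 7.1%, worse than either raw signal on its own.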