I use Newton's Method for this sort of thing. It's an iterative procedure you can read about in any freshman calculus book. For a calibration curve of the form:
R = a1*C + a2*C^2 (could be any polynomial function)
Newton's Method is:
Cn+1 = Cn - (a1*Cn + a2*Cn^2 - Ro)/(a1 + 2*a2*Cn)
Ro is the instrument response for your unknown, and a1 and a2 are the regression constants from your calibration curve. You plug in Ro and the regression constants, then keep feeding each Cn+1 back in as the new Cn until the value converges to the answer. The function is only valid over the calibrated range. If your response is outside the calibrated range, you can't use it.
I usually start at zero and walk up in my iterations. For instance, if I put in zero, the Newton's Method expression might say 10.3. Then I put in 10.3 and it might say 12.6. Then I put in 12.6 and it might say 13.1. Then 13.1 might say 13.2. If +/- 0.1 is within the precision of my measurement, I'm probably done.
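That walk-up iteration is easy to automate. Here's a minimal Python sketch of it; the function name, the example regression constants, and the tolerance are my own illustrative choices, not anything standard:

```python
def newton_concentration(Ro, a1, a2, C0=0.0, tol=1e-6, max_iter=50):
    """Invert R = a1*C + a2*C^2 for C, given a measured response Ro."""
    C = C0
    for _ in range(max_iter):
        # Newton's Method update: subtract residual over derivative
        C_next = C - (a1 * C + a2 * C**2 - Ro) / (a1 + 2 * a2 * C)
        if abs(C_next - C) < tol:
            return C_next
        C = C_next
    raise RuntimeError("Newton's Method did not converge")

# Example with made-up constants a1 = 2.0, a2 = 0.1:
# the response for C = 13.2 is 2.0*13.2 + 0.1*13.2**2 = 43.824,
# so inverting Ro = 43.824 should recover roughly 13.2.
print(newton_concentration(43.824, 2.0, 0.1))
```

Starting from zero works because the calibration slope a1 keeps the denominator away from zero near the low end of the range.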
For R = a0 + a1*C + a2*C^2, Newton's Method is:
Cn+1 = Cn - (a0 + a1*Cn + a2*Cn^2 - Ro)/(a1 + 2*a2*Cn)
The denominator on the right-hand side is the derivative of the calibration function with respect to concentration (the slope of the line tangent to the curve at that point).
Good luck.