The recovery question comes down to what the method is all about. Is the question how much analyte was in the sample at the time you took the sample, or is the question how much analyte is left behind after a process, such as your extraction? Both are valid questions, and they have different implications.
If you say, "I can analyze for XYZ in soil and recover between 40 and 60% when I do the extraction, and the result for your sample is 10 ppm," there is a good degree of confidence that the soil concentration was not 100 ppm when the sample was taken. If there is a remediation target or a lawsuit over the presence of XYZ above a particular value, then the strength of the argument that the sample was or was not above that limit is known.
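As a rough illustration of that reasoning, here is a minimal sketch in Python using the 40-60% recovery and 10 ppm result from the example above (the function name and numbers are only for illustration): you can back-calculate the range of original concentrations consistent with the measured result.

    # Back-calculate the plausible original concentration from a measured
    # result and a documented recovery window (illustrative numbers only).
    def concentration_range(measured_ppm, recovery_low, recovery_high):
        """Return (low, high) estimates of the pre-extraction concentration,
        assuming the documented recovery is the only loss."""
        return measured_ppm / recovery_high, measured_ppm / recovery_low

    low, high = concentration_range(10.0, 0.40, 0.60)
    print(f"original concentration roughly {low:.0f} to {high:.0f} ppm")
    # -> roughly 17 to 25 ppm, comfortably below a 100 ppm target

Even at the worst-case recovery, the corrected value stays well under 100 ppm, which is why a known low recovery does not wreck the argument.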
And the key issue in such a test is "when the sample was taken." (Your spiking of a soil sample would correspond more to "when the sample was taken" in this case.)
If the question is how much unchanged material remains in the soil after extraction - for consumption by bugs, or for analysis by another chemist who is testing to see whether your remediation is complete - you have an entirely different story, and one that can be very difficult to answer completely. Note JGK's suggestion that you take the extracted sample and extract it again. This is a good procedure to follow as part of the troubleshooting process when you have low recoveries.
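If you do try sequential extractions, a simple tally of what each pass pulls out tells you whether a more aggressive extraction is worth pursuing. A minimal sketch (the per-pass amounts and the spike level are hypothetical):

    # Tally sequential extraction passes on the same spiked sample
    # (hypothetical amounts, in micrograms).
    passes = [42.0, 6.5, 1.1]   # analyte recovered in each successive pass
    spike = 60.0                # amount originally spiked into the sample

    total = sum(passes)
    for i, amount in enumerate(passes, start=1):
        print(f"pass {i}: {amount:5.1f} ug ({amount / spike:.0%} of spike)")
    print(f"cumulative recovery: {total / spike:.0%}; "
          f"unaccounted for: {spike - total:.1f} ug")

If the second and third passes recover very little, the missing analyte is probably bound to the matrix or lost elsewhere, not simply left behind by a weak first extraction.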
One last thing. Years ago I was concerned that an analysis I was running and reporting samples on might not be quite right. My boss looked at me and told me that no method is 100% perfect. I was shocked that this might be acceptable. He continued on: "The method is validated?" I had to acknowledge that. "The known biases and limitations are published in the method?" This was also true. "You report the results as determined by this method?" -- Yes. And with that he shrugged and walked off.
For many of us, that recovery study is part of obtaining that method documentation -- not that we try for less than 100% recovery. And, if we cannot get 100%, we live with what we can get. But, we make sure that people who use our methods and data know the limits of what the numbers can mean.
I've seen a QC method or two that was fairly precise but inaccurate. Because the method was precise and the bias was constant, it was quite usable for keeping the process in control.
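To make that concrete: if the bias is constant and the scatter is small, control limits built from the method's own history will still catch a process shift. A minimal sketch with made-up numbers (imagine the true value is near 100 but the method reads around 92):

    # A precise method with a constant bias can still flag process drift,
    # because the bias shifts every reading equally. Numbers are hypothetical.
    import statistics

    history = [92.1, 91.8, 92.4, 92.0, 91.9, 92.2]   # tight but biased readings
    mean = statistics.mean(history)
    sd = statistics.stdev(history)
    ucl, lcl = mean + 3 * sd, mean - 3 * sd          # simple 3-sigma limits

    new_reading = 94.0
    status = "in" if lcl <= new_reading <= ucl else "out of"
    print(f"limits {lcl:.1f} to {ucl:.1f}; new reading {new_reading} is {status} control")

The absolute numbers are wrong by the same amount every time, so the limits are just as sensitive to a change in the process as they would be with an unbiased method.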