Thank you guys for a warm welcome and a rapid response!
I figured my FID flow was too low. I've been on the same hydrogen cylinder for 5 months, and the cylinder pressure has only fallen about 300 psi from roughly 2700 at the start. I also assume that stoichiometric mixing gives the best results, so I'll look into that when I go back to work tomorrow.
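As a sanity check on the hydrogen usage, here's a rough back-of-envelope estimate in Python. The cylinder internal volume (~50 L) and the assumption of continuous flow are my guesses, not measured values:

# Back-of-envelope hydrogen consumption from the pressure drop.
# Assumed: ~50 L internal cylinder volume, ideal-gas scaling, continuous flow.
cylinder_volume_L = 50          # assumption, not measured
pressure_drop_psi = 300         # from ~2700 psi, down ~300 psi over 5 months
atm_psi = 14.7
minutes = 5 * 30 * 24 * 60      # roughly 5 months of wall-clock time

gas_used_L = cylinder_volume_L * pressure_drop_psi / atm_psi   # ~1000 L at 1 atm
avg_flow_mL_min = gas_used_L * 1000 / minutes                  # ~4.7 mL/min

print(f"gas used: {gas_used_L:.0f} L, average flow: {avg_flow_mL_min:.1f} mL/min")

A lit FID usually burns something like 30-40 mL/min of hydrogen, so either the flow really is set low or the fuel is only flowing a fraction of the time.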
My apologies for using "injection" in two contexts: once for injecting a gas sample into a headspace vial, and again for the obvious injection of the heated headspace sample by the headspace system.
The gas mixes I've used (injected via a 10 uL GC syringe through the septum of a sealed headspace vial) are small alkanes - propane, isobutane, and n-butane - with trace impurities (judging by peak area). Given the inertness of my problematic analytes, active binding sites are probably not the issue. I've used several different sources; the one I've been using the most is about 99.5% n-butane by peak area, the remainder being mostly propane and isobutane. To sample, I discharge an upside-down gas cylinder into a small beaker covered with tape that has a pinhole in it: for each gas sample I discharge at least 2 mL of liquid into the beaker, allow it to boil off completely, warm the beaker bottom with my hand, and check the temperature with an IR thermometer. Once the glass is above 20 C, I withdraw 10 uL of gas with the syringe needle about 1 cm from the bottom of the beaker to avoid the diffusion mixing near the pinhole. This gives same-day repeatability better than 2% in all cases, and often better than 1%. Considering how error-prone this bootstrap sampling is, I'm fairly pleased with the results.
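For scale, here's a quick ideal-gas estimate of how much analyte that 10 uL of gas actually represents (the 1 atm / 20 C conditions are assumed for illustration):

R = 8.314          # J/(mol*K)
P = 101325         # Pa, ~1 atm (assumed)
T = 293.15         # K, ~20 C (assumed)
V = 10e-9          # m^3, i.e. 10 uL of gas
M_butane = 58.12   # g/mol, n-butane

n_mol = P * V / (R * T)            # ideal gas law
mass_ug = n_mol * M_butane * 1e6   # grams -> micrograms

print(f"{n_mol*1e9:.0f} nmol, about {mass_ug:.0f} ug of n-butane per 10 uL injection")

So each injection is on the order of a few hundred nanomoles (roughly 25 ug) of butane.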
I admit that I would much prefer a cryo-equipped GC to aid analyte focusing, but I'm able to almost completely resolve each isomer with more than 3 carbons. Peak widths are about 4 seconds or so, and very consistent. I'm using a column and temperature program suitable for residual solvent analysis, roughly consistent with the method reported in USP <467>. The initial temperature is 35 C, about as low as I'm willing to go without forced cooling. Carrier linear velocity is about 66 cm/s, based on a dead time of about 0.76 min (0.755 to 0.757 min is the range for the tiny blip of methane that shows up in every sample) on the 30 m column. Isobutane and n-butane elute at 0.91 and 0.98 minutes, respectively, with no resolution problems until a very large amount is injected. At the lower amounts I'm using for R&R tests, the response returns to baseline within three seconds, making decently well-defined peaks. Shouldering or baseline lifts on the peaks of interest are almost never a problem as far as I can tell.
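For anyone checking the 66 cm/s figure, it's just the column length divided by the methane dead time, using only the numbers quoted above:

# Average carrier linear velocity from the methane dead time.
column_length_cm = 30 * 100   # 30 m column
dead_time_s = 0.76 * 60       # methane blip at ~0.76 min

u_avg = column_length_cm / dead_time_s
print(f"average linear velocity: {u_avg:.1f} cm/s")  # ~65.8 cm/s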
The actual autosampler loop volume is 10 mL. I can't recall all of the time parameters for sampling, but I believe I have it set for an 18-second pressurization time, 6-second vent time, 3-second equilibration (pause) time, and 6-second injection time. I will add the actual method details when I return to work tomorrow. The very brief injection time, coupled with splitless injection, massively improved the resolution of the lightest analytes when I was developing the method, so I stuck with these changes.
Most of the routine samples of concern are fairly low-melting solids (<80 C) with very low volatility, containing some light residual solvents (alkanes, alcohols, ketones, esters) and other medium-weight hydrocarbon content (around C10). Full evaporation of the lightest important analytes is the goal, and it can be guaranteed when they are introduced to the vial as a gas! I added calibration for some of the heavier analytes when I noticed I had resolved them very well by accident. I modified the temperature program to resolve them even better, and I've calibrated most of the heavier hydrocarbons to similar linear tightness (always forcing a zero intercept). My quantitation for these varies more than I'd like, but they are still quite reproducible compared to the gaseous analytes.
Again, I am able to quantify calibration samples same-day with pretty good repeatability (less than 2% of the 10 uL volume placed inside the headspace vial), and I can calibrate over a wide range - including gas dilution (removing a gas sample from a headspace vial) - with a pretty good Rsq (0.99960 with a forced origin). If the flaw is in the sampling - that is, anywhere before the sample is introduced into the column - I have a hard time understanding where, unless it's a random, intermittent issue that stays consistent for the entire duration of a run (up to 14 hours). With the amount of probing I've been doing, I hope I would have caught an intermittent problem that crops up while the instrument is running. The problem seems to appear when it goes to standby and then back to running again.
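To make the forced-origin fit concrete, here's a minimal sketch of how I'd check a through-zero slope and its R-squared offline with numpy. The calibration points below are placeholders, not my real data (and note that R-squared conventions differ for zero-intercept fits; some packages use the uncentered total sum of squares):

import numpy as np

# Placeholder calibration points: injected gas volume (uL) vs. peak area.
x = np.array([2.0, 5.0, 10.0, 20.0, 50.0])
y = np.array([410.0, 1030.0, 2050.0, 4110.0, 10200.0])

# Least-squares slope with the intercept forced to zero: b = sum(x*y) / sum(x^2)
b = np.sum(x * y) / np.sum(x * x)

# R^2 computed against the mean-centered total sum of squares
residuals = y - b * x
r_sq = 1 - np.sum(residuals**2) / np.sum((y - y.mean())**2)

print(f"slope = {b:.2f}, R^2 = {r_sq:.5f}")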
Also, if there were a gas leak before the column head, wouldn't all analytes be affected similarly? Wouldn't I also notice changes in total carrier flow from the three outlets? I wasn't around to witness the plumbing of the system, but the carrier pressure regulator on the HSS and the column head pressure regulator on the GC seem to share the same pressurized space on both sides of the regulators, i.e. an adjustment of one can be compensated by an adjustment of the other, according to the digital carrier pressure gauge on the HSS and the analog column head pressure gauge on the GC.
I understand how slight inconsistencies in peak width might manifest as integration problems, but not when peak heights agree with peak areas, and when the variance of concern is FAR larger than the variance of peak shape.
To sum up where my head is right now:
- Probably not sample preparation
- Probably not stationary phase bias or active sites
- Could be inconsistent HSS sampling, if it exaggerates minor day-to-day inconsistencies in gas flow
- Could be FID acquisition (gas flow perturbations), but that doesn't explain the bias for some analytes
- Could be integration, but peak shape is good and consistent
- If gas flow inconsistency is the source, the instrument is exaggerating the inconsistencies quite profoundly (a 2% variation in flow producing a 78% variation in peak area is roughly a 39x amplification)
Again, thanks guys! I'm already learning and improving the quality of my self-criticism!
Edit: I'll be sure to post my method change history as it applies. I was very good about noting every change as I developed the method. On a side note, my Restek capillary column (with cyanopropyl, phenyl, and C18 functionality) inexplicably snapped about 0.2 m from the detector. Based on a microscope investigation, it looked like it had been resting on the oven surface, but I still can't tell if that was the cause. The column was only 5 months old, and my maximum oven temperature is 240 C, 40 C lower than the column's limit. Column bleed is only a few counts, shifting the baseline from about 6.6 to about 11 or so. I can provide hard data here tomorrow.