In fact I even tried to code a simple routine in Matlab to do that.
I'm not clear, though, on one aspect of the numerical approach to finding the solution from two training gradient runs (I'm talking about straightforward one-parameter modelling, with %B as the variable).
Assuming I have a model equation lnk' = f(%B) with, say, two parameters (a, b): is the best approach simply to simulate the two gradient runs numerically (identical to the training runs) as a sequence of small isocratic steps, trying different a and b on each pass (iteration/optimization) until the simulated and real tR converge on both gradients?
It kind of works but seems computationally expensive...
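For what it's worth, the scheme described above can be sketched compactly. This is a minimal illustration in Python (as a stand-in for Matlab), assuming the common LSS-type retention model ln k = a − b·φ (φ = fraction of B), a linear gradient, and neglecting dwell volume and extra-column effects; the function and parameter names are my own, not from any commercial package. The solute is taken to elute once its cumulative fractional migration through the column reaches 1, and (a, b) are fitted with a small Newton iteration on the two training-run retention times:

```python
import numpy as np

def gradient_tR(a, b, t0, tG, phi0, phi1, dt=1e-3, t_max=200.0):
    """Simulate one linear-gradient run as a sequence of tiny isocratic
    steps, assuming ln k = a - b*phi (LSS form).  The solute elutes when
    its cumulative fractional migration reaches 1.  Dwell volume and
    re-equilibration are neglected."""
    t = np.arange(0.0, t_max, dt)
    phi = phi0 + (phi1 - phi0) * np.clip(t / tG, 0.0, 1.0)  # %B profile
    k = np.exp(a - b * phi)                    # local k in each slice
    frac = np.cumsum(dt / (t0 * k))            # migration integral
    i = np.searchsorted(frac, 1.0)             # first slice summing to 1
    if i == len(t):
        raise ValueError("solute did not elute within t_max")
    return t[i] + t0                           # add dead time

def fit_ab(tR_meas, runs, a0, b0, h=0.05, n_iter=25, tol=1e-3):
    """Fit (a, b) so the simulated tR of both training gradients match
    the measured ones: 2x2 Newton with a finite-difference Jacobian."""
    x = np.array([a0, b0], float)

    def resid(x):
        return np.array([gradient_tR(x[0], x[1], *run) - tm
                         for run, tm in zip(runs, tR_meas)])

    for _ in range(n_iter):
        r = resid(x)
        if np.max(np.abs(r)) < tol:
            break
        # Jacobian columns: d(residual)/da and d(residual)/db
        J = np.column_stack([(resid(x + h * e) - r) / h
                             for e in np.eye(2)])
        x = x - np.linalg.solve(J, r)
    return x
```

As a quick self-check you can synthesize two "measured" runs from known (a, b) and confirm the fit recovers them, e.g. `fit_ab([gradient_tR(4.0, 8.0, 1.0, 10.0, 0.05, 0.95), gradient_tR(4.0, 8.0, 1.0, 20.0, 0.05, 0.95)], [(1.0, 10.0, 0.05, 0.95), (1.0, 20.0, 0.05, 0.95)], 3.0, 6.0)`. It does converge in a handful of iterations, so the cost is modest as long as the per-run simulation is vectorized rather than looped.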
Is this how the commercial software packages do it?
I'm also a bit confused by the equations some software reportedly uses to link lnk directly to the elution time in the simulated gradient...
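One possible source of those equations: for a *linear* gradient with the LSS model the stepwise simulation isn't actually needed, because the migration integral has a closed form. A sketch, assuming ln k = ln k₀ − S·φ, a linear φ(t) over gradient time t_G with range Δφ, and no dwell volume (symbols here are the usual Snyder LSS notation, not taken from the post):

```latex
\int_0^{t_R - t_0} \frac{dt}{t_0\, k(t)} = 1,
\qquad k(t) = k_0\, e^{-b' t / t_0},
\qquad b' = \frac{S\, \Delta\varphi\, t_0}{t_G}
\;\;\Longrightarrow\;\;
t_R = t_0 + \frac{t_0}{b'} \ln\!\left(1 + b' k_0\right)
```

Evaluating the integral gives an exponential in t_R, and solving for t_R yields the logarithmic expression above, which is why you see equations relating lnk (through k₀ and S) directly to the gradient elution time. The numerical step-by-step approach is still the general-purpose fallback for nonlinear gradients or non-LSS models.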