Originally Posted by Scrofula
Most statistical models are designed for a nice, simple world where everything is independent, but there's been quite a bit of work recently on learning models in more realistic settings where things can interact in complex ways. Of course, the more complex interactions you model, the more data you need to learn a model that can generalize well. The trick is coming up with a model complex enough to capture the things you need to, but simple enough to learn from the available data.
For instance, in this case, I'm planning to assume that diet and external factors like sports remain constant (or at least don't change in any non-random way), and take them out of the model. This gives a simpler, less realistic model that's easier to learn.
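Just to make that concrete, here's a rough sketch of what dropping those covariates looks like. Everything here is my own illustration: the linear form, the variable names, and the simulated numbers are assumptions, not anyone's actual model. The point is only that diet/sports effects get absorbed into the noise term rather than modeled explicitly:

```python
import numpy as np

# Minimal sketch: model strength as a function of training variables only,
# absorbing diet/sleep/sports effects into the noise term. All names and
# the linear form are hypothetical, purely for illustration.

rng = np.random.default_rng(0)

weeks = np.arange(20)
volume = rng.uniform(10, 20, size=weeks.size)          # weekly hard sets (hypothetical)
noise = rng.normal(0, 2.5, size=weeks.size)            # diet, sleep, sports, etc.
squat_1rm = 100 + 0.8 * weeks + 0.5 * volume + noise   # simulated "true" process

# Fit the reduced model: 1RM ~ time + volume, with diet/sports left out.
X = np.column_stack([np.ones_like(weeks, dtype=float), weeks, volume])
coef, *_ = np.linalg.lstsq(X, squat_1rm, rcond=None)
print("estimated coefficients (intercept, per-week gain, per-set gain):", coef)
```

As long as the omitted factors really are roughly constant (or vary randomly), they just widen the error bars instead of biasing the fit, which is exactly the bargain being made above.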
The nice thing about this problem is that it's possible to evaluate the learned model quite easily -- the model makes testable predictions. For example, it could predict your 1RM squat. You could then test your squat, producing very informative training data for your model. You end up with an exploration/exploitation tradeoff, where you need to choose between (1) producing good training data for your model (making it more accurate, yielding better programming in the future) and (2) using the most effective programming according to the existing model (making you stronger, but yielding less informative training data).
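One standard way to handle that tradeoff is an epsilon-greedy rule, which I'll sketch below; this is my own example of a strategy, not something from the post above, and the function and program names are hypothetical. With small probability you explore (schedule a 1RM test, generating highly informative data); otherwise you exploit (run whatever programming the current model rates as most effective):

```python
import random

def choose_session(model_best_program, epsilon=0.1):
    """Epsilon-greedy sketch of the exploration/exploitation tradeoff.

    With probability `epsilon`, explore: test your 1RM, which feeds the
    model very informative data. Otherwise, exploit: follow whatever
    programming the current model says is most effective.
    All names here are hypothetical illustrations.
    """
    if random.random() < epsilon:
        return "1RM test"        # explore: informative data, smaller training effect
    return model_best_program    # exploit: best expected strength gain

# Example: mostly train, occasionally test.
sessions = [choose_session("5x5 squat") for _ in range(20)]
print(sessions.count("1RM test"), "test sessions out of", len(sessions))
```

Tuning epsilon is the whole game: test too often and you burn training time on data collection, test too rarely and the model drifts away from your actual abilities.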