Rip, it seems your two-factor model is taking hold in that other kind of training - of statistical algorithms upon data.
It is now recognized that, for a given domain of data (e.g. visual images), an algorithm's behavior consists of a task-independent part (e.g. discerning the objects in an image) and a task-specific part (e.g. answering whether a pictured squat is to depth or not). Training the first part (in the parlance, "pretraining" or "self-supervised learning") is very time-consuming. The second part ("fine-tuning") is less taxing, but it does require examples of the input-output behavior to be mimicked ("supervision").
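To make the split concrete, here's a minimal sketch in PyTorch. The ImageNet-pretrained ResNet backbone stands in for the expensive task-independent step (already done by someone else with the compute); the squat-depth head and the stand-in batch are my hypothetical placeholders for the cheap supervised step, not anything from a real dataset:

```python
import torch
import torch.nn as nn
from torchvision import models

# Task-independent part: a backbone pretrained on a large image corpus.
# The expensive "pretraining" step has already been done upstream.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained features so only the task head learns.
for param in backbone.parameters():
    param.requires_grad = False

# Task-specific part: a small head for a binary label, e.g. the
# hypothetical "is this squat to depth?" question from above.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)

# Fine-tuning: supervised training of the head on labeled examples.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

labeled_images = torch.randn(8, 3, 224, 224)  # stand-in batch
labels = torch.randint(0, 2, (8,))            # stand-in labels

backbone.train()
for _ in range(3):  # a few cheap supervised steps
    optimizer.zero_grad()
    loss = loss_fn(backbone(labeled_images), labels)
    loss.backward()
    optimizer.step()
```

Note the asymmetry: the frozen backbone took enormous compute to produce, while the head trains in a handful of steps on a handful of labels.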
This bifurcation arose from practical considerations: there are only a handful of organizations with the computational resources to perform the first step, and there is limited "supervision" to guide the second step. But now, some researchers think there's something more fundamental about the split.
In short, the modern practice of artificial intelligence / machine learning has technical analogues to strength (which improves all downstream tasks) and to independently performed strength training, even though the training of organisms and the training of algorithms are seemingly unrelated. And yes, I'm posting in this thread, which needs serious help.