Insight beats predictive power

An artisanal, siloed approach that focuses too heavily on model tuning can miss the bigger picture. The typical data scientist is happy to spend her days beavering away at tuning a model until it produces the minimum average error on a given data set. That by itself does produce value.

However, that value may not be sustained in the face of these realities:

  1. Your training set doesn’t reflect the entire world
  2. The world is constantly changing
  3. Deploying your model may accelerate and redirect those changes
  4. Variable noise will generate phantom changes

What produces even more value than a narrow focus on model tuning is using the experience of building our model to generate new insights into the underlying system that we are modeling.

A recent paper from Google articulated an archetypal example of why this is true.

Machine learning systems often have a difficult time distinguishing the impact of correlated features.  This may not seem like a major problem: if two features are always correlated, but only one is truly causal, it may still seem okay to ascribe credit to both and rely on their observed co-occurrence. However, if the world suddenly stops making these features co-occur, prediction behavior may change significantly.
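Here is a minimal sketch of that failure mode, using synthetic data and scikit-learn. The features, coefficients, and numbers are illustrative assumptions, not taken from the paper: x1 is the truly causal feature, x2 is a near-duplicate that always co-occurs with it in training, and ridge regularization spreads credit across the two indistinguishable features.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 10_000

# Training world: x1 is truly causal; x2 is a near-duplicate that co-occurs with it.
x1 = rng.normal(size=n)
x2 = x1 + rng.normal(scale=0.001, size=n)
y = 3.0 * x1 + rng.normal(scale=0.1, size=n)

# The L2 penalty splits credit across the two indistinguishable features.
model = Ridge(alpha=1.0).fit(np.column_stack([x1, x2]), y)
print("coefficients:", model.coef_)  # roughly [1.5, 1.5], not [3, 0]

in_dist_mse = mean_squared_error(y, model.predict(np.column_stack([x1, x2])))

# Changed world: x2 no longer tracks x1, so half the model's "signal" is now noise.
x1_new = rng.normal(size=n)
x2_new = rng.normal(size=n)
y_new = 3.0 * x1_new + rng.normal(scale=0.1, size=n)

shifted_mse = mean_squared_error(
    y_new, model.predict(np.column_stack([x1_new, x2_new]))
)
print(f"in-distribution MSE: {in_dist_mse:.3f}")  # ~0.01
print(f"after the shift:     {shifted_mse:.3f}")  # ~4.5, hundreds of times worse
```

The model was never wrong about the training data; it was wrong about which feature carried the causal load, and that only shows up once the co-occurrence ends.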

If we gain insight into the true causal relationships in the underlying system, then we can modify our model so that it is more robust in the face of a changing world. Even if this somewhat lessens the predictive power as measured against the current data set, it is typically the right approach.
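Continuing the sketch above for contrast: if domain knowledge tells us that x1 is the causal feature, retraining on x1 alone gives up little or nothing on the current data but holds up when the correlation breaks. (In this toy the in-sample cost happens to be negligible; in practice there is often a small one.)

```python
# Drop the spuriously correlated feature x2 and rely only on the causal one.
causal_model = Ridge(alpha=1.0).fit(x1.reshape(-1, 1), y)

in_dist = mean_squared_error(y, causal_model.predict(x1.reshape(-1, 1)))
shifted = mean_squared_error(y_new, causal_model.predict(x1_new.reshape(-1, 1)))
print(f"causal-only, in-distribution MSE: {in_dist:.3f}")  # ~0.01, no worse
print(f"causal-only, after the shift:     {shifted:.3f}")  # still ~0.01
```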
