The degradation of an AI model’s performance over time as real-world conditions diverge from those represented in its training data. Model drift can occur due to changes in input data distributions (data drift) or changes in the underlying relationships the model aims to capture (concept drift).
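A common way to surface data drift is to compare the live distribution of a feature against its training-time distribution with a two-sample statistical test. The sketch below uses SciPy’s Kolmogorov–Smirnov test; the feature values, seed, and significance threshold are illustrative assumptions, not part of any specific system.

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_data_drift(train_feature, live_feature, alpha=0.05):
    """Flag data drift when the live feature distribution differs
    significantly from the training distribution (two-sample KS test).

    `alpha` is an illustrative significance threshold, not a standard.
    """
    statistic, p_value = ks_2samp(train_feature, live_feature)
    return {"statistic": statistic, "p_value": p_value, "drift": p_value < alpha}

# Hypothetical example: transaction amounts shift upward after deployment.
rng = np.random.default_rng(seed=42)
train_amounts = rng.lognormal(mean=3.0, sigma=1.0, size=10_000)
live_amounts = rng.lognormal(mean=3.4, sigma=1.0, size=10_000)

print(detect_data_drift(train_amounts, live_amounts))
# With this shifted sample, the test reports drift=True.
```

Concept drift is harder to catch with distribution tests alone, since the inputs may look unchanged while their relationship to the label shifts; it typically shows up as degrading performance on labeled outcomes, which the monitoring sketch below addresses.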
Model drift is a significant challenge for deployed AI systems in dynamic environments. It occurs naturally as the world changes: customer preferences evolve, competitive landscapes shift, or new regulations emerge. Without monitoring and maintenance, previously accurate models become increasingly unreliable. Addressing drift requires continuous monitoring of model performance, periodic retraining, and in some cases, redesigning models to better accommodate changing conditions.
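In practice, the monitor-and-retrain loop can be as simple as tracking a rolling performance metric against a deployment-time baseline and signaling when it degrades past a tolerance. A minimal sketch follows, assuming labeled outcomes eventually become available; the window size, tolerance, and `retrain` hook are illustrative assumptions.

```python
from collections import deque

class DriftMonitor:
    """Track rolling accuracy and signal when retraining is warranted.

    `baseline` is the accuracy measured at deployment; `tolerance` is
    the acceptable drop before retraining (both illustrative values).
    """

    def __init__(self, baseline: float, tolerance: float = 0.05, window: int = 1000):
        self.baseline = baseline
        self.tolerance = tolerance
        self.outcomes = deque(maxlen=window)  # 1 = correct prediction, 0 = incorrect

    def record(self, prediction, label) -> None:
        self.outcomes.append(int(prediction == label))

    def needs_retraining(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before judging
        rolling_accuracy = sum(self.outcomes) / len(self.outcomes)
        return rolling_accuracy < self.baseline - self.tolerance

# Hypothetical usage inside a serving loop:
# monitor = DriftMonitor(baseline=0.94)
# monitor.record(model.predict(x), true_label)
# if monitor.needs_retraining():
#     retrain(model, recent_data)  # hypothetical retraining hook
```

A fixed-size window is deliberate: it weights recent behavior over history, so a genuine shift in conditions is detected rather than averaged away.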
A financial fraud detection model becoming less effective over time as criminals develop new tactics that weren’t represented in the original training data, requiring regular updates to maintain detection accuracy.