Why Weighted Averages Beat Static Timetables for Journey Predictions
#Machine Learning

Backend Reporter
3 min read

Static timetables fail to capture real-world variability, but raw user data is noisy. Weighted averaging solves this by giving recent, relevant journeys more influence while naturally filtering outliers—creating predictions that adapt to changing conditions without overreacting.

Predicting arrival times seems straightforward until you realize that reality rarely matches schedules. Static timetables assume journeys behave identically every day, but traffic patterns shift, delays accumulate, and personal habits evolve. Meanwhile, user-reported data captures reality but comes with its own problems: some reports are outdated, others are anomalies, and sometimes there's simply not enough data to work with.

This fundamental tension—between theoretical schedules and messy real-world data—is what makes journey prediction so challenging. The solution isn't to choose one source over the other, but to intelligently combine them using weighted averaging.

The Problem with Traditional Approaches

Traditional systems fall into two traps. First, they rely too heavily on static timetables that assume journeys behave the same every day. These systems don't adapt to traffic, delays, or personal habits, and they fail badly when conditions change. Second, when systems do incorporate user data, they often treat all reports equally, which means a single outlier can dramatically skew predictions.

The real challenge is finding a way to trust real-world data without letting bad or outdated information ruin predictions. You need a system that learns from recent behavior while maintaining stability.

How Weighted Averaging Works

Instead of treating all journey data equally, weighted averaging assigns different importance levels to different reports. Recent journeys have more influence, older journeys gradually matter less, and outliers are naturally diluted through the averaging process.

Here's a practical example of how this works:

  • Most recent journey: weight 0.5
  • Journey before that: weight 0.3
  • Older journey: weight 0.2

If a user reports these arrival times:

  • Most recent: 37 minutes
  • Previous: 35 minutes
  • Older: 29 minutes

The weighted prediction becomes: (37 × 0.5) + (35 × 0.3) + (29 × 0.2) = 18.5 + 10.5 + 5.8 = 34.8 minutes

This approach predicts approximately 35 minutes instead of blindly following either the 29-minute older report or the 37-minute most recent journey. The result is far more realistic.
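The calculation above can be sketched in a few lines of Python (the function name is illustrative; weights are normalized defensively so the result stays within the range of the observed times):

```python
def weighted_prediction(times, weights):
    """Combine journey times (newest first) with matching weights.

    Normalizing by the weight sum keeps the prediction inside the
    range of the observed times, even if the weights don't sum to 1.
    """
    if not times or len(times) != len(weights):
        raise ValueError("need one weight per journey time")
    return sum(t * w for t, w in zip(times, weights)) / sum(weights)

# The example from the article: newest journey weighted 0.5, then 0.3, then 0.2
print(round(weighted_prediction([37, 35, 29], [0.5, 0.3, 0.2]), 1))  # 34.8
```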

Real-World Intuition

Humans naturally use weighted averaging when estimating travel times. If you're trying to predict how long your commute will take, you don't simply say "Google Maps says 30 minutes, so it's always 30 minutes." Instead, you think:

  • Yesterday it took 38 minutes (traffic)
  • The day before it took 35
  • Last month it was closer to 28

Naturally, you trust yesterday's experience more than last month's. That's weighted averaging in action—and it's exactly what V1.1 implements programmatically.

The System in Practice

When conditions change—roadworks, new routes, traffic pattern shifts—the system quickly adapts. A new journey report pulls the prediction in the right direction without instantly discarding historical context. This creates a balance between responsiveness and stability.
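One way to picture that balance is a sliding window over recent journeys: a disruptive new report shifts the prediction toward the new reality without erasing history. This sketch reuses the three-journey window and weights from the earlier example (the window size and the 45-minute roadworks report are illustrative assumptions):

```python
from collections import deque

WEIGHTS = [0.5, 0.3, 0.2]

# Keep only the three most recent journeys, newest first.
history = deque([37, 35, 29], maxlen=3)

def predict(history):
    """Weighted average of the journeys currently in the window."""
    return sum(t * w for t, w in zip(history, WEIGHTS))

print(round(predict(history), 1))  # 34.8 before the disruption

history.appendleft(45)             # roadworks: a much slower journey arrives
print(round(predict(history), 1))  # 40.6 -- pulled toward 45, not all the way
```

The oldest journey falls out of the window automatically, so one slow report moves the estimate by about six minutes rather than jumping straight to 45.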

What's New in V1.1

The latest release introduces several key improvements:

  • Weighted averaging of recent journey data: Recent reports influence predictions more heavily
  • Confidence scoring: Predictions now include uncertainty estimates
  • Clear UI distinctions: Users can easily differentiate between predicted times and user-reported events
  • Improved fallback logic: Better handling when data is sparse or unavailable
  • Enhanced error handling: More robust processing of invalid or malformed data
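A rough sketch of how the fallback and error-handling improvements might fit together (hypothetical function and labels; the release notes don't specify the exact rules):

```python
def predict_with_fallback(reports, timetable_minutes, weights=(0.5, 0.3, 0.2)):
    """Return (prediction, confidence_label) for a journey.

    Falls back to the static timetable when no usable reports exist,
    and flags low confidence when data is sparse.
    """
    # Error handling: drop invalid or malformed entries before averaging
    clean = [r for r in reports if isinstance(r, (int, float)) and r > 0]
    if not clean:
        return timetable_minutes, "timetable-only"
    used = clean[: len(weights)]
    w = weights[: len(used)]
    prediction = sum(t * wi for t, wi in zip(used, w)) / sum(w)
    confidence = "low" if len(used) < len(weights) else "normal"
    return prediction, confidence

print(predict_with_fallback([], 29))           # (29, 'timetable-only')
print(predict_with_fallback([37, "bad"], 29))  # one valid report: low confidence
```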

Why This Matters

This approach makes the system more adaptive, more honest about uncertainty, and more reflective of real-world behavior. Instead of pretending predictions are always correct, V1.1 embraces probability, confidence, and learning—which is exactly how good systems should behave.

By combining the stability of historical patterns with the accuracy of recent real-world data, weighted averaging creates predictions that users can actually trust. The system learns over time without overreacting to anomalies, building user confidence through demonstrated accuracy rather than false precision.
