Modelling sits at the centre of investment planning across the UK wastewater sector. Models are used to assess network performance, test intervention strategies and provide the evidence base for regulatory submissions. Model verification is therefore critical: it determines whether models are accurate, defensible and suitable for decision-making.
Many organisations still rely on traditional processes. These approaches are familiar and widely used, but they introduce a range of inefficiencies that are rarely quantified. Under AMP-8, where delivery timelines are compressed and expectations around evidence are higher, these hidden costs accumulate quickly and affect both project outcomes and programme delivery.
Model review involves three key areas: confirming the network representation against asset records and site surveys; comparing flows and levels against monitoring data at multiple locations; and understanding system behaviour across a range of rainfall events and dry weather conditions. In manual workflows, this process is constrained by the number of checks required and how often they must be repeated.
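As a rough illustration of the flow-and-level comparison step, a common goodness-of-fit measure in hydraulic model verification is the Nash–Sutcliffe efficiency (NSE). The sketch below uses hypothetical flow series; real workflows would read time series exported from the model and the flow survey.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1.0 is a perfect fit; values at or
    below 0 mean the model is no better than predicting the observed mean."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Hypothetical flows (m3/s) at one monitor over one event
observed = [0.8, 1.2, 2.5, 3.1, 2.0, 1.1]
simulated = [0.9, 1.1, 2.2, 3.4, 2.1, 1.0]

print(round(nash_sutcliffe(observed, simulated), 3))
```

In practice this comparison is repeated per monitor and per event, which is exactly where manual workflows begin to strain.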
Long-duration simulations, particularly those using Event Duration Monitoring (EDM) data, create two related challenges. First, their runtime can push teams towards simplified models or hand-picked storm events, which can reduce confidence in the results. Second, the volume of output data makes it difficult to assess model fit across the full time domain. A calibration change that improves spill-count prediction in one season or year can worsen it elsewhere, making it hard to identify the best overall fit manually.
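The seasonal trade-off described above can be made concrete with a signed spill-count error per season. The dates below are hypothetical EDM-style spill starts, chosen so that a second calibration run fixes autumn but over-predicts winter:

```python
from collections import Counter
from datetime import date

def season(d):
    # Meteorological seasons: DJF = winter, MAM = spring, JJA = summer, SON = autumn
    names = {12: "winter", 1: "winter", 2: "winter",
             3: "spring", 4: "spring", 5: "spring",
             6: "summer", 7: "summer", 8: "summer",
             9: "autumn", 10: "autumn", 11: "autumn"}
    return names[d.month]

def spill_count_errors(observed_dates, simulated_dates):
    """Signed per-season error: positive means the model over-predicts spills."""
    obs = Counter(season(d) for d in observed_dates)
    sim = Counter(season(d) for d in simulated_dates)
    return {s: sim.get(s, 0) - obs.get(s, 0) for s in set(obs) | set(sim)}

# Hypothetical EDM-style spill start dates
observed = [date(2023, 1, 5), date(2023, 2, 11), date(2023, 10, 3), date(2023, 11, 20)]
run_a = [date(2023, 1, 6), date(2023, 10, 1)]                    # under-predicts both seasons
run_b = [date(2023, 1, 4), date(2023, 2, 9), date(2023, 2, 20),
         date(2023, 10, 2), date(2023, 11, 18)]                  # autumn fixed, winter now over-predicted

print("run_a", spill_count_errors(observed, run_a))
print("run_b", spill_count_errors(observed, run_b))
```

Neither run dominates the other, which is why hand-picking a "best" calibration from long-duration output is so difficult.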
This limits the ability to identify subtle issues or interactions within the model. Important behaviours can remain undetected, and model performance may appear acceptable without being fully understood.
Manual approaches are inherently iterative. Modellers assess outputs, identify discrepancies, adjust parameters and rerun simulations. This process can repeat many times before a model is considered fit for purpose.
Calibration and verification can take weeks or months for a single catchment when performed manually. Each iteration requires engineering time, computational resource and coordination between teams.
For consultancies operating under fixed project budgets, this creates pressure on margins. Time spent on repeated review cycles reduces the capacity available for additional work and can delay downstream project stages such as optioneering and design.
Manual review processes can only examine a subset of scenarios and outputs, so model behaviour cannot be verified across the full range of conditions the network may experience.
When the relationships between tuning parameters and verification metrics are not fully understood, the model's predictive quality will vary between storm events, across seasons and from year to year. Any solution developed from such a model cannot be assured to achieve the level of service the model predicts.
UK water companies are required to justify investment decisions with robust, auditable evidence, particularly under frameworks such as Drainage and Wastewater Management Plans (DWMPs). Limited verification quality increases the risk that decisions are based on incomplete insight, which can lead to challenges during regulatory scrutiny or later project stages.
Model verification typically spans multiple tools and data sources, including hydraulic models, GIS platforms, spreadsheets and custom scripts. Each component contributes to the overall assessment process. Manual coordination between these systems introduces additional overhead as data must be transferred and reformatted, and assumptions must be aligned across tools.
In the verification process, fragmented toolchains can make it harder to maintain consistency and traceability. Model outputs may be reviewed in ICM, compared against monitoring data in Excel, summarised in spreadsheets, and documented through separate QA processes. When these steps are coordinated manually, assumptions, thresholds and review notes can become separated from the model evidence itself. This makes it harder to reproduce results, explain calibration choices, or maintain a consistent audit trail.
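One way to keep assumptions, thresholds and results attached to the model evidence is to bundle them into a single structured record per verification run. The sketch below is a minimal illustration under assumed field names (`model_id`, `min_nse` and so on are hypothetical), with a content hash so later reviewers can confirm the record has not been altered:

```python
import hashlib
import json
from datetime import datetime, timezone

def verification_record(model_id, parameters, thresholds, metrics):
    """Bundle calibration evidence into one serialisable record so that
    assumptions and results stay attached to the model they describe."""
    record = {
        "model_id": model_id,
        "parameters": parameters,
        "thresholds": thresholds,
        "metrics": metrics,
        "reviewed_at": datetime.now(timezone.utc).isoformat(),
    }
    # Hash the evidence fields (not the timestamp) for a stable checksum
    payload = json.dumps(
        {k: record[k] for k in ("model_id", "parameters", "thresholds", "metrics")},
        sort_keys=True,
    )
    record["checksum"] = hashlib.sha256(payload.encode()).hexdigest()
    return record

rec = verification_record(
    model_id="catchment_A_v3",
    parameters={"roughness_multiplier": 1.1, "runoff_coeff": 0.78},
    thresholds={"min_nse": 0.7, "max_spill_count_error": 1},
    metrics={"nse": 0.83, "spill_count_error": 0},
)
print(json.dumps(rec, indent=2))
```

Records like this can be written automatically at the end of each run, which removes the need to reconcile spreadsheets, QA notes and model files after the fact.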
One of the most significant hidden costs arises when issues are identified late in the project lifecycle. If model limitations or inaccuracies are discovered after design or costing has begun, work may need to be revisited. This can result in redesign of proposed interventions, additional modelling cycles and delays to project delivery timelines.
Manual approaches can lead to solutions being costed and then rejected, requiring modellers to revisit earlier stages of analysis. This creates inefficiency across both engineering and commercial workflows.
Auto-calibration as part of the verification process introduces a different approach. Instead of relying on sequential, manual checks, workflows can be configured to assess model performance systematically across large datasets and multiple scenarios.
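At its simplest, this systematic assessment is a search over candidate parameter values scored against all events at once, rather than tuned storm by storm. The toy sketch below stands in for a real optimiser and hydraulic model (the linear `simulate` function and the event data are hypothetical); tools such as HEEDS apply far more sophisticated search strategies to the same underlying idea:

```python
def simulate(runoff_coeff, rainfall_mm):
    """Toy stand-in for a hydraulic model run: predicted peak flow
    as a linear function of event rainfall (illustration only)."""
    return [runoff_coeff * r for r in rainfall_mm]

def aggregate_error(runoff_coeff, events):
    """Sum squared peak-flow error over ALL events, so no single
    storm dominates the calibration."""
    total = 0.0
    for rainfall, observed_peaks in events:
        predicted = simulate(runoff_coeff, rainfall)
        total += sum((p - o) ** 2 for p, o in zip(predicted, observed_peaks))
    return total

# Hypothetical events: (rainfall depths in mm, observed peak flows in m3/s)
events = [
    ([10, 25, 40], [6.1, 15.2, 23.8]),
    ([5, 18, 33], [2.9, 10.9, 19.6]),
]

# Exhaustive search over a coarse parameter grid, 0.40 to 0.80
candidates = [c / 100 for c in range(40, 81)]
best = min(candidates, key=lambda c: aggregate_error(c, events))
print(best)
```

Because every candidate is scored against the same multi-event objective, the result is reproducible and the trade-offs between events are explicit rather than hidden in a sequence of manual adjustments.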
Automation enables broader coverage of model behaviour, consistent application of verification checks across large datasets, and clear, auditable outputs that support regulatory review.
In DWMP contexts, automated workflows have demonstrated significant reductions in manual effort while improving consistency and auditability. Engineers can focus on interpreting results and resolving complex issues, rather than repeating routine checks.
Manual model verification introduces a range of hidden costs that directly affect delivery timelines, resource efficiency and the quality of investment decisions: extended review cycles, limited scenario coverage, fragmented toolchains and late-stage rework.
Auto-calibration provides a structured and scalable alternative. By applying consistent logic across large datasets and integrating review processes, it enables broader coverage of model behaviour, reduces reliance on repetitive manual checks, and creates clear, auditable outputs aligned with regulatory expectations. This allows modelling teams to focus on interpretation and decision-making, rather than processing and verification.
STRIDE applies this approach through HEEDS, combining optimisation technology with water sector expertise to deliver efficient, repeatable modelling processes. The result is faster progression from analysis to decision, improved utilisation of modelling resources, and stronger confidence in the evidence used to support capital investment across wastewater programmes.