Comparison#
The comparison modules provide tools for comparing IWFM models and generating reports with various metrics.
Differ Module#
The differ module provides classes for comparing IWFM model components and generating structured diffs.
- class pyiwfm.comparison.differ.DiffType(*values)[source]#
Bases: Enum
Type of difference detected.
- ADDED = 'added'#
- REMOVED = 'removed'#
- MODIFIED = 'modified'#
- class pyiwfm.comparison.differ.DiffItem(path, diff_type, old_value=None, new_value=None)[source]#
Bases: object
A single difference item.
- Variables:
path (str) – Path to the differing item (e.g., ‘mesh.nodes.5.x’)
diff_type (pyiwfm.comparison.differ.DiffType) – Type of difference (added, removed, modified)
old_value (Any) – Original value (None if added)
new_value (Any) – New value (None if removed)
- __init__(path, diff_type, old_value=None, new_value=None)#
- class pyiwfm.comparison.differ.MeshDiff(items=<factory>, nodes_added=0, nodes_removed=0, nodes_modified=0, elements_added=0, elements_removed=0, elements_modified=0)[source]#
Bases: object
Difference between two meshes.
- Variables:
items (list[pyiwfm.comparison.differ.DiffItem]) – List of difference items
nodes_added (int) – Number of nodes added
nodes_removed (int) – Number of nodes removed
nodes_modified (int) – Number of nodes modified
elements_added (int) – Number of elements added
elements_removed (int) – Number of elements removed
elements_modified (int) – Number of elements modified
- __init__(items=<factory>, nodes_added=0, nodes_removed=0, nodes_modified=0, elements_added=0, elements_removed=0, elements_modified=0)#
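The added/removed/modified counts above follow the usual keyed set-difference pattern. A minimal sketch of how such counts could be derived (the node dictionaries and `diff_counts` helper are hypothetical, not the pyiwfm API):

```python
import math

def diff_counts(old, new, tol=1e-6):
    """Classify keyed items as added, removed, or modified.

    old/new map a node ID to an (x, y) coordinate tuple.
    """
    added = new.keys() - old.keys()
    removed = old.keys() - new.keys()
    # An item present in both is "modified" if any coordinate
    # differs by more than the tolerance.
    modified = {
        k for k in old.keys() & new.keys()
        if any(not math.isclose(a, b, abs_tol=tol)
               for a, b in zip(old[k], new[k]))
    }
    return len(added), len(removed), len(modified)

old_nodes = {1: (0.0, 0.0), 2: (1.0, 0.0), 3: (0.0, 1.0)}
new_nodes = {2: (1.0, 0.0), 3: (0.5, 1.0), 4: (1.0, 1.0)}
print(diff_counts(old_nodes, new_nodes))  # (1, 1, 1)
```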
- class pyiwfm.comparison.differ.StratigraphyDiff(items=<factory>)[source]#
Bases: object
Difference between two stratigraphy definitions.
- Variables:
items (list[pyiwfm.comparison.differ.DiffItem]) – List of difference items
- classmethod compare(strat1, strat2, tolerance=1e-06)[source]#
Compare two stratigraphy definitions.
- Parameters:
strat1 (Stratigraphy) – First stratigraphy (original)
strat2 (Stratigraphy) – Second stratigraphy (modified)
tolerance (float) – Tolerance for floating point comparisons
- Returns:
StratigraphyDiff containing all differences
- Return type:
StratigraphyDiff
- __init__(items=<factory>)#
- class pyiwfm.comparison.differ.ModelDiff(mesh_diff=None, stratigraphy_diff=None)[source]#
Bases: object
Container for all model differences.
- Variables:
mesh_diff (pyiwfm.comparison.differ.MeshDiff | None) – Mesh differences
stratigraphy_diff (pyiwfm.comparison.differ.StratigraphyDiff | None) – Stratigraphy differences
- stratigraphy_diff: StratigraphyDiff | None = None#
- summary()[source]#
Generate a human-readable summary of differences.
- Returns:
Summary string
- Return type:
str
- __init__(mesh_diff=None, stratigraphy_diff=None)#
- class pyiwfm.comparison.differ.ModelDiffer(tolerance=1e-06)[source]#
Bases: object
Compare two IWFM models and generate differences.
This class provides methods to compare individual model components or entire models.
- __init__(tolerance=1e-06)[source]#
Initialize the model differ.
- Parameters:
tolerance (float) – Tolerance for floating point comparisons
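Floating-point fields are compared against this tolerance rather than with exact equality. A sketch of that comparison rule (illustrative only, not pyiwfm's internal code):

```python
def values_differ(old, new, tolerance=1e-06):
    """Two floats differ only if their absolute gap exceeds the tolerance."""
    return abs(old - new) > tolerance

print(values_differ(10.0, 10.0000005))  # False: within tolerance
print(values_differ(10.0, 10.00001))    # True: beyond tolerance
```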
- diff_stratigraphy(strat1, strat2)[source]#
Compare two stratigraphy definitions.
- Parameters:
strat1 (Stratigraphy) – First stratigraphy (original)
strat2 (Stratigraphy) – Second stratigraphy (modified)
- Returns:
StratigraphyDiff containing differences
- Return type:
StratigraphyDiff
- diff(mesh1=None, mesh2=None, strat1=None, strat2=None)[source]#
Compare model components.
- Parameters:
mesh1 (AppGrid | None) – First mesh (original)
mesh2 (AppGrid | None) – Second mesh (modified)
strat1 (Stratigraphy | None) – First stratigraphy (original)
strat2 (Stratigraphy | None) – Second stratigraphy (modified)
- Returns:
ModelDiff containing all differences
- Return type:
ModelDiff
Metrics Module#
The metrics module provides functions and classes for computing comparison metrics between observed and simulated data, commonly used for model calibration and validation.
Metric Functions#
- rmse(): Root Mean Square Error
- mae(): Mean Absolute Error
- mbe(): Mean Bias Error
- nash_sutcliffe(): Nash-Sutcliffe Efficiency (NSE)
- percent_bias(): Percent Bias (PBIAS)
- correlation_coefficient(): Pearson correlation
- max_error(): Maximum absolute error
- scaled_rmse(): Scaled RMSE (dimensionless)
- index_of_agreement(): Willmott Index of Agreement
Classes#
- ComparisonMetrics: Container for all metrics
- TimeSeriesComparison: Compare time series data
- SpatialComparison: Compare spatial fields
Example
Compute metrics for head comparison:
>>> import numpy as np
>>> from pyiwfm.comparison.metrics import ComparisonMetrics, rmse
>>>
>>> observed = np.array([50.0, 52.0, 48.0, 55.0, 51.0])
>>> simulated = np.array([51.0, 51.5, 49.0, 54.0, 52.0])
>>>
>>> # Individual metrics
>>> print(f"RMSE: {rmse(observed, simulated):.3f}")
RMSE: 0.922
>>>
>>> # All metrics at once
>>> metrics = ComparisonMetrics.compute(observed, simulated)
>>> print(metrics.summary())
>>> print(f"Model rating: {metrics.rating()}")
- pyiwfm.comparison.metrics.mbe(observed, simulated)[source]#
Calculate Mean Bias Error.
Positive values indicate over-prediction, negative values indicate under-prediction.
- pyiwfm.comparison.metrics.nash_sutcliffe(observed, simulated)[source]#
Calculate Nash-Sutcliffe Efficiency.
NSE = 1 - [sum((obs - sim)^2) / sum((obs - mean(obs))^2)]
Values range from -inf to 1.0:
- NSE = 1: Perfect model
- NSE = 0: Model is as good as using the mean observed value
- NSE < 0: Model is worse than using the mean
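The formula above translates directly into a few lines of code. A standalone sketch in plain Python (not the library function itself):

```python
def nash_sutcliffe(observed, simulated):
    """NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [50.0, 52.0, 48.0, 55.0, 51.0]
sim = [51.0, 51.5, 49.0, 54.0, 52.0]
print(round(nash_sutcliffe(obs, sim), 3))  # 0.841
```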
- pyiwfm.comparison.metrics.percent_bias(observed, simulated)[source]#
Calculate Percent Bias.
PBIAS = 100 * [sum(sim - obs) / sum(obs)]
Positive values indicate over-prediction, negative values indicate under-prediction.
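The PBIAS formula above, sketched as a standalone function (plain Python, not the library implementation):

```python
def percent_bias(observed, simulated):
    """PBIAS = 100 * sum(sim - obs) / sum(obs)."""
    return 100.0 * sum(s - o for o, s in zip(observed, simulated)) / sum(observed)

obs = [50.0, 52.0, 48.0, 55.0, 51.0]
sim = [51.0, 51.5, 49.0, 54.0, 52.0]
print(round(percent_bias(obs, sim), 3))  # 0.586: slight over-prediction
```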
- pyiwfm.comparison.metrics.correlation_coefficient(observed, simulated)[source]#
Calculate Pearson correlation coefficient.
- pyiwfm.comparison.metrics.relative_error(observed, simulated)[source]#
Calculate relative error at each point.
- pyiwfm.comparison.metrics.scaled_rmse(observed, simulated)[source]#
Calculate Scaled Root Mean Square Error.
SRMSE = RMSE / (max(obs) - min(obs))
A dimensionless metric that allows comparison across sites with different magnitudes. Values closer to 0 indicate better fit.
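Dividing RMSE by the observed range, as the formula states, can be sketched as follows (standalone code, not the library function):

```python
import math

def scaled_rmse(observed, simulated):
    """SRMSE = RMSE / (max(obs) - min(obs)), a dimensionless fit measure."""
    rmse = math.sqrt(
        sum((o - s) ** 2 for o, s in zip(observed, simulated)) / len(observed)
    )
    return rmse / (max(observed) - min(observed))

obs = [50.0, 52.0, 48.0, 55.0, 51.0]
sim = [51.0, 51.5, 49.0, 54.0, 52.0]
print(round(scaled_rmse(obs, sim), 3))  # 0.132
```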
- pyiwfm.comparison.metrics.index_of_agreement(observed, simulated)[source]#
Calculate Willmott Index of Agreement (d).
d = 1 - [sum((sim - obs)^2) / sum((|sim - mean(obs)| + |obs - mean(obs)|)^2)]
Values range from 0 to 1.0:
- d = 1: Perfect agreement
- d = 0: No agreement
Reference: Willmott, C. J. (1981). On the validation of models. Physical Geography, 2(2), 184-194.
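Willmott's formula above, written out as a standalone sketch (plain Python, not the library function):

```python
def index_of_agreement(observed, simulated):
    """d = 1 - sum((sim - obs)^2) /
             sum((|sim - mean(obs)| + |obs - mean(obs)|)^2)."""
    mean_obs = sum(observed) / len(observed)
    num = sum((s - o) ** 2 for o, s in zip(observed, simulated))
    den = sum((abs(s - mean_obs) + abs(o - mean_obs)) ** 2
              for o, s in zip(observed, simulated))
    return 1.0 - num / den

obs = [50.0, 52.0, 48.0, 55.0, 51.0]
sim = [51.0, 51.5, 49.0, 54.0, 52.0]
print(round(index_of_agreement(obs, sim), 3))  # 0.945
```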
- class pyiwfm.comparison.metrics.ComparisonMetrics(rmse, mae, mbe, nash_sutcliffe, percent_bias, correlation, max_error, scaled_rmse, index_of_agreement, n_points)[source]#
Bases: object
Container for all comparison metrics.
- Variables:
rmse (float) – Root Mean Square Error
mae (float) – Mean Absolute Error
mbe (float) – Mean Bias Error
nash_sutcliffe (float) – Nash-Sutcliffe Efficiency
percent_bias (float) – Percent Bias
correlation (float) – Pearson correlation coefficient
max_error (float) – Maximum absolute error
scaled_rmse (float) – Scaled RMSE (dimensionless)
index_of_agreement (float) – Willmott Index of Agreement
n_points (int) – Number of data points
- classmethod compute(observed, simulated)[source]#
Compute all metrics from observed and simulated data.
- rating()[source]#
Provide a qualitative rating based on NSE.
- Returns:
Rating string (‘excellent’, ‘good’, ‘fair’, ‘poor’)
- Return type:
str
- __init__(rmse, mae, mbe, nash_sutcliffe, percent_bias, correlation, max_error, scaled_rmse, index_of_agreement, n_points)#
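The docstring for rating() does not state its NSE cut-offs. A sketch using hypothetical thresholds chosen for illustration (the actual values in pyiwfm may differ):

```python
def rating(nse):
    """Map an NSE value to a qualitative label (illustrative thresholds)."""
    if nse > 0.75:
        return "excellent"
    if nse > 0.65:
        return "good"
    if nse > 0.50:
        return "fair"
    return "poor"

print(rating(0.84))  # excellent
print(rating(0.30))  # poor
```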
- class pyiwfm.comparison.metrics.TimeSeriesComparison(times, observed, simulated)[source]#
Bases: object
Compare time series data.
- Variables:
times (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Time values
observed (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Observed values
simulated (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Simulated values
- property metrics: ComparisonMetrics#
Get comparison metrics.
- property residuals: ndarray[tuple[Any, ...], dtype[float64]]#
Calculate residuals (simulated - observed).
- __init__(times, observed, simulated)#
- class pyiwfm.comparison.metrics.SpatialComparison(x, y, observed, simulated)[source]#
Bases: object
Compare spatial field data.
- Variables:
x (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – X coordinates
y (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Y coordinates
observed (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Observed values at each point
simulated (numpy.ndarray[tuple[Any, ...], numpy.dtype[numpy.float64]]) – Simulated values at each point
- property metrics: ComparisonMetrics#
Get comparison metrics.
- property error_field: ndarray[tuple[Any, ...], dtype[float64]]#
Calculate error at each point (simulated - observed).
- property relative_error_field: ndarray[tuple[Any, ...], dtype[float64]]#
Calculate relative error at each point.
- __init__(x, y, observed, simulated)#
Report Module#
The report module provides classes for generating comparison reports in various formats (text, JSON, HTML).
- class pyiwfm.comparison.report.BaseReport[source]#
Bases: ABC
Abstract base class for report generators.
- abstractmethod generate_metrics_report(metrics)[source]#
Generate report content from metrics.
- Parameters:
metrics (ComparisonMetrics) – Comparison metrics object
- Returns:
Report content as string
- Return type:
str
- class pyiwfm.comparison.report.TextReport[source]#
Bases: BaseReport
Generate plain text reports.
- class pyiwfm.comparison.report.JsonReport(indent=2)[source]#
Bases: BaseReport
Generate JSON reports.
- class pyiwfm.comparison.report.HtmlReport(title='Model Comparison Report')[source]#
Bases: BaseReport
Generate HTML reports.
- class pyiwfm.comparison.report.ReportGenerator[source]#
Bases: object
Factory class for generating reports in various formats.
Provides a unified interface for generating reports in text, JSON, or HTML format.
- generate(model_diff, format='text')[source]#
Generate a report in the specified format.
- Parameters:
model_diff (ModelDiff) – Model difference object
format (Literal['text', 'json', 'html']) – Output format (‘text’, ‘json’, ‘html’)
- Returns:
Report content as string
- Raises:
ValueError – If format is not recognized
- Return type:
str
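The dispatch-on-format pattern described here, including the ValueError on an unrecognized format, can be sketched with a plain dictionary of formatters (an illustrative stand-in, not ReportGenerator's actual code):

```python
def generate(content_by_format, format="text"):
    """Look up a formatter by name; raise ValueError if the format is unknown."""
    try:
        return content_by_format[format]()
    except KeyError:
        raise ValueError(f"Unknown report format: {format!r}") from None

# Hypothetical formatters standing in for text/JSON report generators.
formatters = {"text": lambda: "plain report", "json": lambda: "{}"}
print(generate(formatters, "text"))  # plain report
```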
- generate_metrics(metrics, format='text')[source]#
Generate a metrics report in the specified format.
- Parameters:
metrics (ComparisonMetrics) – Comparison metrics object
format (Literal['text', 'json', 'html']) – Output format
- Returns:
Report content as string
- Return type:
str
- class pyiwfm.comparison.report.ComparisonReport(title, model_diff=None, head_metrics=None, flow_metrics=None, description='', metadata=<factory>)[source]#
Bases: object
Container for a complete comparison report.
Combines model diff, metrics, and metadata into a single report object.
- Variables:
title (str) – Report title
model_diff (ModelDiff | None) – Model difference (optional)
head_metrics (ComparisonMetrics | None) – Head comparison metrics (optional)
flow_metrics (ComparisonMetrics | None) – Flow comparison metrics (optional)
description (str) – Report description
- head_metrics: ComparisonMetrics | None = None#
- flow_metrics: ComparisonMetrics | None = None#
- __init__(title, model_diff=None, head_metrics=None, flow_metrics=None, description='', metadata=<factory>)#