Comparison#

The comparison modules provide tools for comparing IWFM models, computing goodness-of-fit metrics between observed and simulated data, and generating reports in text, JSON, or HTML format.

Differ Module#

The differ module provides classes for comparing model components and generating structured diffs.

class pyiwfm.comparison.differ.DiffType(*values)[source]#

Bases: Enum

Type of difference detected.

ADDED = 'added'#
REMOVED = 'removed'#
MODIFIED = 'modified'#
class pyiwfm.comparison.differ.DiffItem(path, diff_type, old_value=None, new_value=None)[source]#

Bases: object

A single difference item.

Variables:
  • path (str) – Path to the differing item (e.g., ‘mesh.nodes.5.x’)

  • diff_type (pyiwfm.comparison.differ.DiffType) – Type of difference (added, removed, modified)

  • old_value (Any) – Original value (None if added)

  • new_value (Any) – New value (None if removed)

path: str#
diff_type: DiffType#
old_value: Any = None#
new_value: Any = None#
__init__(path, diff_type, old_value=None, new_value=None)#
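
For illustration, a DiffItem recording a modified node coordinate might look like this (values are hypothetical):

>>> from pyiwfm.comparison.differ import DiffItem, DiffType
>>> item = DiffItem(path="mesh.nodes.5.x", diff_type=DiffType.MODIFIED,
...                 old_value=100.0, new_value=101.5)
>>> item.diff_type
<DiffType.MODIFIED: 'modified'>
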
class pyiwfm.comparison.differ.MeshDiff(items=<factory>, nodes_added=0, nodes_removed=0, nodes_modified=0, elements_added=0, elements_removed=0, elements_modified=0)[source]#

Bases: object

Difference between two meshes.

Variables:
  • items (list[pyiwfm.comparison.differ.DiffItem]) – List of difference items

  • nodes_added (int) – Number of nodes added

  • nodes_removed (int) – Number of nodes removed

  • nodes_modified (int) – Number of nodes modified

  • elements_added (int) – Number of elements added

  • elements_removed (int) – Number of elements removed

  • elements_modified (int) – Number of elements modified

items: list[DiffItem]#
nodes_added: int = 0#
nodes_removed: int = 0#
nodes_modified: int = 0#
elements_added: int = 0#
elements_removed: int = 0#
elements_modified: int = 0#
property is_identical: bool#

Check if meshes are identical.

classmethod compare(mesh1, mesh2)[source]#

Compare two meshes and return their differences.

Parameters:
  • mesh1 (AppGrid) – First mesh (original)

  • mesh2 (AppGrid) – Second mesh (modified)

Returns:

MeshDiff containing all differences

Return type:

MeshDiff

__init__(items=<factory>, nodes_added=0, nodes_removed=0, nodes_modified=0, elements_added=0, elements_removed=0, elements_modified=0)#
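
A minimal sketch of a direct mesh comparison, assuming grid_a and grid_b are AppGrid instances loaded elsewhere (hypothetical names):

>>> from pyiwfm.comparison.differ import MeshDiff
>>> mesh_diff = MeshDiff.compare(grid_a, grid_b)  # grid_a, grid_b: AppGrid
>>> if not mesh_diff.is_identical:
...     print(f"{mesh_diff.nodes_modified} nodes modified, "
...           f"{mesh_diff.elements_added} elements added")
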
class pyiwfm.comparison.differ.StratigraphyDiff(items=<factory>)[source]#

Bases: object

Difference between two stratigraphy definitions.

Variables:

items (list[pyiwfm.comparison.differ.DiffItem]) – List of difference items

items: list[DiffItem]#
property is_identical: bool#

Check if stratigraphy is identical.

classmethod compare(strat1, strat2, tolerance=1e-06)[source]#

Compare two stratigraphy definitions.

Parameters:
  • strat1 (Stratigraphy) – First stratigraphy (original)

  • strat2 (Stratigraphy) – Second stratigraphy (modified)

  • tolerance (float) – Tolerance for floating point comparisons

Returns:

StratigraphyDiff containing all differences

Return type:

StratigraphyDiff

__init__(items=<factory>)#
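
A similar sketch for stratigraphy, assuming strat_a and strat_b are Stratigraphy instances (hypothetical names); the tolerance argument loosens the floating point comparison:

>>> from pyiwfm.comparison.differ import StratigraphyDiff
>>> strat_diff = StratigraphyDiff.compare(strat_a, strat_b, tolerance=1e-4)
>>> print(f"identical: {strat_diff.is_identical}, "
...       f"items: {len(strat_diff.items)}")
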
class pyiwfm.comparison.differ.ModelDiff(mesh_diff=None, stratigraphy_diff=None)[source]#

Bases: object

Container for all model differences.

Variables:
  • mesh_diff (pyiwfm.comparison.differ.MeshDiff | None) – Mesh differences (None if meshes were not compared)

  • stratigraphy_diff (pyiwfm.comparison.differ.StratigraphyDiff | None) – Stratigraphy differences (None if not compared)

mesh_diff: MeshDiff | None = None#
stratigraphy_diff: StratigraphyDiff | None = None#
property items: list[DiffItem]#

Get all difference items.

property is_identical: bool#

Check if models are identical.

summary()[source]#

Generate a human-readable summary of differences.

Returns:

Summary string

Return type:

str

filter_by_path(prefix)[source]#

Filter diff items by path prefix.

Parameters:

prefix (str) – Path prefix to filter by

Returns:

New ModelDiff with filtered items

Return type:

ModelDiff

filter_by_type(diff_type)[source]#

Filter diff items by diff type.

Parameters:

diff_type (DiffType) – Type of diff to filter by

Returns:

New ModelDiff with filtered items

Return type:

ModelDiff

statistics()[source]#

Calculate diff statistics.

Returns:

Dictionary with statistics

Return type:

dict[str, int]

to_dict()[source]#

Convert diff to dictionary representation.

Returns:

Dictionary representation of diff

Return type:

dict[str, Any]

__init__(mesh_diff=None, stratigraphy_diff=None)#
class pyiwfm.comparison.differ.ModelDiffer(tolerance=1e-06)[source]#

Bases: object

Compare two IWFM models and generate differences.

This class provides methods to compare individual model components or entire models.

__init__(tolerance=1e-06)[source]#

Initialize the model differ.

Parameters:

tolerance (float) – Tolerance for floating point comparisons

diff_meshes(mesh1, mesh2)[source]#

Compare two meshes.

Parameters:
  • mesh1 (AppGrid) – First mesh (original)

  • mesh2 (AppGrid) – Second mesh (modified)

Returns:

MeshDiff containing differences

Return type:

MeshDiff

diff_stratigraphy(strat1, strat2)[source]#

Compare two stratigraphy definitions.

Parameters:
  • strat1 (Stratigraphy) – First stratigraphy (original)

  • strat2 (Stratigraphy) – Second stratigraphy (modified)

Returns:

StratigraphyDiff containing differences

Return type:

StratigraphyDiff

diff(mesh1=None, mesh2=None, strat1=None, strat2=None)[source]#

Compare model components.

Parameters:
  • mesh1 (AppGrid | None) – First mesh (original)

  • mesh2 (AppGrid | None) – Second mesh (modified)

  • strat1 (Stratigraphy | None) – First stratigraphy (original)

  • strat2 (Stratigraphy | None) – Second stratigraphy (modified)

Returns:

ModelDiff containing all differences

Return type:

ModelDiff
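
An end-to-end sketch of the differ workflow, again assuming hypothetical grid_a/grid_b (AppGrid) and strat_a/strat_b (Stratigraphy) objects:

>>> from pyiwfm.comparison.differ import ModelDiffer, DiffType
>>> differ = ModelDiffer(tolerance=1e-6)
>>> diff = differ.diff(mesh1=grid_a, mesh2=grid_b,
...                    strat1=strat_a, strat2=strat_b)
>>> print(diff.summary())
>>> node_changes = diff.filter_by_path("mesh.nodes")  # only node diffs
>>> modified = diff.filter_by_type(DiffType.MODIFIED)
>>> stats = diff.statistics()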

Metrics Module#

The metrics module provides functions and classes for computing comparison metrics between observed and simulated data, commonly used for model calibration and validation.

Example

Compute metrics for head comparison:

>>> import numpy as np
>>> from pyiwfm.comparison.metrics import ComparisonMetrics, rmse
>>>
>>> observed = np.array([50.0, 52.0, 48.0, 55.0, 51.0])
>>> simulated = np.array([51.0, 51.5, 49.0, 54.0, 52.0])
>>>
>>> # Individual metrics
>>> print(f"RMSE: {rmse(observed, simulated):.3f}")
RMSE: 0.922
>>>
>>> # All metrics at once
>>> metrics = ComparisonMetrics.compute(observed, simulated)
>>> print(metrics.summary())
>>> print(f"Model rating: {metrics.rating()}")
pyiwfm.comparison.metrics.rmse(observed, simulated)[source]#

Calculate Root Mean Square Error.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

RMSE value

Return type:

float

pyiwfm.comparison.metrics.mae(observed, simulated)[source]#

Calculate Mean Absolute Error.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

MAE value

Return type:

float

pyiwfm.comparison.metrics.mbe(observed, simulated)[source]#

Calculate Mean Bias Error.

Positive values indicate over-prediction, negative values indicate under-prediction.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

MBE value

Return type:

float

pyiwfm.comparison.metrics.nash_sutcliffe(observed, simulated)[source]#

Calculate Nash-Sutcliffe Efficiency.

NSE = 1 - [sum((obs - sim)^2) / sum((obs - mean(obs))^2)]

Values range from -inf to 1.0:
  • NSE = 1: Perfect model

  • NSE = 0: Model is as good as using the mean observed value

  • NSE < 0: Model is worse than using the mean

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

NSE value

Return type:

float
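
Applying the formula to a small hand-checkable case: with mean(obs) = 2, the squared-error sum is 0.06 and the observed-variance sum is 2, so NSE = 1 - 0.06/2 = 0.97:

>>> import numpy as np
>>> from pyiwfm.comparison.metrics import nash_sutcliffe
>>> obs = np.array([1.0, 2.0, 3.0])
>>> sim = np.array([1.1, 1.9, 3.2])
>>> round(nash_sutcliffe(obs, sim), 3)
0.97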

pyiwfm.comparison.metrics.percent_bias(observed, simulated)[source]#

Calculate Percent Bias.

PBIAS = 100 * [sum(sim - obs) / sum(obs)]

Positive values indicate over-prediction, negative values indicate under-prediction.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Percent bias value

Return type:

float

pyiwfm.comparison.metrics.correlation_coefficient(observed, simulated)[source]#

Calculate Pearson correlation coefficient.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Correlation coefficient (-1 to 1)

Return type:

float

pyiwfm.comparison.metrics.relative_error(observed, simulated)[source]#

Calculate relative error at each point.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Array of relative errors

Return type:

ndarray[tuple[Any, ...], dtype[float64]]

pyiwfm.comparison.metrics.max_error(observed, simulated)[source]#

Calculate maximum absolute error.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Maximum absolute error

Return type:

float

pyiwfm.comparison.metrics.scaled_rmse(observed, simulated)[source]#

Calculate Scaled Root Mean Square Error.

SRMSE = RMSE / (max(obs) - min(obs))

A dimensionless metric that allows comparison across sites with different magnitudes. Values closer to 0 indicate better fit.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Scaled RMSE value. Returns inf if observed range is zero.

Return type:

float
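
A quick worked case: RMSE is sqrt(8/3) ≈ 1.633 and the observed range is 20, giving a scaled RMSE of about 0.082:

>>> import numpy as np
>>> from pyiwfm.comparison.metrics import scaled_rmse
>>> obs = np.array([10.0, 20.0, 30.0])
>>> sim = np.array([12.0, 20.0, 28.0])
>>> round(scaled_rmse(obs, sim), 3)
0.082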

pyiwfm.comparison.metrics.index_of_agreement(observed, simulated)[source]#

Calculate Willmott Index of Agreement (d).

d = 1 - [sum((sim - obs)^2) / sum((|sim - mean(obs)| + |obs - mean(obs)|)^2)]

Values range from 0 to 1.0:
  • d = 1: Perfect agreement

  • d = 0: No agreement

Reference: Willmott, C. J. (1981). On the validation of models. Physical Geography, 2(2), 184-194.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

Index of agreement (0 to 1)

Return type:

float

class pyiwfm.comparison.metrics.ComparisonMetrics(rmse, mae, mbe, nash_sutcliffe, percent_bias, correlation, max_error, scaled_rmse, index_of_agreement, n_points)[source]#

Bases: object

Container for all comparison metrics.

Variables:
  • rmse (float) – Root Mean Square Error

  • mae (float) – Mean Absolute Error

  • mbe (float) – Mean Bias Error

  • nash_sutcliffe (float) – Nash-Sutcliffe Efficiency

  • percent_bias (float) – Percent Bias

  • correlation (float) – Pearson correlation coefficient

  • max_error (float) – Maximum absolute error

  • scaled_rmse (float) – Scaled RMSE (dimensionless)

  • index_of_agreement (float) – Willmott Index of Agreement

  • n_points (int) – Number of data points

rmse: float#
mae: float#
mbe: float#
nash_sutcliffe: float#
percent_bias: float#
correlation: float#
max_error: float#
scaled_rmse: float#
index_of_agreement: float#
n_points: int#
classmethod compute(observed, simulated)[source]#

Compute all metrics from observed and simulated data.

Parameters:
  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values

Returns:

ComparisonMetrics instance with all metrics computed

Return type:

ComparisonMetrics

to_dict()[source]#

Convert metrics to dictionary.

Returns:

Dictionary with all metrics

Return type:

dict[str, Any]

summary()[source]#

Generate a human-readable summary.

Returns:

Summary string

Return type:

str

rating()[source]#

Provide a qualitative rating based on NSE.

Returns:

Rating string (‘excellent’, ‘good’, ‘fair’, ‘poor’)

Return type:

str

__init__(rmse, mae, mbe, nash_sutcliffe, percent_bias, correlation, max_error, scaled_rmse, index_of_agreement, n_points)#
class pyiwfm.comparison.metrics.TimeSeriesComparison(times, observed, simulated)[source]#

Bases: object

Compare time series data.

Variables:
  • times (ndarray[tuple[Any, ...], dtype[float64]]) – Time values

  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values at each time

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values at each time
times: ndarray[tuple[Any, ...], dtype[float64]]#
observed: ndarray[tuple[Any, ...], dtype[float64]]#
simulated: ndarray[tuple[Any, ...], dtype[float64]]#
__post_init__()[source]#

Compute metrics after initialization.

property metrics: ComparisonMetrics#

Get comparison metrics.

property n_points: int#

Total number of time points.

property n_valid_points: int#

Number of valid (non-NaN) time points.

property residuals: ndarray[tuple[Any, ...], dtype[float64]]#

Calculate residuals (simulated - observed).

to_dict()[source]#

Convert to dictionary.

Returns:

Dictionary representation

Return type:

dict[str, Any]

__init__(times, observed, simulated)#
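
A minimal sketch of a time-series comparison; note the NaN in observed, which (assuming NaN pairs are excluded from the metrics) leaves three valid points:

>>> import numpy as np
>>> from pyiwfm.comparison.metrics import TimeSeriesComparison
>>> times = np.array([0.0, 1.0, 2.0, 3.0])
>>> observed = np.array([10.0, 11.0, np.nan, 12.0])
>>> simulated = np.array([10.5, 10.8, 11.0, 12.4])
>>> ts = TimeSeriesComparison(times, observed, simulated)
>>> ts.n_points
4
>>> ts.n_valid_points
3
>>> print(ts.metrics.summary())
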
class pyiwfm.comparison.metrics.SpatialComparison(x, y, observed, simulated)[source]#

Bases: object

Compare spatial field data.

Variables:
  • x (ndarray[tuple[Any, ...], dtype[float64]]) – X coordinates of the comparison points

  • y (ndarray[tuple[Any, ...], dtype[float64]]) – Y coordinates of the comparison points

  • observed (ndarray[tuple[Any, ...], dtype[float64]]) – Observed values at each point

  • simulated (ndarray[tuple[Any, ...], dtype[float64]]) – Simulated values at each point
x: ndarray[tuple[Any, ...], dtype[float64]]#
y: ndarray[tuple[Any, ...], dtype[float64]]#
observed: ndarray[tuple[Any, ...], dtype[float64]]#
simulated: ndarray[tuple[Any, ...], dtype[float64]]#
__post_init__()[source]#

Compute metrics after initialization.

property metrics: ComparisonMetrics#

Get comparison metrics.

property n_points: int#

Total number of spatial points.

property error_field: ndarray[tuple[Any, ...], dtype[float64]]#

Calculate error at each point (simulated - observed).

property relative_error_field: ndarray[tuple[Any, ...], dtype[float64]]#

Calculate relative error at each point.

metrics_by_region(regions)[source]#

Calculate metrics for each region.

Parameters:

regions (ndarray[tuple[Any, ...], dtype[int32]]) – Region ID for each point

Returns:

Dictionary mapping region ID to metrics

Return type:

dict[int, ComparisonMetrics]

to_dict()[source]#

Convert to dictionary.

Returns:

Dictionary representation

Return type:

dict[str, Any]

__init__(x, y, observed, simulated)#
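
A sketch of a spatial comparison with per-region metrics; regions is a hypothetical int32 array assigning each point to a subregion:

>>> import numpy as np
>>> from pyiwfm.comparison.metrics import SpatialComparison
>>> x = np.array([0.0, 1.0, 2.0, 3.0])
>>> y = np.zeros(4)
>>> observed = np.array([5.0, 6.0, 7.0, 8.0])
>>> simulated = np.array([5.2, 5.9, 7.3, 7.8])
>>> sc = SpatialComparison(x, y, observed, simulated)
>>> regions = np.array([1, 1, 2, 2], dtype=np.int32)
>>> for region_id, m in sc.metrics_by_region(regions).items():
...     print(region_id, round(m.rmse, 3))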

Report Module#

The report module provides classes for generating comparison reports in various formats (text, JSON, HTML).

class pyiwfm.comparison.report.BaseReport[source]#

Bases: ABC

Abstract base class for report generators.

abstractmethod generate(model_diff)[source]#

Generate report content from model diff.

Parameters:

model_diff (ModelDiff) – Model difference object

Returns:

Report content as string

Return type:

str

abstractmethod generate_metrics_report(metrics)[source]#

Generate report content from metrics.

Parameters:

metrics (ComparisonMetrics) – Comparison metrics object

Returns:

Report content as string

Return type:

str

save(model_diff, output_path)[source]#

Save report to file.

Parameters:
  • model_diff (ModelDiff) – Model difference object

  • output_path (Path | str) – Output file path

class pyiwfm.comparison.report.TextReport[source]#

Bases: BaseReport

Generate plain text reports.

generate(model_diff)[source]#

Generate text report from model diff.

generate_metrics_report(metrics)[source]#

Generate text report from metrics.

class pyiwfm.comparison.report.JsonReport(indent=2)[source]#

Bases: BaseReport

Generate JSON reports.

__init__(indent=2)[source]#

Initialize JSON report generator.

Parameters:

indent (int) – JSON indentation level

generate(model_diff)[source]#

Generate JSON report from model diff.

generate_metrics_report(metrics)[source]#

Generate JSON report from metrics.

class pyiwfm.comparison.report.HtmlReport(title='Model Comparison Report')[source]#

Bases: BaseReport

Generate HTML reports.

__init__(title='Model Comparison Report')[source]#

Initialize HTML report generator.

Parameters:

title (str) – HTML page title

generate(model_diff)[source]#

Generate HTML report from model diff.

generate_metrics_report(metrics)[source]#

Generate HTML report from metrics.

class pyiwfm.comparison.report.ReportGenerator[source]#

Bases: object

Factory class for generating reports in various formats.

Provides a unified interface for generating reports in text, JSON, or HTML format.

__init__()[source]#

Initialize the report generator.

generate(model_diff, format='text')[source]#

Generate a report in the specified format.

Parameters:
  • model_diff (ModelDiff) – Model difference object

  • format (Literal['text', 'json', 'html']) – Output format (‘text’, ‘json’, ‘html’)

Returns:

Report content as string

Raises:

ValueError – If format is not recognized

Return type:

str

generate_metrics(metrics, format='text')[source]#

Generate a metrics report in the specified format.

Parameters:
  • metrics (ComparisonMetrics) – Comparison metrics object

  • format (Literal['text', 'json', 'html']) – Output format

Returns:

Report content as string

Return type:

str

save(model_diff, output_path, format=None)[source]#

Save report to file.

Parameters:
  • model_diff (ModelDiff) – Model difference object

  • output_path (Path | str) – Output file path

  • format (Literal['text', 'json', 'html'] | None) – Output format (auto-detected from extension if None)
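
A usage sketch, where diff is a ModelDiff produced by an earlier ModelDiffer.diff() call (see the differ example above):

>>> from pyiwfm.comparison.report import ReportGenerator
>>> gen = ReportGenerator()
>>> text = gen.generate(diff, format="text")
>>> html = gen.generate(diff, format="html")
>>> gen.save(diff, "comparison.html")  # format inferred from .html extension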

class pyiwfm.comparison.report.ComparisonReport(title, model_diff=None, head_metrics=None, flow_metrics=None, description='', metadata=<factory>)[source]#

Bases: object

Container for a complete comparison report.

Combines model diff, metrics, and metadata into a single report object.

Variables:
  • title (str) – Report title

  • model_diff (ModelDiff | None) – Model difference (optional)

  • head_metrics (ComparisonMetrics | None) – Head comparison metrics (optional)

  • flow_metrics (ComparisonMetrics | None) – Flow comparison metrics (optional)

  • description (str) – Report description

  • metadata (dict[str, Any]) – Additional metadata

title: str#
model_diff: ModelDiff | None = None#
head_metrics: ComparisonMetrics | None = None#
flow_metrics: ComparisonMetrics | None = None#
description: str = ''#
metadata: dict[str, Any]#
to_text()[source]#

Convert report to text format.

to_json()[source]#

Convert report to JSON format.

to_html()[source]#

Convert report to HTML format.

save(output_path, format=None)[source]#

Save report to file.

Parameters:
  • output_path (Path | str) – Output file path

  • format (Literal['text', 'json', 'html'] | None) – Output format (auto-detected if None)

__init__(title, model_diff=None, head_metrics=None, flow_metrics=None, description='', metadata=<factory>)#
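
A closing sketch that bundles a diff and metrics into one report; diff and metrics are assumed to come from the differ and metrics examples above:

>>> from pyiwfm.comparison.report import ComparisonReport
>>> report = ComparisonReport(
...     title="Baseline vs. Scenario A",
...     model_diff=diff,
...     head_metrics=metrics,
...     description="Comparison after mesh refinement",
... )
>>> report.save("comparison_report.html")  # HTML inferred from extension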