Python (ixmp package)

The ix modeling platform application programming interface (API) is organized around three classes:

Platform([name, backend])

Instance of the modeling platform.

TimeSeries(mp, model, scenario[, version, ...])

Collection of data in time series format.

Scenario(mp, model, scenario[, version, ...])

Collection of model-related data.

Platform

class ixmp.Platform(name: str | None = None, backend: str | None = None, **backend_args)[source]

Instance of the modeling platform.

A Platform connects two key components:

  1. A back end for storing data such as model inputs and outputs.

  2. One or more model(s); codes in Python or other languages or frameworks that run, via Scenario.solve(), on the data stored in the Platform.

The Platform parameters control these components. TimeSeries and Scenario objects are tied to a single Platform; to move data between platforms, see Scenario.clone().

Parameters:
  • name (str) – Name of a specific configured backend.

  • backend ('jdbc') – Storage backend type. ‘jdbc’ corresponds to the built-in JDBCBackend; see BACKENDS.

  • backend_args – Keyword arguments specific to the backend. See JDBCBackend.
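
For example, a Platform can be created from a configured name or by giving backend arguments directly; a minimal sketch, assuming a platform named 'local' is configured and using a hypothetical database path:

import ixmp

# Use a platform configured in config.json under the name "local"
mp = ixmp.Platform(name="local")

# Or pass backend arguments directly (hypothetical path for a local
# HyperSQL database handled by JDBCBackend)
mp = ixmp.Platform(backend="jdbc", driver="hsqldb", path="/tmp/example-db")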

Platforms have the following methods:

add_region(region, hierarchy[, parent])

Define a region including a hierarchy level and a 'parent' region.

add_region_synonym(region, mapped_to)

Define a synonym for a region.

add_unit(unit[, comment])

Define a unit.

check_access(user, models[, access])

Check access to specific models.

regions()

Return all regions defined for time series data, including synonyms.

scenario_list([default, model, scen])

Return information about TimeSeries and Scenarios on the Platform.

set_log_level(level)

Set log level for the Platform and its storage Backend.

units()

Return all units defined on the Platform.

The following backend methods are available via Platform too:

backend.base.Backend.add_model_name(name)

Add (register) new model name.

backend.base.Backend.add_scenario_name(name)

Add (register) new scenario name.

backend.base.Backend.close_db()

OPTIONAL: Close database connection(s).

backend.base.Backend.get_doc(domain[, name])

Read documentation from database

backend.base.Backend.get_meta(model, ...)

Retrieve all metadata attached to a specific target.

backend.base.Backend.get_model_names()

List existing model names.

backend.base.Backend.get_scenario_names()

List existing scenario names.

backend.base.Backend.open_db()

OPTIONAL: (Re-)open database connection(s).

backend.base.Backend.remove_meta(names, ...)

Remove metadata attached to a target.

backend.base.Backend.set_doc(domain, docs)

Save documentation to database

backend.base.Backend.set_meta(meta, model, ...)

Set metadata on a target.

These methods can be called like normal Platform methods, e.g.:

>>> platform_instance.close_db()

add_region(region: str, hierarchy: str, parent: str = 'World') None[source]

Define a region including a hierarchy level and a ‘parent’ region.

Tip

On a Platform backed by a shared database, a region may already exist with a different spelling. Use regions() first to check, and consider calling add_region_synonym() instead.

Parameters:
  • region (str) – Name of the region.

  • hierarchy (str) – Hierarchy level of the region (e.g., country, R11, basin)

  • parent (str, optional) – Assign a ‘parent’ region.

add_region_synonym(region: str, mapped_to: str) None[source]

Define a synonym for a region.

When adding timeseries data using the synonym in the region column, it will be converted to mapped_to.

Parameters:
  • region (str) – Name of the region synonym.

  • mapped_to (str) – Name of the region to which the synonym should be mapped.
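
For example, assuming mp is a Platform instance (the region names are hypothetical):

mp.add_region("Austria", hierarchy="country", parent="World")
mp.add_region_synonym("AT", mapped_to="Austria")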

add_timeslice(name: str, category: str, duration: float) None[source]

Define a subannual timeslice including a category and duration.

See timeslices() for a detailed description of timeslices.

Parameters:
  • name (str) – Unique name of the timeslice.

  • category (str) – Timeslice category (e.g. ‘common’, ‘month’, etc).

  • duration (float) – Duration of timeslice as fraction of year.
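
For example, a monthly timeslice might be defined as follows (hypothetical name and category):

mp.add_timeslice("January", category="month", duration=1 / 12)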

add_unit(unit: str, comment: str = 'None') None[source]

Define a unit.

Parameters:
  • unit (str) – Name of the unit.

  • comment (str, optional) – Annotation describing the unit or why it was added. The current database user and timestamp are appended automatically.

check_access(user: str, models: str | Sequence[str], access: str = 'view') bool | Dict[str, bool][source]

Check access to specific models.

Parameters:
  • user (str) – Registered user name

  • models (str or list of str) – Model(s) name

  • access (str, optional) – Access type - view or edit

Return type:

bool or dict of bool

export_timeseries_data(path: PathLike, default: bool = True, model: str | None = None, scenario: str | None = None, variable=None, unit=None, region=None, export_all_runs: bool = False) None[source]

Export time series data to CSV file across multiple TimeSeries.

Refer to TimeSeries.add_timeseries() for details on adding time series data.

Parameters:
  • path (os.PathLike) –

    File name to export data to; must have the suffix ‘.csv’.

    Result file will contain the following columns:

    • model

    • scenario

    • version

    • variable

    • unit

    • region

    • meta

    • subannual

    • year

    • value

  • default (bool, optional) – True to include only TimeSeries versions marked as default.

  • model (str, optional) – Only return data for this model name.

  • scenario (str, optional) – Only return data for this scenario name.

  • variable (list of str, optional) – Only return data for variable name(s) in this list.

  • unit (list of str, optional) – Only return data for unit name(s) in this list.

  • region (list of str, optional) – Only return data for region(s) in this list.

  • export_all_runs (bool, optional) – Export all existing model+scenario run combinations.
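
A minimal sketch, assuming mp is a Platform with time series data and using hypothetical filter values:

from pathlib import Path

mp.export_timeseries_data(
    Path("timeseries.csv"),
    model="model name",
    variable=["GDP"],
    default=True,
)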

get_log_level() str[source]

Return log level of the storage Backend, if any.

Returns:

Name of a Python logging level.

Return type:

str

regions() DataFrame[source]

Return all regions defined for time series data, including synonyms.

Return type:

pandas.DataFrame

scenario_list(default: bool = True, model: str | None = None, scen: str | None = None) DataFrame[source]

Return information about TimeSeries and Scenarios on the Platform.

Parameters:
  • default (bool, optional) – Return only the default version of each TimeSeries/Scenario (see TimeSeries.set_as_default()). Any (model, scenario) without a default version is omitted. If False, return all versions.

  • model (str, optional) – A model name. If given, only return information for model.

  • scen (str, optional) – A scenario name. If given, only return information for scen.

Returns:

Scenario information, with the columns:

  • model, scenario, version, and scheme—Scenario identifiers; see TimeSeries and Scenario.

  • is_default—True if the version is the default version for the (model, scenario).

  • is_locked—True if the Scenario has been locked for use.

  • cre_user, cre_date—database user that created the Scenario, and creation time.

  • upd_user, upd_date—user and time for last modification of the Scenario.

  • lock_user, lock_date—user that locked the Scenario and lock time.

  • annotation: description of the Scenario or changelog.

Return type:

pandas.DataFrame
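
For example, to list every version (not only defaults) for a hypothetical model name:

df = mp.scenario_list(default=False, model="model name")
print(df[["model", "scenario", "version", "is_default"]])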

set_log_level(level: str | int) None[source]

Set log level for the Platform and its storage Backend.

Parameters:

level (str) – Name of a Python logging level.

timeslices() DataFrame[source]

Return all subannual time slices defined in this Platform instance.

See the Data model documentation for further details.

The category and duration do not have any functional relevance within the ixmp framework, but they may be useful for pre- or post-processing. For example, they can be used to filter all timeslices of a certain category (e.g., all months) from the pandas.DataFrame returned by this function or to aggregate subannual data to full-year results.

Returns:

Data frame with columns ‘timeslice’, ‘category’, and ‘duration’.

Return type:

pandas.DataFrame

See also

add_timeslice

units() List[str][source]

Return all units defined on the Platform.

Return type:

list of str

TimeSeries

class ixmp.TimeSeries(mp: Platform, model: str, scenario: str, version: int | str | None = None, annotation: str | None = None, **kwargs)[source]

Collection of data in time series format.

TimeSeries is the parent/super-class of Scenario.

Parameters:
  • mp (Platform) – ixmp instance in which to store data.

  • model (str) – Model name.

  • scenario (str) – Scenario name.

  • version (int or str, optional) – If omitted and a default version of the (model, scenario) has been designated (see set_as_default()), load that version. If int, load a specific version. If 'new', create a new TimeSeries.

  • annotation (str, optional) – A short annotation/comment used when version='new'.

A TimeSeries is uniquely identified on its Platform by its model, scenario, and version attributes. For more details, see the data model documentation.
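
A minimal sketch, assuming a configured default platform and hypothetical model/scenario names:

import ixmp

mp = ixmp.Platform()

# Load the default version of an existing (model, scenario)
ts = ixmp.TimeSeries(mp, "model name", "scenario name")

# Create a new version with the same identifiers
ts_new = ixmp.TimeSeries(
    mp, "model name", "scenario name", version="new", annotation="new version"
)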

A new version is created by:

  • Instantiating a new TimeSeries with the same model and scenario as an existing TimeSeries.

  • Calling Scenario.clone().

TimeSeries objects have the following methods and attributes:

add_geodata(df)

Add geodata.

add_timeseries(df[, meta, year_lim])

Add time series data.

check_out([timeseries_only])

Check out the TimeSeries.

commit(comment)

Commit all changed data to the database.

discard_changes()

Discard all changes and reload from the database.

get_geodata()

Fetch geodata and return it as dataframe.

get_meta([name])

Get Metadata for this object.

is_default()

Return True if the version is the default version.

last_update()

Get the timestamp of the last update/edit of this TimeSeries.

preload_timeseries()

Preload timeseries data to in-memory cache.

read_file(path[, firstyear, lastyear])

Read time series data from a CSV or Microsoft Excel file.

remove_geodata(df)

Remove geodata from the TimeSeries instance.

remove_timeseries(df)

Remove time series data.

run_id()

Get the run id of this TimeSeries.

set_as_default()

Set the current version as the default.

set_meta(name_or_dict[, value])

Set Metadata for this object.

timeseries([region, variable, unit, year, ...])

Retrieve time series data.

transact([message, condition, discard_on_error])

Context manager to wrap code in a 'transaction'.

url

URL fragment for the TimeSeries.

add_geodata(df: DataFrame) None[source]

Add geodata.

Parameters:

df (pandas.DataFrame) –

Data to add. df must have the following columns:

  • region

  • variable

  • subannual

  • unit

  • year

  • value

  • meta

add_timeseries(df: DataFrame, meta: bool = False, year_lim: Tuple[int | None, int | None] = (None, None)) None[source]

Add time series data.

Parameters:
  • df (pandas.DataFrame) –

    Data to add. df must have the following columns:

    • region or node

    • variable

    • unit

    Additional column names may be either of:

    • year and value—long, or ‘tabular’, format.

    • one or more specific years—wide, or ‘IAMC’ format.

    To support subannual temporal resolution of timeseries data, a column subannual is optional in df. The entries in this column must have been defined in the Platform instance using add_timeslice() beforehand. If no column subannual is included in df, the data is assumed to contain yearly values. See timeslices() for a detailed description of the feature.

  • meta (bool, optional) – If True, store df as metadata. Metadata is treated specially when Scenario.clone() is called for Scenarios created with scheme='MESSAGE'.

  • year_lim (tuple, optional) – Respectively, earliest and latest years to add from df; data for other years is ignored.
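
A minimal sketch in long ('tabular') format, assuming ts is a TimeSeries in a checked-in state and that the region 'World' and unit 'USD' are already defined on the Platform (values are hypothetical):

import pandas as pd

df = pd.DataFrame({
    "region": ["World", "World"],
    "variable": ["GDP", "GDP"],
    "unit": ["USD", "USD"],
    "year": [2020, 2030],
    "value": [1.0, 1.2],
})

ts.check_out()
ts.add_timeseries(df)
ts.commit("Add example GDP data")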

check_out(timeseries_only: bool = False) None[source]

Check out the TimeSeries.

Data in the TimeSeries can only be modified when it is in a checked-out state.

commit(comment: str) None[source]

Commit all changed data to the database.

If the TimeSeries was newly created (with version='new'), version is updated with a new version number assigned by the backend. Otherwise, commit() does not change the version.

Parameters:

comment (str) – Description of the changes being committed.

delete_meta(*args, **kwargs) None[source]

Remove Metadata for this object.

Deprecated since version 3.1: Use remove_meta().

Parameters:

name (str or list of str) – Either single metadata name/identifier, or list of names.

discard_changes() None[source]

Discard all changes and reload from the database.

classmethod from_url(url: str, errors: Literal['warn', 'raise'] = 'warn') Tuple[TimeSeries | None, Platform][source]

Instantiate a TimeSeries (or Scenario) given an ixmp:// URL.

The following are equivalent:

from ixmp import Platform, TimeSeries
mp = Platform(name='example')
scen = TimeSeries(mp, 'model', 'scenario', version=42)

and:

from ixmp import TimeSeries
scen, mp = TimeSeries.from_url('ixmp://example/model/scenario#42')

Parameters:
  • url (str) – See parse_url.

  • errors ('warn' or 'raise') – If ‘warn’, a failure to load the TimeSeries is logged as a warning, and the platform is still returned. If ‘raise’, the exception is raised.

Returns:

with 2 elements:

  • The TimeSeries referenced by the url.

  • The Platform referenced by the url, on which the first element is stored.

Return type:

tuple

get_geodata() DataFrame[source]

Fetch geodata and return it as dataframe.

Returns:

Specified data.

Return type:

pandas.DataFrame

get_meta(name: str | None = None)[source]

Get Metadata for this object.

Metadata with the given name, attached to this (model name, scenario name, version), is retrieved.

Parameters:

name (str, optional) – Metadata name/identifier.

is_default() bool[source]

Return True if the version is the default version.

last_update() str[source]

Get the timestamp of the last update/edit of this TimeSeries.

model: str[source]

Name of the model associated with the TimeSeries.

preload_timeseries() None[source]

Preload timeseries data to in-memory cache. Useful for bulk updates.

read_file(path: PathLike, firstyear: int | None = None, lastyear: int | None = None) None[source]

Read time series data from a CSV or Microsoft Excel file.

Parameters:
  • path (os.PathLike) – File to read. Must have suffix ‘.csv’ or ‘.xlsx’.

  • firstyear (int, optional) – Only read data from years equal to or later than this year.

  • lastyear (int, optional) – Only read data from years equal to or earlier than this year.

remove_geodata(df: DataFrame) None[source]

Remove geodata from the TimeSeries instance.

Parameters:

df (pandas.DataFrame) –

Data to remove. df must have the following columns:

  • region

  • variable

  • unit

  • subannual

  • year

remove_meta(name: str | Sequence[str]) None[source]

Remove Metadata for this object.

Parameters:

name (str or list of str) – Either single metadata name/identifier, or list of names.

remove_timeseries(df: DataFrame) None[source]

Remove time series data.

Parameters:

df (pandas.DataFrame) –

Data to remove. df must have the following columns:

  • region or node

  • variable

  • unit

  • year

run_id() int[source]

Get the run id of this TimeSeries.

scenario: str[source]

Name of the scenario associated with the TimeSeries.

set_as_default() None[source]

Set the current version as the default.

set_meta(name_or_dict: str | Dict[str, Any], value=None) None[source]

Set Metadata for this object.

Parameters:
  • name_or_dict (str or dict) – If dict, a mapping of names/identifiers to values. Otherwise, use the metadata identifier.

  • value (str or float or int or bool, optional) – Metadata value.
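
For example (hypothetical metadata names and values):

ts.set_meta({"reviewed": True, "notes": "first draft"})
ts.get_meta("reviewed")  # returns True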

timeseries(region: str | Sequence[str] | None = None, variable: str | Sequence[str] | None = None, unit: str | Sequence[str] | None = None, year: int | Sequence[int] | None = None, iamc: bool = False, subannual: bool | str = 'auto') DataFrame[source]

Retrieve time series data.

Parameters:
  • iamc (bool, optional) – Return data in wide/’IAMC’ format. If False, return data in long format; see add_timeseries().

  • region (str or list of str, optional) – Regions to include in returned data.

  • variable (str or list of str, optional) – Variables to include in returned data.

  • unit (str or list of str, optional) – Units to include in returned data.

  • year (int or list of int, optional) – Years to include in returned data.

  • subannual (bool or 'auto', optional) – Whether to include column for sub-annual specification (if bool); if ‘auto’, include column if sub-annual data (other than ‘Year’) exists in returned data frame.

Raises:

ValueError – If subannual is False but Scenario has (filtered) sub-annual data.

Returns:

Specified data.

Return type:

pandas.DataFrame
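
For example, assuming ts holds data for the hypothetical region 'World' and variable 'GDP':

# Long format (the default), filtered by region and variable
data = ts.timeseries(region="World", variable="GDP")

# Wide / 'IAMC' format, with one column per year
data_iamc = ts.timeseries(iamc=True)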

transact(message: str = '', condition: bool = True, discard_on_error: bool = False)[source]

Context manager to wrap code in a ‘transaction’.

Parameters:
  • message (str) – Commit message to use, if any commit is performed.

  • condition (bool) –

    If True (the default):

    • Before entering the code block, the TimeSeries (or Scenario) is checked out.

    • On exiting the code block normally (without an exception), changes are committed with message.

    If False, nothing occurs on entry or exit.

  • discard_on_error (bool) – If True (default False), then the anti-locking behaviour of discard_on_error() also applies to any exception raised in the block.

Example

>>> # `ts` is currently checked in/locked
>>> with ts.transact(message="replace 'foo' with 'bar' in set x"):
...     ts.remove_set("x", "foo")
...     ts.add_set("x", "bar")
>>> # Changes to `ts` have been committed

property url: str[source]

URL fragment for the TimeSeries.

This has the format {model name}/{scenario name}#{version}, with the same values passed when creating the TimeSeries instance.

Examples

To form a complete URL (e.g. to use with from_url()), use a configured ixmp.Platform name:

>>> platform_name = "my-ixmp-platform"
>>> mp = Platform(platform_name)
>>> ts = TimeSeries(mp, "foo", "bar", 34)
>>> ts.url
"foo/bar#34"
>>> f"ixmp://{platform_name}/{ts.url}"
"ixmp://platform_name/foo/bar#34"

Note

Use caution: because Platform configuration is system-specific, other systems must have the same configuration for platform_name in order for the URL to refer to the same TimeSeries/Scenario.

version = None[source]

Version of the TimeSeries. Immutable for a specific instance.

Scenario

class ixmp.Scenario(mp: Platform, model: str, scenario: str, version: int | str | None = None, scheme: str | None = None, annotation: str | None = None, **model_init_args)[source]

Bases: TimeSeries

Collection of model-related data.

See TimeSeries for the meaning of parameters mp, model, scenario, version, and annotation.

Parameters:
  • scheme (str, optional) – Use an explicit scheme to initialize the new scenario. The initialize() method of the corresponding Model class in MODELS is used to initialize items in the Scenario.

  • cache

    Deprecated since version 3.0: The cache keyword argument to Scenario has no effect and raises a warning. Use cache as one of the backend_args to Platform to disable/enable caching for storage backends that support it. Use load_scenario_data() to load all data in the Scenario into an in-memory cache.

A Scenario is a TimeSeries that also contains model data, including model solution data. See the data model documentation.
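
A minimal sketch of creating a new, empty Scenario, assuming a configured default platform and using hypothetical names:

import ixmp

mp = ixmp.Platform()

scen = ixmp.Scenario(
    mp, model="transport problem", scenario="baseline", version="new",
    annotation="example scenario"
)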

The Scenario class provides methods to manipulate model data items. In addition to generic methods (init_item(), items(), list_items(), has_item()), there are methods for each of the four item types:

add_par(name[, key_or_data, value, unit, ...])

Set the values of a parameter.

add_set(name, key[, comment])

Add elements to an existing set.

change_scalar(name, val, unit[, comment])

Set the value and unit of a scalar.

clone([model, scenario, annotation, ...])

Clone the current scenario and return the clone.

equ(name[, filters])

Return a dataframe of (filtered) elements for a specific equation.

has_item(name[, item_type])

Check whether the Scenario has an item name of item_type.

has_solution()

Return True if the Scenario contains model solution data.

idx_names(name)

Return the list of index names for an item (set, par, var, equ).

idx_sets(name)

Return the list of index sets for an item (set, par, var, equ).

init_item(item_type, name[, idx_sets, idx_names])

Initialize a new item name of type item_type.

init_scalar(name, val, unit[, comment])

Initialize a new scalar and set its value.

items([type, filters, indexed_by, par_data])

Iterate over model data items.

list_items(item_type[, indexed_by])

List all defined items of type item_type.

load_scenario_data()

Load all Scenario data into memory.

par(name[, filters])

Return parameter data.

read_excel(path[, add_units, init_items, ...])

Read a Microsoft Excel file into the Scenario.

remove_par(name[, key])

Remove parameter values or an entire parameter.

remove_set(name[, key])

Delete set elements or an entire set.

remove_solution([first_model_year])

Remove the solution from the scenario.

scalar(name)

Return the value and unit of a scalar.

set(name[, filters])

Return the (filtered) elements of a set.

solve([model, callback, cb_kwargs])

Solve the model and store output.

to_excel(path[, items, filters, max_row])

Write Scenario to a Microsoft Excel file.

var(name[, filters])

Return a dataframe of (filtered) elements for a specific variable.

add_par(name: str, key_or_data: str | Sequence[str] | Dict | DataFrame | None = None, value=None, unit: str | None = None, comment: str | None = None) None[source]

Set the values of a parameter.

Parameters:
  • name (str) – Name of the parameter.

  • key_or_data (str or list of str or dict or pandas.DataFrame, optional) – Key(s) of the element(s) to set, or data including both keys and values.

  • value (numeric or list of numeric, optional) – Value(s) for the given key(s).

  • unit (str or list of str, optional) – Unit(s) for the value(s).

  • comment (str or list of str, optional) – Description of the change.

add_set(name: str, key: str | Sequence[str] | Dict | DataFrame, comment: str | Sequence[str] | None = None) None[source]

Add elements to an existing set.

Parameters:
  • name (str) – Name of the set.

  • key (str or list of str or dict or pandas.DataFrame) – Element(s) to be added to the set.

  • comment (str or list of str, optional) – Description of the element(s) added.

Raises:

KeyError – If the set name is not defined in the Scenario; init_set() must be called first.
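
A minimal sketch of defining and populating items, assuming scen is a checked-out Scenario; the item names and values are hypothetical, and the unit 'cases' must already be defined on the Platform:

import pandas as pd

scen.init_set("node")
scen.add_set("node", ["seattle", "san-diego"])

scen.init_par("demand", idx_sets=["node"])
scen.add_par(
    "demand",
    pd.DataFrame({
        "node": ["seattle", "san-diego"],
        "value": [325.0, 575.0],
        "unit": "cases",
    }),
)
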
change_scalar(name: str, val: Real, unit: str, comment: str | None = None) None[source]

Set the value and unit of a scalar.

Parameters:
  • name (str) – Name of the scalar.

  • val (float or int) – New value of the scalar.

  • unit (str) – New unit of the scalar.

  • comment (str, optional) – Description of the change.

check_out(timeseries_only: bool = False) None[source]

Check out the Scenario.

Raises:

ValueError – If has_solution() is True.

clone(model: str | None = None, scenario: str | None = None, annotation: str | None = None, keep_solution: bool = True, shift_first_model_year: int | None = None, platform: Platform | None = None) Scenario[source]

Clone the current scenario and return the clone.

If the (model, scenario) given already exist on the Platform, the version for the cloned Scenario follows the last existing version. Otherwise, the version for the cloned Scenario is 1.

Note

clone() does not set or alter default versions. This means that a clone to new (model, scenario) names has no default version, and will not be returned by Platform.scenario_list() unless default=False is given.

Parameters:
  • model (str, optional) – New model name. If not given, use the existing model name.

  • scenario (str, optional) – New scenario name. If not given, use the existing scenario name.

  • annotation (str, optional) – Explanatory comment for the clone commit message to the database.

  • keep_solution (bool, optional) – If True, include all timeseries data and the solution (vars and equs) from the source scenario in the clone. If False, only include timeseries data marked meta=True (see add_timeseries()).

  • shift_first_model_year (int, optional) – If given, all timeseries data in the Scenario is omitted from the clone for years from first_model_year onwards. Timeseries data with the meta flag (see add_timeseries()) are cloned for all years.

  • platform (Platform, optional) – Platform to clone to (default: current platform)
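
For example, to clone to new names on the same Platform while dropping the model solution (hypothetical names):

new = scen.clone(
    model="transport problem",
    scenario="high demand",
    keep_solution=False,
    annotation="clone for a sensitivity case",
)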

equ(name: str, filters=None, **kwargs) DataFrame[source]

Return a dataframe of (filtered) elements for a specific equation.

Parameters:
  • name (str) – name of the equation

  • filters (dict) – Index names mapped to lists of index set elements.

equ_list(indexed_by: str | None = None) List[str][source]

List all defined equations. See list_items().

has_equ(name: str, *, item_type=ItemType.EQU) bool[source]

Check whether the scenario has an equation name. See has_item().

has_item(name: str, item_type=ItemType.MODEL) bool[source]

Check whether the Scenario has an item name of item_type.

In general, user code should call one of has_equ(), has_par(), has_set(), or has_var() instead of calling this method directly.

Returns:

  • True – if the Scenario contains an item of item_type with name name.

  • False – otherwise

See also

items

has_par(name: str, *, item_type=ItemType.PAR) bool[source]

Check whether the scenario has a parameter name. See has_item().

has_set(name: str, *, item_type=ItemType.SET) bool[source]

Check whether the scenario has a set name. See has_item().

has_solution() bool[source]

Return True if the Scenario contains model solution data.

has_var(name: str, *, item_type=ItemType.VAR) bool[source]

Check whether the scenario has a variable name. See has_item().

idx_names(name: str) List[str][source]

Return the list of index names for an item (set, par, var, equ).

Parameters:

name (str) – name of the item

idx_sets(name: str) List[str][source]

Return the list of index sets for an item (set, par, var, equ).

Parameters:

name (str) – name of the item

init_equ(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]

Initialize a new equation. See init_item().

init_item(item_type: ItemType, name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]

Initialize a new item name of type item_type.

In general, user code should call one of init_set(), init_par(), init_var(), or init_equ() instead of calling this method directly.

Parameters:
  • item_type (ItemType) – The type of the item.

  • name (str) – Name of the item.

  • idx_sets (collections.abc.Sequence of str or str, optional) – Name(s) of index sets for a 1+-dimensional item. If none are given, the item is scalar (zero dimensional).

  • idx_names (collections.abc.Sequence of str or str, optional) – Names of the dimensions indexed by idx_sets. If given, they must be the same length as idx_sets.

Raises:
  • ValueError

    • if idx_names are given but do not match the length of idx_sets.

    • if an item with the same name, of any item_type, already exists.

  • RuntimeError – if the Scenario is not checked out (see check_out()).

init_par(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]

Initialize a new parameter. See init_item().

init_scalar(name: str, val: Real, unit: str, comment=None) None[source]

Initialize a new scalar and set its value.

Parameters:
  • name (str) – Name of the scalar

  • val (float or int) – Initial value of the scalar.

  • unit (str) – Unit of the scalar.

  • comment (str, optional) – Description of the scalar.

init_set(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]

Initialize a new set. See init_item().

init_var(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]

Initialize a new variable. See init_item().

items(type: ItemType = ItemType.PAR, filters: Dict[str, Sequence[str]] | None = None, *, indexed_by: str | None = None, par_data: bool | None = None) Iterable[str][source]

Iterate over model data items.

Parameters:
  • type (ItemType, optional) – Types of items to iterate, for instance ItemType.PAR for parameters.

  • filters (dict, optional) – Filters for values along dimensions; same as the filters argument to par(). Only valid for ItemType.PAR.

  • indexed_by (str, optional) – If given, only iterate over items where one of the item dimensions is indexed_by the set of this name.

  • par_data (bool, optional) – If True (the default) and type is ItemType.PAR, also iterate over data for each parameter.

Yields:

Item names (str); when type is ItemType.PAR and par_data is True, tuples of (name, data) for each parameter.

list_items(item_type: ItemType, indexed_by: str | None = None) List[str][source]

List all defined items of type item_type.

See also

items

load_scenario_data() None[source]

Load all Scenario data into memory.

Raises:

ValueError – If the Scenario was instantiated with cache=False.

par(name: str, filters: Dict[str, Sequence[str]] | None = None, **kwargs) DataFrame[source]

Return parameter data.

If filters is provided, only a subset of data, matching the filters, is returned.

Parameters:
  • name (str) – Name of the parameter

  • filters (dict, optional) – Keys are index names. Values are lists of index set elements. Elements not appearing in the respective index set(s) are silently ignored.

par_list(indexed_by: str | None = None) List[str][source]

List all defined parameters. See list_items().

read_excel(path: PathLike, add_units: bool = False, init_items: bool = False, commit_steps: bool = False) None[source]

Read a Microsoft Excel file into the Scenario.

Parameters:
  • path (os.PathLike) – File to read. Must have suffix ‘.xlsx’.

  • add_units (bool, optional) – Add missing units, if any, to the Platform instance.

  • init_items (bool, optional) – Initialize sets and parameters that do not already exist in the Scenario.

  • commit_steps (bool, optional) – Commit changes after every data addition.

remove_par(name: str, key=None) None[source]

Remove parameter values or an entire parameter.

Parameters:
  • name (str) – Name of the parameter.

  • key (pandas.DataFrame or list or str, optional) – Elements to be removed. If a pandas.DataFrame, must contain the same columns (indices/dimensions) as the parameter. If a list, a single key for a single data point; the individual elements must correspond to the indices/dimensions of the parameter.

remove_set(name: str, key: str | Sequence[str] | Dict | DataFrame | None = None) None[source]

Delete set elements or an entire set.

Parameters:
  • name (str) – Name of the set to remove (if key is None) or from which to remove elements.

  • key (pandas.DataFrame or list of str, optional) – Elements to be removed from set name.

remove_solution(first_model_year: int | None = None) None[source]

Remove the solution from the scenario.

This function removes the solution (variables and equations) and timeseries data marked as meta=False from the scenario (see add_timeseries()).

Parameters:

first_model_year (int, optional) – If given, timeseries data marked as meta=False is removed only for years from first_model_year onwards.

Raises:

ValueError – If Scenario has no solution or if first_model_year is not int.

scalar(name: str) Dict[str, Real | str][source]

Return the value and unit of a scalar.

Parameters:

name (str) – Name of the scalar.

Returns:

with the keys “value” and “unit”.

Return type:

dict

scheme = None[source]

Scheme of the Scenario.

set(name: str, filters: Dict[str, Sequence[str]] | None = None, **kwargs) List[str] | DataFrame[source]

Return the (filtered) elements of a set.

Parameters:
  • name (str) – Name of the set.

  • filters (dict) – Mapping of dimension_name → elements, where dimension_name is one of the idx_names given when the set was initialized (see init_set()), and elements is an iterable of labels to include in the return value.

Return type:

pandas.DataFrame

set_list(indexed_by: str | None = None) List[str][source]

List all defined sets. See list_items().

solve(model: str | None = None, callback: Callable | None = None, cb_kwargs: Dict[str, Any] = {}, **model_options) None[source]

Solve the model and store output.

ixmp ‘solves’ a model by invoking the run() method of a Model subclass—for instance, GAMSModel.run(). Depending on the underlying model code, different steps are taken; see each model class for details. In general:

  1. Data from the Scenario are written to a model input file.

  2. Code or an external program is invoked to perform calculations or optimizations, solving the model.

  3. Data representing the model outputs or solution are read from a model output file and stored in the Scenario.

If the optional argument callback is given, additional steps are performed:

  1. Execute the callback with the Scenario as an argument. The Scenario has an iteration attribute that stores the number of times the underlying model has been solved (#2).

  2. If the callback returns False or similar, iterate by repeating from step #1. Otherwise, exit.

Parameters:
  • model (str) – model (e.g., MESSAGE) or GAMS file name (excluding ‘.gms’)

  • callback (callable, optional) – Method to execute arbitrary non-model code. Must accept a single argument: the Scenario. Must return a non-False value to indicate convergence.

  • cb_kwargs (dict, optional) – Keyword arguments to pass to callback.

  • model_options – Keyword arguments specific to the model. See GAMSModel.

Warns:

UserWarning – If callback is given and returns None. This may indicate that the user has forgotten a return statement, in which case the iteration will continue indefinitely.

Raises:

ValueError – If the Scenario has already been solved.
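
A minimal sketch, assuming a model named 'dantzig' is registered with ixmp and its solver is installed; the callback is a hypothetical example:

def stop(s):
    # Inspect the solution here; return a non-False value to stop iterating
    return True

scen.solve(model="dantzig", callback=stop)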

to_excel(path: PathLike, items: ItemType = ItemType.SET | PAR, filters: Dict[str, Sequence[str] | Scenario] | None = None, max_row: int | None = None) None[source]

Write Scenario to a Microsoft Excel file.

Parameters:
  • path (os.PathLike) – File to write. Must have suffix .xlsx.

  • items (ItemType, optional) – Types of items to write. Either SET | PAR (i.e. only sets and parameters), or MODEL (also variables and equations, i.e. model solution data).

  • filters (dict, optional) – Filters for values along dimensions; same as the filters argument to par().

  • max_row (int, optional) – Maximum number of rows in each sheet. If the number of elements in an item exceeds this number or EXCEL_MAX_ROWS, then an item is written to multiple sheets named, e.g. ‘foo’, ‘foo(2)’, ‘foo(3)’, etc.
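
For example, to round-trip sets and parameters through a file (hypothetical names; assumes mp is the Platform holding scen):

from pathlib import Path

import ixmp

scen.to_excel(Path("scenario-data.xlsx"))

other = ixmp.Scenario(mp, "transport problem", "from file", version="new")
other.read_excel(
    Path("scenario-data.xlsx"), add_units=True, init_items=True, commit_steps=True
)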

var(name: str, filters=None, **kwargs)[source]

Return a dataframe of (filtered) elements for a specific variable.

Parameters:
  • name (str) – name of the variable

  • filters (dict) – Index names mapped to lists of index set elements.

var_list(indexed_by: str | None = None) List[str][source]

List all defined variables. See list_items().

Configuration

When imported, ixmp reads configuration from the first file named config.json found in one of the following directories:

  1. The directory given by the environment variable IXMP_DATA, if defined,

  2. ${XDG_DATA_HOME}/ixmp, if the environment variable is defined, or

  3. $HOME/.local/share/ixmp.

Tip

For most users, #2 or #3 is a sensible default; platform information for many local and remote databases can be stored in config.json and retrieved by name.

Advanced users wishing to use a project-specific config.json can set IXMP_DATA to the path for any directory containing a file with this name.

To manipulate the configuration file, use the platform command in the ixmp command-line interface:

# Add a platform named 'p1' backed by a local HSQL database
$ ixmp platform add p1 jdbc hsqldb /path/to/database/files

# Add a platform named 'p2' backed by a remote Oracle database
$ ixmp platform add p2 jdbc oracle \
       database.server.example.com:PORT:SCHEMA username password

# Add a platform named 'p3' with specific JVM arguments
$ ixmp platform add p3 jdbc hsqldb /path/to/database/files -Xmx12G

# Make 'p2' the default Platform
$ ixmp platform add default p2

…or, use the methods of ixmp.config.
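
For example, the same configuration can be written from Python; a sketch using the hypothetical paths and names from the CLI examples above:

import ixmp

ixmp.config.add_platform("p1", "jdbc", "hsqldb", "/path/to/database/files")
ixmp.config.add_platform("default", "p1")
ixmp.config.save()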

ixmp.config[source]

An instance of Config.

class ixmp._config.Config(read: bool = True)[source]

Configuration for ixmp.

For most purposes, there is only one instance of this class, available at ixmp.config and automatically read() from the ixmp configuration file at the moment the package is imported. (save() writes the current values to file.)

Config is a key-value store. Key names are strings; each key has values of a fixed type. Individual keys can be accessed with get() and set(), or by accessing the values attribute.

Spaces in names are automatically replaced with underscores, e.g. “my key” is stored as “my_key”, but may be set and retrieved as “my key”.

Downstream packages (e.g. message_ix, message_ix_models) may register() additional keys to be stored in and read from the ixmp configuration file.

The default configuration (restored by clear()) is:

{
  "platform": {
    "default": "local",
    "local": {
      "class": "jdbc",
      "driver": "hsqldb",
      "path": "~/.local/share/ixmp/localdb/default"
    }
  }
}

clear()

Clear all configuration keys by setting empty or default values.

get(name)

Return the value of a configuration key name.

keys()

Return the names of all registered configuration keys.

read()

Try to read configuration keys from file.

save()

Write configuration keys to file.

set(name, value[, _strict])

Set configuration key name to value.

register(name, type_[, default])

Register a new configuration key.

unregister(name)

Unregister and clear the configuration key name.

add_platform(name, *args, **kwargs)

Add or overwrite information about a platform.

get_platform_info(name)

Return information on configured Platform name.

remove_platform(name)

Remove the configuration for platform name.

Parameters:

read (bool) – Read config.json on startup.

add_platform(name: str, *args, **kwargs)[source]

Add or overwrite information about a platform.

Parameters:
  • name (str) – New or existing platform name.

  • args – Positional arguments. If name is ‘default’, args must be a single string: the name of an existing configured Platform. Otherwise, the first of args specifies one of the BACKENDS, and the remaining args differ according to the backend.

  • kwargs – Keyword arguments. These differ according to backend.

clear()[source]

Clear all configuration keys by setting empty or default values.

get(name: str) Any[source]

Return the value of a configuration key name.

get_platform_info(name: str) Tuple[str, Dict[str, Any]][source]

Return information on configured Platform name.

Parameters:

name (str) – Existing platform. If name is “default”, the information for the default platform is returned.

Returns:

  • str – The name of the platform. If name was “default”, this is the actual name of the platform designated as the default.

  • dict – The “class” key specifies one of the BACKENDS. Other keys vary by backend class.

Raises:

ValueError – If name is not configured as a platform.

keys() Tuple[str, ...][source]

Return the names of all registered configuration keys.

path: Path | None = None[source]

Fully-resolved path of the config.json file.

read()[source]

Try to read configuration keys from file.

If successful, the attribute path is set to the path of the file.

register(name: str, type_: type, default: Any | None = None, **kwargs)[source]

Register a new configuration key.

Parameters:
  • name (str) – Name of the new key.

  • type (object) – Type of valid values for the key, e.g. str or pathlib.Path.

  • default (optional) – Default value for the key. If not supplied, the type is called to supply the default value, e.g. str().

Raises:

ValueError – if the key name is already registered.
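
For example, a downstream package might register a hypothetical key:

from pathlib import Path

import ixmp

ixmp.config.register("my data path", Path, default=Path("~/my-data"))
ixmp.config.set("my data path", Path("/tmp/data"))
ixmp.config.get("my data path")  # returns Path('/tmp/data')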

remove_platform(name: str)[source]

Remove the configuration for platform name.

save()[source]

Write configuration keys to file.

config.json is created in the first of the ixmp configuration directories that exists. Only non-null values are written.

set(name: str, value: Any, _strict: bool = True)[source]

Set configuration key name to value.

Parameters:

value – Value to store. If None, set() has no effect.

unregister(name: str) None[source]

Unregister and clear the configuration key name.

values: BaseValues[source]

Configuration values. These can be accessed using Python item access syntax, e.g. ixmp.config.values["platform"]["platform name"]….

class ixmp._config.BaseValues(platform: dict = <factory>)[source]

Base class for storing configuration values.

get_field(name)[source]

For name = “field name”, retrieve a field “field_name”, if any.

munge(name)[source]

Return a field name matching name.

Utilities

diff(a, b[, filters])

Compute the difference between Scenarios a and b.

discard_on_error(ts)

Context manager to discard changes to ts and close the DB on any exception.

format_scenario_list(platform[, model, ...])

Return a formatted list of TimeSeries on platform.

maybe_check_out(timeseries[, state])

Check out timeseries depending on state.

maybe_commit(timeseries, condition, message)

Commit timeseries with message if condition is True.

parse_url(url)

Parse url and return Platform and Scenario information.

show_versions([file])

Print information about ixmp and its dependencies to file.

update_par(scenario, name, data)

Update parameter name in scenario using data, without overwriting.

to_iamc_layout(df)

Transform df to the IAMC structure/layout.

class ixmp.util.DeprecatedPathFinder(package: str, name_map: Mapping[str, str])[source]

Handle imports from deprecated module locations.

ixmp.util.diff(a, b, filters=None) Iterator[Tuple[str, DataFrame]][source]

Compute the difference between Scenarios a and b.

diff() combines pandas.merge() and Scenario.items(). Only parameters are compared. merge() is called with the arguments how="outer", sort=True, suffixes=("_a", "_b"), indicator=True; the merge is performed on all columns except ‘value’ or ‘unit’.

Yields:

tuple of str, pandas.DataFrame – Tuples of item name and data.
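
For example, assuming scen_a and scen_b are two Scenarios to compare:

from ixmp.util import diff

for name, df in diff(scen_a, scen_b):
    # df includes the pandas.merge indicator column plus value/unit columns
    # suffixed "_a" and "_b"
    print(name, len(df))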

ixmp.util.discard_on_error(ts: TimeSeries)[source]

Context manager to discard changes to ts and close the DB on any exception.

For JDBCBackend, this can avoid leaving ts in a “locked” state in the database.

Examples

>>> mp = ixmp.Platform()
>>> s = ixmp.Scenario(mp, ...)
>>> with discard_on_error(s):
...     s.add_par(...)  # Any code
...     s.not_a_method()  # Code that raises some exception

Before the exception in the final line is raised (and possibly handled by surrounding code):

  • Any changes—for example, here changes due to the call to add_par()—are discarded/not committed;

  • s is guaranteed to be in a non-locked state; and

  • close_db() is called on mp.

ixmp.util.format_scenario_list(platform, model=None, scenario=None, match=None, default_only=False, as_url=False)[source]

Return a formatted list of TimeSeries on platform.

Parameters:
  • platform (Platform) –

  • model (str, optional) – Model name to restrict results. Passed to scenario_list().

  • scenario (str, optional) – Scenario name to restrict results. Passed to scenario_list().

  • match (str, optional) – Regular expression to restrict results. Only results where the model or scenario name matches are returned.

  • default_only (bool, optional) – Only return TimeSeries where a default version has been set with TimeSeries.set_as_default().

  • as_url (bool, optional) – Format results as ixmp URLs.

Returns:

If as_url is False, also include summary information.

Return type:

list of str

ixmp.util.logger()[source]

Access global logger.

Deprecated since version 3.3: To control logging from ixmp, use the standard logging module to retrieve its logger:

import logging
ixmp_logger = logging.getLogger("ixmp")

# Example: set the level to INFO
ixmp_logger.setLevel(logging.INFO)

ixmp.util.maybe_check_out(timeseries, state=None)[source]

Check out timeseries depending on state.

If state is None, then TimeSeries.check_out() is called.

Returns:

  • True – if state was None and a check out was performed, i.e. timeseries was previously in a checked-in state.

  • False – if state was None and no check out was performed, i.e. timeseries was already in a checked-out state.

  • state – if state was not None and no check out was attempted.

Raises:

ValueError – If timeseries is a Scenario object and has_solution() is True.

ixmp.util.maybe_commit(timeseries, condition, message)[source]

Commit timeseries with message if condition is True.

Returns:

  • True – if a commit is performed.

  • False – if any exception is raised during the attempted commit. The exception is logged with level INFO.

ixmp.util.maybe_convert_scalar(obj) DataFrame[source]

Convert obj to pandas.DataFrame.

Parameters:

obj – Any value returned by Scenario.par(). For a scalar (0-dimensional) parameter, this will be dict.

Returns:

maybe_convert_scalar() always returns a data frame.

Return type:

pandas.DataFrame

ixmp.util.parse_url(url)[source]

Parse url and return Platform and Scenario information.

A URL (Uniform Resource Locator), as the name implies, uniquely identifies a specific scenario and (optionally) version of a model, as well as (optionally) the database in which it is stored. ixmp URLs take forms like:

ixmp://PLATFORM/MODEL/SCENARIO[#VERSION]
MODEL/SCENARIO[#VERSION]

where:

  • The PLATFORM is a configured platform name; see ixmp.config.

  • MODEL may not contain the forward slash character (‘/’); SCENARIO may contain any number of forward slashes. Both must be supplied.

  • VERSION is optional but, if supplied, must be an integer.

Returns:

  • platform_info (dict) – Keyword argument ‘name’ for the Platform constructor.

  • scenario_info (dict) – Keyword arguments for a Scenario on the above platform: ‘model’, ‘scenario’ and, optionally, ‘version’.

Raises:

ValueError – For malformed URLs.
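
For example:

from ixmp.util import parse_url

platform_info, scenario_info = parse_url("ixmp://example/model name/scenario name#42")
# platform_info -> {'name': 'example'}
# scenario_info -> {'model': 'model name', 'scenario': 'scenario name', 'version': 42}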

ixmp.util.show_versions(file=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]

Print information about ixmp and its dependencies to file.

ixmp.util.to_iamc_layout(df: DataFrame) DataFrame[source]

Transform df to the IAMC structure/layout.

The returned object has:

  • Any (Multi)Index levels reset as columns.

  • Lower-case column names ‘region’, ‘variable’, ‘subannual’, and ‘unit’.

  • If not present in df, the value ‘Year’ in the ‘subannual’ column.

Parameters:

df (pandas.DataFrame) – May have a ‘node’ column, which will be renamed to ‘region’.

Return type:

pandas.DataFrame

Raises:

ValueError – If ‘region’, ‘variable’, or ‘unit’ is not among the column names.

ixmp.util.update_par(scenario, name, data)[source]

Update parameter name in scenario using data, without overwriting.

Only values which do not already appear in the parameter data are added.

Utilities for documentation

GitHub adapter for sphinx.ext.linkcode.

To use this extension, add it to the extensions setting in the Sphinx configuration file (usually conf.py), and set the required linkcode_github_repo_slug:

extensions = [
    ...,
    "ixmp.util.sphinx_linkcode_github",
    ...,
]

linkcode_github_repo_slug = "iiasa/ixmp"  # Required
linkcode_github_remote_head = "feature/example"  # Optional

The extension uses GitPython (if installed) or linkcode_github_remote_head (optional override) to match a local commit to a remote head (~branch name), and construct links like:

https://github.com/{repo_slug}/blob/{remote_head}/path/to/source.py#L123-L456

class ixmp.util.sphinx_linkcode_github.GitHubLinker[source]

Handler for storing files/line numbers for code objects and formatting links.

autodoc_process_docstring(app: sphinx.application.Sphinx, what, name: str, obj, options, lines)[source]

Handler for the Sphinx autodoc-process-docstring event.

Records the file and source line numbers containing obj.

config_inited(app: sphinx.application.Sphinx, config)[source]

Handler for the Sphinx config-inited event.

linkcode_resolve(domain: str, info: dict) str | None[source]

Function for the sphinx.ext.linkcode setting of the same name.

Returns URLs for code objects on GitHub, using information stored by autodoc_process_docstring().

ixmp.util.sphinx_linkcode_github.find_remote_head(app: sphinx.application.Sphinx) str[source]

Return a name for the remote branch containing the code.

ixmp.util.sphinx_linkcode_github.find_remote_head_git(app: sphinx.application.Sphinx) str | None[source]

Use git to identify the name of the remote branch containing the code.

ixmp.util.sphinx_linkcode_github.package_base_path(obj) Path[source]

Return the base path of the package containing obj.

ixmp.util.sphinx_linkcode_github.setup(app: sphinx.application.Sphinx)[source]

Sphinx extension registration hook.

Utilities for testing

Utilities for testing ixmp.

These include:

  • pytest hooks, fixtures:

    ixmp_cli

    A CliRunner object that invokes the ixmp command-line interface.

    tmp_env

    Return the os.environ dict with the IXMP_DATA variable set.

    test_mp

    An empty Platform connected to a temporary, in-memory database.

    …and assertions:

    assert_logs(caplog[, message_or_messages, ...])

    Assert that message_or_messages appear in logs.

  • Methods for setting up and populating test ixmp databases:

    add_test_data(scen)

    Populate scen with test data.

    create_test_platform(tmp_path, data_path, ...)

    Create a Platform for testing using specimen files 'name.*'.

    make_dantzig(mp[, solve, quiet])

    Return ixmp.Scenario of Dantzig's canning/transport problem.

    populate_test_platform(platform)

    Populate platform with data for testing.

  • Methods to run and retrieve values from Jupyter notebooks:

    run_notebook(nb_path, tmp_path[, env])

    Execute a Jupyter notebook via nbclient and collect output.

    get_cell_output(nb, name_or_index[, kind])

    Retrieve a cell from nb according to its metadata name_or_index:

ixmp.testing.add_random_model_data(scenario, length)[source]

Add a set and parameter with given length to scenario.

The set is named ‘random_set’. The parameter is named ‘random_par’, and has two dimensions indexed by ‘random_set’.

ixmp.testing.add_test_data(scen: Scenario)[source]

Populate scen with test data.

ixmp.testing.assert_logs(caplog, message_or_messages=None, at_level=None)[source]

Assert that message_or_messages appear in logs.

Use assert_logs as a context manager for a statement that is expected to trigger certain log messages. assert_logs checks that these messages are generated.

Example

def test_foo(caplog):
    with assert_logs(caplog, "a message"):
        logging.getLogger(__name__).info("this is a message!")

Parameters:
  • caplog (object) – The pytest caplog fixture.

  • message_or_messages (str or list of str) – String(s) that must appear in log messages.

  • at_level (int, optional) – Messages must appear on ‘ixmp’ or a sub-logger with at least this level.

ixmp.testing.create_test_platform(tmp_path, data_path, name, **properties)[source]

Create a Platform for testing using specimen files ‘name.*’.

Any of the following files from data_path are copied to tmp_path:

  • name.lobs, name.script, i.e. the contents of a JDBCBackend HyperSQL database.

  • name.properties.

The contents of name.properties (if it exists) are formatted using the properties keyword arguments.

Returns:

the path to the .properties file, if any, else the .lobs file without suffix.

Return type:

pathlib.Path

ixmp.testing.get_cell_output(nb, name_or_index, kind='data')[source]

Retrieve a cell from nb according to its metadata name_or_index:

The Jupyter notebook format allows specifying a document-wide unique ‘name’ metadata attribute for each cell:

https://nbformat.readthedocs.io/en/latest/format_description.html#cell-metadata

Return the cell matching name_or_index if str; or the cell at the int index; or raise ValueError.

Parameters:

kind (str, optional) – Kind of cell output to retrieve. For ‘data’, the data in format ‘text/plain’ is run through eval(). To retrieve an exception message, use ‘evalue’.

ixmp.testing.ixmp_cli(tmp_env)[source]

A CliRunner object that invokes the ixmp command-line interface.

ixmp.testing.make_dantzig(mp: Platform, solve: bool = False, quiet: bool = False) Scenario[source]

Return ixmp.Scenario of Dantzig’s canning/transport problem.

Parameters:
  • mp (Platform) – Platform on which to create the scenario.

  • solve (bool, optional) – If True, solve the scenario before returning. Default False.

  • quiet (bool, optional) – If True, suppress console output when solving.

Return type:

Scenario

See also

DantzigModel

ixmp.testing.populate_test_platform(platform)[source]

Populate platform with data for testing.

Many of the tests in ixmp.tests.core depend on this set of data.

The data consist of:

  • 3 versions of the Dantzig canning/transport Scenario.

    • Version 2 is the default.

    • All have HIST_DF and TS_DF as time-series data.

  • 1 version of a TimeSeries with model name ‘Douglas Adams’ and scenario name ‘Hitchhiker’, containing 2 values.

ixmp.testing.random_model_data(length)[source]

Random (set, parameter) data with at least length elements.

ixmp.testing.random_ts_data(length)[source]

A pandas.DataFrame of time series data with length rows.

Suitable for passage to TimeSeries.add_timeseries().

ixmp.testing.resource_limit(request)[source]

A fixture that limits Python resources.

See the documentation (pytest --help) for the --resource-limit command-line option that selects (1) the specific resource and (2) the level of the limit.

The original limit, if any, is reset after the test function in which the fixture is used.

ixmp.testing.run_notebook(nb_path, tmp_path, env=None, **kwargs)[source]

Execute a Jupyter notebook via nbclient and collect output.

Parameters:
  • nb_path (os.PathLike) – The notebook file to execute.

  • tmp_path (os.PathLike) – A directory in which to create temporary output.

  • env (dict, optional) – Execution environment for nbclient. Default: os.environ.

  • kwargs

    Keyword arguments for nbclient.NotebookClient. Defaults are set for:

    "allow_errors"

    Default False. If True, the execution always succeeds, and cell output contains exception information rather than code outputs.

    "kernel_version"

    Jupyter kernel to use. Default: either “python2” or “python3”, matching the current Python major version.

    Warning

    Any existing configuration for this kernel on the local system— such as an IPython start-up file—will be executed when the kernel starts. Code that enables GUI features can interfere with run_notebook().

    "timeout"

    in seconds; default 10.

Returns:

ixmp.testing.test_mp(request, tmp_env, test_data_path)[source]

An empty Platform connected to a temporary, in-memory database.

This fixture has module scope: the same Platform is reused for all tests in a module.

ixmp.testing.tmp_env(pytestconfig, tmp_path_factory)[source]

Return the os.environ dict with the IXMP_DATA variable set.

IXMP_DATA will point to a temporary directory that is unique to the test session. ixmp configuration (i.e. the ‘config.json’ file) can be written and read in this directory without modifying the current user’s configuration.

ixmp.testing.data.HIST_DF[source]

             model  scenario       region variable unit   2000   2005   2010
0  canning problem  standard  DantzigLand      GDP  USD  850.0  900.0  950.0

Time series data for testing.

ixmp.testing.data.TS_DF[source]

             model  scenario       region variable   unit   2000   2005   2010
0  canning problem  standard  DantzigLand   Demand  cases  850.0  900.0    NaN
1  canning problem  standard  DantzigLand      GDP    USD  850.0  900.0  950.0

Time series data for testing.