Python (ixmp package)
The ix modeling platform application programming interface (API) is organized around three classes:
Platform – Instance of the modeling platform.
TimeSeries – Collection of data in time series format.
Scenario – Collection of model-related data.
Platform
- class ixmp.Platform(name: str | None = None, backend: str | None = None, **backend_args)[source]
Instance of the modeling platform.
A Platform connects two key components:
A back end for storing data such as model inputs and outputs.
One or more model(s); codes in Python or other languages or frameworks that run, via
Scenario.solve()
, on the data stored in the Platform.
The Platform parameters control these components.
TimeSeries and Scenario objects are tied to a single Platform; to move data between platforms, see Scenario.clone().
- Parameters:
name (str) – Name of a specific configured backend.
backend ('jdbc') – Storage backend type. 'jdbc' corresponds to the built-in JDBCBackend; see BACKENDS.
backend_args – Keyword arguments specific to the backend. See JDBCBackend.
Platforms have the following methods:
add_region(region, hierarchy[, parent]) – Define a region including a hierarchy level and a 'parent' region.
add_region_synonym(region, mapped_to) – Define a synonym for a region.
add_unit(unit[, comment]) – Define a unit.
check_access(user, models[, access]) – Check access to specific models.
regions() – Return all regions defined for time series data, including synonyms.
scenario_list([default, model, scen]) – Return information about TimeSeries and Scenarios on the Platform.
set_log_level(level) – Set log level for the Platform and its storage Backend.
units() – Return all units defined on the Platform.
The following backend methods are available via Platform too:
Add (register) new model name.
Add (register) new scenario name.
OPTIONAL: Close database connection(s).
backend.base.Backend.get_doc(domain[, name]) – Read documentation from database.
backend.base.Backend.get_meta(model, ...) – Retrieve all metadata attached to a specific target.
List existing model names.
List existing scenario names.
OPTIONAL: (Re-)open database connection(s).
backend.base.Backend.remove_meta(names, ...) – Remove metadata attached to a target.
backend.base.Backend.set_doc(domain, docs) – Save documentation to database.
backend.base.Backend.set_meta(meta, model, ...) – Set metadata on a target.
These methods can be called like normal Platform methods, e.g.:
>>> platform_instance.close_db()
- add_region(region: str, hierarchy: str, parent: str = 'World') None [source]
Define a region including a hierarchy level and a ‘parent’ region.
Tip
On a Platform backed by a shared database, a region may already exist with a different spelling. Use regions() first to check, and consider calling add_region_synonym() instead.
- add_region_synonym(region: str, mapped_to: str) None [source]
Define a synonym for a region.
When adding timeseries data using the synonym in the region column, it will be converted to mapped_to.
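The conversion described above can be pictured as a simple lookup; this is an illustrative sketch only (not the ixmp implementation), and the synonym "World (R5)" is an invented example:

```python
# Region synonyms, conceptually: a mapping of synonym -> mapped_to, as
# registered via add_region_synonym(). Names here are invented.
synonyms = {"World (R5)": "World"}

def resolve_region(region: str) -> str:
    """Convert a synonym to its canonical region name, if one is registered."""
    return synonyms.get(region, region)

rows = [{"region": "World (R5)", "value": 1.0}, {"region": "Austria", "value": 2.0}]
# Incoming timeseries rows have their region column converted on add
resolved = [{**row, "region": resolve_region(row["region"])} for row in rows]
```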
- add_timeslice(name: str, category: str, duration: float) None [source]
Define a subannual timeslice including a category and duration.
See timeslices() for a detailed description of timeslices.
- check_access(user: str, models: str | Sequence[str], access: str = 'view') bool | dict[str, bool] [source]
Check access to specific models.
- export_timeseries_data(path: PathLike, default: bool = True, model: str | None = None, scenario: str | None = None, variable=None, unit=None, region=None, export_all_runs: bool = False) None [source]
Export time series data to a CSV file across multiple TimeSeries.
See TimeSeries.add_timeseries() for adding time series data.
- Parameters:
path (os.PathLike) – File name to export data to; must have the suffix '.csv'. The result file will contain the following columns: model, scenario, version, variable, unit, region, meta, subannual, year, value.
default (bool, optional) – True to include only TimeSeries versions marked as default.
model (str, optional) – Only return data for this model name.
scenario (str, optional) – Only return data for this scenario name.
variable (list of str, optional) – Only return data for variable name(s) in this list.
unit (list of str, optional) – Only return data for unit name(s) in this list.
region (list of str, optional) – Only return data for region(s) in this list.
export_all_runs (bool, optional) – Export all existing model+scenario run combinations.
- get_log_level() str [source]
Return log level of the storage Backend, if any.
- Returns:
Name of a Python logging level.
- Return type:
- regions() DataFrame [source]
Return all regions defined for time series data, including synonyms.
- Return type:
- scenario_list(default: bool = True, model: str | None = None, scen: str | None = None) DataFrame [source]
Return information about TimeSeries and Scenarios on the Platform.
- Parameters:
default (bool, optional) – Return only the default version of each TimeSeries/Scenario (see TimeSeries.set_as_default()). Any (model, scenario) without a default version is omitted. If False, return all versions.
model (str, optional) – A model name. If given, only return information for model.
scen (str, optional) – A scenario name. If given, only return information for scen.
- Returns:
Scenario information, with the columns:
model, scenario, version, scheme – Scenario identifiers; see TimeSeries and Scenario.
is_default – True if the version is the default version for the (model, scenario).
is_locked – True if the Scenario has been locked for use.
cre_user, cre_date – database user that created the Scenario, and creation time.
upd_user, upd_date – user and time for last modification of the Scenario.
lock_user, lock_date – user that locked the Scenario and lock time.
annotation – description of the Scenario or changelog.
- Return type:
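The default=True behaviour described above (omitting any (model, scenario) without a default version) can be sketched with plain dictionaries; the run data below are invented for illustration:

```python
# Invented run records with the is_default column returned by scenario_list()
runs = [
    {"model": "m", "scenario": "s", "version": 1, "is_default": False},
    {"model": "m", "scenario": "s", "version": 2, "is_default": True},
    {"model": "m", "scenario": "t", "version": 1, "is_default": False},
]

# With default=True, only rows marked as default survive;
# ("m", "t") has no default version, so it is omitted entirely
default_only = [r for r in runs if r["is_default"]]
```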
- set_log_level(level: str | int) None [source]
Set log level for the Platform and its storage Backend.
- Parameters:
level (str or int) – Name of a Python logging level.
- timeslices() DataFrame [source]
Return all subannual time slices defined in this Platform instance.
See the Data model documentation for further details.
The category and duration do not have any functional relevance within the ixmp framework, but they may be useful for pre- or post-processing. For example, they can be used to filter all timeslices of a certain category (e.g. all months) from the pandas.DataFrame returned by this function, or to aggregate subannual data to full-year results.
- Returns:
Data frame with columns 'timeslice', 'category', and 'duration'.
- Return type:
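One such post-processing use, aggregating subannual values to a full-year result weighted by duration, can be sketched as follows; the timeslice names, durations, and values are invented:

```python
# Invented timeslice definitions, shaped like the timeslices() columns
timeslices = [
    {"timeslice": "winter", "category": "season", "duration": 0.5},
    {"timeslice": "summer", "category": "season", "duration": 0.5},
]
values = {"winter": 10.0, "summer": 30.0}  # invented subannual data

# Duration-weighted aggregation to a full-year value
annual = sum(values[t["timeslice"]] * t["duration"] for t in timeslices)
```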
See also
TimeSeries
- class ixmp.TimeSeries(mp: Platform, model: str, scenario: str, version: int | str | None = None, annotation: str | None = None, **kwargs)[source]
Collection of data in time series format.
TimeSeries is the parent/super-class of Scenario.
- Parameters:
mp (Platform) – ixmp instance in which to store data.
model (str) – Model name.
scenario (str) – Scenario name.
version (int or str, optional) – If omitted and a default version of the (model, scenario) has been designated (see set_as_default()), load that version. If int, load a specific version. If 'new', create a new TimeSeries.
annotation (str, optional) – A short annotation/comment used when version='new'.
A TimeSeries is uniquely identified on its Platform by its model, scenario, and version attributes. For more details, see the data model documentation.
A new version is created by:
Instantiating a new TimeSeries with the same model and scenario as an existing TimeSeries.
Calling Scenario.clone().
TimeSeries objects have the following methods and attributes:
add_geodata(df) – Add geodata.
add_timeseries(df[, meta, year_lim]) – Add time series data.
check_out([timeseries_only]) – Check out the TimeSeries.
commit(comment) – Commit all changed data to the database.
discard_changes() – Discard all changes and reload from the database.
get_geodata() – Fetch geodata and return it as dataframe.
get_meta([name]) – Get Metadata for this object.
last_update() – Get the timestamp of the last update/edit of this TimeSeries.
preload_timeseries() – Preload timeseries data to in-memory cache.
read_file(path[, firstyear, lastyear]) – Read time series data from a CSV or Microsoft Excel file.
remove_geodata(df) – Remove geodata from the TimeSeries instance.
remove_timeseries(df) – Remove time series data.
run_id() – Get the run id of this TimeSeries.
set_as_default() – Set the current version as the default.
set_meta(name_or_dict[, value]) – Set Metadata for this object.
timeseries([region, variable, unit, year, ...]) – Retrieve time series data.
transact([message, condition, discard_on_error]) – Context manager to wrap code in a 'transaction'.
url – URL fragment for the TimeSeries.
- add_geodata(df: DataFrame) None [source]
Add geodata.
- Parameters:
df (pandas.DataFrame) – Data to add. df must have the following columns: region, variable, subannual, unit, year, value, meta.
- add_timeseries(df: DataFrame, meta: bool = False, year_lim: tuple[Optional[int], Optional[int]] = (None, None)) None [source]
Add time series data.
- Parameters:
df (pandas.DataFrame) – Data to add. df must have the following columns:
region or node
variable
unit
Additional column names may be either of:
year and value – long, or 'tabular', format.
one or more specific years – wide, or 'IAMC' format.
To support subannual temporal resolution of timeseries data, a column subannual is optional in df. The entries in this column must have been defined in the Platform instance using add_timeslice() beforehand. If no column subannual is included in df, the data is assumed to contain yearly values. See timeslices() for a detailed description of the feature.
meta (bool, optional) – If True, store df as metadata. Metadata is treated specially when Scenario.clone() is called for Scenarios created with scheme='MESSAGE'.
year_lim (tuple, optional) – Respectively, earliest and latest years to add from df; data for other years is ignored.
- check_out(timeseries_only: bool = False) None [source]
Check out the TimeSeries.
Data in the TimeSeries can only be modified when it is in a checked-out state.
See also
- commit(comment: str) None [source]
Commit all changed data to the database.
If the TimeSeries was newly created (with version='new'), version is updated with a new version number assigned by the backend. Otherwise, commit() does not change the version.
- Parameters:
comment (str) – Description of the changes being committed.
See also
- delete_meta(*args, **kwargs) None [source]
Remove Metadata for this object.
Deprecated since version 3.1: Use remove_meta().
- classmethod from_url(url: str, errors: Literal['warn', 'raise'] = 'warn') tuple[Optional[ixmp.core.timeseries.TimeSeries], ixmp.core.platform.Platform] [source]
Instantiate a TimeSeries (or Scenario) given an ixmp:// URL.
The following are equivalent:
from ixmp import Platform, TimeSeries
mp = Platform(name='example')
scen = TimeSeries(mp, 'model', 'scenario', version=42)
and:
from ixmp import TimeSeries
scen, mp = TimeSeries.from_url('ixmp://example/model/scenario#42')
- Parameters:
- Returns:
with 2 elements:
The TimeSeries referenced by the url.
The Platform referenced by the url, on which the first element is stored.
- Return type:
- get_geodata() DataFrame [source]
Fetch geodata and return it as dataframe.
- Returns:
Specified data.
- Return type:
- get_meta(name: str | None = None)[source]
Get Metadata for this object.
Metadata with the given name, attached to this (model name, scenario name, version), is retrieved.
- Parameters:
name (str, optional) – Metadata name/identifier.
- preload_timeseries() None [source]
Preload timeseries data to in-memory cache. Useful for bulk updates.
- read_file(path: PathLike, firstyear: int | None = None, lastyear: int | None = None) None [source]
Read time series data from a CSV or Microsoft Excel file.
- Parameters:
path (os.PathLike) – File to read. Must have suffix '.csv' or '.xlsx'.
firstyear (int, optional) – Only read data from years equal to or later than this year.
lastyear (int, optional) – Only read data from years equal to or earlier than this year.
See also
- remove_geodata(df: DataFrame) None [source]
Remove geodata from the TimeSeries instance.
- Parameters:
df (pandas.DataFrame) – Data to remove. df must have the following columns: region, variable, unit, subannual, year.
- remove_timeseries(df: DataFrame) None [source]
Remove time series data.
- Parameters:
df (pandas.DataFrame) – Data to remove. df must have the following columns: region or node, variable, unit, year.
- set_meta(name_or_dict: str | dict[str, Any], value=None) None [source]
Set Metadata for this object.
- timeseries(region: str | Sequence[str] | None = None, variable: str | Sequence[str] | None = None, unit: str | Sequence[str] | None = None, year: int | Sequence[int] | None = None, iamc: bool = False, subannual: bool | str = 'auto') DataFrame [source]
Retrieve time series data.
- Parameters:
iamc (bool, optional) – Return data in wide/'IAMC' format. If False, return data in long format; see add_timeseries().
region (str or list of str, optional) – Regions to include in returned data.
variable (str or list of str, optional) – Variables to include in returned data.
unit (str or list of str, optional) – Units to include in returned data.
year (int or list of int, optional) – Years to include in returned data.
subannual (bool or 'auto', optional) – Whether to include a column for the sub-annual specification (if bool); if 'auto', include the column if sub-annual data (other than 'Year') exists in the returned data frame.
- Raises:
ValueError – If subannual is False but the Scenario has (filtered) sub-annual data.
- Returns:
Specified data.
- Return type:
- transact(message: str = '', condition: bool = True, discard_on_error: bool = False)[source]
Context manager to wrap code in a ‘transaction’.
- Parameters:
message (str) – Commit message to use, if any commit is performed.
condition (bool) – If True (the default):
Before entering the code block, the TimeSeries (or Scenario) is checked out.
On exiting the code block normally (without an exception), changes are committed with message.
If False, nothing occurs on entry or exit.
discard_on_error (bool) – If True (default False), the anti-locking behaviour of discard_on_error() also applies to any exception raised in the block.
Example
>>> # `ts` is currently checked in/locked
>>> with ts.transact(message="replace 'foo' with 'bar' in set x"):
...     # `ts` is checked out and may be modified
...     ts.remove_set("x", "foo")
...     ts.add_set("x", "bar")
>>> # Changes to `ts` have been committed
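The check-out/commit semantics of the context manager can be sketched with a stub object; this is an illustrative model of the behaviour, not the ixmp implementation, and StubTimeSeries is invented:

```python
from contextlib import contextmanager

class StubTimeSeries:
    """Records the calls the context manager would make on a real TimeSeries."""
    def __init__(self):
        self.log = []
    def check_out(self):
        self.log.append("check_out")
    def commit(self, message):
        self.log.append(f"commit:{message}")

@contextmanager
def transact(ts, message="", condition=True):
    if condition:
        ts.check_out()      # on entry: checked out, may be modified
    yield ts                # an exception here propagates and skips the commit
    if condition:
        ts.commit(message)  # on normal exit: changes committed with message

ts = StubTimeSeries()
with transact(ts, message="edit set x"):
    pass  # modifications would go here
```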
- property url: str[source]
URL fragment for the TimeSeries.
This has the format {model name}/{scenario name}#{version}, with the same values passed when creating the TimeSeries instance.
Examples
To form a complete URL (e.g. to use with from_url()), use a configured ixmp.Platform name:
>>> platform_name = "my-ixmp-platform"
>>> mp = Platform(platform_name)
>>> ts = TimeSeries(mp, "foo", "bar", 34)
>>> ts.url
'foo/bar#34'
>>> f"ixmp://{platform_name}/{ts.url}"
'ixmp://my-ixmp-platform/foo/bar#34'
Note
Use caution: because Platform configuration is system-specific, other systems must have the same configuration for platform_name in order for the URL to refer to the same TimeSeries/Scenario.
Scenario
- class ixmp.Scenario(mp: Platform, model: str, scenario: str, version: int | str | None = None, scheme: str | None = None, annotation: str | None = None, **model_init_args)[source]
Bases: TimeSeries
Collection of model-related data.
See TimeSeries for the meaning of parameters mp, model, scenario, version, and annotation.
- Parameters:
scheme (str, optional) – Use an explicit scheme to initialize the new scenario. The initialize() method of the corresponding Model class in MODELS is used to initialize items in the Scenario.
cache – Deprecated since version 3.0: The cache keyword argument to Scenario has no effect and raises a warning. Use cache as one of the backend_args to Platform to disable/enable caching for storage backends that support it. Use load_scenario_data() to load all data in the Scenario into an in-memory cache.
A Scenario is a TimeSeries that also contains model data, including model solution data. See the data model documentation.
The Scenario class provides methods to manipulate model data items. In addition to generic methods (init_item(), items(), list_items(), has_item()), there are methods for each of the four item types:
Set: init_set(), add_set(), set(), remove_set(), has_set()
Parameter:
≥1-dimensional: init_par(), add_par(), par(), remove_par(), par_list(), and has_par().
0-dimensional: init_scalar(), change_scalar(), and scalar(). These are thin wrappers around the corresponding *_par methods, which can also be used to manipulate 0-dimensional parameters.
Variable: init_var(), var(), var_list(), and has_var().
Equation: init_equ(), equ(), equ_list(), and has_equ().
add_par(name[, key_or_data, value, unit, ...]) – Set the values of a parameter.
add_set(name, key[, comment]) – Add elements to an existing set.
change_scalar(name, val, unit[, comment]) – Set the value and unit of a scalar.
clone([model, scenario, annotation, ...]) – Clone the current scenario and return the clone.
equ(name[, filters]) – Return a dataframe of (filtered) elements for a specific equation.
has_item(name[, item_type]) – Check whether the Scenario has an item name of item_type.
has_solution() – Return True if the Scenario contains model solution data.
idx_names(name) – Return the list of index names for an item (set, par, var, equ).
idx_sets(name) – Return the list of index sets for an item (set, par, var, equ).
init_item(item_type, name[, idx_sets, idx_names]) – Initialize a new item name of type item_type.
init_scalar(name, val, unit[, comment]) – Initialize a new scalar and set its value.
items([type, filters, indexed_by, par_data]) – Iterate over model data items.
list_items(item_type[, indexed_by]) – List all defined items of type item_type.
load_scenario_data() – Load all Scenario data into memory.
par(name[, filters]) – Return parameter data.
read_excel(path[, add_units, init_items, ...]) – Read a Microsoft Excel file into the Scenario.
remove_par(name[, key]) – Remove parameter values or an entire parameter.
remove_set(name[, key]) – Delete set elements or an entire set.
remove_solution([first_model_year]) – Remove the solution from the scenario.
scalar(name) – Return the value and unit of a scalar.
set(name[, filters]) – Return the (filtered) elements of a set.
solve([model, callback, cb_kwargs]) – Solve the model and store output.
to_excel(path[, items, filters, max_row]) – Write Scenario to a Microsoft Excel file.
var(name[, filters]) – Return a dataframe of (filtered) elements for a specific variable.
- add_par(name: str, key_or_data: str | Sequence[str] | dict | DataFrame | None = None, value=None, unit: str | None = None, comment: str | None = None) None [source]
Set the values of a parameter.
- Parameters:
name (str) – Name of the parameter.
key_or_data (str or collections.abc.Iterable of str or range or dict or pandas.DataFrame) – Element(s) to be added.
value (float or collections.abc.Iterable of float, optional) – Values.
unit (str or collections.abc.Iterable of str, optional) – Unit symbols.
comment (str or collections.abc.Iterable of str, optional) – Comment(s) for the added values.
- add_set(name: str, key: str | Sequence[str] | dict | DataFrame, comment: str | Sequence[str] | None = None) None [source]
Add elements to an existing set.
- Parameters:
name (str) – Name of the set.
key (str or collections.abc.Iterable of str or dict or pandas.DataFrame) – Element(s) to be added. If name exists, the elements are appended to existing elements.
comment (str or collections.abc.Iterable of str, optional) – Comment describing the element(s). If given, there must be the same number of comments as elements.
- Raises:
KeyError – If the set name does not exist. init_set() must be called before add_set().
ValueError – For invalid forms or combinations of key and comment.
- change_scalar(name: str, val: Real, unit: str, comment: str | None = None) None [source]
Set the value and unit of a scalar.
- check_out(timeseries_only: bool = False) None [source]
Check out the Scenario.
- Raises:
ValueError – If has_solution() is True.
See also
- clone(model: str | None = None, scenario: str | None = None, annotation: str | None = None, keep_solution: bool = True, shift_first_model_year: int | None = None, platform: Platform | None = None) Scenario [source]
Clone the current scenario and return the clone.
If the (model, scenario) given already exist on the Platform, the version for the cloned Scenario follows the last existing version. Otherwise, the version for the cloned Scenario is 1.
Note
clone() does not set or alter default versions. This means that a clone to new (model, scenario) names has no default version, and will not be returned by Platform.scenario_list() unless default=False is given.
- Parameters:
model (str, optional) – New model name. If not given, use the existing model name.
scenario (str, optional) – New scenario name. If not given, use the existing scenario name.
annotation (str, optional) – Explanatory comment for the clone commit message to the database.
keep_solution (bool, optional) – If True, include all timeseries data and the solution (vars and equs) from the source scenario in the clone. If False, only include timeseries data marked meta=True (see add_timeseries()).
shift_first_model_year (int, optional) – If given, all timeseries data in the Scenario is omitted from the clone for years from first_model_year onwards. Timeseries data with the meta flag (see add_timeseries()) are cloned for all years.
platform (Platform, optional) – Platform to clone to (default: current platform).
- equ(name: str, filters=None, **kwargs) DataFrame [source]
Return a dataframe of (filtered) elements for a specific equation.
- equ_list(indexed_by: str | None = None) list[str] [source]
List all defined equations. See list_items().
- has_equ(name: str, *, item_type=ItemType.EQU) bool [source]
Check whether the scenario has an equation name. See has_item().
- has_item(name: str, item_type=ItemType.MODEL) bool [source]
Check whether the Scenario has an item name of item_type.
In general, user code should call one of has_equ(), has_par(), has_set(), or has_var() instead of calling this method directly.
See also
- has_par(name: str, *, item_type=ItemType.PAR) bool [source]
Check whether the scenario has a parameter name. See has_item().
- has_set(name: str, *, item_type=ItemType.SET) bool [source]
Check whether the scenario has a set name. See has_item().
- has_var(name: str, *, item_type=ItemType.VAR) bool [source]
Check whether the scenario has a variable name. See has_item().
- idx_names(name: str) list[str] [source]
Return the list of index names for an item (set, par, var, equ).
- Parameters:
name (str) – name of the item
- idx_sets(name: str) list[str] [source]
Return the list of index sets for an item (set, par, var, equ).
- Parameters:
name (str) – name of the item
- init_equ(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]
Initialize a new equation. See init_item().
- init_item(item_type: ItemType, name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]
Initialize a new item name of type item_type.
In general, user code should call one of init_set(), init_par(), init_var(), or init_equ() instead of calling this method directly.
- Parameters:
item_type (ItemType) – The type of the item.
name (str) – Name of the item.
idx_sets (collections.abc.Sequence of str or str, optional) – Name(s) of index sets for a 1+-dimensional item. If none are given, the item is scalar (zero dimensional).
idx_names (collections.abc.Sequence of str or str, optional) – Names of the dimensions indexed by idx_sets. If given, they must be the same length as idx_sets.
- Raises:
ValueError – if idx_names are given but do not match the length of idx_sets, or if an item with the same name, of any item_type, already exists.
RuntimeError – if the Scenario is not checked out (see check_out()).
- init_par(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]
Initialize a new parameter. See init_item().
- init_scalar(name: str, val: Real, unit: str, comment=None) None [source]
Initialize a new scalar and set its value.
- init_set(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]
Initialize a new set. See init_item().
- init_var(name: str, idx_sets: Sequence[str] | None = None, idx_names: Sequence[str] | None = None)[source]
Initialize a new variable. See init_item().
- items(type: ItemType = ItemType.PAR, filters: dict[str, collections.abc.Sequence[str]] | None = None, *, indexed_by: str | None = None, par_data: bool | None = None) Iterable[str] [source]
Iterate over model data items.
- Parameters:
type (ItemType, optional) – Types of items to iterate, for instance ItemType.PAR for parameters.
filters (dict, optional) – Filters for values along dimensions; same as the filters argument to par(). Only valid for ItemType.PAR.
indexed_by (str, optional) – If given, only iterate over items where one of the item dimensions is indexed_by the set of this name.
par_data (bool, optional) – If True (the default) and type is ItemType.PAR, also iterate over data for each parameter.
- Yields:
str – if type is not ItemType.PAR, or par_data is False: names of items.
tuple – if type is ItemType.PAR and par_data is True: each tuple is (item name, item data).
- list_items(item_type: ItemType, indexed_by: str | None = None) list[str] [source]
List all defined items of type item_type.
See also
- load_scenario_data() None [source]
Load all Scenario data into memory.
- Raises:
ValueError – If the Scenario was instantiated with cache=False.
- par(name: str, filters: dict[str, collections.abc.Sequence[str]] | None = None, **kwargs) DataFrame [source]
Return parameter data.
If filters is provided, only a subset of data, matching the filters, is returned.
- par_list(indexed_by: str | None = None) list[str] [source]
List all defined parameters. See list_items().
- read_excel(path: PathLike, add_units: bool = False, init_items: bool = False, commit_steps: bool = False) None [source]
Read a Microsoft Excel file into the Scenario.
- Parameters:
path (os.PathLike) – File to read. Must have suffix '.xlsx'.
add_units (bool, optional) – Add missing units, if any, to the Platform instance.
init_items (bool, optional) – Initialize sets and parameters that do not already exist in the Scenario.
commit_steps (bool, optional) – Commit changes after every data addition.
See also
- remove_par(name: str, key=None) None [source]
Remove parameter values or an entire parameter.
- Parameters:
name (str) – Name of the parameter.
key (pandas.DataFrame or list or str, optional) – Elements to be removed. If a pandas.DataFrame, it must contain the same columns (indices/dimensions) as the parameter. If a list, a single key for a single data point; the individual elements must correspond to the indices/dimensions of the parameter.
- remove_set(name: str, key: str | Sequence[str] | dict | DataFrame | None = None) None [source]
Delete set elements or an entire set.
- Parameters:
name (str) – Name of the set to remove (if key is None) or from which to remove elements.
key (pandas.DataFrame or list of str, optional) – Elements to be removed from set name.
- remove_solution(first_model_year: int | None = None) None [source]
Remove the solution from the scenario.
This function removes the solution (variables and equations) and timeseries data marked as meta=False from the scenario (see add_timeseries()).
- Parameters:
first_model_year (int, optional) – If given, timeseries data marked as meta=False is removed only for years from first_model_year onwards.
- Raises:
ValueError – If the Scenario has no solution or if first_model_year is not an int.
- scalar(name: str) dict[str, Union[numbers.Real, str]] [source]
Return the value and unit of a scalar.
- set(name: str, filters: dict[str, collections.abc.Sequence[str]] | None = None, **kwargs) list[str] | DataFrame [source]
Return the (filtered) elements of a set.
- Parameters:
name (str) – Name of the set.
filters (dict, optional) – Mapping of dimension_name → elements, where dimension_name is one of the idx_names given when the set was initialized (see init_set()), and elements is an iterable of labels to include in the return value.
- Return type:
- set_list(indexed_by: str | None = None) list[str] [source]
List all defined sets. See list_items().
- solve(model: str | None = None, callback: Callable | None = None, cb_kwargs: dict[str, Any] = {}, **model_options) None [source]
Solve the model and store output.
ixmp 'solves' a model by invoking the run() method of a Model subclass, for instance GAMSModel.run(). Depending on the underlying model code, different steps are taken; see each model class for details. In general:
1. Data from the Scenario are written to a model input file.
2. Code or an external program is invoked to perform calculations or optimizations, solving the model.
3. Data representing the model outputs or solution are read from a model output file and stored in the Scenario.
If the optional argument callback is given, additional steps are performed:
4. Execute the callback with the Scenario as an argument. The Scenario has an iteration attribute that stores the number of times the underlying model has been solved (step 2).
5. If the callback returns False or similar, iterate by repeating from step 1. Otherwise, exit.
- Parameters:
model (str) – model (e.g., MESSAGE) or GAMS file name (excluding '.gms').
callback (callable, optional) – Method to execute arbitrary non-model code. Must accept a single argument: the Scenario. Must return a non-False value to indicate convergence.
cb_kwargs (dict, optional) – Keyword arguments to pass to callback.
model_options – Keyword arguments specific to the model. See GAMSModel.
- Warns:
UserWarning – If callback is given and returns None. This may indicate that the user has forgotten a return statement, in which case the iteration will continue indefinitely.
- Raises:
ValueError – If the Scenario has already been solved.
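The callback loop described in steps 4-5 can be sketched with a stub in place of the real model run; StubScenario and the max_iterations guard are invented for this illustration:

```python
class StubScenario:
    """Stands in for a Scenario; only carries the iteration counter."""
    iteration = 0

def solve(scenario, callback, max_iterations=10):
    """Re-solve until callback returns a non-False value (steps 2, 4, 5)."""
    for _ in range(max_iterations):
        scenario.iteration += 1      # stands in for one model run (step 2)
        result = callback(scenario)  # step 4
        if result:                   # non-False: convergence, exit (step 5)
            return scenario.iteration
    raise RuntimeError("no convergence")

scen = StubScenario()
# Invented convergence test: done after the third solve
n = solve(scen, callback=lambda s: s.iteration >= 3)
```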
- to_excel(path: PathLike, items: ItemType = ItemType.SET | PAR, filters: dict[str, Union[collections.abc.Sequence[str], Scenario]] | None = None, max_row: int | None = None) None [source]
Write Scenario to a Microsoft Excel file.
- Parameters:
path (os.PathLike) – File to write. Must have suffix .xlsx.
items (ItemType, optional) – Types of items to write. Either SET | PAR (i.e. only sets and parameters), or MODEL (also variables and equations, i.e. model solution data).
filters (dict, optional) – Filters for values along dimensions; same as the filters argument to par().
max_row (int, optional) – Maximum number of rows in each sheet. If the number of elements in an item exceeds this number or EXCEL_MAX_ROWS, then the item is written to multiple sheets named, e.g., 'foo', 'foo(2)', 'foo(3)', etc.
See also
Configuration
When imported, ixmp reads configuration from the first file named config.json found in one of the following directories:
1. The directory given by the environment variable IXMP_DATA, if defined;
2. ${XDG_DATA_HOME}/ixmp, if the environment variable is defined; or
3. $HOME/.local/share/ixmp.
Tip
For most users, #2 or #3 is a sensible default; platform information for many local and remote databases can be stored in config.json and retrieved by name.
Advanced users wishing to use a project-specific config.json can set IXMP_DATA to the path for any directory containing a file with this name.
To manipulate the configuration file, use the platform command in the ixmp command-line interface:
# Add a platform named 'p1' backed by a local HSQL database
$ ixmp platform add p1 jdbc hsqldb /path/to/database/files
# Add a platform named 'p2' backed by a remote Oracle database
$ ixmp platform add p2 jdbc oracle \
database.server.example.com:PORT:SCHEMA username password
# Add a platform named 'p3' with specific JVM arguments
$ ixmp platform add p3 jdbc hsqldb /path/to/database/files -Xmx12G
# Make 'p2' the default Platform
$ ixmp platform add default p2
…or, use the methods of ixmp.config
.
- class ixmp._config.Config(read: bool = True)[source]
Configuration for ixmp.
For most purposes, there is only one instance of this class, available at ixmp.config and automatically read() from the ixmp configuration file at the moment the package is imported. (save() writes the current values to file.)

Config is a key-value store. Key names are strings; each key has values of a fixed type. Individual keys can be accessed with get() and set(), or by accessing the values attribute. Spaces in names are automatically replaced with underscores, e.g. “my key” is stored as “my_key”, but may be set and retrieved as “my key”.
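The key-name normalization described above can be illustrated with a minimal, hypothetical store (this is a sketch of the behaviour, not ixmp’s implementation):

```python
class KeyValueStore:
    """Minimal sketch of Config's key-value behaviour."""

    def __init__(self):
        self._values = {}

    @staticmethod
    def _key(name: str) -> str:
        # Spaces in key names are stored as underscores
        return name.replace(" ", "_")

    def set(self, name, value):
        self._values[self._key(name)] = value

    def get(self, name):
        return self._values[self._key(name)]
```

With this behaviour, a key set as “my key” can be retrieved under either spelling.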
Downstream packages (e.g. message_ix, message_ix_models) may register() additional keys to be stored in and read from the ixmp configuration file.

The default configuration (restored by clear()) is:

{
  "platform": {
    "default": "local",
    "local": {
      "class": "jdbc",
      "driver": "hsqldb",
      "path": "~/.local/share/ixmp/localdb/default"
    }
  }
}
clear
()Clear all configuration keys by setting empty or default values.
get
(name)Return the value of a configuration key name.
keys
()Return the names of all registered configuration keys.
read
()Try to read configuration keys from file.
save
()Write configuration keys to file.
set
(name, value[, _strict])Set configuration key name to value.
register
(name, type_[, default])Register a new configuration key.
unregister
(name)Unregister and clear the configuration key name.
add_platform
(name, *args, **kwargs)Add or overwrite information about a platform.
get_platform_info
(name)Return information on configured Platform name.
remove_platform
(name)Remove the configuration for platform name.
- Parameters:
read (bool) – Read config.json on startup.
- add_platform(name: str, *args, **kwargs)[source]
Add or overwrite information about a platform.
- Parameters:
name (str) – New or existing platform name.
args – Positional arguments. If name is ‘default’, args must be a single string: the name of an existing configured Platform. Otherwise, the first of args specifies one of the BACKENDS, and the remaining args differ according to the backend.
kwargs – Keyword arguments. These differ according to the backend.
- get_platform_info(name: str) tuple[str, dict[str, Any]] [source]
Return information on configured Platform name.
- Parameters:
name (
str
) – Existing platform. If name is “default”, the information for the default platform is returned.- Returns:
- Raises:
ValueError – If name is not configured as a platform.
- read()[source]
Try to read configuration keys from file.
If successful, the attribute
path
is set to the path of the file.
- register(name: str, type_: type, default: Any | None = None, **kwargs)[source]
Register a new configuration key.
- Parameters:
name (str) – Name of the new key.
type_ (type) – Type of valid values for the key, e.g. str or pathlib.Path.
default (optional) – Default value for the key. If not supplied, the type is called to supply the default value, e.g. str().
- Raises:
ValueError – if the key name is already registered.
- save()[source]
Write configuration keys to file.
config.json
is created in the first of the ixmp configuration directories that exists. Only non-null values are written.
- values: BaseValues[source]
Configuration values. These can be accessed using Python item access syntax, e.g.
ixmp.config.values["platform"]["platform name"]…
.
Utilities
|
Compute the difference between Scenarios a and b. |
|
Context manager to discard changes to ts and close the DB on any exception. |
|
Return a formatted list of TimeSeries on platform. |
|
Check out timeseries depending on state. |
|
Commit timeseries with message if condition is |
|
Parse url and return Platform and Scenario information. |
|
Print information about ixmp and its dependencies to file. |
|
Update parameter name in scenario using data, without overwriting. |
|
Transform df to the IAMC structure/layout. |
- class ixmp.util.DeprecatedPathFinder(package: str, name_map: Mapping[str, str])[source]
Handle imports from deprecated module locations.
- ixmp.util.diff(a, b, filters=None) Iterator[tuple[str, pandas.core.frame.DataFrame]] [source]
Compute the difference between Scenarios a and b.
diff() combines pandas.merge() and Scenario.items(). Only parameters are compared. merge() is called with the arguments how="outer", sort=True, suffixes=("_a", "_b"), indicator=True; the merge is performed on all columns except ‘value’ or ‘unit’.
- Yields:
tuple
ofstr
,pandas.DataFrame
– tuples of item name and data.
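The merge described above can be reproduced directly with pandas. The toy frames below are illustrative; the column names and values are invented:

```python
import pandas as pd

# Two toy "parameter" data frames with the same structure
a = pd.DataFrame({"i": ["s", "t"], "value": [1.0, 2.0], "unit": ["kg", "kg"]})
b = pd.DataFrame({"i": ["s", "u"], "value": [1.0, 3.0], "unit": ["kg", "kg"]})

# Merge on all columns except 'value' and 'unit', as diff() does
on = [c for c in a.columns if c not in ("value", "unit")]
merged = pd.merge(
    a, b, how="outer", on=on, sort=True, suffixes=("_a", "_b"), indicator=True
)
```

The `_merge` indicator column then shows, for each row, whether it appears in a only (“left_only”), b only (“right_only”), or both.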
- ixmp.util.discard_on_error(ts: TimeSeries)[source]
Context manager to discard changes to ts and close the DB on any exception.
For JDBCBackend, this can avoid leaving ts in a “locked” state in the database.

Examples

>>> mp = ixmp.Platform()
>>> s = ixmp.Scenario(mp, ...)
>>> with discard_on_error(s):
...     s.add_par(...)       # Any code
...     s.not_a_method()     # Code that raises some exception
Before the exception in the final line is raised (and possibly handled by surrounding code):
Any changes—for example, here changes due to the call to add_par()—are discarded/not committed;
s is guaranteed to be in a non-locked state; and
close_db() is called on mp.
- ixmp.util.format_scenario_list(platform, model=None, scenario=None, match=None, default_only=False, as_url=False)[source]
Return a formatted list of TimeSeries on platform.
- Parameters:
platform (
Platform
) –model (
str
, optional) – Model name to restrict results. Passed toscenario_list()
.scenario (
str
, optional) – Scenario name to restrict results. Passed toscenario_list()
.match (
str
, optional) – Regular expression to restrict results. Only results where the model or scenario name matches are returned.default_only (
bool
, optional) – Only return TimeSeries where a default version has been set withTimeSeries.set_as_default()
.as_url (
bool
, optional) – Format results as ixmp URLs.
- Returns:
If as_url is
False
, also include summary information.- Return type:
- ixmp.util.logger()[source]
Access global logger.
Deprecated since version 3.3: To control logging from ixmp, instead use logging to retrieve it:

import logging

ixmp_logger = logging.getLogger("ixmp")

# Example: set the level to INFO
ixmp_logger.setLevel(logging.INFO)
- ixmp.util.maybe_check_out(timeseries, state=None)[source]
Check out timeseries depending on state.
If state is
None
, thenTimeSeries.check_out()
is called.- Returns:
- Raises:
ValueError – If timeseries is a
Scenario
object andhas_solution()
isTrue
.
See also
- ixmp.util.maybe_commit(timeseries, condition, message)[source]
Commit timeseries with message if condition is
True
.- Returns:
See also
- ixmp.util.maybe_convert_scalar(obj) DataFrame [source]
Convert obj to
pandas.DataFrame
.- Parameters:
obj – Any value returned by
Scenario.par()
. For a scalar (0-dimensional) parameter, this will bedict
.- Returns:
maybe_convert_scalar()
always returns a data frame.- Return type:
- ixmp.util.parse_url(url)[source]
Parse url and return Platform and Scenario information.
A URL (Uniform Resource Locator), as the name implies, uniquely identifies a specific scenario and (optionally) version of a model, as well as (optionally) the database in which it is stored. ixmp URLs take forms like:
ixmp://PLATFORM/MODEL/SCENARIO[#VERSION] MODEL/SCENARIO[#VERSION]
where:
PLATFORM is a configured platform name; see ixmp.config.
MODEL may not contain the forward slash character (‘/’); SCENARIO may contain any number of forward slashes. Both must be supplied.
VERSION is optional but, if supplied, must be an integer.
- Returns:
- Raises:
ValueError – For malformed URLs.
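The documented rules can be expressed as a single regular expression. This is a hypothetical re-implementation for illustration, not ixmp’s own parser:

```python
import re


def parse_url_sketch(url: str):
    """Parse an ixmp-style URL into (platform_info, scenario_info) dicts."""
    match = re.fullmatch(
        r"(ixmp://(?P<platform>[^/]+)/)?"  # optional platform part
        r"(?P<model>[^/]+)/"               # MODEL: no forward slashes
        r"(?P<scenario>[^#]+)"             # SCENARIO: may contain slashes
        r"(#(?P<version>\d+))?",           # optional integer VERSION
        url,
    )
    if match is None:
        raise ValueError(f"malformed URL: {url!r}")
    platform_info = {"name": match["platform"]} if match["platform"] else {}
    scenario_info = {"model": match["model"], "scenario": match["scenario"]}
    if match["version"]:
        scenario_info["version"] = int(match["version"])
    return platform_info, scenario_info
```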
- ixmp.util.show_versions(file=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='utf-8'>)[source]
Print information about ixmp and its dependencies to file.
- ixmp.util.to_iamc_layout(df: DataFrame) DataFrame [source]
Transform df to the IAMC structure/layout.
The returned object has:
Any (Multi)Index levels reset as columns.
Lower-case column names ‘region’, ‘variable’, ‘subannual’, and ‘unit’.
If not present in df, the value ‘Year’ in the ‘subannual’ column.
- Parameters:
df (
pandas.DataFrame
) – May have a ‘node’ column, which will be renamed to ‘region’.- Return type:
- Raises:
ValueError – If ‘region’, ‘variable’, or ‘unit’ is not among the column names.
- ixmp.util.update_par(scenario, name, data)[source]
Update parameter name in scenario using data, without overwriting.
Only values which do not already appear in the parameter data are added.
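The “without overwriting” behaviour can be sketched with plain dictionaries, keys standing in for dimension tuples. This is an illustrative helper, not ixmp’s implementation:

```python
def update_without_overwrite(existing: dict, new: dict) -> dict:
    """Add only entries whose keys are absent from `existing`."""
    additions = {k: v for k, v in new.items() if k not in existing}
    return {**existing, **additions}
```

Existing values are left untouched even when the new data provides a different value for the same key.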
Utilities for documentation
GitHub adapter for sphinx.ext.linkcode
.
To use this extension, add it to the extensions
setting in the Sphinx configuration file (usually conf.py
), and set the required linkcode_github_repo_slug
:
extensions = [
...,
"ixmp.util.sphinx_linkcode_github",
...,
]
linkcode_github_repo_slug = "iiasa/ixmp" # Required
linkcode_github_remote_head = "feature/example" # Optional
The extension uses GitPython (if installed) or linkcode_github_remote_head
(optional override) to match a local commit to a remote head (~branch name), and construct links like:
https://github.com/{repo_slug}/blob/{remote_head}/path/to/source.py#L123-L456
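Assembling a link in that format is straightforward string formatting; a minimal sketch (hypothetical helper, not the extension’s own code):

```python
def github_link(
    repo_slug: str, remote_head: str, path: str, start: int, end: int
) -> str:
    """Build a GitHub source link spanning lines `start` to `end` of `path`."""
    return f"https://github.com/{repo_slug}/blob/{remote_head}/{path}#L{start}-L{end}"
```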
- class ixmp.util.sphinx_linkcode_github.GitHubLinker[source]
Handler for storing files/line numbers for code objects and formatting links.
- autodoc_process_docstring(app: sphinx.application.Sphinx, what, name: str, obj, options, lines)[source]
Handler for the Sphinx
autodoc-process-docstring
event.Records the file and source line numbers containing obj.
- config_inited(app: sphinx.application.Sphinx, config)[source]
Handler for the Sphinx
config-inited
event.
- linkcode_resolve(domain: str, info: dict) str | None [source]
Function for the
sphinx.ext.linkcode
setting of the same name.Returns URLs for code objects on GitHub, using information stored by
autodoc_process_docstring()
.
- ixmp.util.sphinx_linkcode_github.find_remote_head(app: sphinx.application.Sphinx) str [source]
Return a name for the remote branch containing the code.
- ixmp.util.sphinx_linkcode_github.find_remote_head_git(app: sphinx.application.Sphinx) str | None [source]
Use git to identify the name of the remote branch containing the code.
- ixmp.util.sphinx_linkcode_github.package_base_path(obj) Path [source]
Return the base path of the package containing obj.
- ixmp.util.sphinx_linkcode_github.setup(app: sphinx.application.Sphinx)[source]
Sphinx extension registration hook.
Utilities for testing
Utilities for testing ixmp.
These include:
pytest hooks, fixtures:
A CliRunner object that invokes the ixmp command-line interface.
Return the os.environ dict with the IXMP_DATA variable set.
An empty Platform connected to a temporary, in-memory database.

…and assertions:
assert_logs
(caplog[, message_or_messages, ...])Assert that message_or_messages appear in logs.
Methods for setting up and populating test ixmp databases:
add_test_data
(scen)Populate scen with test data.
create_test_platform
(tmp_path, data_path, ...)Create a Platform for testing using specimen files 'name.*'.
make_dantzig
(mp[, solve, quiet, request])Return
ixmp.Scenario
of Dantzig's canning/transport problem.populate_test_platform
(platform)Populate platform with data for testing.
Methods to run and retrieve values from Jupyter notebooks:
run_notebook
(nb_path, tmp_path[, env])Execute a Jupyter notebook via
nbclient
and collect output.get_cell_output
(nb, name_or_index[, kind])Retrieve a cell from nb according to its metadata name_or_index:
- ixmp.testing.add_random_model_data(scenario, length)[source]
Add a set and parameter with given length to scenario.
The set is named ‘random_set’. The parameter is named ‘random_par’, and has two dimensions indexed by ‘random_set’.
- ixmp.testing.assert_logs(caplog, message_or_messages=None, at_level=None)[source]
Assert that message_or_messages appear in logs.
Use assert_logs as a context manager for a statement that is expected to trigger certain log messages. assert_logs checks that these messages are generated.
Example

def test_foo(caplog):
    with assert_logs(caplog, 'a message'):
        logging.getLogger(__name__).info('this is a message!')
- ixmp.testing.create_test_platform(tmp_path, data_path, name, **properties)[source]
Create a Platform for testing using specimen files ‘name.*’.
Any of the following files from data_path are copied to tmp_path:
name.lobs, name.script, i.e. the contents of a
JDBCBackend
HyperSQL database.name.properties.
The contents of name.properties (if it exists) are formatted using the properties keyword arguments.
- Returns:
the path to the .properties file, if any, else the .lobs file without suffix.
- Return type:
- ixmp.testing.get_cell_output(nb, name_or_index, kind='data')[source]
Retrieve a cell from nb according to its metadata name_or_index:
The Jupyter notebook format allows specifying a document-wide unique ‘name’ metadata attribute for each cell: https://nbformat.readthedocs.io/en/latest/format_description.html#cell-metadata
Return the cell matching name_or_index if
str
; or the cell at theint
index; or raiseValueError
.
- ixmp.testing.ixmp_cli(tmp_env)[source]
A CliRunner object that invokes the ixmp command-line interface.
- ixmp.testing.make_dantzig(mp: Platform, solve: bool = False, quiet: bool = False, request: FixtureRequest | None = None) Scenario [source]
Return
ixmp.Scenario
of Dantzig’s canning/transport problem.- Parameters:
mp (
Platform
) – Platform on which to create the scenario.solve (
bool
, optional) – IfTrue
. then solve the scenario before returning. DefaultFalse
.quiet (
bool
, optional) – IfTrue
, suppress console output when solving.request (
pytest.FixtureRequest
, optional) – If present, use for a distinct scenario name for each test.
- Return type:
See also
- ixmp.testing.populate_test_platform(platform)[source]
Populate platform with data for testing.
Many of the tests in ixmp.tests.core depend on this set of data.

The data consist of:
- ixmp.testing.random_model_data(length)[source]
Random (set, parameter) data with at least length elements.
See also
- ixmp.testing.random_ts_data(length)[source]
A
pandas.DataFrame
of time series data with length rows.Suitable for passage to
TimeSeries.add_timeseries()
.
- ixmp.testing.resource_limit(request)[source]
A fixture that limits Python
resources
.See the documentation (
pytest --help
) for the--resource-limit
command-line option that selects (1) the specific resource and (2) the level of the limit.The original limit, if any, is reset after the test function in which the fixture is used.
- ixmp.testing.run_notebook(nb_path, tmp_path, env=None, **kwargs)[source]
Execute a Jupyter notebook via
nbclient
and collect output.- Parameters:
nb_path (
os.PathLike
) – The notebook file to execute.tmp_path (
os.PathLike
) – A directory in which to create temporary output.env (
dict
, optional) – Execution environment fornbclient
. Default:os.environ
.kwargs –
Keyword arguments for
nbclient.NotebookClient
. Defaults are set for:- ”allow_errors”
Default
False
. IfTrue
, the execution always succeeds, and cell output contains exception information rather than code outputs.- ”kernel_version”
Jupyter kernel to use. Default: either “python2” or “python3”, matching the current Python major version.
Warning
Any existing configuration for this kernel on the local system—such as an IPython start-up file—will be executed when the kernel starts. Code that enables GUI features can interfere with
run_notebook()
.- ”timeout”
in seconds; default 10.
- Returns:
nb (
nbformat.NotebookNode
) – Parsed and executed notebook.errors (
list
) – Any execution errors.
- ixmp.testing.test_mp(request, tmp_env, test_data_path)[source]
An empty
Platform
connected to a temporary, in-memory database.This fixture has module scope: the same Platform is reused for all tests in a module.
- ixmp.testing.tmp_env(pytestconfig, tmp_path_factory)[source]
Return the os.environ dict with the IXMP_DATA variable set.
IXMP_DATA will point to a temporary directory that is unique to the test session. ixmp configuration (i.e. the ‘config.json’ file) can be written and read in this directory without modifying the current user’s configuration.