Python (ixmp package)
The ix modeling platform application programming interface (API) is organized around three classes:
Platform – Instance of the modeling platform.
TimeSeries – Collection of data in time series format.
Scenario – Collection of model-related data.
Platform
- class ixmp.Platform(name: Optional[str] = None, backend: Optional[str] = None, **backend_args)[source]
Instance of the modeling platform.
A Platform connects two key components:
A back end for storing data such as model inputs and outputs.
One or more models: code in Python or other languages or frameworks that runs, via Scenario.solve(), on the data stored in the Platform.
The Platform parameters control these components.
TimeSeries and Scenario objects are tied to a single Platform; to move data between platforms, see Scenario.clone().
- Parameters:
name (str) – Name of a specific configured backend.
backend ('jdbc') – Storage backend type. 'jdbc' corresponds to the built-in JDBCBackend; see BACKENDS.
backend_args – Keyword arguments specific to the backend. See JDBCBackend.
Platforms have the following methods:
add_region(region, hierarchy[, parent]) – Define a region including a hierarchy level and a 'parent' region.
add_region_synonym(region, mapped_to) – Define a synonym for a region.
add_unit(unit[, comment]) – Define a unit.
check_access(user, models[, access]) – Check access to specific models.
regions() – Return all regions defined for time series data, including synonyms.
scenario_list([default, model, scen]) – Return information about TimeSeries and Scenarios on the Platform.
set_log_level(level) – Set the log level for the Platform and its storage Backend.
units() – Return all units defined on the Platform.
The following backend methods are also available via Platform:
backend.base.Backend.add_model_name(name) – Add (register) a new model name.
backend.base.Backend.add_scenario_name(name) – Add (register) a new scenario name.
backend.base.Backend.close_db() – OPTIONAL: Close database connection(s).
backend.base.Backend.get_doc(domain[, name]) – Read documentation from the database.
backend.base.Backend.get_meta(model, ...) – Retrieve all metadata attached to a specific target.
backend.base.Backend.get_model_names() – List existing model names.
backend.base.Backend.get_scenario_names() – List existing scenario names.
backend.base.Backend.open_db() – OPTIONAL: (Re-)open database connection(s).
backend.base.Backend.remove_meta(names, ...) – Remove metadata attached to a target.
backend.base.Backend.set_doc(domain, docs) – Save documentation to the database.
backend.base.Backend.set_meta(meta, model, ...) – Set metadata on a target.
These methods can be called like normal Platform methods, e.g.:
>>> platform_instance.close_db()
- add_region(region: str, hierarchy: str, parent: str = 'World') None [source]
Define a region including a hierarchy level and a ‘parent’ region.
Tip
On a Platform backed by a shared database, a region may already exist with a different spelling. Use regions() first to check, and consider calling add_region_synonym() instead.
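The check-then-add pattern from the tip can be sketched in plain Python. The helper name and the case-insensitive comparison are illustrative assumptions, not part of the ixmp API:

```python
def region_action(existing, new):
    """Suggest which Platform method to call for a new region name.

    `existing` stands in for the 'region' column of Platform.regions();
    the case-insensitive comparison is an illustrative rule, not ixmp's.
    """
    if new in existing:
        return "none"  # already defined under the same spelling
    by_lower = {r.lower(): r for r in existing}
    if new.lower() in by_lower:
        # Same region, different spelling: define a synonym instead
        return "add_region_synonym({!r}, mapped_to={!r})".format(new, by_lower[new.lower()])
    return "add_region({!r}, hierarchy=...)".format(new)
```

In a real session, the returned suggestion corresponds to a call on the Platform instance.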
- add_region_synonym(region: str, mapped_to: str) None [source]
Define a synonym for a region.
When adding time series data using the synonym in the region column, the region is converted to mapped_to.
- add_timeslice(name: str, category: str, duration: float) None [source]
Define a subannual timeslice including a category and duration.
See timeslices() for a detailed description of timeslices.
- check_access(user: str, models: Union[str, Sequence[str]], access: str = 'view') Union[bool, Dict[str, bool]] [source]
Check access to specific models.
- export_timeseries_data(path: PathLike, default: bool = True, model: Optional[str] = None, scenario: Optional[str] = None, variable=None, unit=None, region=None, export_all_runs: bool = False) None [source]
Export time series data to a CSV file across multiple TimeSeries.
See TimeSeries.add_timeseries() for details on adding time series data.
- Parameters:
path (os.PathLike) –
File name to export data to; must have the suffix ‘.csv’.
Result file will contain the following columns:
model
scenario
version
variable
unit
region
meta
subannual
year
value
default (bool, optional) – True to include only TimeSeries versions marked as default.
model (str, optional) – Only return data for this model name.
scenario (str, optional) – Only return data for this scenario name.
variable (list of str, optional) – Only return data for variable name(s) in this list.
unit (list of str, optional) – Only return data for unit name(s) in this list.
region (list of str, optional) – Only return data for region(s) in this list.
export_all_runs (boolean, optional) – Export all existing model+scenario run combinations.
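The column layout above can be illustrated with Python's csv module; the row values below are invented for illustration:

```python
import csv
import io

# Column layout of the exported CSV file, as listed above
COLUMNS = ["model", "scenario", "version", "variable", "unit",
           "region", "meta", "subannual", "year", "value"]

buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=COLUMNS)
writer.writeheader()
# One illustrative row; real rows come from TimeSeries stored on the Platform
writer.writerow({"model": "m", "scenario": "s", "version": 1,
                 "variable": "GDP", "unit": "USD", "region": "World",
                 "meta": 0, "subannual": "Year", "year": 2020, "value": 100.0})
header = buf.getvalue().splitlines()[0]
```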
- get_log_level() str [source]
Return the log level of the storage Backend, if any.
- Returns:
Name of a Python logging level.
- Return type:
- regions() DataFrame [source]
Return all regions defined for time series data, including synonyms.
- Return type:
- scenario_list(default: bool = True, model: Optional[str] = None, scen: Optional[str] = None) DataFrame [source]
Return information about TimeSeries and Scenarios on the Platform.
- Parameters:
default (bool, optional) – Return only the default version of each TimeSeries/Scenario (see TimeSeries.set_as_default()). Any (model, scenario) without a default version is omitted. If False, return all versions.
model (str, optional) – A model name. If given, only return information for model.
scen (str, optional) – A scenario name. If given, only return information for scen.
- Returns:
Scenario information, with the columns:
model, scenario, version, and scheme – Scenario identifiers; see TimeSeries and Scenario.
is_default – True if the version is the default version for the (model, scenario).
is_locked – True if the Scenario has been locked for use.
cre_user, cre_date – database user that created the Scenario, and creation time.
upd_user, upd_date – user and time of the last modification of the Scenario.
lock_user, lock_date – user that locked the Scenario, and lock time.
annotation – description of the Scenario or changelog.
- Return type:
- set_log_level(level: Union[str, int]) None [source]
Set the log level for the Platform and its storage Backend.
- Parameters:
level (str or int) – Name of a Python logging level.
- timeslices() DataFrame [source]
Return all subannual time slices defined in this Platform instance.
See the Data model documentation for further details.
The category and duration do not have any functional relevance within the ixmp framework, but they may be useful for pre- or post-processing. For example, they can be used to filter all timeslices of a certain category (e.g., all months) from the pandas.DataFrame returned by this function, or to aggregate subannual data to full-year results.
- Returns:
Data frame with columns ‘timeslice’, ‘category’, and ‘duration’.
- Return type:
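As an example of such post-processing (the timeslice definitions and values are invented for illustration, and duration is taken here as the fraction of a year):

```python
# Aggregate subannual values to a full-year result using timeslice
# durations, as returned by Platform.timeslices(). Rows of the
# (timeslice, category, duration) frame are represented as plain dicts.
timeslices = [
    {"timeslice": "winter", "category": "season", "duration": 0.25},
    {"timeslice": "spring", "category": "season", "duration": 0.25},
    {"timeslice": "summer", "category": "season", "duration": 0.25},
    {"timeslice": "autumn", "category": "season", "duration": 0.25},
]
# Illustrative subannual data for one variable
demand = {"winter": 120.0, "spring": 80.0, "summer": 60.0, "autumn": 100.0}

# Duration-weighted aggregation to a single yearly value
year_value = sum(demand[t["timeslice"]] * t["duration"] for t in timeslices)
```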
See also
TimeSeries
- class ixmp.TimeSeries(mp: Platform, model: str, scenario: str, version: Optional[Union[int, str]] = None, annotation: Optional[str] = None, **kwargs)[source]
Collection of data in time series format.
TimeSeries is the parent/super-class of Scenario.
- Parameters:
mp (Platform) – ixmp instance in which to store data.
model (str) – Model name.
scenario (str) – Scenario name.
version (int or str, optional) – If omitted and a default version of the (model, scenario) has been designated (see set_as_default()), load that version. If int, load a specific version. If 'new', create a new TimeSeries.
annotation (str, optional) – A short annotation/comment used when version='new'.
A TimeSeries is uniquely identified on its Platform by its model, scenario, and version attributes. For more details, see the data model documentation.
A new version is created by:
Instantiating a new TimeSeries with the same model and scenario as an existing TimeSeries.
Calling Scenario.clone().
TimeSeries objects have the following methods and attributes:
add_geodata(df) – Add geodata.
add_timeseries(df[, meta, year_lim]) – Add time series data.
check_out([timeseries_only]) – Check out the TimeSeries.
commit(comment) – Commit all changed data to the database.
discard_changes() – Discard all changes and reload from the database.
get_geodata() – Fetch geodata and return it as a dataframe.
last_update() – Get the timestamp of the last update/edit of this TimeSeries.
preload_timeseries() – Preload timeseries data to an in-memory cache.
read_file(path[, firstyear, lastyear]) – Read time series data from a CSV or Microsoft Excel file.
remove_geodata(df) – Remove geodata from the TimeSeries instance.
remove_timeseries(df) – Remove time series data.
run_id() – Get the run id of this TimeSeries.
set_as_default() – Set the current version as the default.
timeseries([region, variable, unit, year, ...]) – Retrieve time series data.
transact([message, condition]) – Context manager to wrap code in a 'transaction'.
url – URL fragment for the TimeSeries.
- add_geodata(df: DataFrame) None [source]
Add geodata.
- Parameters:
df (pandas.DataFrame) – Data to add. df must have the following columns:
region
variable
subannual
unit
year
value
meta
- add_timeseries(df: DataFrame, meta: bool = False, year_lim: Tuple[Optional[int], Optional[int]] = (None, None)) None [source]
Add time series data.
- Parameters:
df (pandas.DataFrame) – Data to add. df must have the following columns:
region or node
variable
unit
Additional column names may be either of:
year and value—long, or ‘tabular’, format.
one or more specific years—wide, or ‘IAMC’ format.
To support subannual temporal resolution of timeseries data, a column subannual is optional in df. The entries in this column must have been defined in the Platform instance using add_timeslice() beforehand. If no column subannual is included in df, the data is assumed to contain yearly values. See timeslices() for a detailed description of the feature.
meta (bool, optional) – If True, store df as metadata. Metadata is treated specially when Scenario.clone() is called for Scenarios created with scheme='MESSAGE'.
year_lim (tuple of (int or None, int or None), optional) – Respectively, minimum and maximum years to add from df; data for other years is ignored.
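The two layouts carry the same information. A pure-Python sketch (plain dicts standing in for DataFrame rows; values invented) of how one wide/IAMC row maps to long format:

```python
# One row of wide/'IAMC'-format data: fixed columns plus one column per year
wide = {"region": "World", "variable": "GDP", "unit": "USD",
        2020: 1.0, 2030: 1.5}

# The equivalent long/'tabular' format: explicit 'year' and 'value' columns
long_rows = [
    {"region": wide["region"], "variable": wide["variable"],
     "unit": wide["unit"], "year": year, "value": value}
    for year, value in wide.items()
    if isinstance(year, int)  # the year columns
]
```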
- check_out(timeseries_only: bool = False) None [source]
Check out the TimeSeries.
Data in the TimeSeries can only be modified when it is in a checked-out state.
See also
- commit(comment: str) None [source]
Commit all changed data to the database.
If the TimeSeries was newly created (with version='new'), version is updated with a new version number assigned by the backend. Otherwise, commit() does not change the version.
- Parameters:
comment (str) – Description of the changes being committed.
See also
- delete_meta(*args, **kwargs) None [source]
Remove Metadata for this object.
Deprecated since version 3.1: Use remove_meta().
- classmethod from_url(url: str, errors='warn') Tuple[Optional[TimeSeries], Platform] [source]
Instantiate a TimeSeries (or Scenario) given an ixmp:// URL.
The following are equivalent:
from ixmp import Platform, TimeSeries
mp = Platform(name='example')
scen = TimeSeries(mp, 'model', 'scenario', version=42)
and:
from ixmp import TimeSeries
scen, mp = TimeSeries.from_url('ixmp://example/model/scenario#42')
- Parameters:
- Returns:
ts, platform – The TimeSeries and Platform referred to by the URL.
- Return type:
2-tuple of (TimeSeries, Platform)
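How such a URL decomposes can be sketched with the standard library. This is an illustrative re-parse, not ixmp's own implementation (see also ixmp.utils.parse_url):

```python
from urllib.parse import urlsplit

# Decompose an ixmp:// URL into the parts used by from_url()
url = "ixmp://example/model/scenario#42"
parts = urlsplit(url)

platform_name = parts.netloc                       # configured Platform name
model, scenario = parts.path.lstrip("/").split("/")
version = int(parts.fragment)                      # TimeSeries version
```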
- get_geodata() DataFrame [source]
Fetch geodata and return it as dataframe.
- Returns:
Specified data.
- Return type:
- get_meta(name: Optional[str] = None)[source]
Get Metadata for this object.
Metadata with the given name, attached to this (model name, scenario name, version), is retrieved.
- Parameters:
name (str, optional) – Metadata name/identifier.
- preload_timeseries() None [source]
Preload timeseries data to in-memory cache. Useful for bulk updates.
- read_file(path: PathLike, firstyear: Optional[int] = None, lastyear: Optional[int] = None) None [source]
Read time series data from a CSV or Microsoft Excel file.
- Parameters:
path (os.PathLike) – File to read. Must have suffix ‘.csv’ or ‘.xlsx’.
firstyear (int, optional) – Only read data from years equal to or later than this year.
lastyear (int, optional) – Only read data from years equal to or earlier than this year.
See also
- remove_geodata(df: DataFrame) None [source]
Remove geodata from the TimeSeries instance.
- Parameters:
df (pandas.DataFrame) – Data to remove. df must have the following columns:
region
variable
unit
subannual
year
- remove_timeseries(df: DataFrame) None [source]
Remove time series data.
- Parameters:
df (pandas.DataFrame) – Data to remove. df must have the following columns:
region or node
variable
unit
year
- set_meta(name_or_dict: Union[str, Dict[str, Any]], value=None) None [source]
Set Metadata for this object.
- timeseries(region: Optional[Union[str, Sequence[str]]] = None, variable: Optional[Union[str, Sequence[str]]] = None, unit: Optional[Union[str, Sequence[str]]] = None, year: Optional[Union[int, Sequence[int]]] = None, iamc: bool = False, subannual: Union[bool, str] = 'auto') DataFrame [source]
Retrieve time series data.
- Parameters:
iamc (bool, optional) – Return data in wide/'IAMC' format. If False, return data in long format; see add_timeseries().
region (str or list of str, optional) – Regions to include in returned data.
variable (str or list of str, optional) – Variables to include in returned data.
unit (str or list of str, optional) – Units to include in returned data.
year (int or list of int, optional) – Years to include in returned data.
subannual (bool or 'auto', optional) – Whether to include a column for the sub-annual specification (if bool); if 'auto', include the column if sub-annual data (other than 'Year') exists in the returned data frame.
- Raises:
ValueError – If subannual is False but the Scenario has (filtered) sub-annual data.
- Returns:
Specified data.
- Return type:
- transact(message: str = '', condition: bool = True)[source]
Context manager to wrap code in a ‘transaction’.
If condition is True, the TimeSeries (or Scenario) is checked out before the block begins. When the block ends, the object is committed with message. If condition is False, nothing occurs before or after the block.
Example
>>> # `ts` is currently checked in/locked
>>> with ts.transact(message="replace 'foo' with 'bar' in set x"):
...     # `ts` is checked out and may be modified
...     ts.remove_set("x", "foo")
...     ts.add_set("x", "bar")
>>> # Changes to `ts` have been committed
- property url: str[source]
URL fragment for the TimeSeries.
This has the format {model name}/{scenario name}#{version}, with the same values passed when creating the TimeSeries instance.
Examples
To form a complete URL (e.g. to use with from_url()), use a configured Platform name:
>>> platform_name = "my-ixmp-platform"
>>> mp = Platform(platform_name)
>>> ts = TimeSeries(mp, "foo", "bar", 34)
>>> ts.url
'foo/bar#34'
>>> f"ixmp://{platform_name}/{ts.url}"
'ixmp://my-ixmp-platform/foo/bar#34'
Note
Use caution: because Platform configuration is system-specific, other systems must have the same configuration for platform_name in order for the URL to refer to the same TimeSeries/Scenario.
Scenario
- class ixmp.Scenario(mp: Platform, model: str, scenario: str, version: Optional[Union[int, str]] = None, scheme: Optional[str] = None, annotation: Optional[str] = None, **model_init_args)[source]
Bases:
TimeSeries
Collection of model-related data.
See TimeSeries for the meaning of parameters mp, model, scenario, version, and annotation.
- Parameters:
scheme (str, optional) – Use an explicit scheme to initialize the new scenario. The initialize() method of the corresponding Model class in MODELS is used to initialize items in the Scenario.
cache – Deprecated since version 3.0: The cache keyword argument to Scenario has no effect and raises a warning. Use cache as one of the backend_args to Platform to disable/enable caching for storage backends that support it. Use load_scenario_data() to load all data in the Scenario into an in-memory cache.
A Scenario is a TimeSeries that also contains model data, including model solution data. See the data model documentation.
The Scenario class provides methods to manipulate model data items:
Set: init_set(), add_set(), set(), remove_set(), has_set()
Parameter:
≥1-dimensional: init_par(), add_par(), par(), remove_par(), par_list(), and has_par().
0-dimensional: init_scalar(), change_scalar(), and scalar().
Variable: init_var(), var(), var_list(), and has_var().
Equation: init_equ(), equ(), equ_list(), and has_equ().
add_par(name[, key_or_data, value, unit, ...]) – Set the values of a parameter.
add_set(name, key[, comment]) – Add elements to an existing set.
change_scalar(name, val, unit[, comment]) – Set the value and unit of a scalar.
clone([model, scenario, annotation, ...]) – Clone the current scenario and return the clone.
equ(name[, filters]) – Return a dataframe of (filtered) elements for a specific equation.
equ_list() – List all defined equations.
get_meta([name]) – Get Metadata for this object.
has_equ(name) – Check whether the scenario has an equation with that name.
has_par(name) – Check whether the scenario has a parameter with that name.
has_set(name) – Check whether the scenario has a set with that name.
has_solution() – Return True if the Scenario contains model solution data.
has_var(name) – Check whether the scenario has a variable with that name.
idx_names(name) – Return the list of index names for an item (set, par, var, equ).
idx_sets(name) – Return the list of index sets for an item (set, par, var, equ).
init_equ(name[, idx_sets, idx_names]) – Initialize a new equation.
init_par(name, idx_sets[, idx_names]) – Initialize a new parameter.
init_scalar(name, val, unit[, comment]) – Initialize a new scalar.
init_set(name[, idx_sets, idx_names]) – Initialize a new set.
init_var(name[, idx_sets, idx_names]) – Initialize a new variable.
load_scenario_data() – Load all Scenario data into memory.
par(name[, filters]) – Return parameter data.
par_list() – List all defined parameters.
read_excel(path[, add_units, init_items, ...]) – Read a Microsoft Excel file into the Scenario.
remove_par(name[, key]) – Remove parameter values or an entire parameter.
remove_set(name[, key]) – Delete set elements or an entire set.
remove_solution([first_model_year]) – Remove the solution from the scenario.
scalar(name) – Return the value and unit of a scalar.
set(name[, filters]) – Return the (filtered) elements of a set.
set_list() – List all defined sets.
set_meta(name_or_dict[, value]) – Set Metadata for this object.
solve([model, callback, cb_kwargs]) – Solve the model and store output.
to_excel(path[, items, filters, max_row]) – Write Scenario to a Microsoft Excel file.
var(name[, filters]) – Return a dataframe of (filtered) elements for a specific variable.
var_list() – List all defined variables.
- add_par(name: str, key_or_data: Optional[Union[str, Sequence[str], Dict, DataFrame]] = None, value=None, unit: Optional[str] = None, comment: Optional[str] = None) None [source]
Set the values of a parameter.
- add_set(name: str, key: Union[str, Sequence[str], Dict, DataFrame], comment: Optional[str] = None) None [source]
Add elements to an existing set.
- Parameters:
name (str) – Name of the set.
key (str or iterable of str or dict or pandas.DataFrame) – Element(s) to be added. If name exists, the elements are appended to existing elements.
comment (str or iterable of str, optional) – Comment describing the element(s). If given, there must be the same number of comments as elements.
- Raises:
KeyError – If the set name does not exist. init_set() must be called before add_set().
ValueError – For invalid forms or combinations of key and comment.
- change_scalar(name: str, val: Real, unit: str, comment: Optional[str] = None) None [source]
Set the value and unit of a scalar.
- check_out(timeseries_only: bool = False) None [source]
Check out the Scenario.
- Raises:
ValueError – If has_solution() is True.
See also
- clone(model: Optional[str] = None, scenario: Optional[str] = None, annotation: Optional[str] = None, keep_solution: bool = True, shift_first_model_year: Optional[int] = None, platform: Optional[Platform] = None) Scenario [source]
Clone the current scenario and return the clone.
If the given (model, scenario) already exists on the Platform, the version for the cloned Scenario follows the last existing version. Otherwise, the version for the cloned Scenario is 1.
Note
clone() does not set or alter default versions. This means that a clone to new (model, scenario) names has no default version, and will not be returned by Platform.scenario_list() unless default=False is given.
- Parameters:
model (str, optional) – New model name. If not given, use the existing model name.
scenario (str, optional) – New scenario name. If not given, use the existing scenario name.
annotation (str, optional) – Explanatory comment for the clone commit message to the database.
keep_solution (bool, optional) – If True, include all timeseries data and the solution (vars and equs) from the source scenario in the clone. If False, only include timeseries data marked meta=True (see add_timeseries()).
shift_first_model_year (int, optional) – If given, all timeseries data in the Scenario is omitted from the clone for years from first_model_year onwards. Timeseries data with the meta flag (see add_timeseries()) are cloned for all years.
platform (Platform, optional) – Platform to clone to (default: the current Platform).
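The version rule above can be sketched as follows (an illustration of the rule, not ixmp's internal code):

```python
def clone_version(existing_versions):
    """Version assigned to a clone of (model, scenario): one past the
    last existing version, or 1 if none exist yet."""
    return max(existing_versions) + 1 if existing_versions else 1
```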
- equ(name: str, filters=None, **kwargs) DataFrame [source]
Return a dataframe of (filtered) elements for a specific equation.
- idx_names(name: str) List[str] [source]
Return the list of index names for an item (set, par, var, equ).
- Parameters:
name (str) – name of the item
- idx_sets(name: str) List[str] [source]
Return the list of index sets for an item (set, par, var, equ).
- Parameters:
name (str) – name of the item
- init_par(name: str, idx_sets: Sequence[str], idx_names: Optional[Sequence[str]] = None) None [source]
Initialize a new parameter.
- init_set(name: str, idx_sets: Optional[Sequence[str]] = None, idx_names: Optional[Sequence[str]] = None) None [source]
Initialize a new set.
- Parameters:
- Raises:
ValueError – If the set (or another object with the same name) already exists.
RuntimeError – If the Scenario is not checked out (see check_out()).
- init_var(name: str, idx_sets: Optional[Sequence[str]] = None, idx_names: Optional[Sequence[str]] = None) None [source]
Initialize a new variable.
- items(type: ItemType = ItemType.PAR, filters: Optional[Dict[str, Sequence[str]]] = None) Iterable[Tuple[str, Any]] [source]
Iterate over model data items.
- load_scenario_data() None [source]
Load all Scenario data into memory.
- Raises:
ValueError – If the Scenario was instantiated with cache=False.
- par(name: str, filters: Optional[Dict[str, Sequence[str]]] = None, **kwargs) DataFrame [source]
Return parameter data.
If filters is provided, only a subset of data, matching the filters, is returned.
- read_excel(path: PathLike, add_units: bool = False, init_items: bool = False, commit_steps: bool = False) None [source]
Read a Microsoft Excel file into the Scenario.
- Parameters:
path (os.PathLike) – File to read. Must have suffix ‘.xlsx’.
add_units (bool, optional) – Add missing units, if any, to the Platform instance.
init_items (bool, optional) – Initialize sets and parameters that do not already exist in the Scenario.
commit_steps (bool, optional) – Commit changes after every data addition.
See also
- remove_par(name: str, key=None) None [source]
Remove parameter values or an entire parameter.
- Parameters:
name (str) – Name of the parameter.
key (dataframe or key list or concatenated string, optional) – Elements to be removed
- remove_set(name: str, key: Optional[Union[str, Sequence[str], Dict, DataFrame]] = None) None [source]
Delete set elements or an entire set.
- Parameters:
name (str) – Name of the set to remove (if key is None) or from which to remove elements.
key (pandas.DataFrame or list of str, optional) – Elements to be removed from set name.
- remove_solution(first_model_year: Optional[int] = None) None [source]
Remove the solution from the scenario.
This function removes the solution (variables and equations) and timeseries data marked as meta=False from the scenario (see add_timeseries()).
- Parameters:
first_model_year (int, optional) – If given, timeseries data marked as meta=False is removed only for years from first_model_year onwards.
- Raises:
ValueError – If the Scenario has no solution, or if first_model_year is not an int.
- scalar(name: str) Dict[str, Union[Real, str]] [source]
Return the value and unit of a scalar.
- Parameters:
name (str) – Name of the scalar.
- Returns:
{'value': value, 'unit': unit}
- Return type:
dict
- set(name: str, filters: Optional[Dict[str, Sequence[str]]] = None, **kwargs) Union[List[str], DataFrame] [source]
Return the (filtered) elements of a set.
- Parameters:
name (str) – Name of the set.
filters (dict) – Mapping of dimension_name → elements, where dimension_name is one of the idx_names given when the set was initialized (see init_set()), and elements is an iterable of labels to include in the return value.
- Return type:
- solve(model: Optional[str] = None, callback: Optional[Callable] = None, cb_kwargs: Dict[str, Any] = {}, **model_options) None [source]
Solve the model and store output.
ixmp ‘solves’ a model by invoking the run() method of a Model subclass, for instance GAMSModel.run(). Depending on the underlying model code, different steps are taken; see each model class for details. In general:
Data from the Scenario are written to a model input file.
Code or an external program is invoked to perform calculations or optimizations, solving the model.
Data representing the model outputs or solution are read from a model output file and stored in the Scenario.
If the optional argument callback is given, additional steps are performed:
Execute the callback with the Scenario as an argument. The Scenario has an iteration attribute that stores the number of times the underlying model has been solved (step #2 above).
If the callback returns False or similar, iterate by repeating from step #1. Otherwise, exit.
- Parameters:
model (str) – model (e.g., MESSAGE) or GAMS file name (excluding ‘.gms’)
callback (callable, optional) – Method to execute arbitrary non-model code. Must accept a single argument: the Scenario. Must return a non-False value to indicate convergence.
cb_kwargs (dict, optional) – Keyword arguments to pass to callback.
model_options – Keyword arguments specific to the model. See GAMSModel.
- Warns:
UserWarning – If callback is given and returns None. This may indicate that the user has forgotten a return statement, in which case the iteration will continue indefinitely.
- Raises:
ValueError – If the Scenario has already been solved.
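The callback protocol can be sketched with a dummy object standing in for a Scenario. The class, the convergence rule, and the loop are illustrative; the real iteration happens inside solve():

```python
class DummyScenario:
    """Illustrative stand-in for a Scenario passed to the callback."""
    def __init__(self):
        self.iteration = 0  # number of times the model has been solved

def callback(scenario):
    # Must accept the Scenario and return a non-False value on convergence;
    # the threshold of 3 iterations is an invented convergence rule
    return scenario.iteration >= 3

scen = DummyScenario()
while True:
    scen.iteration += 1  # stands in for one model solve (step #2)
    result = callback(scen)
    if result is None:
        # solve() warns in this case: a forgotten `return` statement
        # would iterate forever. This sketch simply stops.
        break
    if result:
        break  # converged
```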
- to_excel(path: os.PathLike, items: ItemType = ItemType.SET | ItemType.PAR, filters: Optional[Dict[str, Union[Sequence[str], Scenario]]] = None, max_row: Optional[int] = None) None [source]
Write Scenario to a Microsoft Excel file.
- Parameters:
path (os.PathLike) – File to write. Must have suffix '.xlsx'.
items (ItemType, optional) – Types of items to write. Either SET | PAR (i.e. only sets and parameters), or MODEL (also variables and equations, i.e. model solution data).
filters (dict, optional) – Filters for values along dimensions; same as the filters argument to par().
max_row (int, optional) – Maximum number of rows in each sheet. If the number of elements in an item exceeds this number or EXCEL_MAX_ROWS, the item is written to multiple sheets named, e.g., 'foo', 'foo(2)', 'foo(3)', etc.
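The sheet-splitting rule can be sketched as follows (the helper name is an illustrative assumption):

```python
def sheet_names(item, n_elements, max_row):
    """Names of the sheets an item is split across: 'foo', 'foo(2)',
    'foo(3)', ... (a sketch of the naming rule described above)."""
    n_sheets = max(1, -(-n_elements // max_row))  # ceiling division
    return [item if i == 0 else "{}({})".format(item, i + 1)
            for i in range(n_sheets)]
```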
See also
- ixmp.backend.io.EXCEL_MAX_ROWS = 1048576[source]
Maximum number of rows supported by the Excel file format. See to_excel() and Scenario/model data.
Configuration
When imported, ixmp reads configuration from the first file named config.json found in one of the following directories:
1. The directory given by the environment variable IXMP_DATA, if defined;
2. ${XDG_DATA_HOME}/ixmp, if that environment variable is defined; or
3. $HOME/.local/share/ixmp.
Tip
For most users, #2 or #3 is a sensible default; platform information for many local and remote databases can be stored in config.json and retrieved by name.
Advanced users wishing to use a project-specific config.json can set IXMP_DATA to the path of any directory containing a file with this name.
To manipulate the configuration file, use the platform command in the ixmp command-line interface:
# Add a platform named 'p1' backed by a local HSQL database
$ ixmp platform add p1 jdbc hsqldb /path/to/database/files
# Add a platform named 'p2' backed by a remote Oracle database
$ ixmp platform add p2 jdbc oracle \
database.server.example.com:PORT:SCHEMA username password
# Add a platform named 'p3' with specific JVM arguments
$ ixmp platform add p3 jdbc hsqldb /path/to/database/files -Xmx12G
# Make 'p2' the default Platform
$ ixmp platform add default p2
…or use the methods of ixmp.config.
- class ixmp._config.Config(read: bool = True)[source]
Configuration for ixmp.
For most purposes, there is only one instance of this class, available at ixmp.config and automatically read() from the ixmp configuration file at the moment the package is imported. (save() writes the current values to file.)
Config is a key-value store. Key names are strings; each key has values of a fixed type. Individual keys can be accessed with get() and set(), or by accessing the values attribute.
Spaces in names are automatically replaced with underscores, e.g. “my key” is stored as “my_key”, but may be set and retrieved as “my key”.
Downstream packages (e.g. message_ix, message_ix_models) may register() additional keys to be stored in and read from the ixmp configuration file.
The default configuration (restored by clear()) is:
{
  "platform": {
    "default": "local",
    "local": {
      "class": "jdbc",
      "driver": "hsqldb",
      "path": "~/.local/share/ixmp/localdb/default"
    }
  }
}
clear() – Clear all configuration keys by setting empty or default values.
get(name) – Return the value of a configuration key name.
keys() – Return the names of all registered configuration keys.
read() – Try to read configuration keys from file.
save() – Write configuration keys to file.
set(name, value[, _strict]) – Set configuration key name to value.
register(name, type_[, default]) – Register a new configuration key.
unregister(name) – Unregister and clear the configuration key name.
add_platform(name, *args, **kwargs) – Add or overwrite information about a platform.
get_platform_info(name) – Return information on configured Platform name.
remove_platform(name) – Remove the configuration for platform name.
- Parameters:
read (bool) – Read config.json on startup.
- add_platform(name: str, *args, **kwargs)[source]
Add or overwrite information about a platform.
- Parameters:
name (str) – New or existing platform name.
args – Positional arguments. If name is 'default', args must be a single string: the name of an existing configured Platform. Otherwise, the first of args specifies one of the BACKENDS, and the remaining args differ according to the backend.
kwargs – Keyword arguments. These differ according to the backend.
See also
Backend.handle_config, JDBCBackend.handle_config
- get_platform_info(name: str) Tuple[str, Dict[str, Any]] [source]
Return information on configured Platform name.
- Parameters:
name (str) – Existing platform. If name is “default”, the information for the default platform is returned.
- Returns:
str – The name of the platform. If name was “default”, this will be the actual name of platform that is designated default.
dict – The “class” key specifies one of the BACKENDS. Other keys vary by backend class.
- Raises:
ValueError – If name is not configured as a platform.
- read()[source]
Try to read configuration keys from file.
If successful, the attribute path is set to the path of the file.
- register(name: str, type_: type, default: Optional[Any] = None, **kwargs)[source]
Register a new configuration key.
- Parameters:
name (str) – Name of the new key.
type (object) – Type of valid values for the key, e.g. str or pathlib.Path.
default (any, optional) – Default value for the key. If not supplied, the type is called to supply the default value, e.g. str().
- Raises:
ValueError – if the key name is already registered.
- save()[source]
Write configuration keys to file.
config.json is created in the first of the ixmp configuration directories that exists. Only non-null values are written.
Utilities
diff(a, b[, filters]) – Compute the difference between Scenarios a and b.
format_scenario_list(platform[, model, ...]) – Return a formatted list of TimeSeries on platform.
maybe_check_out(timeseries[, state]) – Check out timeseries depending on state.
maybe_commit(timeseries, condition, message) – Commit timeseries with message if condition is True.
parse_url(url) – Parse url and return Platform and Scenario information.
show_versions([file]) – Print information about ixmp and its dependencies to file.
update_par(scenario, name, data) – Update parameter name in scenario using data, without overwriting.
to_iamc_layout(df) – Transform df to the IAMC structure/layout.
- ixmp.utils.diff(a, b, filters=None) Iterator[Tuple[str, DataFrame]] [source]
Compute the difference between Scenarios a and b.
diff() combines pandas.merge() and Scenario.items(). Only parameters are compared. merge() is called with the arguments how="outer", sort=True, suffixes=("_a", "_b"), indicator=True; the merge is performed on all columns except ‘value’ or ‘unit’.
- Yields:
tuple of str, pandas.DataFrame – Tuples of item name and data.
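The merge logic described above can be illustrated without ixmp. The following is a hedged sketch of the comparison that diff() performs on a single parameter, using two hypothetical data frames in place of real Scenario data:

```python
import pandas as pd

# Two hypothetical versions of the same parameter, indexed by a dimension "i"
a = pd.DataFrame({"i": ["x", "y"], "value": [1.0, 2.0], "unit": ["kg", "kg"]})
b = pd.DataFrame({"i": ["y", "z"], "value": [2.5, 3.0], "unit": ["kg", "kg"]})

# Merge on the dimension column(s), i.e. everything except 'value'/'unit',
# with the same arguments the documentation describes
merged = pd.merge(
    a, b, how="outer", sort=True, suffixes=("_a", "_b"), indicator=True, on="i"
)

# The '_merge' column marks rows present in only one of the two inputs
print(merged[["i", "value_a", "value_b", "_merge"]])
```

Rows appearing only in `a` are flagged `left_only`, rows only in `b` are `right_only`, and shared rows are `both`, which is what makes the result usable as a difference report.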
- ixmp.utils.format_scenario_list(platform, model=None, scenario=None, match=None, default_only=False, as_url=False)[source]
Return a formatted list of TimeSeries on platform.
- Parameters:
platform (Platform) –
model (str, optional) – Model name to restrict results. Passed to scenario_list().
scenario (str, optional) – Scenario name to restrict results. Passed to scenario_list().
match (str, optional) – Regular expression to restrict results. Only results where the model or scenario name matches are returned.
default_only (bool, optional) – Only return TimeSeries where a default version has been set with TimeSeries.set_as_default().
as_url (bool, optional) – Format results as ixmp URLs.
- Returns:
If as_url is False, also include summary information.
- Return type:
- ixmp.utils.logger()[source]
Access global logger.
Deprecated since version 3.3: To control logging from ixmp, instead use the standard logging module to retrieve the “ixmp” logger:
import logging

ixmp_logger = logging.getLogger("ixmp")

# Example: set the level to INFO
ixmp_logger.setLevel(logging.INFO)
- ixmp.utils.maybe_check_out(timeseries, state=None)[source]
Check out timeseries depending on state.
If state is None, then check_out() is called.
- Returns:
- Raises:
ValueError – If timeseries is a Scenario object and has_solution() is True.
See also
- ixmp.utils.maybe_commit(timeseries, condition, message)[source]
Commit timeseries with message if condition is True.
- Returns:
See also
- ixmp.utils.maybe_convert_scalar(obj) DataFrame [source]
Convert obj to pandas.DataFrame.
- Parameters:
obj – Any value returned by Scenario.par(). For a scalar (0-dimensional) parameter, this will be dict.
- Returns:
maybe_convert_scalar() always returns a data frame.
- Return type:
- ixmp.utils.parse_url(url)[source]
Parse url and return Platform and Scenario information.
A URL (Uniform Resource Locator), as the name implies, uniquely identifies a specific scenario and (optionally) version of a model, as well as (optionally) the database in which it is stored. ixmp URLs take forms like:
ixmp://PLATFORM/MODEL/SCENARIO[#VERSION]
MODEL/SCENARIO[#VERSION]
where:
PLATFORM is a configured platform name; see ixmp.config.
MODEL may not contain the forward slash character (‘/’); SCENARIO may contain any number of forward slashes. Both must be supplied.
VERSION is optional but, if supplied, must be an integer.
- Returns:
platform_info (dict) – Keyword argument ‘name’ for the Platform constructor.
scenario_info (dict) – Keyword arguments for a Scenario on the above platform: ‘model’, ‘scenario’ and, optionally, ‘version’.
- Raises:
ValueError – For malformed URLs.
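The rules above can be sketched in pure Python. This is an illustrative re-implementation (not the actual ixmp code), assuming only the two URL forms shown:

```python
import re

def parse_ixmp_url(url: str):
    """Split an ixmp URL into (platform_info, scenario_info) dicts."""
    # Optional ixmp://PLATFORM/ prefix; MODEL may not contain '/';
    # SCENARIO may contain any number of '/'; optional integer #VERSION.
    m = re.fullmatch(
        r"(?:ixmp://(?P<platform>[^/]+)/)?"
        r"(?P<model>[^/]+)/(?P<scenario>.+?)(?:#(?P<version>\d+))?",
        url,
    )
    if m is None:
        raise ValueError(f"malformed URL: {url!r}")
    platform_info = {"name": m["platform"]} if m["platform"] else {}
    scenario_info = {"model": m["model"], "scenario": m["scenario"]}
    if m["version"]:
        scenario_info["version"] = int(m["version"])
    return platform_info, scenario_info
```

For example, parsing "ixmp://local/model name/scen/ario#42" yields a platform name of "local", a model "model name", a scenario "scen/ario" (slashes preserved), and version 42.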
- ixmp.utils.show_versions(file=<_io.TextIOWrapper name='<stdout>' mode='w' encoding='UTF-8'>)[source]
Print information about ixmp and its dependencies to file.
- ixmp.utils.to_iamc_layout(df: DataFrame) DataFrame [source]
Transform df to the IAMC structure/layout.
The returned object has:
Any (Multi)Index levels reset as columns.
Lower-case column names ‘region’, ‘variable’, ‘subannual’, and ‘unit’.
If not present in df, the value ‘Year’ in the ‘subannual’ column.
- Parameters:
df (pandas.DataFrame) – May have a ‘node’ column, which will be renamed to ‘region’.
- Return type:
- Raises:
ValueError – If ‘region’, ‘variable’, or ‘unit’ is not among the column names.
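The three transformations listed above can be sketched with pandas directly. This is an illustrative approximation using a small hypothetical input, not the ixmp implementation itself:

```python
import pandas as pd

# Hypothetical wide-format input with a 'node' index and mixed-case columns
df = pd.DataFrame(
    {"node": ["usa"], "Variable": ["GDP"], "Unit": ["USD"], "2020": [1.0]}
).set_index("node")

out = df.reset_index()  # (Multi)Index levels become columns
# Lower-case the relevant column names; rename 'node' to 'region'
out = out.rename(columns={"Variable": "variable", "Unit": "unit", "node": "region"})
if "subannual" not in out.columns:
    out["subannual"] = "Year"  # default value when the column is absent
```

The resulting frame has lower-case ‘region’, ‘variable’, ‘unit’, and ‘subannual’ columns alongside the year columns, matching the IAMC layout described.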
- ixmp.utils.update_par(scenario, name, data)[source]
Update parameter name in scenario using data, without overwriting.
Only values which do not already appear in the parameter data are added.
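The “without overwriting” behaviour can be illustrated with plain dictionaries. This is a hedged sketch of the merge rule only, not the ixmp implementation (which operates on parameter data frames):

```python
def update_without_overwrite(existing: dict, new: dict) -> dict:
    """Return a copy of `existing` plus only the keys of `new` not already present."""
    result = dict(existing)
    for key, value in new.items():
        if key not in result:  # existing values are never overwritten
            result[key] = value
    return result

# Hypothetical parameter data keyed by dimension tuple
current = {("seattle", "new-york"): 2.5}
incoming = {("seattle", "new-york"): 9.9, ("seattle", "chicago"): 1.7}
updated = update_without_overwrite(current, incoming)
```

Here the existing value 2.5 survives, and only the genuinely new key is added.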
Utilities for documentation
GitHub adapter for sphinx.ext.linkcode.
To use this extension, add it to the extensions setting in the Sphinx configuration file (usually conf.py), and set the required linkcode_github_repo_slug:
extensions = [
    ...,
    "ixmp.utils.sphinx_linkcode_github",
    ...,
]

linkcode_github_repo_slug = "iiasa/ixmp"  # Required
linkcode_github_remote_head = "feature/example"  # Optional
The extension uses GitPython (if installed) or linkcode_github_remote_head (an optional override) to match a local commit to a remote head (~branch name), and construct links like:
https://github.com/{repo_slug}/blob/{remote_head}/path/to/source.py#L123-L456
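The link construction itself is plain string formatting. A minimal sketch of the URL pattern above (illustrative only; the helper name is hypothetical, not part of the extension's API):

```python
def format_github_link(repo_slug: str, remote_head: str, path: str,
                       line_start: int, line_end: int) -> str:
    """Build a GitHub source link following the pattern used by the extension."""
    return (
        f"https://github.com/{repo_slug}/blob/{remote_head}/"
        f"{path}#L{line_start}-L{line_end}"
    )

url = format_github_link("iiasa/ixmp", "main", "ixmp/core.py", 123, 456)
```

The `#L…-L…` fragment is what makes GitHub highlight the exact source lines recorded for each documented object.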
- class ixmp.utils.sphinx_linkcode_github.GitHubLinker[source]
Handler for storing files/line numbers for code objects and formatting links.
- autodoc_process_docstring(app: sphinx.application.Sphinx, what, name: str, obj, options, lines)[source]
Handler for the Sphinx autodoc-process-docstring event. Records the file and source line numbers containing obj.
- config_inited(app: sphinx.application.Sphinx, config)[source]
Handler for the Sphinx config-inited event.
- linkcode_resolve(domain: str, info: dict) Optional[str] [source]
Function for the sphinx.ext.linkcode setting of the same name. Returns URLs for code objects on GitHub, using information stored by autodoc_process_docstring().
- ixmp.utils.sphinx_linkcode_github.find_remote_head(app: sphinx.application.Sphinx) str [source]
Return a name for the remote branch containing the code.
- ixmp.utils.sphinx_linkcode_github.find_remote_head_git(app: sphinx.application.Sphinx) Optional[str] [source]
Use git to identify the name of the remote branch containing the code.
- ixmp.utils.sphinx_linkcode_github.package_base_path(obj) Path [source]
Return the base path of the package containing obj.
- ixmp.utils.sphinx_linkcode_github.setup(app: sphinx.application.Sphinx)[source]
Sphinx extension registration hook.
Utilities for testing
Utilities for testing ixmp.
These include:
pytest hooks, fixtures:
ixmp_cli – A CliRunner object that invokes the ixmp command-line interface.
tmp_env – Return the os.environ dict with the IXMP_DATA variable set.
test_mp – An empty Platform connected to a temporary, in-memory database.
…and assertions:
assert_logs(caplog[, message_or_messages, ...]) – Assert that message_or_messages appear in logs.
Methods for setting up and populating test ixmp databases:
add_test_data(scen)
create_test_platform(tmp_path, data_path, ...) – Create a Platform for testing using specimen files ‘name.*’.
make_dantzig(mp[, solve, quiet]) – Return ixmp.Scenario of Dantzig’s canning/transport problem.
populate_test_platform(platform) – Populate platform with data for testing.
Methods to run and retrieve values from Jupyter notebooks:
run_notebook(nb_path, tmp_path[, env]) – Execute a Jupyter notebook via nbclient and collect output.
get_cell_output(nb, name_or_index[, kind]) – Retrieve a cell from nb according to its metadata name_or_index.
- ixmp.testing.add_random_model_data(scenario, length)[source]
Add a set and parameter with given length to scenario.
The set is named ‘random_set’. The parameter is named ‘random_par’, and has two dimensions indexed by ‘random_set’.
- ixmp.testing.get_cell_output(nb, name_or_index, kind='data')[source]
Retrieve a cell from nb according to its metadata name_or_index:
The Jupyter notebook format allows specifying a document-wide unique ‘name’ metadata attribute for each cell:
https://nbformat.readthedocs.io/en/latest/format_description.html#cell-metadata
Return the cell matching name_or_index if str; or the cell at the int index; or raise ValueError.
- ixmp.testing.make_dantzig(mp: Platform, solve: bool = False, quiet: bool = False) Scenario [source]
Return ixmp.Scenario of Dantzig’s canning/transport problem.
- Parameters:
- Return type:
Scenario
See also
- ixmp.testing.populate_test_platform(platform)[source]
Populate platform with data for testing.
Many of the tests in ixmp.tests.core depend on this set of data. The data consist of:
3 versions of the Dantzig canning/transport Scenario.
Version 2 is the default.
All have HIST_DF and TS_DF as time-series data.
1 version of a TimeSeries with model name ‘Douglas Adams’ and scenario name ‘Hitchhiker’, containing 2 values.
- ixmp.testing.random_model_data(length)[source]
Random (set, parameter) data with at least length elements.
See also
- ixmp.testing.random_ts_data(length)[source]
A pandas.DataFrame of time series data with length rows. Suitable for passage to TimeSeries.add_timeseries().
- ixmp.testing.resource_limit(request)[source]
A fixture that limits Python resources.
See the documentation (pytest --help) for the --resource-limit command-line option that selects (1) the specific resource and (2) the level of the limit. The original limit, if any, is reset after the test function in which the fixture is used.
- ixmp.testing.run_notebook(nb_path, tmp_path, env=None, **kwargs)[source]
Execute a Jupyter notebook via nbclient and collect output.
- Parameters:
nb_path (path-like) – The notebook file to execute.
tmp_path (path-like) – A directory in which to create temporary output.
env (dict-like, optional) – Execution environment for nbclient. Default: os.environ.
kwargs – Keyword arguments for nbclient.NotebookClient. Defaults are set for:
. Defaults are set for:- ”allow_errors”
Default False. If True, the execution always succeeds, and cell output contains exception information rather than code outputs.
- “kernel_version”
Jupyter kernel to use. Default: either “python2” or “python3”, matching the current Python major version.
Warning
Any existing configuration for this kernel on the local system, such as an IPython start-up file, will be executed when the kernel starts. Code that enables GUI features can interfere with run_notebook().
- “timeout”
In seconds; default 10.
- Returns:
nb (nbformat.NotebookNode) – Parsed and executed notebook.
errors (list) – Any execution errors.