Package structure#

reho

REHO - A Decision Support Tool for Renewable Energy Communities. Renewable Energy Hub Optimizer (REHO) is a decision support tool for sustainable urban energy system planning. It is developed by EPFL (Switzerland), within the Industrial Process and Energy Systems Engineering (IPESE) group.

REHO exploits the benefits of two programming languages:

  • AMPL: the core optimization model with the objectives, constraints, modeling equations (energy balance, mass balance, heat cascade, etc.)

  • Python: the data management structure used for initialization of the model, execution of the optimization, and results retrieval. All the input and output data is passed to the AMPL model through amplpy, the Python API for AMPL.

Fig. 5 Diagram of REHO architecture#

Fig. 5 illustrates the REHO architecture, which can be divided into three parts:

  • Preprocessing: generation of end-use demand energy profiles and characterization of equipment and resources

  • Optimization: MILP Dantzig-Wolfe decomposition algorithm with the master problem (MP) and subproblems (SPs)

  • Postprocessing: list of energy system configurations and related KPIs

data/#

Directory for data-related files.

  • elcom/

  • emissions/

  • infrastructure/

  • mobility/

  • QBuildings/

  • SIA/

  • skydome/

model/#

Directory for model-related code.

ampl_model/#

Core of the optimization model (model objectives, constraints, modelling equations), containing all AMPL files:

  • units/ contains the model files specific to each technology that can be used in the system. Three subfolders (district_units, h2_units, and storage) are used for easier classification.

  • data_stream.dat contains values that specify the operating temperatures of streams and energy conversion units.

  • master_problem.mod contains the modeling of the problem for the decomposition approach.

  • sub_problem.mod contains the modelling of the energy system, with the declaration of all parameters and variables and the problem constraints (energy balance, mass balance, heat cascade, etc.). This is the core of the MILP model.

  • scenario.mod contains the optimization objective functions, the epsilon constraints, and some specific constraints that can be enabled to model a particular scenario.

postprocessing/#

Directory where the output of the optimization from the AMPL model is extracted and processed to give a reho.results dictionary.

KPIs.py#

Calculates the KPIs resulting from the optimization.

reho.model.postprocessing.KPIs.build_df_profiles_house(df_Results, infrastructure)#

Builds hourly profiles for demand and consumption of units and buildings.

reho.model.postprocessing.KPIs.build_df_annual(df_Results, df_profiles_house, infrastructure, df_Time)#

Transforms profiles into annual values, converts to MWh, and inserts additional values (costs, net resource exchanges).

Parameters:
  • df_Results (df) – results of a scenario

  • df_profiles_house (df)

  • infrastructure (df)

  • df_Time (df)

Returns:

Annual parameters for each building and for the network.

reho.model.postprocessing.KPIs.temperature_profile(df_Results, daily_averaging=False)#

Returns the indoor temperature profiles, one column per building.

Parameters:
  • df_Results (df) – pd.DataFrame of a scenario

  • daily_averaging (bool) – to average over days

Returns:

df_Tin
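
Example

A minimal usage sketch, assuming df_Results holds the results of a single scenario taken from the reho.results dictionary and reho.infrastructure is the corresponding Infrastructure object (both are assumptions here):

>>> df_profiles_house = build_df_profiles_house(df_Results, reho.infrastructure)
>>> df_Tin = temperature_profile(df_Results, daily_averaging=True)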

sensitivity_analysis.py#

Performs a sensitivity analysis on the optimization.

class reho.model.postprocessing.sensitivity_analysis.SensitivityAnalysis(reho, SA_type, sampling_parameters=0, upscaling_factor=1)#

Performs a sensitivity analysis (SA): sampling, solving, storing all optimizations results and the sensitivity of each tested parameter.

Parameters:
  • reho (reho object) – Model of the district, obtained via the REHO class.

  • SA_type (str) – Type of SA, choose between ‘Morris’, ‘Sobol’, and ‘Monte_Carlo’.

  • sampling_parameters (int) – Number of trajectories for the sampling of the solution space.

  • upscaling_factor (int) – To represent the effective ERA of the typical districts.

Notes

The framework is designed to be performed using TOTEX minimization but can easily be modified: simply change the objective function in the REHO object initialization, and adapt the calculation for objective_values in extract_results().
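
Example

A hedged sketch of the workflow (the SA_parameters entry and its bounds are illustrative placeholders, not parameters shipped with REHO):

>>> SA = SensitivityAnalysis(reho, SA_type='Morris', sampling_parameters=4)
>>> SA.build_SA(SA_parameters={'Elec_retail': [0.15, 0.45]})
>>> SA.run_SA(save_inter_nb_iter=25)
>>> SA.calculate_SA()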

build_SA(unit_parameter=['Cost_inv1', 'Cost_inv2'], SA_parameters={})#
  • Generates the list of parameters for the SA, their values, and their type of variation range

  • Generates the problem of the SA, i.e. defines the parameters and their bounds

  • Generates the sampling scheme of the SA

Parameters:
  • unit_parameter (list)

  • SA_parameters (dict)

Returns:

  • parameter (dict) – Parameters

  • problem (dict) – Parameters with their bounds

  • sampling (array) – Sampling values

run_SA(save_inter=True, save_inter_nb_iter=50, save_time_opt=True, intermediate_start=0)#

Launches all optimizations of the SA and stores their results.

Parameters:
  • save_inter (boolean) – Enables intermediate saving

  • save_inter_nb_iter (int) – Step at which the intermediate save is done

  • save_time_opt (boolean) – Creates a .txt file and writes the time for each optimization

  • intermediate_start (int) – Starts the SA from a specific sampling point

Returns:

  • SA_results (dict) – Contains the number of the optimization and a dictionary regrouping all main results of the optimizations

  • objective_values (list) – Values of the objective function for each optimization

calculate_SA()#

Computes the sensitivity indices with the objective values and the problem.

write_results.py#

Extracts the results from the AMPL model and converts them to a Python dictionary and pandas dataframes.

preprocessing/#

Directory where the data are prepared as input for the AMPL optimization.

buildings_profiles.py#

Generates the buildings profiles for domestic hot water (DHW) demand, domestic electricity demand, internal heat gains, and solar gains.

reho.model.preprocessing.buildings_profiles.reference_temperature_profile(parameters_to_ampl, cluster)#

Returns a reference temperature timeseries.

reho.model.preprocessing.buildings_profiles.eud_profiles(buildings_data, cluster, df_SIA_380, df_SIA_2024, df_Timestamp, include_stochasticity=False, sd_stochasticity=None, use_custom_profiles=False)#

Generates building-specific profiles for internal heat gains, DHW demand, and domestic electricity demand based on SIA 2024 norms.

The SIA profiles are daily profiles with a coefficient attributed to each month. This function extends the profiles to the periods used, according to the building’s affectation.

Parameters:
  • buildings_data (dict) – Buildings data from QBuildingsReader class.

  • df_SIA_380 (pd.DataFrame) – SIA norms.

  • df_SIA_2024 (pd.DataFrame) – SIA norms.

  • df_Timestamp (pd.DataFrame) – Information for clustering results, used to know the periods and period duration.

  • cluster (dict) – Clustering parameters.

  • include_stochasticity (bool) – Includes variability in the standard values given by the SIA profiles (see List of the available methods in REHO).

  • sd_stochasticity (list) – Parameters of the stochasticity: first value is the standard deviation on the peak demand, second value is the standard deviation on the time-shift (see List of the available methods in REHO).

  • use_custom_profiles (dict) – Allows custom profiles to be provided (see List of the available methods in REHO).

Returns:

  • np.array – Heat gains from people

  • np.array – DHW demand

  • np.array – Electricity demand

Notes

  • One building can have several affectations. In that case, the building is divided according to the share of ERA per affectation, and the profiles are summed.

  • To use custom profiles, use csv files with 8760 rows. The names of the columns should be the same as the building keys in buildings_data.

Caution

When using custom electricity profiles, the heat gains from electricity appliances are estimated through a conversion factor conv_heat_factor (default value = 70%).

Examples

>>> my_profiles = {'electricity': 'my_folder/electricity.csv'}
>>> file_id = 'Geneva_10_24_T_I_W'
>>> cluster = {'Location': 'Bruxelles', 'Attributes': ['T', 'I', 'W'], 'Periods': 10, 'PeriodDuration': 24}
>>> people_gain, eud_dhw, eud_elec = eud_profiles(buildings_data, cluster, use_custom_profiles=my_profiles)
reho.model.preprocessing.buildings_profiles.apply_stochasticity(df_profiles, scale, SF)#

Returns the daily profiles where an intensity variation (scale) and time shift factor (SF) have been applied.

reho.model.preprocessing.buildings_profiles.create_random_var(sd_amplitude, sd_timeshift)#

Creates an array of random variables for the use of apply_stochasticity.

Notes

The array is hard-coded to be of dimension [1,5], as it applies on the daily profiles for electricity demand, DHW demand, occupancy, electricity heat gains, and heat gains from people.

reho.model.preprocessing.buildings_profiles.annual_to_typical(cluster, annual_file, df_Timestamp, typical_file=None)#

From an annual profile (8760 values), extracts the values corresponding to the typical days.

Parameters:
  • cluster (dict) – Dictionary containing ‘PeriodDuration’ indicating number of hours per typical day.

  • annual_file (str) – Path to annual CSV file containing at least a ‘time(UTC)’ column.

  • df_Timestamp (pd.DataFrame) – DataFrame containing at least a ‘Date’ column indicating typical day dates.

  • typical_file (str, optional) – Path to save the extracted typical day CSV file.

Returns:

df_typical – DataFrame indexed by [‘Period’, ‘Hour’] containing typical day data.

Return type:

pd.DataFrame
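
Example

A hedged usage sketch (the file paths are placeholders):

>>> df_typical = annual_to_typical(cluster, annual_file='my_annual_profile.csv', df_Timestamp=df_Timestamp, typical_file='my_typical_profile.csv')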

reho.model.preprocessing.buildings_profiles.solar_gains_profile(buildings_data, sia_data, local_data)#

Computes the solar heat gains from the irradiance. Heat gains depend on the facade surfaces and on a window fraction (obtained from SIA 2024).

Parameters:
  • buildings_data (dict) – Building-specific data.

  • sia_data (dict) – SIA norms.

  • local_data (dict) – Location-specific data.

Returns:

Solar gains for each timestep.

Return type:

np.array

clustering.py#

Clustering algorithm for input data reduction.

class reho.model.preprocessing.clustering.Clustering(data, nb_clusters=None, period_duration=24, options=None)#

Executes a clustering for each number of clusters within a specified interval (nb_clusters), and selects the optimal one according to the MAPE criterion (Mean Absolute Percentage Error).

Parameters:
  • data (pd.DataFrame) – Annual weather data

  • nb_clusters (list) – Interval for the possible number of clusters.
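
Example

A minimal instantiation sketch, assuming df_annual is a pd.DataFrame of annual weather data (in practice the clustering is typically driven by generate_weather_data):

>>> cl = Clustering(data=df_annual, nb_clusters=[4, 12], period_duration=24)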

electricity_prices.py#

Queries the electricity retail and injection prices, from the ELCOM database and pvtarif.ch database respectively.

reho.model.preprocessing.electricity_prices.get_prices_from_elcom_by_canton(year=2024, canton=None, category=None, tva=None, export_path=None)#

Queries the electricity retail prices from the ELCOM database. Year, canton and consumer category can be specified. TVA is applied by default or can be adapted as a scaling factor.

Parameters:
  • year (int) – Year from which the electricity prices must be retrieved.

  • canton (str/int) – Canton from which the electricity prices must be retrieved. Can be in form of canton ID or canton name.

  • category (str) – Category from which the electricity prices must be retrieved.

  • tva (bool) – Whether the tva should be included in the final results or not.

  • export_path (str) – If given, exports the queried prices to that path.

Returns:

Electricity price and its components.

Return type:

pd.DataFrame

See also

get_prices_from_elcom_by_city

To retrieve the ELCOM prices by city.

get_injection_prices

To obtain the injection prices instead.

Notes

  • A QBuildingsReader object can be passed to ‘canton’ for automatic localization.

  • List and description of the available categories are available at the ELCOM website.

  • The TVA on electricity changed in 2024, from 7.7% to 8.1%.

Examples

>>> prices = electricity_prices.get_prices_from_elcom_by_canton(canton='Geneva', category='H4')
>>> prices
    Year  Canton Category  ...  community_fees  aidfee  finalcosts
0  2024  Geneva       H4  ...         1.42824     2.3   30.925972
[1 rows x 9 columns]
reho.model.preprocessing.electricity_prices.get_prices_from_elcom_by_city(year=2024, city=None, category=None, tva=None, export_path=None)#

Queries the electricity retail prices from the ELCOM database by municipality.

Year, municipality and consumer category can be specified. TVA is applied by default or can be adapted as a scaling factor.

Parameters:
  • year (int) – Year from which the electricity prices must be retrieved.

  • city (str/int) – Municipality from which the electricity prices must be retrieved. Can be in form of city ID or city name. If not given, queries the ELCOM database for the prices in every municipality.

  • category (str) – Category from which the electricity prices must be retrieved. If not given, prices are given for every consumer category.

  • tva (float) – Scaling factor for the resulting prices, initialized as the normal TVA.

  • export_path (str) – If given, exports the queried prices to that path.

Returns:

Electricity price and its components.

Return type:

pd.DataFrame

See also

get_prices_from_elcom_by_canton

To retrieve the ELCOM prices by canton.

get_injection_prices

To obtain the injection prices instead.

Notes

  • A QBuildingsReader object can be passed to ‘city’ for automatic localization.

  • List and description of the available categories are available at the ELCOM website.

  • The TVA on electricity changed in 2024, from 7.7% to 8.1%.

Examples

>>> prices = electricity_prices.get_prices_from_elcom_by_city(city='Geneva', category='H4')
>>> prices
    Year  City   Category  ...  community_fees  aidfee  finalcosts
0  2024  Geneva       H4  ...         1.42824     2.3   30.925972
[1 rows x 9 columns]
reho.model.preprocessing.electricity_prices.get_injection_prices(city=None, year=2024, category=None, tva=None)#

Retrieves injection prices from the pvtarif.ch API.

The year, municipality and consumer category can be given to query at a more precise level. TVA is applied by default or can be adapted as a scaling factor.

Parameters:
  • city (str or None, optional) – The city for which to retrieve injection prices. If None, prices for all cities will be retrieved.

  • year (int, optional) – The year for which to retrieve injection prices. Default is 2024.

  • category (str or None, optional) – The energy category for which to retrieve injection prices. If None, prices for the first power category are given.

  • tva (float or None, optional) – The Value Added Tax (TVA) multiplier to apply to the total costs. If None, the default TVA value is used.

Returns:

Contains injection prices information for each city.

Return type:

pd.DataFrame

Raises:

ExecutionError – Raised if there is an issue with the HTTP request to the PVTarif API.

See also

get_prices_from_elcom_by_city

To retrieve the ELCOM prices by city.

get_prices_from_elcom_by_canton

To retrieve the ELCOM prices by canton.

Notes

  • The data are not reliably available before 2017.

  • The category corresponds to the one from ELCOM.

  • The TVA on electricity changed in 2024, from 7.7% to 8.1%.

Example

>>> retribution_prices = get_injection_prices(year=2023, city='Basel')
>>> retribution_prices.columns
Index(['id_city', 'municipality', 'id_operator', 'operator', 'federal_tariff',
   'origin_bonus', 'totalcosts', 'finalcosts'],
  dtype='object')
>>> retribution_prices
    id_city municipality  id_operator  ... origin_bonus  totalcosts  finalcosts
1914     2701        Basel          624  ...          0.0        13.0        14.0
[1 rows x 8 columns]
reho.model.preprocessing.electricity_prices.get_electricity_prices(city, year=2024, category=None, tva=None)#

Builds a DataFrame with the electricity prices (demand and supply) ready to use for REHO.

It calls get_prices_from_elcom_by_city and get_injection_prices and merges the two.

Parameters:
  • year (int) – Year from which the electricity prices must be retrieved.

  • city (str/int) – Municipality from which the electricity prices must be retrieved. Can be in form of city ID or city name. If not given, queries the ELCOM database for the prices in every municipality.

  • category (str) – Category from which the electricity prices must be retrieved. If not given, prices are given for every consumer category.

  • tva (float) – Scaling factor for the resulting prices, initialized as the normal TVA.

Returns:

Prices for the given parameters, with columns [‘Year’, ‘City’, ‘Provider’, ‘Category’, ‘Elec_demand_cts_kWh’, ‘Elec_supply_cts_kWh’].

Return type:

pd.DataFrame

See also

get_prices_from_elcom_by_city

To retrieve the ELCOM prices by city.

get_injection_prices

To obtain the injection prices instead.

Examples

>>> get_electricity_prices(year=2017, city='Genève')
    Year    City  ... Elec_demand_cts_kWh Elec_supply_cts_kWh
0   2017  Genève  ...           22.216512               12.92
1   2017  Genève  ...           21.887440               12.92
2   2017  Genève  ...           19.197310               12.92
3   2017  Genève  ...           21.596772               12.92
4   2017  Genève  ...           19.367290               12.92
5   2017  Genève  ...           16.895382               12.92
6   2017  Genève  ...           19.316962               12.92
7   2017  Genève  ...           21.598712               12.92
8   2017  Genève  ...           23.155285               12.92
9   2017  Genève  ...           23.548888               12.92
10  2017  Genève  ...           21.866694               12.92
11  2017  Genève  ...           20.781146               12.92
12  2017  Genève  ...           22.345242               12.92
13  2017  Genève  ...           17.155742               12.92
14  2017  Genève  ...           15.900062               12.92
[15 rows x 6 columns]

emissions_parser.py#

Characterizes the CO2 emissions related to electricity generated from the grid.

mobility_generator.py#

Processes data for parameters related to the Mobility Layer.

reho.model.preprocessing.mobility_generator.generate_mobility_parameters(cluster, parameters, infrastructure, modal_split)#

This function initializes (almost) all the necessary parameters to run the mobility sector in REHO. In addition to the given parameters, it reads data from the file dailyprofiles.csv.

Parameters:
  • cluster (dict) – used to characterize the periods (p, t)

  • parameters (dictionary) – Values related to mobility are extracted from the parameters, namely DailyDist, Mode_Speed, and Population. Population is a float, DailyDist a dict of floats, and Mode_Speed a dictionary given by the user in the scenario initialization; it can contain custom values for some modes only, while the others keep their default values.

  • infrastructure (list) – a list of all infrastructure units providing Mobility, plus “Public_transport”, which corresponds to Network_supply[‘Mobility’]

  • modal_split (df) – a dataframe of the modal split for each category of distance

Returns:

param_output (dict) – a dict of dataframes containing the profiles for each parameter.

Caution

The default values in this function are a hardcoded copy of the parameters DailyDist and Population in mobility.mod.
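
Example

A hedged sketch of how the inputs described above might be assembled (unit names, distance-category keys, and numerical values are illustrative assumptions):

>>> parameters = {'Population': 9.0,
...               'DailyDist': {'short': 25.0, 'long': 12.0},
...               'Mode_Speed': {'EV_district': 40.0}}
>>> units = ['EV_district', 'Bike_district', 'ICE_district', 'Public_transport']
>>> mobility_params = generate_mobility_parameters(cluster, parameters, units, modal_split)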

reho.model.preprocessing.mobility_generator.get_mobility_demand(profiles_input, timestamp, days_mapping, DailyDist, Population)#

Formatting of the parameters Domestic_energy_pkm and Domestic_energy

Parameters:
  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

  • DailyDist (float)

  • Population (float)

Returns:

  • demand_pkm (df) – the param Domestic_energy_pkm[dist,p,t] by categories of distance

  • mobility_demand (df) – the param Domestic_energy[Mobility,p,t]

reho.model.preprocessing.mobility_generator.get_daily_profile(profiles_input, timestamp, days_mapping, transportunits)#

Formatting of the parameter Daily_Profile[u,p,t], used for example for the Bikes and ICE transport units. Either a profile is declared in the file dailyprofiles.csv, or the default profile taken is equal to the daily demand profile of a given day (demwdy_def and demwnd_def).

Parameters:
  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

  • transportunits (list)

Returns:

daily_profile – the parameter Daily_Profile[u,p,t]

Return type:

df

reho.model.preprocessing.mobility_generator.get_EV_charging(units, timestamp, profiles_input, days_mapping)#

Formatting of the parameter EV_charging_profile[u,p,t]. Data is taken from dailyprofiles.csv (columns EV_cpfwnd and EV_cpfwdy). Each Unit (from UnitOfType[EV]) can be provided with a personalized profile; otherwise the default value EV_cpfxxx is taken.

Parameters:
  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

  • units (df) – dataframe from district_units.csv

Returns:

EV_charging_profile – the parameter EV_charging_profile[u,p,t]

Return type:

df

reho.model.preprocessing.mobility_generator.get_EV_plugged_out(units, timestamp, profiles_input, days_mapping)#

Formatting of the parameter EV_plugged_out[u,p,t]. Data is taken from dailyprofiles.csv (columns EV_outwnd and EV_outwdy). Each Unit (from UnitOfType[EV]) can be provided with a personalized profile; otherwise the default value EV_outxxx is taken.

Parameters:
  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

  • units (df) – dataframe from district_units.csv

Returns:

EV_plugged_out – the parameter EV_plugged_out[u,p,t]

Return type:

df

reho.model.preprocessing.mobility_generator.get_activity_profile(units, timestamp, profiles_input, days_mapping)#

Formatting of the parameter EV_activity[a,u,p,t]. Data is taken from dailyprofiles.csv (columns EV_aAAddd, with AA the activity label and ddd the type of day).

Parameters:
  • units (df) – dataframe from district_units.csv

  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

Returns:

activity_profile – the parameter EV_activity[a,u,p,t]

Return type:

df

reho.model.preprocessing.mobility_generator.get_Ebike_charging(units, timestamp, profiles_input, days_mapping)#

Formatting of the parameter EBike_charging_profile[u,p,t]. Data is taken from dailyprofiles.csv (columns EBike_cpfddd, with ddd the type of day).

Parameters:
  • units (df) – dataframe from district_units.csv

  • profiles_input (df) – a dataframe of 24h profile data.

  • timestamp (df) – from the reho.cluster, to get the type of day (weekday, weekend) for each Period.

  • days_mapping (dict) – mapping between the labels of profile_input and timestamp

Returns:

EBike_charging_profile – the parameter EBike_charging_profile[u,p,t]

Return type:

df

reho.model.preprocessing.mobility_generator.get_mode_speed(units, mode_speed_custom)#

Formatting of the parameter Mode_speed[u]. Default values are taken from the OFS microcensus report.

Parameters:
  • units (df) – dataframe from district_units.csv

  • mode_speed_custom (df or dict) – customized speed given by the user.

Returns:

mode_speed – the parameter Mode_Speed[u]

Return type:

df

reho.model.preprocessing.mobility_generator.get_min_share(modal_split, modes, transportunits)#

Formatting of the parameters min_share[u,dist] and min_share_modes[u,dist].

Parameters:
  • modal_split (df) – dataframe with columns for the categories of distance and rows for the units and modes.

  • modes (list) – list of modes (usually cars, PT and MD)

  • transportunits (list) – list of transport units.

Returns:

  • minshare (df) – the parameter min_share[u,dist]

  • minshare_modes (df) – the parameter min_share_modes[u,dist]

reho.model.preprocessing.mobility_generator.get_max_share(modal_split, modes, transportunits)#

Formatting of the parameters max_share[u,dist] and max_share_modes[u,dist].

Parameters:
  • modal_split (df) – dataframe with columns for the categories of distance and rows for the units and modes.

  • modes (list) – list of modes (usually cars, PT and MD)

  • transportunits (list) – list of transport units.

Returns:

  • maxshare (df) – the parameter max_share[u,dist]

  • maxshare_modes (df) – the parameter max_share_modes[u,dist]

reho.model.preprocessing.mobility_generator.generate_transport_units_sets(transportunits)#

Creates the sets transport_Units_MD and transport_Units_cars, which are subsets of the available transport units for soft mobility and cars respectively. These are used later to constrain the maximum and minimum shares of public transport, soft mobility, and cars in the total mobility supply.

Parameters:

transportunits (dict of arrays) – Each key of the dict is a UnitOfType label containing a list of all the Unit names; typically self.infrastructure.UnitsOfType.

Returns:

  • transport_Units_MD (set)

  • transport_Units_cars (set)

reho.model.preprocessing.mobility_generator.rho_param(ext_districts, rho, activities=['work', 'leisure', 'travel'])#

This function is used in the iterative scenario to iteratively optimize multiple districts with EVs able to charge in the different districts.

It computes the parameter S from the share of activities S(a) in each district. For each activity, if a distribution is provided through the parameter rho, then S = rho_d / sum over ext_districts of rho; otherwise, an equal distribution over the districts is assumed and S = 1 / len(ext_districts).

Parameters:

ext_districts

Returns:

share – dataframe with index (activity, district) containing the distribution across all districts for each activity.

Return type:

dataframe

reho.model.preprocessing.mobility_generator.compute_iterative_parameters(reho_models, Scn_ID, iter, district_parameters, only_prices=False)#

This function is used in the iterative scenario to iteratively optimize multiple districts with EVs able to charge in the different districts. The load is expressed using the corrective parameter f.

Parameters:
  • reho_models (dict of reho objects) – Dictionary of reho object, one for each district

  • Scn_ID (str or int) – label for the scenario

  • iter (int) – iteration of the city scale optimization

  • district_parameters (dict of dict) – Each key of the dict refers to a district d. Used to extract the scale parameter f : district_parameters[d][‘f’]

  • only_prices (bool) – If False, only returns the parameters Cost_demand_ext and Cost_supply_ext; if True, additionally returns the parameter EV_supply_ext.

Returns:

parameters – For each district d, returns a dict of the parameters to be used as input in the next optimization. Parameters include Cost_demand_ext, Cost_supply_ext, and EV_supply_ext.

Return type:

dict of dict

reho.model.preprocessing.mobility_generator.linear_split_bin_table(df, col, lowerbound=None, upperbound=None)#

Cuts off a discrete bin distribution data series at the desired bounds. The first and last bins are calculated proportionally to the size of the bin.

Parameters:
  • df (dataframe) – data

  • col (str) – the columns of the df on which the split operation is applied

  • lowerbound (float or None)

  • upperbound (float or None)

reho.model.preprocessing.mobility_generator.mobility_demand_from_WP1data(pkm_demand, max_dist=70, nbins=1, modalwindow=0.01, share_cars=None, share_EV_infleet=None)#

This function computes mobility-related parameters from data tables provided by WP1 (OFS data). Computed parameters include DailyDist and the modal_split dataframe.

Parameters:
  • pkm_demand (float) – Total number of km travelled/day/cap

  • max_dist (float) – trip length cutoff

  • nbins (int) – number of categories of distance

  • modal_window (float) – delta between max and min share bounds is modal_window*2

  • share_cars (float in [0,1]) – modifies the modal shares of PT and MD in consequence.

  • share_EV_infleet (float in [0,1]) – the share of EVs in the car fleet

Returns:

  • DailyDist (dict)

  • modal_split (df for reho.modal_split)

local_data.py#

Handles data specific to the location.

reho.model.preprocessing.local_data.return_local_data(cluster, qbuildings_data)#

Retrieves the data (weather and carbon emissions) corresponding to the buildings’ location.

Parameters:
  • cluster (dict) – Defines location of the buildings, and clustering attributes for the data reduction process.

  • qbuildings_data (dict) – Buildings characterization

Returns:

  • Cluster (dict) to identify the location and clustering attributes

  • File_ID (string) to identify the location and clustering attributes

  • T_ext (np.array) to represent the external temperature for typical days

  • Irr (np.array) to represent the solar irradiance for typical days

  • df_Timestamp (pd.DataFrame) to represent the timestamps for the typical days

Return type:

dict

QBuildings.py#

Handles data for buildings characterization.

class reho.model.preprocessing.QBuildings.QBuildingsReader(load_facades=False, load_roofs=False)#

Handles and prepares the data related to buildings.

These usually come from the QBuildings database. However, one can use data from a CSV file, in which case the column names should correspond to the QBuildings ones, described in Processed QBuildings tables.

Parameters:
  • load_facades (bool) – Whether the facades data should be added.

  • load_roofs (bool) – Whether the roofs data should be added.

establish_connection(db)#

Establishes the connection with one of the QBuildings databases.

Parameters:

db (str) – Name of the database to which we want to connect

read_csv(buildings_filename='data/buildings.csv', nb_buildings=None, roofs_filename='data/roofs.csv', facades_filename='data/facades.csv')#

Reads buildings-related data from CSV files and prepares it for the REHO model.

If not all the buildings from the file should be extracted, one can specify a number of buildings. The fields from the files are translated to the corresponding ones used in REHO.

Parameters:
  • buildings_filename (str) – The filename of the CSV file containing buildings data.

  • nb_buildings (int, optional) – The number of buildings to consider. If not provided, all buildings in the file are considered.

  • roofs_filename (str, optional) – The filename of the CSV file containing roofs data.

  • facades_filename (str, optional) – The filename of the CSV file containing facades data.

Returns:

A dictionary containing the prepared data for the REHO model, including buildings, facades, roofs, and shadows if roofs and facades are loaded.

Return type:

dict

Notes

  • If nb_buildings is not provided, all buildings in the ‘buildings’ data are considered.

  • If load_roofs = True, roofs_filename must be provided; otherwise it is not used. The same goes for the facades.

Example

>>> from reho.model.reho import *
>>> reader = QBuildingsReader(load_roofs=True)
>>> qbuildings_data = reader.read_csv("buildings.csv", roofs_filename="roofs.csv", nb_buildings=7)
>>> qbuildings_data['buildings_data'].keys()
dict_keys(['Building1', 'Building2', 'Building3'])
>>> qbuildings_data['buildings_data']['Building1'].keys()
dict_keys(['id_class', 'ratio', 'status', 'ERA', 'SolarRoofArea', 'area_facade_m2', 'height_m', 'U_h', 'HeatCapacity', 'T_comfort_min_0', 'Th_supply_0', 'Th_return_0', 'Tc_supply_0', 'Tc_return_0', 'x', 'y', 'z', 'geometry', 'transformer', 'id_building', 'egid', 'period', 'n_p', 'energy_heating_signature_kWh_y', 'energy_cooling_signature_kWh_y', 'energy_hotwater_signature_kWh_y', 'energy_el_kWh_y'])
read_db(district_boundary='transformers', district_id=None, nb_buildings=None, egid=None, to_csv=False)#

Reads the database and extracts the relevant buildings data. If only some buildings from the district_id need to be extracted, you can specify the desired number of buildings or, if the EGIDs are known, provide a list of EGIDs. The fields from the database are translated to the nomenclature used in REHO.

Parameters:
  • district_boundary (str) – The boundary of the district. It can be either ‘transformers’ or ‘geo_girec’. By default, a district corresponds to a LV transformer area as defined in the QBuildings database.

  • district_id (int or str) – ID or name of the district where the buildings lie.

  • nb_buildings (int) – Number of buildings to select

  • egid (list) – To specify a list of buildings by their EGIDs

  • to_csv (bool) – To export the data to a CSV file

Returns:

A dictionary that contains the qbuildings data. The default has only one key buildings_data with a dictionary of buildings, with their fields and corresponding values.

Return type:

dict

Notes

  • The use of this function requires the previous creation of a QBuildingsReader and the use of establish_connection('Suisse').

  • EGIDs are the unique federal building identifiers used in Switzerland. One can find the EGID of a given address at the RegBL.

  • If load_roofs = True, the roofs are also returned in the dictionary, as a DataFrame under the key roofs_data.

  • If load_facades = True, the facades and the shadows are also returned in the dictionary, as DataFrames under the keys facades_data and shadows_data.

Examples

>>> from reho.model.reho import *
>>> reader = QBuildingsReader(load_roofs=True)
>>> reader.establish_connection('Suisse')
>>> qbuildings_data = reader.read_db(district_id=3658, egid=[954117])
>>> qbuildings_data['buildings_data']
{'buildings_data': {'Building1': {'id_class': 'I', 'ratio': '1.0', 'status': "['existing', 'existing', 'existing']", 'ERA': 1396.0, 'SolarRoofArea': 1121.8206745917826, 'area_facade_m2': 848.6771960464813, 'height_m': 9.211343577064236, 'U_h': 0.00152, 'HeatCapacity': 120.29999999999991, 'T_comfort_min_0': 20.0, 'Th_supply_0': 65.0, 'Th_return_0': 50.0, 'Tc_supply_0': 12.0, 'Tc_return_0': 17.0, 'x': 2592703.9673297284, 'y': 1120087.7339999992, 'z': 572.4461527539248, 'geometry': <POLYGON ((2592684.383 1120074.623, 2592683.644 1120075.443, 2592679.083 112...>, 'transformer': 3658, 'id_building': '40214', 'egid': '954117', 'period': '1981-1990', 'n_p': 34.9, 'energy_heating_signature_kWh_y': 111855.52745599969, 'energy_cooling_signature_kWh_y': 0.0, 'energy_hotwater_signature_kWh_y': 4562.903646729638, 'energy_el_kWh_y': 39088.0}}
>>> qbuildings_data['roofs_data']
    TILT  ...                                           geometry
0     26  ...  MULTIPOLYGON (((2592819.164 1120187.216, 25928...
1     25  ...  MULTIPOLYGON (((2592832.585 1120154.503, 25928...
2     25  ...  MULTIPOLYGON (((2592819.164 1120187.216, 25928...
3     26  ...  MULTIPOLYGON (((2592824.929 1120157.956, 25928...
0     19  ...  MULTIPOLYGON (((2592378.668 1120324.589, 25923...
..   ...  ...                                                ...
25     0  ...  MULTIPOLYGON (((2592872.699 1120127.178, 25928...
26     0  ...  MULTIPOLYGON (((2592917.016 1120132.965, 25929...
27    28  ...  MULTIPOLYGON (((2592891.248 1120129.691, 25928...
28    26  ...  MULTIPOLYGON (((2592901.604 1120125.591, 25929...
29    27  ...  MULTIPOLYGON (((2592887.725 1120119.181, 25928...
[252 rows x 6 columns]
reho.model.preprocessing.QBuildings.read_geometry(df)#

Avoids issues with the geometry field when reading data from a CSV file.

sia_parser.py#

Collects data from the SIA Swiss norms, which are used to distinguish between eight different building types in their usage and behavior.

reho.model.preprocessing.sia_parser.daily_profiles_with_monthly_deviation(status, rooms, date, df)#

Returns daily profiles for electricity demand, DHW demand, occupancy, electricity heat gains, and heat gains from people. The profiles are based on the SIA norms and vary according to the building specifications (rooms, renovation status) and the date (weekday, month).

skydome.py#

Generates a skydome decomposition into patches for PV orientation.

weather.py#

Generates the meteorological data (temperature and solar irradiance).

reho.model.preprocessing.weather.get_weather_data(qbuildings_data)#

Using the pvlib library, connects to the PVGIS database to extract the weather data based on the building’s coordinates.

reho.model.preprocessing.weather.read_custom_weather(path_to_weather_file)#

From the current directory, looks for a custom weather file. This file should be a .csv with the same structure as the examples provided in reho/scripts/examples/data/profiles/.

reho.model.preprocessing.weather.generate_weather_data(cluster, qbuildings_data, clustering_directory)#

This function is called if the clustered weather data specified by File_ID do not exist yet. Applies the clustering method (see Clustering class) and writes several files as output.

Parameters:
  • cluster (dict) – Contains a ‘Location’ (str), some ‘Attributes’ (list, among ‘T’ (temperature), ‘I’ (irradiance), ‘W’ (weekday) and ‘E’ (emissions)), a number of periods ‘Periods’ (int) and a ‘PeriodDuration’ (int).

  • qbuildings_data (dict) – Input data for the buildings.

  • clustering_directory (str) – Path to the directory where the clustering files will be saved.

Notes

Caution

For Alpine regions, i.e. locations characterized by mountainous terrain and significant microclimatic variability, PVGIS databases (ERA5 and SARAH3) can be problematic. Their coarse spatial resolution may average temperatures from higher altitudes nearby, causing systematic underestimation. For case studies in Switzerland, recommended weather databases are MeteoSwiss and Meteonorm, providing a more accurate representation of the local climate. Please refer to ‘custom_weather’ method for instructions.

reho.model.preprocessing.weather.write_weather_files(clustering_directory, attributes, values_cluster, index_inter)#

Writes the clustering results computed from generate_weather_data as CSV files in folder clustering_directory.

Parameters:
  • clustering_directory (str) – Path to the directory where clustering files will be saved.

  • attributes (list) – Contains the clustering attributes, among ‘Text’, ‘Irr’, ‘Weekday’, and ‘Emissions’.

  • values_cluster (pd.DataFrame) – Produced by generate_weather_data.

  • index_inter (pd.DataFrame) – Produced by generate_weather_data.

Notes

  • Files generated:
    • ‘typical_data.csv’ (contains ‘Text’, ‘Irr’, ‘Weekday’)

    • ‘frequency.csv’

    • ‘index.csv’

    • ‘timestamp.csv’

reho.model.preprocessing.weather.get_cluster_file_ID(cluster)#

Gets the weather file ID that corresponds to the specifications provided in the reho initialization.

The file ID is built by concatenating Location_Periods_PeriodDuration_Attributes. For instance, cluster = {'Location': 'Geneva', 'Attributes': ['T', 'I', 'W'], 'Periods': 10, 'PeriodDuration': 24} will yield File_ID = 'Geneva_10_24_T_I_W'.

Parameters:

cluster (dict) – Contains a ‘Location’ (str), some ‘Attributes’ (list, among ‘T’ (temperature), ‘I’ (irradiance), ‘W’ (weekday) and ‘E’ (emissions)), a number of periods ‘Periods’ (int) and a ‘PeriodDuration’ (int).

Returns:

A literal representation to identify the location and clustering attributes.

Return type:

str
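
Example

Following the concatenation rule described above:

>>> cluster = {'Location': 'Geneva', 'Attributes': ['T', 'I', 'W'], 'Periods': 10, 'PeriodDuration': 24}
>>> get_cluster_file_ID(cluster)
'Geneva_10_24_T_I_W'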

sub_problem.py#

File for handling data and optimization for an AMPL sub-problem.

class reho.model.sub_problem.SubProblem(district, buildings_data, local_data, parameters, set_indexed, cluster, scenario, method, solver, qbuildings_data=None)#

Collects all the input data, sends it to the AMPL model, and solves the optimization.

Parameters:
  • district (district) – Instance of the district class, containing the relevant district structure such as Units or grids.

  • buildings_data (dict) – Building-specific data.

  • local_data (dict) – Location-specific data.

  • parameters (dict, optional) – Dictionary containing ‘new’ parameters for the AMPL model. If incomplete, uses data from buildings_data.

  • set_indexed (dict, optional) – Dictionary containing new data which are indexed sets in the AMPL model.

  • cluster (dict, optional) – Dictionary containing information about clustering.

  • scenario (dict, optional) – Dictionary containing the objective function, EMOO constraints, and additional constraints.

  • method (dict, optional) – Dictionary containing different options for methodology choices.

  • solver (str, optional) – Chosen solver for AMPL (gurobi, cplex, HiGHS, cbc…).

  • qbuildings_data (dict, optional) – Input data for the buildings.

send_parameters_and_sets_to_ampl(ampl)#

Loads data into AMPL depending on their type.

reho.model.sub_problem.initialize_default_methods(method)#

Sets the default options for an optimization.

master_problem.py#

File for handling data and optimization for an AMPL master problem.

class reho.model.master_problem.MasterProblem(qbuildings_data, units, grids, parameters=None, set_indexed=None, cluster=None, method=None, solver=None, DW_params=None)#

Applies the decomposition method.

Stores district attributes, scenario, method, attributes for the decomposition, and initiates an attribute that will store results.

Parameters:
  • qbuildings_data (dict) – Contains three layers: a dictionary of the buildings characteristics (such as surface area, class, egid), a DataFrame for roofs characteristics, and a DataFrame for facades characteristics.

  • units (dict) – Units characteristics.

  • grids (dict) – Grids characteristics.

  • parameters (dict, optional) – Parameters set in the script (usually energy tariffs).

  • set_indexed (dict, optional) – The indexes used in the model.

  • cluster (dict, optional) – Define location, number of periods, and number of timesteps. To use your own weather file, you can add a key custom_weather with the corresponding path.

  • method (dict, optional) – The different methods to run the optimization (refer to List of the available methods in REHO).

  • solver (str, optional) – Chosen solver for AMPL (gurobi, cplex, highs, cbc, etc.).

  • DW_params (dict, optional) – Hyperparameters of the decomposition and other useful information.

Notes

  • The REHO class inherits this class, so the inputs are similar.

  • qbuildings_data contains by default only the buildings’ data. The roofs and facades are added solely with the use of methods: use_pv_orientation and use_facades.

select_SP_obj_decomposition(scenario)#

In the decomposition, the SPs have a different objective than in the compact formulation, because their objective function is formulated as a reduced cost. Adding global linking constraints, such as epsilon constraints, also changes the scenario to choose.

Parameters:

scenario (dictionary) – objective function

Returns:

  • SP_scenario (dictionary) – scenario for the SP (iterations)

  • SP_scenario_init (dictionary) – scenario for the SP (initiation)

initiate_decomposition(scenario, Scn_ID=0, Pareto_ID=1, epsilon_init=None)#

The SPs are initialized for the given objective. In case the optimization includes an epsilon constraint, there are two ways to initialize: either the epsilon constraint is applied on the SPs, or the initialization is done with beta. The former risks being infeasible for certain SPs, therefore the latter is preferred. Three beta values are given to mark the extreme points and an average point. Sets up the parallel optimization if needed.

Parameters:
  • scenario (dictionary) – Which objective function to optimize and the value of epsilon constraints to apply

  • Scn_ID (int) – ID of the optimization scenario

  • Pareto_ID (int) – Id of the pareto point. For single objective optimization it is 1 by default

  • epsilon_init (array) – Epsilon constraints to apply for the initialization

SP_initiation_execution(scenario, Scn_ID=0, Pareto_ID=1, h=None, epsilon_init=None, beta=None)#

Adapts the model depending on the method, executes the optimization, and retrieves the results.

Parameters:
  • scenario (dictionary) – Which objective function to optimize and the value of epsilon constraints to apply

  • Scn_ID (int) – scenario ID

  • Pareto_ID (int) – ID of the pareto point. For single objective optimization it is 1 by default.

  • h (string) – House id

  • epsilon_init (float) – Epsilon constraint to apply for the initialization

  • beta (float) – Beta initial value used for initialization

Returns:

  • df_Results – results of the optimization (unit installed, power exchanged, costs, GWP emissions, …)

  • attr – results of the optimization process (CPU time, objective value, nb variables or constraints, …)

MP_iteration(scenario, binary, Scn_ID=0, Pareto_ID=1, read_DHN=False)#

Runs the optimization of the Master Problem (MP):

  • Creates the ampl_MP master problem

  • Sets the sets and the parameters in ampl

  • Updates the grid exchanges and the costs of each subproblem (house) without the grid costs

  • Runs the optimization

  • Extracts the results (lambda, dual variables pi and mu, objective value of the MP (TOTEX), grid exchanges, …)

  • Deletes the ampl_MP model

Parameters:
  • scenario (dictionary)

  • binary (boolean) – if the decision variable ‘lambda’ is binary or continuous

  • Scn_ID (int)

  • Pareto_ID (int)

  • read_DHN (bool)

Raises:

ValueError – If the sets are not arrays, if the parameters are not arrays, floats, or dataframes, or if the MP optimization did not converge.

SP_iteration(scenario, Scn_ID=0, Pareto_ID=1)#

Sets up the parallel optimization if needed.

Parameters:
  • scenario (dictionary)

  • Scn_ID (int) – scenario ID

  • Pareto_ID (int) – pareto ID

SP_execution(scenario, Scn_ID, Pareto_ID, h)#

Inserts dual variables into the AMPL model, applies the scenario, adapts the model depending on the methods, and retrieves the results.

Parameters:
  • scenario (dictionary)

  • Scn_ID (int) – scenario ID

  • Pareto_ID (int) – pareto ID

  • h (string) – house ID

Returns:

  • df_Results – results of the optimization (unit installed, power exchanged, costs, GWP emissions, …)

  • attr – results of the optimization process (CPU time, objective value, nb variables or constraints, …)

Raises:

ValueError – If the SP optimization did not converge.

check_Termination_criteria(scenario, Scn_ID=0, Pareto_ID=1)#

Verifies a number of termination criteria:

  • Optimal solution found based on reduced costs -> last solutions proposed by the SPs did not improve the MP

  • No improvements

Returns:

df.any(axis=None) – True if one of the stopping criteria is reached

Return type:

boolean

get_final_MP_results(Pareto_ID=1, Scn_ID=0)#

Builds the final design and operating results based on the optimal set of lambdas.

get_annual_grid_opex(df_Grid_t, cost_supply=pandas.Series, cost_demand=pandas.Series)#
Parameters:
  • df_Grid_t (pd.DataFrame) – from result object REHO

  • cost_supply (series) – cost profile of supply

  • cost_demand (series) – cost profile of demand

Returns:

The annual grid costs. Tariffs or the dual value pi can be set through cost_supply and cost_demand; by default, the costs from the model are used.

Return type:

annual_grid_costs

get_dual_values_SPs(Scn_ID, Pareto_ID, iter, House, dual_variable)#

Selects the right dual variables for the given Scn_ID, Pareto_ID, iter and house IDs.

Parameters:
  • Scn_ID (int) – scenario ID

  • Pareto_ID (int) – pareto ID

  • iter (int) – iter ID

  • House (string) – house ID

  • dual_variable (string) – dual variable to get

Returns:

dual_value – dual variables

Return type:

array

get_solver_attributes(Scn_ID, Pareto_ID, ampl)#
Parameters:
  • Scn_ID (int) – scenario ID

  • Pareto_ID (int) – ID of the pareto point, default is 1

  • ampl (ampl model) – results concerning one SP

Returns:

df – Information on the optimization (CPU time, nb constraints, …)

Return type:

pd.DataFrame

split_parameter_sets_per_building(h, parameters_SP={}, set_indexed_SP={})#

Some inputs apply to the district and others to the houses. This function merges the two and gives the parameters per house. This is needed to run an optimization on a single building.

Parameters:
  • h (string) – House ID

  • parameters_SP (dict) – Parameters of the house

  • set_indexed_SP (dict) – Set indexed of the house

Returns:

  • buildings_data_SP (dict) – egid, surface area, class of the building, …

  • parameters_SP (dict) – Parameters from the script for a single house (e.g. tariffs)

  • infrastructure_SP (dict) – The district structure for a single house

  • set_indexed_SP (dict) – The set_indexed variable without the values concerning only the master problem (district scale)

infrastructure.py#

File for handling infrastructure parameters.

class reho.model.infrastructure.Infrastructure(qbuildings_data, units, grids)#

Characterizes all the sets and parameters which are connected to buildings, units and grids.

Parameters:
  • qbuildings_data (dict) – Buildings characterization

  • units (dict) – Units characterization

  • grids (dict) – Grids characterization

reho.model.infrastructure.prepare_units_array(file, exclude_units=[], grids=None)#

Prepares the array that will be used by initialize_units.

Parameters:
  • file (str) – Name of the file where to find the units’ data (building, district or storage).

  • exclude_units (list of str) – The units you want to exclude, given through initialize_units.

  • grids (dict) – Grids given through initialize_grids.

Returns:

Contains one dictionary in each cell, with the parameters for a specific unit.

Return type:

np.array

See also

initialize_units

Notes

  • Make sure the names of the columns you are using are the same as the ones from the default files, which can be found in data/infrastructure.

  • The names of the units, which will be used as keys, do not matter, but the UnitOfType must belong to a defined list of possibilities.

reho.model.infrastructure.initialize_units(scenario, grids=None, building_data='building_units.csv', district_data=None, interperiod_data=None)#

Initializes the available units for the energy system.

Parameters:
  • scenario (dict or None) – A dictionary containing information about the scenario.

  • grids (dict or None, optional) – Information about the energy layers considered. If None, ['Electricity', 'NaturalGas', 'Oil', 'Wood', 'Data', 'Heat'].

  • building_data (str, optional) – Path to the CSV file containing building unit data. Default is ‘building_units.csv’.

  • district_data (str or bool or None, optional) – Path to the CSV file containing district unit data. If True, district units are initialized with ‘district_units.csv’. If None, district units will not be considered. Default is None.

  • interperiod_data (dict or bool or None, optional) – Paths to the CSV file(s) containing inter-period storage units data. If True, units are initialized with ‘building_units_IP.csv’ and ‘district_units_IP.csv’. If None, storage units won’t be considered. Default is None.

Returns:

Contains building_units and district_units.

Return type:

dict

See also

initialize_grids

Notes

  • The default files are located in reho/data/infrastructure/.

  • The custom files can be given as absolute or relative path.

Examples

>>> units = infrastructure.initialize_units(scenario, grids, building_data="custom_building_units.csv",
...                                         district_data="custom_district_units.csv", interperiod_data=True)
reho.model.infrastructure.initialize_grids(available_grids={'Electricity': {}, 'NaturalGas': {}}, file='layers.csv')#

Initializes grid information for the energy system.

Parameters:
  • available_grids (dict, optional) – A dictionary specifying the available grids and their parameters. The keys represent grid names, and the values are dictionaries containing optional parameters [‘Cost_demand_cst’, ‘Cost_supply_cst’, ‘GWP_demand_cst’, ‘GWP_supply_cst’].

  • file (str, optional) – Path to the CSV file containing grid data. Default is ‘layers.csv’ in the data/infrastructure/ folder.

Returns:

Contains information about the initialized grids.

Return type:

dict

See also

initialize_units

Notes

  • If one wants to use a custom grid file, attention should be paid that the layer names and parameters correspond.

  • Adding a layer in a custom file will not add it to the model, as it is not modeled.

Examples

>>> available_grids = {'Electricity': {'Cost_demand_cst': 0.1, 'GWP_supply_cst': 0.05}, 'NaturalGas': {'Cost_supply_cst': 0.15}}
>>> grids = initialize_grids(available_grids, file="custom_layers.csv")

reho.py#

File for constructing and solving the optimization problem.

class reho.model.reho.REHO(qbuildings_data, units, grids, parameters=None, set_indexed=None, cluster=None, method=None, scenario=None, solver='highs', DW_params=None)#

Performs the single or multi-objective optimization.

Parameters are inherited from MasterProblem.
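
Example

A hedged end-to-end sketch combining the pieces documented in this section; the district ID, the scenario content, and the single_optimization() call reflect typical usage but are assumptions here:

>>> from reho.model.reho import *
>>> reader = QBuildingsReader()
>>> reader.establish_connection('Suisse')
>>> qbuildings_data = reader.read_db(district_id=3658, nb_buildings=2)
>>> scenario = {'Objective': 'TOTEX', 'name': 'totex'}  # assumed scenario keys
>>> grids = infrastructure.initialize_grids()
>>> units = infrastructure.initialize_units(scenario, grids)
>>> cluster = {'Location': 'Geneva', 'Attributes': ['T', 'I', 'W'], 'Periods': 10, 'PeriodDuration': 24}
>>> reho = REHO(qbuildings_data=qbuildings_data, units=units, grids=grids, cluster=cluster, scenario=scenario, solver='highs')
>>> reho.single_optimization()  # assumed run method
>>> reho.save_results(format=['xlsx'], filename='results')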

save_results(format='pickle', filename='results', erase_file=True, filter=True)#

Saves the results in the desired format: pickle file or Excel sheet.

The results are indexed on the scenarios and pareto IDs.

Parameters:
  • format (tuple, optional) – Format(s) in which to save the results. Choose from ‘pickle’ and ‘xlsx’. Default is (‘pickle’).

  • filename (str, optional) – Base name of the file to be saved. The extension will be added based on the format. Default is ‘results’.

  • erase_file (bool, optional) – Whether to overwrite existing files with the same name. Default is True.

  • filter (bool, optional) – Whether to filter out rows with only zeros in Excel sheets. Default is True.

Return type:

None

Notes

If ‘erase_file’ is set to False, a unique counter is added to the filename to avoid overwriting existing files.

plotting/#

Contains plotting functions and code relative to the visualization of REHO results.

  • layout.csv: Contains colors and labels to characterize the units and layers of an energy system configuration.

  • sia380_1.csv: Contains the translation of the building’s affectation from Roman numbering to labels in the SIA 380/1 norm.

plotting.py#

Contains ready-to-use representations for results generated by REHO.

reho.plotting.plotting.plot_performance(results, plot='costs', indexed_on='Scn_ID', label='EN_long', add_annotation=True, per_m2=False, additional_costs=None, additional_gwp=None, scc=0.177, title=None, filename=None, export_format='html', scaling_factor=1, return_df=False)#

Plots performance based on REHO results.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • plot (str) –

    Choose among those three possibilities:

    • ’costs’ for the economic performance indicators,

    • ’gwp’ for the global warming potential indicators,

    • ’combined’ for a combination of the two indicators, where the emissions are converted into costs using the scc parameter.

  • indexed_on (str) – Whether the results should be grouped on Scn_ID or Pareto_ID.

  • label (str) – Indicates the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • add_annotation (bool) – Adds the numerical values along the bar plots.

  • per_m2 (bool) – Set to True to obtain the results divided by the total ERA.

  • additional_costs (dict) – Additional costs to include (choose between ‘isolation’, ‘mobility’, and ‘ict’) and scaling values.

  • additional_gwp (dict) – Additional gwp to include (choose between ‘isolation’, ‘mobility’, and ‘ict’) and scaling values.

  • scc (float) – Carbon externalities, expressed in CHF/kgCO2. Default value is the Social Cost of Carbon, from Rennert, 2022.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

  • scaling_factor (int/float) – Scales linearly the REHO results for the plot.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
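
Example

A hedged usage sketch, assuming reho.results holds the results dictionary obtained after an optimization:

>>> fig = plot_performance(reho.results, plot='costs', indexed_on='Scn_ID', per_m2=True)
>>> fig.show()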

reho.plotting.plotting.plot_expenses(results, plot='costs', indexed_on='Scn_ID', label='EN_long', premium_version=None, per_m2=False, additional_costs={}, additional_gwp={}, scc=0.177, title=None, filename=None, export_format='html', scaling_factor=1, return_df=False)#

Plots expenses based on REHO results.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • plot (str) –

    Choose among these three possibilities:

    • ’costs’ for the economic performance indicators,

    • ’gwp’ for the global warming potential indicators,

    • ’combined’ for a combination of the two indicators, where the emissions are converted into costs using the scc parameter.

  • indexed_on (str) – Whether the results should be grouped on Scn_ID or Pareto_ID.

  • label (str) – Indicates the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • premium_version (list) – If enabled, it should be an array containing the retail price and feed-in price of electricity.

  • per_m2 (bool) – Set to True to obtain the results divided by the total ERA.

  • additional_costs (dict) – Additional costs to include (choose between ‘isolation’, ‘mobility’, and ‘ict’) and scaling values.

  • additional_gwp (dict) – Additional gwp to include (choose between ‘isolation’, ‘mobility’, and ‘ict’) and scaling values.

  • scc (float) – Carbon externalities, expressed in CHF/kgCO2. Default value is the Social Cost of Carbon, from Rennert, 2022.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

  • scaling_factor (int/float) – Linearly scales the REHO results for the plot.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
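
Examples

A usage sketch analogous to plot_performance; the file path and keyword values are illustrative:

>>> import pandas as pd
>>> from reho.plotting.plotting import plot_expenses
>>> reho_results = pd.read_pickle('results/my_results.pickle')
>>> plot_expenses(reho_results, plot='costs', indexed_on='Scn_ID', per_m2=True).show()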

reho.plotting.plotting.plot_sankey(df_Results, label='EN_long', color='ColorPastel', title=None, filename=None, export_format='html', scaling_factor=1, return_df=False)#

Plots a Sankey plot based on the results DataFrame.

Parameters:
  • df_Results (pd.DataFrame) – Coming from REHO results (already extracted from the desired Scn_ID and Pareto_ID).

  • label (str) – Indicate the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • color (str) – Indicate the color set to use for the plot. ‘ColorPastel’ is default.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

  • scaling_factor (int/float) – Linearly scales the REHO results for the plot.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
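
Examples

A usage sketch; the results dictionary is assumed to be indexed on Scn_ID and Pareto_ID, and the keys shown are illustrative:

>>> import pandas as pd
>>> from reho.plotting.plotting import plot_sankey
>>> reho_results = pd.read_pickle('results/my_results.pickle')
>>> df_results = reho_results['totex'][0]  # one scenario, one Pareto point
>>> plot_sankey(df_results, label='EN_long', color='ColorPastel').show()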

reho.plotting.plotting.plot_profiles(df_Results, units_to_plot, style='plotly', label='EN_long', color='ColorPastel', resolution='weekly', plot_curtailment=False, title=None, filename=None, export_format='html', return_df=False)#

Plots an hourly profile for an entire year of operation.

Parameters:
  • df_Results (pd.DataFrame) – Coming from REHO results (already extracted from the desired Scn_ID or Pareto_ID).

  • units_to_plot (list) – Units to be plotted.

  • style (str) – Choose between ‘plotly’ or ‘matplotlib’.

  • label (str) – Indicate the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • color (str) – Indicate the color set to use for the plot. ‘ColorPastel’ is default.

  • resolution (str) – Resolution of the moving average; choose between ‘monthly’, ‘weekly’, and ‘daily’.

  • plot_curtailment (bool) – PV curtailment can optionally be plotted.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
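
Examples

A usage sketch, with df_results extracted as in the plot_sankey example; the unit names are illustrative and should match units installed in the optimized configuration:

>>> from reho.plotting.plotting import plot_profiles
>>> plot_profiles(df_results, units_to_plot=['PV', 'Battery'], style='plotly', resolution='weekly').show()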

reho.plotting.plotting.plot_eud(results, label='EN_long', title=None, filename=None, export_format='html', scaling_factor=1, return_df=False)#

Plots a Sunburst for End Use Demand (EUD) based on REHO results, grouped by buildings’ class.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • label (str) – Indicate the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

  • scaling_factor (int/float) – Linearly scales the REHO results for the plot.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
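
Examples

A usage sketch; the file path is illustrative:

>>> import pandas as pd
>>> from reho.plotting.plotting import plot_eud
>>> reho_results = pd.read_pickle('results/my_results.pickle')
>>> plot_eud(reho_results, label='EN_long').show()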

reho.plotting.plotting.plot_unit_monthly(results, unit_to_plot, label='EN_short', title=None, filename=None, export_format='html')#

Generates a monthly bar plot showing the mean energy produced per hour and the installed power for a specific unit.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • unit_to_plot (dict) – Specify the unit to plot and Scn_ID / Pareto_ID from which it should be found.

  • label (str) – Indicates the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.

Examples

>>> reho_results = pd.read_pickle('results/progressive_scenario.pickle')
>>> unit_to_plot = {'Unit': 'NG_Boiler', 'Scn_ID': 'fossil', 'Pareto_ID': False}
>>> plot_unit_monthly(reho_results, unit_to_plot, label='FR_long', filename="my_plot", export_format='png').show()

reho.plotting.plotting.plot_pareto(results, color='ColorPastel', title=None, return_df=False)#

Plots a Pareto front based on REHO results. CAPEX, OPEX, TOTEX and GWP are displayed.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • color (str) – Indicate the color set to use for the plot. ‘ColorPastel’ is default.

  • title (str) – Title for the plot.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • plotly.graph_objs.Figure – The generated plotly figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
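
Examples

A usage sketch, assuming the results come from a multi-objective optimization so that several Pareto_IDs are available; the file path is illustrative:

>>> import pandas as pd
>>> from reho.plotting.plotting import plot_pareto
>>> pareto_results = pd.read_pickle('results/pareto.pickle')
>>> plot_pareto(pareto_results, color='ColorPastel').show()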

reho.plotting.plotting.plot_pareto_by_objectives(results, objectives=['CAPEX', 'OPEX'], style='plotly', annotation='TOTEX', title=None, filename=None, export_format='png')#

Plots a Pareto front based on REHO results. Only the 2 specified objectives are displayed. Results are expressed per m2.

Parameters:
  • results (dict) – Dictionary of REHO results.

  • objectives (list) – Specify the two objectives among CAPEX, OPEX, TOTEX and GWP.

  • style (str) – Choose between ‘plotly’ or ‘matplotlib’.

  • annotation (str) – Numerical values of the chosen KPI (CAPEX, OPEX, TOTEX, or GWP) are printed.

  • title (str) – Title for the plot.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘html’, ‘png’, or ‘pdf’.

Returns:

The generated plotly figure.

Return type:

plotly.graph_objs.Figure
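
Examples

A usage sketch with the same pareto_results as in the plot_pareto example; keyword values are illustrative:

>>> from reho.plotting.plotting import plot_pareto_by_objectives
>>> plot_pareto_by_objectives(pareto_results, objectives=['CAPEX', 'OPEX'], annotation='TOTEX').show()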

reho.plotting.plotting.plot_composite_curve(df_Results, cluster, periods=['Yearly'], filename=None, export_format='png', return_df=False)#

Plots a composite curve based on the results DataFrame.

Parameters:
  • df_Results (pd.DataFrame) – Coming from REHO results (already extracted from the desired Scn_ID and Pareto_ID).

  • cluster (dict) – Define location, number of periods, and number of timesteps.

  • periods (list) – Indicate the desired timeframe.

  • filename (str) – Name of the file to be saved.

  • export_format (str) – Can be either ‘png’ or ‘pdf’.

  • return_df (bool) – A dataframe can be returned for further post-processing or reporting purposes.

Returns:

  • matplotlib.pyplot – The generated matplotlib figure.

  • pd.DataFrame – (Optional) A dataframe for further post-processing or reporting purposes.
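
Examples

A usage sketch, with df_results extracted as in the plot_sankey example; the cluster keys shown follow the dictionary convention used elsewhere in REHO and are assumptions:

>>> from reho.plotting.plotting import plot_composite_curve
>>> cluster = {'Location': 'Geneva', 'Periods': 10, 'PeriodDuration': 24}
>>> plot_composite_curve(df_results, cluster, periods=['Yearly'], filename='composite_curve')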

sankey.py#

Builds a dataframe for the visualization of annual flows from REHO results in the form of a Sankey diagram.

reho.plotting.sankey.update_label(source_name, target_name, df_label)#

Updates the labels of df_label if source_name or target_name is not in the index of df_label.

Parameters:
  • source_name (str) – Source to update

  • target_name (str) – Target to update

  • df_label (pd.DataFrame) – Labels

Returns:

df_label updated with the source and target values

Return type:

pd.DataFrame

reho.plotting.sankey.handle_PV_battery_network(df_annuals, df_stv, df_label, elec_storage_list, elec_storage_use, mol_storage_use)#

This function is used to handle the layout of the Sankey diagram when electricity storage is active, to avoid a false representation of electricity exports to the grid.

Parameters:
  • df_annuals (pandas.DataFrame) – Gathers the annual balance for all the layers and the corresponding units.

  • df_stv (pandas.DataFrame) – Contains all information about the streams and the numerical values in the Sankey diagram.

  • df_label (pd.DataFrame) – Names of layers and units for the Sankey diagram.

  • elec_storage_list (list) – Contains all the units related to electricity storage (interperiod or not).

  • elec_storage_use (bool) – Whether electricity storage is active.

  • mol_storage_use (bool) – Whether molecular storage is active.

Returns:

The updated df_label and df_stv.

Return type:

pd.DataFrame, pd.DataFrame

reho.plotting.sankey.add_mol_storages_to_sankey(df_annuals, df_label, df_stv, FC_or_ETZ_use)#

This function is called to add all the streams and units related to molecular interperiod storage to the Sankey diagram.

Parameters:
  • df_annuals (pandas.DataFrame) – Gathers the annual balance for all the layers and the corresponding units.

  • df_label (pd.DataFrame) – Names of layers and units for the Sankey diagram.

  • df_stv (pandas.DataFrame) – Contains all information about the streams and the numerical values in the Sankey diagram.

  • FC_or_ETZ_use – Whether other electrolyzer types (than the usual) are considered.

Returns:

The updated df_label and df_stv.

Return type:

pd.DataFrame, pd.DataFrame

reho.plotting.sankey.add_label_value(df_label, df_stv, precision, units)#

Adds the values from df_stv to the labels of df_label. The values of the nodes are thus available in the node names for the Sankey diagram.

Parameters:
  • df_label (pd.DataFrame) – Labels

  • df_stv (pd.DataFrame) – Source, target and value

  • precision (int) – Precision of the displayed numbers (default = 2)

  • units (str) – Unit of the values (default MWh)

Returns:

df_label updated with the label values

Return type:

pd.DataFrame

reho.plotting.sankey.add_flow(source, dest, layer, hub, dem_sup, df_annuals, df_label, df_stv, check_dest_2=False, dest_2=None, adjustment=0, fact=1)#

Adds an energy flow to the Sankey diagram according to the corresponding cell(s) of df_annuals, if the cell is not null.

Parameters:
  • source (str) – name of the source

  • dest (str) – name of the destination

  • layer (str) – name of the layer of the considered cell(s)

  • hub (str) – name of the hub of the considered cell(s)

  • dem_sup (str) – Column to read: ‘Supply_MWh’ or ‘Demand_MWh’ (no validation is performed)

  • df_annuals (pd.DataFrame)

  • df_label (pd.DataFrame)

  • df_stv (pd.DataFrame)

  • check_dest_2 (bool) – if True, dest_2 substitutes dest (default False)

  • dest_2 (str) – second possible destination (default None)

  • adjustment (float) – offset added to the cell value (default 0)

  • fact (float) – factor by which the cell value is multiplied (default 1)

Returns:

  • pd.DataFrame – df_label updated

  • pd.DataFrame – df_stv updated

  • float – value added (0 if nothing added)

reho.plotting.sankey.df_sankey(df_Results, label='EN_long', color='ColorPastel', precision=2, units='MWh', display_label_value=True, scaling_factor=1)#

Builds the Sankey dataframe.

Parameters:
  • df_Results (pd.DataFrame) – DataFrame coming from REHO results (already extracted from the desired Scn_ID and Pareto_ID).

  • label (str) – Indicate the language to use for the plot. Choose among ‘FR_long’, ‘FR_short’, ‘EN_long’, ‘EN_short’.

  • color (str) – Indicate the color set to use for the plot. ‘ColorPastel’ is default.

  • precision (int) – Precision of the displayed numbers (default = 2).

  • units (str) – Unit of the values (default MWh).

  • display_label_value (bool) – Whether to print the numerical values in the node labels.

  • scaling_factor (int/float) – Linearly scales the REHO results for the plot.

Returns:

Sankey dataframe.

Return type:

pd.DataFrame
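
Examples

A usage sketch, with df_results extracted as in the plot_sankey example above (illustrative):

>>> from reho.plotting.sankey import df_sankey
>>> sankey_df = df_sankey(df_results, label='EN_long', precision=2, units='MWh')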

paths.py#

File for managing file paths and configurations.