AFM Filières in batch mode

Submodules

mfa_problem.mfa_problem.io_bdd module

class io_bdd.Constraint(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Coef_inv
Coefficient
Destination
Eq_lower
Eq_upper
Equality
Origine
Periode
Region
Table
id
id_interne
model_name
class io_bdd.Data(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Contrainte_Sym_p
Destination
Factor
Incertitude_p
Origine
Periode
Quantity
Region
Source
Table
Unit
Valeur
id
model_name
class io_bdd.Flux(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Destination
Origine
Table
Valeur
id
model_name
class io_bdd.Geographic(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Code_Enfants
Code_Insee
Code_Parents
Nom
geotype
id
id_int
model_name
class io_bdd.MinMax(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Destination
Factor
Max
Max_Unit
Min
Min_Unit
Origine
Periode
Region
Source
Table
Unit
id
model_name
class io_bdd.Param(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Description
Parametre
Valeur
id
model_name
class io_bdd.Product(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Bilan
Level
Poids_conso
Prod_name
Sankey
Table_conso
Transport
id
model_name
class io_bdd.Proxy(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Code_geo
Geographic
Percent
Proxy_name
Quantity
Source
id
model_name
class io_bdd.Proxytype(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Destination
Origin
Proxy_name
Sector
TableER
Type
id
model_name
class io_bdd.ResultList(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Ai
MC_hist0
MC_hist1
MC_hist2
MC_hist3
MC_hist4
MC_hist5
MC_hist6
MC_hist7
MC_hist8
MC_hist9
MC_max
MC_min
MC_mu
MC_mu_in
MC_p0
MC_p10
MC_p100
MC_p20
MC_p30
MC_p40
MC_p5
MC_p50
MC_p60
MC_p70
MC_p80
MC_p90
MC_p95
MC_std
MC_std_in
destination
free_max_Ai
free_min_Ai
id
id_int
max_in
min_in
model_name
nb_sigmas
origine
produit
region
rref_python1_classif
secteur
sigma_in
sigma_in_p
table
valeur_in
valeur_out
class io_bdd.Sector(**kwargs)[source]

Bases: sqlalchemy.ext.declarative.api.Base

Bilan
Level
Poids_conso
Sankey
Sect_name
Table_conso
Transport
id
model_name
io_bdd.check_db(pgadm, pgadmpass, pghost, pgport, pguser, pgpass, pgdb)[source]
io_bdd.check_rec_exist(act_ses, tabname, colname, myval)[source]
io_bdd.check_table_exist(myobject)[source]
io_bdd.clean_mod(tab_mod, mod_nam, act_ses, col_clean='', val_clean=[''])[source]
io_bdd.connect_aff(db_type)[source]
io_bdd.database_proxy_to_json(act_ses, model_name: str, main_mod_name: str, proxy_geo: list)[source]
io_bdd.get_class_by_tablename(base, tablename)[source]

Return the class reference mapped to a table.
- tablename: string with the name of the table.
Returns the class reference, or None if no mapped class uses that table name.
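
The lookup above can be sketched as a match on each mapped class's table name. This is a minimal stdlib-only illustration of the pattern; the actual io_bdd code works against SQLAlchemy's declarative class registry, and the stand-in `Base` below is an assumption made for the example.

```python
class Base:
    """Stand-in for the declarative base (illustration only)."""

def get_class_by_tablename(base, tablename):
    # Walk the classes derived from the base and match on __tablename__.
    for cls in base.__subclasses__():
        if getattr(cls, "__tablename__", None) == tablename:
            return cls
    return None

class Flux(Base):
    __tablename__ = "flux"

class Param(Base):
    __tablename__ = "param"
```

For example, `get_class_by_tablename(Base, "flux")` returns the `Flux` class, while an unknown table name yields `None`.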

io_bdd.read_inputs(tab_name, act_ses, modname, param1, param2, col1='', col2='')[source]
io_bdd.save_input_constraint(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_input_flux(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_input_param(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_input_product(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_input_sector(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_inputs(js_di, active_ses: str, bdd_clean_mode: int, modname: str, li_add=[''])[source]
io_bdd.save_inputs_data(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_inputs_geo(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_inputs_mima(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_inputs_proxy(mydat, act_ses, bdd_clean, modna, proxlist=[''])[source]
io_bdd.save_inputs_proxytype(mydat, act_ses, bdd_clean, modna)[source]
io_bdd.save_results(file_dat, active_ses, bdd_clean_mode, modname, downscale)[source]
io_bdd.table_list()[source]
io_bdd.write_proxy_output_in_db(act_ses, model_name: str, proxy_output)[source]

mfa_problem.mfa_problem.io_excel module

This module is dedicated to the conversion from an outside format to the internal json format. Outside formats may be: a workbook (excel), another json file, a database, etc. The structure and specifications of the internal json format are defined in this module. The internal json format can take two main forms: one to address input information and a second one for output communication.

io_excel.consistantSheetName(prop_sheet: str)[source]

Test whether prop_sheet is consistent with the allowed sheet list.
- The result is an empty string if the tested sheet is not consistent.
- The result is the dictionary key corresponding to the allowed entry found.
Note 1: if the prop_sheet input is empty (""), the result is the list of allowed sheet names as a string.
Note 2: a particular case is handled for proxy input files, which usually have 3 proxy sheets (one of them with the "sector" keyword in its name).
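
The behaviour described above can be sketched as a lookup against an allowed-names mapping. The mapping below is a made-up illustration; the real dictionary of allowed sheet names lives inside io_excel.

```python
# Hypothetical allowed-sheet mapping: key -> accepted spellings.
ALLOWED_SHEETS = {
    "param": ["param", "parameters"],
    "proxy": ["proxy", "proxy sector"],
}

def consistant_sheet_name(prop_sheet: str) -> str:
    if prop_sheet == "":
        # Empty input: return the list of allowed names as a string.
        return ", ".join(n for names in ALLOWED_SHEETS.values() for n in names)
    low = prop_sheet.lower()
    for key, names in ALLOWED_SHEETS.items():
        if low in names:
            return key
    # Particular case for proxy files: a sheet with "sector" in its name.
    if "sector" in low:
        return "proxy"
    # Not consistent with any allowed name.
    return ""
```

With this sketch, "Param" maps to the "param" key, "My sector sheet" falls into the proxy special case, and an unknown name yields the empty string.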

io_excel.excel_proxy_to_json(input_file: str, upper_level_name: str)[source]
io_excel.format_excel(excel_writer: pandas.io.excel._base.ExcelWriter, tab_name: str, mfa_problem_input: dict)[source]
io_excel.input_to_json(input_type, input_file, sess_act, mod_name)[source]

Main converter routine. Calls the dedicated routine depending on the input type.
- input_type: type of the input (0: xls/xlsx/csv, 1: database, 2: JSON)
- input_file: string with the input file name (with extension and path)
- xltab_list: list of expected main entries
- jstab_list: list of entries needed in the JSON file

io_excel.load_mfa_problem_from_excel(input_file: str, create_empty_ter=False)[source]

Main converter routine. Calls the dedicated routine depending on the input type.
- input_file: string with the input file name (with extension and path)

io_excel.pd_sorted_col(dft, lico)[source]

Sort the columns of a dataframe according to a column list.

io_excel.write_mfa_problem_output_to_excel(output_file_name: str, mfa_problem_input: dict, mfa_problem_output: dict)[source]
io_excel.write_proxy_output_in_excel(input_file: str, headers: list, sheet_name: str, proxy_output)[source]
io_excel.xl_convert_tablist(df_file: str, tab_list: list)[source]

Convert each tab of a workbook into an mfa_problem_input dictionary entry.
- df_file: dataframe with all sheets of the input file
- tab_list: input file worksheet list

io_excel.xl_get_sheet_details(file_path, only_sheets=True)[source]

Found at: https://stackoverflow.com/questions/17977540/pandas-looking-up-the-list-of-sheets-in-an-excel-file
Fastest way to get information from an excel file without the need to open it.
Benchmarking (on a 6 MB xlsx file with 4 sheets): pandas/xlrd: 12 seconds; openpyxl: 24 seconds; proposed method: 0.4 seconds.
Notes (modifications made):
- uses tempfile.mkdtemp instead of settings.MEDIA_ROOT
- routine adapted to extract only sheet names (when only_sheets=True)
Requirements:
- xmltodict must be installed and imported ("import xmltodict")
- must add "import tempfile", "import shutil" and "from zipfile import ZipFile"
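
The trick this routine relies on is that an .xlsx file is a zip archive whose xl/workbook.xml lists the sheet names, so they can be read without loading the workbook. Below is a stdlib-only sketch of that idea (the real routine uses xmltodict and a temporary directory instead of parsing the XML in place).

```python
import xml.etree.ElementTree as ET
from zipfile import ZipFile

def get_sheet_names(xlsx_file) -> list:
    """Read sheet names straight from the xlsx zip, without opening the workbook."""
    with ZipFile(xlsx_file) as zf:
        with zf.open("xl/workbook.xml") as f:
            tree = ET.parse(f)
    # <sheet> elements carry a "name" attribute; tags are XML-namespaced,
    # so match on the local part of the tag only.
    return [
        el.attrib["name"]
        for el in tree.getroot().iter()
        if el.tag.endswith("}sheet") or el.tag == "sheet"
    ]
```

Because only one small XML member of the archive is read, this avoids parsing any cell data, which is where the speedup quoted above comes from.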

io_excel.xl_import_param(df_fi: dict, stab: str, mfa_problem_input: dict)[source]

Import information from the workbook tab called "param" if it exists.
- df_fi: dataframe with all sheets of the input file
- stab: name of the workbook tab to work on
- mfa_problem_input: dictionary with the information to convert to JSON format

io_excel.xl_import_tab(df_fi: dict, stab: str, def_val: list, js_tab: str, mfa_problem_input: dict)[source]

Import information from the workbook tab called stab if it exists.
- df_fi: dataframe with all sheets of the input file
- stab: name of the workbook sheet to work on
- def_val: dictionary of default values (default column values of the excel sheet)
- js_tab: name of the main JSON dictionary key for this entry
- mfa_problem_input: dictionary with the information to convert to JSON format

io_excel.xl_import_terbase(df_fi: dict, stab: str, mfa_problem_input: dict)[source]

Import information from the workbook tab called "ter_base" if it exists.
- df_fi: dataframe with all sheets of the input file
- stab: name of the workbook tab to work on
- mfa_problem_input: dictionary with the information to convert to JSON format

mfa_problem.mfa_problem.mfa_problem_check_io module

This module is used to check that an excel input file has no inconsistencies in its supply/use tables.

The module uses 2 or 3 arguments:
  • "--input_file": name of the (excel) input file to check (usually data/tuto_fr.xlsx).

  • "--tab_list": list of sheets for products, sectors and existing fluxes (typically ["Dim products", "Dim sectors", "Existing fluxes"]).

  • "--merge_with": second excel input file; the two ter1 tables will be merged into a new one. Tab names are assumed to be the same as in the first file.
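
The command line described above can be sketched with argparse; the exact option handling in mfa_problem_check_io may differ, so treat this as an illustration of the interface, not its implementation.

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    """Sketch of the check_io command-line interface (option names from the docs above)."""
    parser = argparse.ArgumentParser(description="Check an MFA excel input file.")
    parser.add_argument("--input_file", required=True,
                        help="excel input file to check, e.g. data/tuto_fr.xlsx")
    parser.add_argument("--tab_list", nargs=3,
                        default=["Dim products", "Dim sectors", "Existing fluxes"],
                        help="sheets for products, sectors and existing fluxes")
    parser.add_argument("--merge_with", default=None,
                        help="second excel input file whose ter1 is merged with the first")
    return parser
```

For example, `build_parser().parse_args(["--input_file", "data/tuto_fr.xlsx"])` yields the input file with the default tab list and no merge file.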

mfa_problem_check_io.check_constraints(index2name: list, solved_vector: numpy.ndarray, ter_vectors: numpy.ndarray, AConstraint: scipy.sparse.csc.csc_matrix, Ai_vars: list, Ai_signs: list, downscale: bool, vars_type: numpy.ndarray, constraints_types_cum_idx)[source]
mfa_problem_check_io.check_if_flows_exist(tod, ter1: list, tab: str, unknown_flows: list)[source]
mfa_problem_check_io.check_input_file(mfa_problem_input: dict)[source]
mfa_problem_check_io.constraint_type(constraint_id: int, constraints_types_cum_idx: list)[source]
mfa_problem_check_io.name_of(index2name: list, id: int, downscale: bool)[source]
mfa_problem_check_io.table(table_id: str)[source]

mfa_problem.mfa_problem.mfa_problem_main module

mfa_problem_main.optimisation(model_name: str, js_dict: dict, uncertainty_analysis: bool, nb_realisations: int, downscale: bool, upper_level_index2name: dict, upper_level_solved_vector: list, upper_level_classification: list, montecarlo_upper_level: dict, main_problem: bool = True, record_simulations: bool = False, performance: bool = False)[source]

mfa_problem.mfa_problem.mfa_problem_solver module

mfa_problem_solver.Cvx_minimize(Aconstraint: scipy.sparse.csc.csc_matrix, AIneq: scipy.sparse.csc.csc_matrix, ter_vectors: numpy.ndarray, nb_determinated: int)[source]
mfa_problem_solver.classify_with_matrix_reduction(AConstraintReordered: scipy.sparse.csc.csc_matrix, nb_measured: int)[source]

This function determines which variables are redundant, measured, determinable or free (undetermined). It is necessary to identify the free variables before undertaking the Monte Carlo simulations.

mfa_problem_solver.compute_initial_value_pp_variables(full_ter_vectors: numpy.ndarray, solved_vector_reordered: numpy.ndarray, AEqReorderedRef: scipy.sparse.csr.csr_matrix, AIneqReordered: scipy.sparse.csr.csr_matrix, post_process_reordered: numpy.ndarray, nb_measured: int, intervals_reordered: numpy.ndarray)[source]
mfa_problem_solver.compute_intervals_of_free_variables(ter_vectors: numpy.ndarray, solved_vector: numpy.ndarray, AEqReorderedRef: scipy.sparse.csr.csr_matrix, AIneqReordered: scipy.sparse.csr.csr_matrix, already_computed_vars: numpy.ndarray, nb_measured: int)[source]
mfa_problem_solver.montecarlo(rank_unmeasured: int, AEqReorderedRef: scipy.sparse.csc.csc_matrix, AEqReorderedRefReduced: scipy.sparse.csc.csc_matrix, AIneqReordered: scipy.sparse.csc.csc_matrix, AIneqReorderedReduced: scipy.sparse.csc.csc_matrix, nb_measured: int, ter_vectors_reordered: numpy.ndarray, determinable_col2row: dict, reduced_determinable_col2row: dict, reordered_vars_type: list, post_process_reordered: numpy.ndarray, mask_is_measured: numpy.ndarray, nb_realizations: int, sigmas_floor: float, downscale: bool, montecarlo_upperlevel_results: dict)[source]
mfa_problem_solver.resolve_mfa_problem(rank_unmeasured: int, AEqReorderedRef: scipy.sparse.csc.csc_matrix, AEqReorderedRefReduced: scipy.sparse.csc.csc_matrix, AIneqReordered: scipy.sparse.csc.csc_matrix, AIneqReorderedReduced: scipy.sparse.csc.csc_matrix, nb_measured: int, ter_vectors_reordered: numpy.ndarray, determinable_col2row: dict, reduced_determinable_col2row: dict, reordered_vars_type: list, post_process_reordered: numpy.ndarray)[source]
mfa_problem_solver.resolve_reduced_mfa_problem(rank_unmeasured: int, AEqReorderedRef: scipy.sparse.csc.csc_matrix, AIneq: scipy.sparse.csc.csc_matrix, nb_measured: int, ter_vectors_reordered: numpy.ndarray, determinable_col2row: dict)[source]
mfa_problem_solver.truncated_gaussian_draw(mu: int, sigma: int, nb_sigmas: int)[source]
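
A truncated gaussian draw as suggested by the signature above can be sketched by rejection sampling: draw from N(mu, sigma) and keep only values within mu ± nb_sigmas·sigma. This is a stdlib-only illustration; the actual implementation in mfa_problem_solver may differ.

```python
import random

def truncated_gaussian_draw(mu: float, sigma: float, nb_sigmas: float) -> float:
    """Draw from N(mu, sigma) truncated to [mu - nb_sigmas*sigma, mu + nb_sigmas*sigma]."""
    while True:
        value = random.gauss(mu, sigma)
        # Reject draws that fall outside the truncation window.
        if abs(value - mu) <= nb_sigmas * sigma:
            return value
```

Rejection sampling is simple and exact; for small nb_sigmas the acceptance rate drops, but for the 2-3 sigma windows typical of measurement uncertainty it stays high.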

mfa_problem.mfa_problem.su_trace module

su_trace.base_filename()[source]
su_trace.check_log(nbmax=20)[source]
su_trace.log_level(StrLevel='INFO')[source]

Change the level of the current logger. Possible values are "NOTSET" (value 0), "DEBUG" (10), "INFO" (20), "WARNING" (30), "ERROR" (40) and "CRITICAL" (50). All messages with a level greater than or equal to the selected one are logged.
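
With the standard logging module, the behaviour described above amounts to mapping the level name to its numeric value and setting it on the logger. A minimal sketch (the logger name "su_trace" is an assumption for the example):

```python
import logging

def log_level(str_level: str = "INFO") -> None:
    """Set the level of the module logger from its name, e.g. "DEBUG" or "WARNING"."""
    logger = logging.getLogger("su_trace")
    # logging.getLevelName maps "DEBUG" -> 10, "INFO" -> 20, etc.
    logger.setLevel(logging.getLevelName(str_level))
```

After `log_level("DEBUG")`, messages at DEBUG level and above on that logger are emitted.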

su_trace.logger_init(logname, mode)[source]
su_trace.perf_process(procname='python')[source]
su_trace.run_log(myfile)[source]
su_trace.timems(t_input: float, f_out='', b_full=False)[source]

mfa_problem.tests module