AFM Filières in batch mode
Submodules
mfa_problem.mfa_problem.io_bdd module
class io_bdd.Constraint(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Coef_inv, Coefficient, Destination, Eq_lower, Eq_upper, Equality, Origine, Periode, Region, Table, id, id_interne, model_name
class io_bdd.Data(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Contrainte_Sym_p, Destination, Factor, Incertitude_p, Origine, Periode, Quantity, Region, Source, Table, Unit, Valeur, id, model_name
class io_bdd.Flux(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Destination, Origine, Table, Valeur, id, model_name
class io_bdd.Geographic(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Code_Enfants, Code_Insee, Code_Parents, Nom, geotype, id, id_int, model_name
class io_bdd.MinMax(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Destination, Factor, Max, Max_Unit, Min, Min_Unit, Origine, Periode, Region, Source, Table, Unit, id, model_name
class io_bdd.Param(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Description, Parametre, Valeur, id, model_name
class io_bdd.Product(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Bilan, Level, Poids_conso, Prod_name, Sankey, Table_conso, Transport, id, model_name
class io_bdd.Proxy(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Code_geo, Geographic, Percent, Proxy_name, Quantity, Source, id, model_name
class io_bdd.Proxytype(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Destination, Origin, Proxy_name, Sector, TableER, Type, id, model_name
class io_bdd.ResultList(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Ai, MC_hist0, MC_hist1, MC_hist2, MC_hist3, MC_hist4, MC_hist5, MC_hist6, MC_hist7, MC_hist8, MC_hist9, MC_max, MC_min, MC_mu, MC_mu_in, MC_p0, MC_p5, MC_p10, MC_p20, MC_p30, MC_p40, MC_p50, MC_p60, MC_p70, MC_p80, MC_p90, MC_p95, MC_p100, MC_std, MC_std_in, destination, free_max_Ai, free_min_Ai, id, id_int, max_in, min_in, model_name, nb_sigmas, origine, produit, region, rref_python1_classif, secteur, sigma_in, sigma_in_p, table, valeur_in, valeur_out
class io_bdd.Sector(**kwargs)
    Bases: sqlalchemy.ext.declarative.api.Base
    Attributes: Bilan, Level, Poids_conso, Sankey, Sect_name, Table_conso, Transport, id, model_name
io_bdd.database_proxy_to_json(act_ses, model_name: str, main_mod_name: str, proxy_geo: list)
mfa_problem.mfa_problem.io_excel module
This module is dedicated to converting outside formats into the internal JSON format. Outside formats may be a workbook (Excel), another JSON file, a database, etc. The structure and specifications of the internal JSON format are defined in this module. The internal JSON format takes two main forms: one for input information and one for output communications.
io_excel.consistantSheetName(prop_sheet: str)
    Tests whether prop_sheet is consistent with the allowed sheet list.
    - Returns an empty string if the tested sheet name is not consistent.
    - Returns the dictionary key corresponding to the allowed-list entry found.
    Note 1: if prop_sheet is empty (""), the result is a string listing the allowed sheet names.
    Note 2: a special case handles proxy input files, which usually have 3 proxy sheets (one of them with the "sector" keyword in its name).
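The matching behaviour described above can be sketched as follows. The allowed-sheet dictionary and its name variants here are hypothetical placeholders; the real module defines its own allowed list.

```python
# Minimal sketch of a sheet-name consistency check. ALLOWED_SHEETS is an
# illustrative stand-in for the module's real allowed-sheet dictionary.
ALLOWED_SHEETS = {
    "param": ["param", "parameters"],
    "data": ["data", "donnees"],
    "proxy": ["proxy", "proxy sector"],
}

def consistent_sheet_name(prop_sheet: str) -> str:
    """Return the canonical key matching prop_sheet, '' if none,
    or the list of allowed names as a string when prop_sheet is empty."""
    if prop_sheet == "":
        return ", ".join(ALLOWED_SHEETS)
    name = prop_sheet.strip().lower()
    for key, variants in ALLOWED_SHEETS.items():
        if any(name == v or v in name for v in variants):
            return key
    return ""  # not consistent with any allowed sheet
```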
io_excel.format_excel(excel_writer: pandas.io.excel._base.ExcelWriter, tab_name: str, mfa_problem_input: dict)
io_excel.input_to_json(input_type, input_file, sess_act, mod_name)
    Main converter routine; calls the dedicated routine depending on input type.
    - input_type: type of the input (0: xls/xlsx/csv, 1: database, 2: JSON)
    - input_file: input file name (with extension and path)
    - xltab_list: list of main entries expected
    - jstab_list: list of entries needed in the JSON file
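The dispatch on input_type can be sketched like this. The per-format converters are illustrative stand-ins, not the module's actual routines.

```python
# Sketch of dispatching on input_type (0: spreadsheet, 1: database, 2: JSON).
# The bodies of the first two branches are placeholders for the real converters.
import json

def input_to_json(input_type, input_file, sess_act=None, mod_name=None):
    if input_type == 0:    # xls/xlsx/csv: would call the Excel converter
        return {"source": "excel", "file": input_file}
    elif input_type == 1:  # database: would query through the active session
        return {"source": "database", "model": mod_name}
    elif input_type == 2:  # already JSON: simply load it
        with open(input_file, encoding="utf-8") as f:
            return json.load(f)
    raise ValueError(f"unknown input_type {input_type}")
```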
io_excel.load_mfa_problem_from_excel(input_file: str, create_empty_ter=False)
    Main converter routine; calls the dedicated routine depending on input type.
    - input_file: input file name (with extension and path)
io_excel.pd_sorted_col(dft, lico)
    Sorts the column order of a dataframe according to a column list.
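A possible implementation of this reordering, assuming (as a guess, not stated in the source) that columns absent from the list are appended at the end:

```python
# Sketch: reorder dataframe columns to follow lico; leftover columns keep
# their original relative order at the end (an assumption of this sketch).
import pandas as pd

def pd_sorted_col(dft: pd.DataFrame, lico: list) -> pd.DataFrame:
    ordered = [c for c in lico if c in dft.columns]
    rest = [c for c in dft.columns if c not in ordered]
    return dft[ordered + rest]

df = pd.DataFrame({"b": [1], "a": [2], "c": [3]})
print(list(pd_sorted_col(df, ["a", "b"]).columns))  # ['a', 'b', 'c']
```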
io_excel.write_mfa_problem_output_to_excel(output_file_name: str, mfa_problem_input: dict, mfa_problem_output: dict)
io_excel.write_proxy_output_in_excel(input_file: str, headers: list, sheet_name: str, proxy_output)
io_excel.xl_convert_tablist(df_file: str, tab_list: list)
    Converts each tab of a workbook into an mfa_problem_input dictionary entry.
    - df_file: dataframe with all sheets of the input file
    - tab_list: input file worksheet list
io_excel.xl_get_sheet_details(file_path, only_sheets=True)
    Found at: https://stackoverflow.com/questions/17977540/pandas-looking-up-the-list-of-sheets-in-an-excel-file
    Fastest way to get information from an Excel file without needing to open it.
    Benchmark (on a 6 MB xlsx file with 4 sheets):
    - pandas/xlrd: 12 seconds
    - openpyxl: 24 seconds
    - this method: 0.4 seconds
    Modifications made to the original:
    - uses tempfile.mkdtemp instead of settings.MEDIA_ROOT
    - adapted to extract only sheet names (when only_sheets=True)
    Requirements:
    - install xmltodict and add "import xmltodict"
    - add "import tempfile", "import shutil" and "from zipfile import ZipFile"
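The trick behind this speed is that an .xlsx file is a zip archive whose sheet names live in xl/workbook.xml, so they can be listed without loading any cell data. A stdlib-only sketch (using xml.etree instead of the xmltodict dependency the original uses):

```python
# Read sheet names straight out of an .xlsx's zip structure.
# Stdlib-only variant of the technique; the real routine uses xmltodict.
import io
import xml.etree.ElementTree as ET
from zipfile import ZipFile

NS = {"m": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"}

def xl_sheet_names(xlsx_bytes: bytes) -> list:
    with ZipFile(io.BytesIO(xlsx_bytes)) as zf:
        root = ET.fromstring(zf.read("xl/workbook.xml"))
    return [s.attrib["name"] for s in root.findall("m:sheets/m:sheet", NS)]
```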
io_excel.xl_import_param(df_fi: dict, stab: str, mfa_problem_input: dict)
    Imports information from the workbook tab called "param" if it exists.
    - df_fi: dataframe with all sheets of the input file
    - stab: name of the workbook tab to work on
    - mfa_problem_input: dictionary with information to convert to JSON format
io_excel.xl_import_tab(df_fi: dict, stab: str, def_val: list, js_tab: str, mfa_problem_input: dict)
    Imports information from the workbook tab called stab if it exists.
    - df_fi: dataframe with all sheets of the input file
    - stab: name of the workbook sheet to work on
    - def_val: dictionary of default values (default column values of the Excel sheet)
    - js_tab: name of the main JSON dictionary key for this entry
    - mfa_problem_input: dictionary with information to convert to JSON format
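The default-value mechanism can be sketched as below, with rows modeled as plain dicts for simplicity (the real routine works on pandas sheets; names and behaviour here are assumptions):

```python
# Sketch: import one optional sheet into mfa_problem_input under key js_tab,
# filling missing cells from per-column defaults. Rows are plain dicts here.
def xl_import_tab(sheets: dict, stab: str, def_val: dict, js_tab: str,
                  mfa_problem_input: dict) -> None:
    if stab not in sheets:  # tab is optional: nothing to import
        return
    rows = []
    for row in sheets[stab]:
        # keep only known columns, substituting the default where absent
        rows.append({col: row.get(col, default) for col, default in def_val.items()})
    mfa_problem_input[js_tab] = rows
```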
io_excel.xl_import_terbase(df_fi: dict, stab: str, mfa_problem_input: dict)
    Imports information from the workbook tab called "ter_base" if it exists.
    - df_fi: dataframe with all sheets of the input file
    - stab: name of the workbook tab to work on
    - mfa_problem_input: dictionary with information to convert to JSON format
mfa_problem.mfa_problem.mfa_problem_check_io module
This module checks that an Excel input file has no inconsistency in its supply/use tables.
The module takes 2 or 3 arguments:
- "--input_file": name of the (Excel) input file to check (usually data/tuto_fr.xlsx).
- "--tab_list": list of sheets for products, sectors and existing fluxes (typically ["Dim products", "Dim sectors", "Existing fluxes"]).
- "--merge_with": second Excel input file; the two ter1 tables will be merged into a new one. Tab names are assumed to be the same as in the first file.
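The command line described above could be parsed with argparse roughly as follows (flag behaviour such as nargs is an assumption; only the flag names come from the source):

```python
# Sketch of the check_io command-line interface using argparse.
import argparse

def build_parser() -> argparse.ArgumentParser:
    parser = argparse.ArgumentParser(description="Check supply/use consistency")
    parser.add_argument("--input_file", required=True,
                        help="Excel input file to check, e.g. data/tuto_fr.xlsx")
    parser.add_argument("--tab_list", nargs="+",
                        help="sheets for products, sectors and existing fluxes")
    parser.add_argument("--merge_with",
                        help="second Excel file whose ter1 will be merged")
    return parser

args = build_parser().parse_args(
    ["--input_file", "data/tuto_fr.xlsx",
     "--tab_list", "Dim products", "Dim sectors", "Existing fluxes"])
```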
mfa_problem.mfa_problem.mfa_problem_main module
mfa_problem_main.optimisation(model_name: str, js_dict: dict, uncertainty_analysis: bool, nb_realisations: int, downscale: bool, upper_level_index2name: dict, upper_level_solved_vector: list, upper_level_classification: list, montecarlo_upper_level: dict, main_problem: bool = True, record_simulations: bool = False, performance: bool = False)
mfa_problem.mfa_problem.mfa_problem_solver module
mfa_problem_solver.Cvx_minimize(Aconstraint: scipy.sparse.csc.csc_matrix, AIneq: scipy.sparse.csc.csc_matrix, ter_vectors: numpy.ndarray, nb_determinated: int)
mfa_problem_solver.classify_with_matrix_reduction(AConstraintReordered: scipy.sparse.csc.csc_matrix, nb_measured: int)
    Determines which variables are redundant, measured, determinable or free (undetermined). The free variables must be identified before undertaking the Monte Carlo simulations.
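A toy illustration of why this classification matters: with a constraint matrix A of shape m x n, rank r < n leaves n - r degrees of freedom, and those free variables are exactly what the Monte Carlo step must sample. The real routine works on the reduced row-echelon form of a sparse matrix; this is only a dense numpy sketch.

```python
# Count the degrees of freedom left by a redundant constraint system.
import numpy as np

A = np.array([[1.0, 1.0, 0.0, 0.0],   # x0 + x1 = b0
              [0.0, 0.0, 1.0, 1.0],   # x2 + x3 = b1
              [1.0, 1.0, 1.0, 1.0]])  # redundant: sum of the two rows above
rank = np.linalg.matrix_rank(A)
n_free = A.shape[1] - rank  # variables left undetermined by the constraints
print(rank, n_free)  # 2 2
```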
mfa_problem_solver.compute_initial_value_pp_variables(full_ter_vectors: numpy.ndarray, solved_vector_reordered: numpy.ndarray, AEqReorderedRef: scipy.sparse.csr.csr_matrix, AIneqReordered: scipy.sparse.csr.csr_matrix, post_process_reordered: numpy.ndarray, nb_measured: int, intervals_reordered: numpy.ndarray)
mfa_problem_solver.compute_intervals_of_free_variables(ter_vectors: numpy.ndarray, solved_vector: numpy.ndarray, AEqReorderedRef: scipy.sparse.csr.csr_matrix, AIneqReordered: scipy.sparse.csr.csr_matrix, already_computed_vars: numpy.ndarray, nb_measured: int)
mfa_problem_solver.montecarlo(rank_unmeasured: int, AEqReorderedRef: scipy.sparse.csc.csc_matrix, AEqReorderedRefReduced: scipy.sparse.csc.csc_matrix, AIneqReordered: scipy.sparse.csc.csc_matrix, AIneqReorderedReduced: scipy.sparse.csc.csc_matrix, nb_measured: int, ter_vectors_reordered: numpy.ndarray, determinable_col2row: dict, reduced_determinable_col2row: dict, reordered_vars_type: list, post_process_reordered: numpy.ndarray, mask_is_measured: numpy.ndarray, nb_realizations: int, sigmas_floor: float, downscale: bool, montecarlo_upperlevel_results: dict)
mfa_problem_solver.resolve_mfa_problem(rank_unmeasured: int, AEqReorderedRef: scipy.sparse.csc.csc_matrix, AEqReorderedRefReduced: scipy.sparse.csc.csc_matrix, AIneqReordered: scipy.sparse.csc.csc_matrix, AIneqReorderedReduced: scipy.sparse.csc.csc_matrix, nb_measured: int, ter_vectors_reordered: numpy.ndarray, determinable_col2row: dict, reduced_determinable_col2row: dict, reordered_vars_type: list, post_process_reordered: numpy.ndarray)