Build NeurEco Discrete Dynamic model with the Python API#

To build a NeurEco Discrete Dynamic model with the Python API, import the NeurEcoDynamic library:

from NeurEco import NeurEcoDynamic as Dynamic

Initialize a NeurEco object to handle the Discrete Dynamic problem:

model = Dynamic.DiscreteDynamic()

Call the build method with the parameters set for the problem under consideration:

model.build(train_time_list, train_exc_list, train_out_list,
                 valid_time_list=None,
                 valid_exc_list=None,
                 valid_out_list=None,
                 test_time_list=None,
                 test_exc_list=None,
                 test_out_list=None,
                 exc_columns_names=None,
                 out_columns_names=None,
                 write_model_to="",
                 checkpoint_address="",
                 valid_percentage=None,
                 inputs_scaling="l2",
                 inputs_shifting="mean",
                 outputs_scaling="l2",
                 outputs_shifting="mean",
                 inputs_normalize_per_feature=True,
                 outputs_normalize_per_feature=True,
                 steady_state_exc=None,
                 steady_state_out=None,
                 min_hidden_states=1,
                 max_hidden_states=0)
train_time_list

list of 1-D NumPy arrays, required: list containing the training time arrays (1-D)

train_exc_list

list of n-D NumPy arrays, required: list containing the training excitation arrays

train_out_list

list of n-D NumPy arrays, required: list containing the training output arrays

valid_time_list

list of 1-D NumPy arrays, optional: list containing the validation time arrays (1-D)

valid_exc_list

list of n-D NumPy arrays, optional: list containing the validation excitation arrays

valid_out_list

list of n-D NumPy arrays, optional: list containing the validation output arrays

test_time_list

list of 1-D NumPy arrays, optional: list containing the testing time arrays (1-D)

test_exc_list

list of n-D NumPy arrays, optional: list containing the testing excitation arrays

test_out_list

list of n-D NumPy arrays, optional: list containing the testing output arrays

exc_columns_names

list of strings, optional: list containing the names of the excitation variables

out_columns_names

list of strings, optional: list containing the names of the output variables

write_model_to

string: path on the disk where to save the model

checkpoint_address

string: path on disk where to save the checkpoint (this file will contain the intermediate models created during the build; they can be used if the build takes too long, or when resume=True)

inputs_shifting

string, optional, default = ‘mean’. Possible values: ‘mean’ or ‘none’. See Data normalization for Discrete Dynamic for more details

inputs_scaling

string, optional, default = ‘l2’. Possible values: ‘l2’, ‘none’. See Data normalization for Discrete Dynamic for more details

outputs_shifting

string, optional, default = ‘mean’. Possible values: ‘mean’ or ‘none’. See Data normalization for Discrete Dynamic for more details

outputs_scaling

string, optional, default = ‘l2’. Possible values: ‘l2’, ‘none’. See Data normalization for Discrete Dynamic for more details

inputs_normalize_per_feature

bool, optional, default = True. If True, normalizes each input feature independently from others. See Data normalization for Discrete Dynamic for more details

outputs_normalize_per_feature

bool, optional, default = True. If True, normalizes each output feature independently from others. See Data normalization for Discrete Dynamic for more details

valid_percentage

validation percentage used when no validation data is given; this portion is taken from the end of the excitation data

steady_state_exc

1-D NumPy array, optional: forces the built model to be stable when fed with this excitation value

steady_state_out

1-D NumPy array, optional: stable output value associated with the excitation value steady_state_exc

min_hidden_states

int, optional, default = 1: starting number of hidden states, used to accelerate the identification of the best topology

max_hidden_states

int, optional, default = 0: maximum number of hidden states, used to accelerate the identification of the best topology; if 0, no maximum is set

return

build_status: int: 0 if the build is successful, non-zero otherwise
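
A minimal, self-contained sketch of a complete build is shown below. The data arrays, shapes, and file paths are purely illustrative assumptions; only DiscreteDynamic and build are the documented API calls:

import numpy as np
from NeurEco import NeurEcoDynamic as Dynamic

# Hypothetical training data: two trajectories with 100 and 80 time steps,
# 3 excitation channels and 2 output channels each.
train_time_list = [np.linspace(0.0, 10.0, 100), np.linspace(0.0, 8.0, 80)]
train_exc_list = [np.random.rand(100, 3), np.random.rand(80, 3)]
train_out_list = [np.random.rand(100, 2), np.random.rand(80, 2)]

model = Dynamic.DiscreteDynamic()
build_status = model.build(train_time_list, train_exc_list, train_out_list,
                           write_model_to="discrete_dynamic_model",                 # illustrative path
                           checkpoint_address="discrete_dynamic_model.checkpoint")  # illustrative path
print("build status:", build_status)  # 0 means the build succeeded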

Data normalization for Discrete Dynamic#

Set inputs_normalize_per_feature (or outputs_normalize_per_feature) to True when fitting features of different natures (temperature and pressure, for example) that should be given equivalent importance.

Set inputs_normalize_per_feature (or outputs_normalize_per_feature) to False when fitting features of the same nature (a set of temperatures, for example) or a field.

If none of the provided normalization options suits the problem, normalize the data beforehand and deactivate NeurEco's normalization by setting the scaling and shifting options to "none".
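
For example, assuming the arrays in train_exc_list and train_out_list have already been normalized externally, the built-in normalization can be switched off as follows (a sketch, not a complete parameter list):

model.build(train_time_list, train_exc_list, train_out_list,
            inputs_shifting="none", inputs_scaling="none",
            outputs_shifting="none", outputs_scaling="none")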

A normalization operation for NeurEco is a combination of a \(shift\) and a \(scale\), so that:

\[x_{normalized} = \frac{x-shift}{scale}\]

Allowed shift methods for NeurEco and their corresponding shifted values are listed in the table below:

NeurEco Discrete Dynamic shifting methods#

Name | shift value
none | \(0\)
mean | \(mean(x)\)

Allowed scale methods for NeurEco Discrete Dynamic and their corresponding scaled values are listed in the table below:

NeurEco Discrete Dynamic scaling methods#

Name | scale value
none | \(1\)
l2 | \(\frac{\left\Vert x\right\Vert }{\sqrt{size \_ of \_ x}}\)
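
The following NumPy sketch (not part of the NeurEco API) illustrates how the "mean" shift and "l2" scale from the tables above combine per feature, as when inputs_normalize_per_feature=True; the exact order of operations inside NeurEco may differ:

import numpy as np

x = np.random.rand(100, 3)                                  # hypothetical data: 100 samples, 3 features
shift = x.mean(axis=0)                                      # "mean" shifting, one value per feature
scale = np.linalg.norm(x, axis=0) / np.sqrt(x.shape[0])     # "l2" scaling, one value per feature
x_normalized = (x - shift) / scale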

Control the size of the NeurEco Discrete Dynamic model during Build#

At any given moment in time, the state of the system is represented by a vector that stores the so-called hidden states of the system. In the state-space representation, the hidden states are the state variables.

NeurEco Discrete Dynamic allows imposing limits on the number of hidden states. When these limits rely on additional knowledge about the system, they can facilitate model training and reduce the build time.

Imposing a maximum number of hidden states, by setting the parameter max_hidden_states in build, can decrease the size of the constructed model. This is useful when trading some accuracy for a more embeddable model.
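
For example, assuming prior knowledge suggests that between 2 and 6 state variables are enough to describe the system, the topology search can be restricted as follows (the values are illustrative):

model.build(train_time_list, train_exc_list, train_out_list,
            min_hidden_states=2,
            max_hidden_states=6)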

See Advanced build tutorial for an example of usage.