Optimize neuron

The opt_neuron class

CompNeuroPy provides the opt_neuron class, which can be used to define an optimization of an ANNarchy neuron model (tuning its parameters). You can either optimize your neuron model to reproduce some target data or to reproduce the dynamics of a different neuron model (for example, to reduce a more complex model). In both cases, you have to define the experiment which generates the data of interest with your neuron model.

Arguments of the opt_neuron class:

  • experiment: CompNeuroPy Experiment class || the experiment class has to contain the run() function which defines the simulations and recordings
  • get_loss_function: function || function which takes two arguments (described below) and calculates the loss
  • variables_bounds: dictionary || keys = parameter names, values = either a list of length 2 (lower and upper bound) or a single value (constant parameter); see the example dictionary after this list
  • neuron_model: ANNarchy Neuron object || the neuron model used during optimization
  • results_soll: dictionary || optional, default=None || the target data which can be used by the get_loss_function (second argument). Either provide results_soll or target_neuron_model, not both!
  • target_neuron_model: ANNarchy Neuron object || optional, default=None || the neuron model which produces the target data by running the experiment. Either provide results_soll or target_neuron_model, not both!
  • time_step: float || optional, default=1 || the time step for the simulation in ms
  • compile_folder_name: string || optional, default="annarchy_opt_neuron" || the name of the annarchy compilation folder (will be stored under ./annarchy_folders)
  • num_rep_loss: int || optional, default=1 || only relevant for noisy simulations/models. Defines how often the model should be run to calculate a single loss value (the resulting losses are averaged). Do not confuse this with the number of runs for the optimization (max_evals), which is specified in the run() function. The total number of executions of the experiment is "number of runs times num_rep_loss".
  • method: string || optional, default="hyperopt" || The tool/package used for optimization. Either "hyperopt" or "sbi".
  • prior: distribution || optional, default=None || If you use sbi: the prior distribution used by sbi. If none is given, uniform distributions between the variable bounds are assumed.
  • fv_space: list || optional, default=None || If you use hyperopt: the search space for hyperopt. If none is given, uniform distributions between the variable bounds are assumed.
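
For illustration, a variables_bounds dictionary could look like this (a minimal sketch; the parameter names a and b are placeholders and must match parameters of your neuron_model):

variables_bounds = {
    "a": [0.0, 10.0],  # free parameter, optimized between lower and upper bound
    "b": 2.0,          # constant parameter, fixed during the optimization
}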

An example:

results_target = get_results_target()

opt = opt_neuron(
    experiment=my_exp,
    get_loss_function=get_loss,
    variables_bounds=variables_bounds,
    results_soll=results_target[0],
    time_step=results_target[1],
    compile_folder_name="annarchy_opt_neuron_example",
    neuron_model=my_neuron,
    method="hyperopt",
)

Complete examples are available in the examples/opt_neuron/ folder of the source code.

Run the optimization

To run the optimization, simply call the run() function of the opt_neuron object (a usage sketch follows the argument list below).

Arguments of the run() function:

  • max_evals: int || number of runs the optimization method performs (each run samples one parameter set and computes its loss)
  • results_file_name: string || optional, default="best.npy" || name of the file which contains the optimization results (saved in ./dataRaw/parameter_fit/). The file contains the optimized and target results, the obtained parameters, the loss, and the standard deviation of the loss (relevant for noisy models with multiple executions of the experiment per loss calculation)
  • sbi_plot_file: string || optional, default="posterior.svg" || If you use sbi: the name of the figure which shows the posterior (saved in ./).
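
A minimal usage sketch (the max_evals value is an arbitrary assumption, and loading the results file afterwards assumes it stores a pickled dictionary, which is also an assumption):

opt.run(max_evals=1000, results_file_name="best.npy")

### afterwards, the saved results can be loaded with numpy
import numpy as np

fit_results = np.load("./dataRaw/parameter_fit/best.npy", allow_pickle=True).item()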

Define the experiment

You have to define a class (with the CompNeuroPy class Experiment as parent class) containing a run() function. In the run() function, the simulations and recordings are performed. The run() function has to take a single argument: the name (string) of a population of the model used during the optimization. The opt_neuron class automatically generates a model consisting of a single neuron (based on the provided neuron_model). With this population name string, you can access the population in the run() function.

At the end, you have to return the return value of the self.results() function (provided by the parent class Experiment). This function returns an object containing the recordings, the monDict of the used monitors, and additional data.

If you use a Monitors object for the recordings, store it in self.mon. Store additional/optional data (which you want to access after the optimization) in self.data. If you want to reset the model and/or the monitors, use the self.reset() function provided by the parent class Experiment; each reset splits the recordings into separate chunks (the loss function example below accesses two such chunks). You can also define further class functions which are used by the run() function.

An example:

class my_exp(Experiment):
    """
    parent class Experiment provides the variables:
        self.mon = self.cnp.Monitors() --> a CompNeuroPy Monitors object to do recordings
        self.data = {}                 --> a dictionary with any optional data
    and the functions:
        self.reset()   --> resets the model and monitors
        self.results() --> returns a results object (with recordings and optional data from self.data)
    """

    ### we have to define the run() function in which the simulations and recordings are done
    def run(self, population_name):
        """
        do the simulations and recordings
        """

        ### define some simulations
        my_sim = generate_simulation(...)

        ### define recordings
        ### store them in self.mon
        self.mon = Monitors(...)

        ### run simulations/recordings
        self.mon.start()
        my_sim.run()
        ### if you want to reset the model, you have to use the object's reset() function
        ### it's the same as the ANNarchy reset (same arguments) and will also reset the monitors
        self.reset()
        ### SIMULATION END

        ### optional: store anything you want in the data dict, for example, information about
        ### the simulations
        self.data["sim"] = my_sim.simulation_info()
        self.data["population_name"] = population_name
        self.data["time_step"] = dt()
        self.data["recording_times"] = self.mon.get_recording_times()

        ### return results, use the object's self.results() function which automatically
        ### returns an object with "recordings", "monDict", and "data"
        return self.results()

The get_loss_function

The get_loss_function must have two arguments. When this function is called during the optimization, the first argument is always the results object returned by the experiment, i.e., the results of the neuron model you want to optimize. The second argument depends on whether you have specified results_soll, i.e., data to be reproduced by the neuron_model, or a target_neuron_model whose results are to be reproduced by the neuron_model. Thus, the second argument is either the results_soll provided to the opt_neuron class or another results object (returned by the experiment), generated with the target_neuron_model.

An example:

In this example, we assume that results_soll was provided during the initialization of the opt_neuron class (no target_neuron_model used).

import numpy as np

def get_loss(results_ist, results_soll):
    """
    results_ist: object
        the results object generated by the experiment
        with the neuron model which is optimized
    results_soll: any
        the target data provided during the
        initialization of opt_neuron
    """

    ### get the recordings and other important things from the
    ### results_ist (results generated during the optimization
    ### by the experiment)
    rec_ist = results_ist.recordings
    pop_ist = results_ist.data["population_name"]
    neuron = 0

    ### get the important data for calculating the loss from
    ### the results_soll (target data directly provided to
    ### opt_neuron), here: simply array with target values
    ### for two trials
    v_target_0 = results_soll[0]
    v_target_1 = results_soll[1]

    ### get the important data for calculating the loss from the
    ### recordings, here: the recorded variable r from two chunks
    v_ist_0 = rec_ist[0][pop_ist + ";r"][:, neuron]
    v_ist_1 = rec_ist[1][pop_ist + ";r"][:, neuron]

    ### calculate the loss
    rmse1 = np.sqrt(np.mean((v_target_0 - v_ist_0) ** 2))
    rmse2 = np.sqrt(np.mean((v_target_1 - v_ist_1) ** 2))

    ### return the loss; you can return a single value or a
    ### list of values (which will be summed)
    return [rmse1, rmse2]