From 4b62934b256b5128275ac39fded12b160a7c733d Mon Sep 17 00:00:00 2001
From: neworderofjamie
Date: Tue, 26 Oct 2021 09:58:00 +0100
Subject: [PATCH 01/12] incremented version

---
 version.txt | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/version.txt b/version.txt
index 4404a17bae..6016e8addc 100644
--- a/version.txt
+++ b/version.txt
@@ -1 +1 @@
-4.5.1
+4.6.0

From 74216b355fe80e2680b85d223334c4e27b6bbc8c Mon Sep 17 00:00:00 2001
From: neworderofjamie
Date: Tue, 26 Oct 2021 09:58:09 +0100
Subject: [PATCH 02/12] small section on batching

---
 doxygen/10_UserManual.dox | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox
index 9255159d34..d199937ac4 100644
--- a/doxygen/10_UserManual.dox
+++ b/doxygen/10_UserManual.dox
@@ -205,6 +205,18 @@ where the arguments are
\arg \add_cpp_python_text{`CustomUpdateModel::VarValues varInitialisers`: The, `var_space`: Dictionary containing the} initial values or initialisation snippets for the custom update model's state variables (see \ref sectVariableInitialisation)
\arg \add_cpp_python_text{`CustomUpdateModel::VarValues varReferences`: The, `var_ref_space`: Dictionary containing the} variable references for the custom update model (see \ref sectVariableReferences)
+\section batching Batching
+When running models on a GPU, smaller models may not fully occupy the device.
+In some scenarios, such as gradient-based training and parameter sweeping, this can be overcome by running multiple copies of the same model at the same time (known as batching in machine learning).
+Batching can be enabled on a GeNN model with:
+\add_toggle_code_cpp
+model.setBatchSize(512);
+\end_toggle_code
+\add_toggle_code_python
+model.batch_size = 512
+\end_toggle_code
+Model parameters and sparse connectivity are shared across all batches.
+Read-write state variables are duplicated for each batch and, by default, read-only state variables are shared across all batches (see section \ref sectNeuronModels for more details).
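To make this concrete, a small batched model could be set up from Python roughly as follows (a sketch only; the population, parameter values and model name are purely illustrative):
\code
from pygenn import GeNNModel

# Build a model that will be simulated as 512 independent copies on the GPU
model = GeNNModel("float", "batched_example")
model.dT = 0.1
model.batch_size = 512  # batch size has to be chosen before the model is built

# Parameters of this Izhikevich population are shared across all batches while
# its read-write state variables ("V" and "U") are duplicated once per batch
izk_params = {"a": 0.02, "b": 0.2, "c": -65.0, "d": 8.0}
izk_init = {"V": -65.0, "U": -13.0}
pop = model.add_neuron_population("Pop", 100, "Izhikevich", izk_params, izk_init)

model.build()
model.load()

# Each step_time call advances all 512 copies of the model by one timestep
for _ in range(1000):
    model.step_time()
\endcode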
-----
\link UserManual Previous\endlink | \link sectDefiningNetwork Top\endlink | \link sectNeuronModels Next\endlink

From 16eb66a78b76afbbc1f1ff02968d5a6e58906299 Mon Sep 17 00:00:00 2001
From: neworderofjamie
Date: Tue, 26 Oct 2021 10:20:38 +0100
Subject: [PATCH 03/12] subsection on reductions in custom update manual

---
 doxygen/10_UserManual.dox | 42 ++++++++++++++++++++++++++++++++++++---
 1 file changed, 39 insertions(+), 3 deletions(-)

diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox
index d199937ac4..e898b886d1 100644
--- a/doxygen/10_UserManual.dox
+++ b/doxygen/10_UserManual.dox
@@ -741,10 +741,10 @@ For convenience the methods this class should implement can be implemented using
For example, using these \add_cpp_python_text{macros,keyword arguments}, we can define a custom update which will set a referenced variable to the value of a custom update model state variable:
\add_toggle_code_cpp
-class Reset : public CurrentSourceModels::Base
+class Reset : public CustomUpdateModels::Base
{
public:
-    DECLARE_MODEL(Reset, 0, 1, 1);
+    DECLARE_CUSTOM_UPDATE_MODEL(Reset, 0, 1, 1);

    SET_UPDATE_CODE("$(r) = $(v);");

@@ -761,6 +761,42 @@ reset_model = genn_model.create_custom_custom_update_class(
    update_code="$(r) = $(v);")
\end_toggle_code

+\subsection custom_update_reduction Batch reduction
+As well as the standard variable access modes described in \ref subsect11, custom updates support variables with several 'reduction' access modes:
+- \add_cpp_python_text{VarAccess::REDUCE_BATCH_SUM, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_BATCH_SUM``}
+- \add_cpp_python_text{VarAccess::REDUCE_BATCH_MAX, ``pygenn.genn_wrapper.Models.VarAccess_REDUCE_BATCH_MAX``}
+
+These access modes allow values read from variables duplicated across batches to be reduced into variables that are shared across batches.
+For example, in a gradient-based learning scenario, a model like this could be used to sum gradients from across all batches so they can be used as the input to a learning rule operating on shared synaptic weights:
+
+\add_toggle_code_cpp
+class GradientBatchReduce : public CustomUpdateModels::Base
+{
+public:
+    DECLARE_CUSTOM_UPDATE_MODEL(GradientBatchReduce, 0, 1, 1);
+
+    SET_UPDATE_CODE(
+        "$(reducedGradient) = $(gradient);\n"
+        "$(gradient) = 0;\n");
+
+    SET_VARS({{"reducedGradient", "scalar", VarAccess::REDUCE_BATCH_SUM}});
+    SET_VAR_REFS({{"gradient", "scalar"}});
+};
+\end_toggle_code
+\add_toggle_code_python
+gradient_batch_reduce_model = genn_model.create_custom_custom_update_class(
+    "gradient_batch_reduce",
+    var_name_types=[("reducedGradient", "scalar", VarAccess_REDUCE_BATCH_SUM)],
+    var_refs=[("gradient", "scalar")],
+    update_code="""
+    $(reducedGradient) = $(gradient);
+    $(gradient) = 0;
+    """)
+\end_toggle_code
+\note
+Reading from variables with a reduction access mode is undefined behaviour.
+
+
-----
\link sectCurrentSourceModels Previous\endlink | \link UserManual Top\endlink | \link subsect34 Next\endlink
*/

@@ -821,7 +857,7 @@ In Python, these matrix types can be selected by their unqualified name e.g. "DE
/*! \page sectVariableInitialisation Variable initialisation

-Neuron, weight update and postsynaptic models all have state variables which GeNN can automatically initialise.
+Neuron, weight update, postsynaptic and custom update models all have state variables which GeNN can automatically initialise.
Previously we have shown variables being initialised to constant values such as: \add_toggle_code_cpp From 96f95a86d7f2f20fc44003ffdb1a030db6cc20c0 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 10:29:20 +0100 Subject: [PATCH 04/12] small section on READ_ONLY_DUPLICATE variable mode --- doxygen/10_UserManual.dox | 2 ++ 1 file changed, 2 insertions(+) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index e898b886d1..4d5941522c 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -264,6 +264,8 @@ For convenience, \add_cpp_python_text{the methods this class should implement ca of the neuron state variables. The type string "scalar" can be used for variables which should be implemented using the precision set globally for the model \add_cpp_python_text{with ModelSpec::setPrecision, from ``pygenn.genn_model.GeNNModel.__init__``}. The variables defined here as `NAME` can then be used in the syntax \$(NAME) in the code string. If the access mode is set to \add_cpp_python_text{``VarAccess::READ_ONLY``,``VarAccess_READ_ONLY``}, GeNN applies additional optimisations and models should not write to it. + By default such read-only variables are shared across all batches (see section \ref batching). + If, instead, a read-only variable should be duplicated across batches, its access mode should be set to \add_cpp_python_text{``VarAccess::READ_ONLY_DUPLICATE``,``VarAccess_READ_ONLY_DUPLICATE``}. - \add_cpp_python_text{SET_NEEDS_AUTO_REFRACTORY(), `is_auto_refractory_required`} defines whether the neuron should include an automatic refractory period to prevent it emitting spikes in successive timesteps. For example, we can define a leaky integrator \f$\tau\frac{dV}{dt}= -V + I_{{\rm syn}}\f$ solved using Euler's method: From 055742a84e654156403a00104966bb6ac30b021f Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 10:35:50 +0100 Subject: [PATCH 05/12] update to spike recording documentation --- doxygen/15_UserGuide.dox | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/doxygen/15_UserGuide.dox b/doxygen/15_UserGuide.dox index c2c02b2e67..1139ef460d 100644 --- a/doxygen/15_UserGuide.dox +++ b/doxygen/15_UserGuide.dox @@ -60,9 +60,13 @@ Therefore, in order to maximise performance, we recommend you do not use automat - pygenn.genn_groups.Group.push_state_to_device - pygenn.genn_groups.Group.push_var_to_device - pygenn.genn_groups.NeuronGroup.pull_spikes_from_device +- pygenn.genn_groups.NeuronGroup.pull_spike_events_from_device - pygenn.genn_groups.NeuronGroup.pull_current_spikes_from_device +- pygenn.genn_groups.NeuronGroup.pull_current_spike_events_from_device - pygenn.genn_groups.NeuronGroup.push_spikes_to_device +- pygenn.genn_groups.NeuronGroup.push_spike_events_to_device - pygenn.genn_groups.NeuronGroup.push_current_spikes_to_device +- pygenn.genn_groups.NeuronGroup.push_current_spike_events_to_device - pygenn.genn_groups.SynapseGroup.pull_connectivity_from_device - pygenn.genn_groups.SynapseGroup.push_connectivity_to_device @@ -187,10 +191,11 @@ In addition to these variables, neuron variables can be referred to in the synap \section spikeRecording Spike Recording Especially in models simulated with small timesteps, very few spikes may be emitted every timestep, making calling \add_cpp_python_text{``pullCurrentSpikesFromDevice()`` or ``pullSpikesFromDevice()``, pygenn.genn_groups.NeuronGroup.pull_current_spikes_from_device} every timestep very inefficient. 
Instead, the spike recording system allows spikes and spike-like events emitted over a number of timesteps to be collected in GPU memory before transferring to the host. -Spike recording can be enabled on chosen neuron groups with the \add_cpp_python_text{``NeuronGroup::setSpikeRecordingEnabled`` and ``NeuronGroup::setSpikeEventRecordingEnabled`` methods,pygenn.genn_groups.NeuronGroup.spike_recording_enabled property}. +Spike recording can be enabled on chosen neuron groups with the \add_cpp_python_text{``NeuronGroup::setSpikeRecordingEnabled`` and ``NeuronGroup::setSpikeEventRecordingEnabled`` methods,pygenn.genn_groups.NeuronGroup.spike_recording_enabled and pygenn.genn_groups.NeuronGroup.spike_event_recording_enabled properties}. Remaining GPU memory can then be allocated at runtime for spike recording by\add_cpp_python_text{calling ``allocateRecordingBuffers()`` from user code,using the `num_recording_timesteps` keyword argument to pygenn.genn_model.GeNNModel.load}. The data structures can then be copied from the GPU to the host using the \add_cpp_python_text{``pullRecordingBuffersFromDevice()`` function,pygenn.genn_model.GeNNModel.pull_recording_buffers_from_device method} and the spikes emitted by a population can be accessed \add_cpp_python_text{in bitmask form via the ``recordSpk`` variable,via the pygenn.genn_groups.NeuronGroup.spike_recording_data property} -\add_cpp_text{Similarly, spike-like events emitted by a population can be accessed via the ``recordSpkEvent`` variable. To make decoding the bitmask data structure easier\, the ``::writeBinarySpikeRecording`` and ``::writeTextSpikeRecording`` helper functions can be used by including spikeRecorder.h in the user code.} +Similarly, spike-like events emitted by a population can be accessed via the \add_cpp_python_text{``recordSpkEvent`` variable,pygenn.genn_groups.NeuronGroup.spike_event_recording_data property}. +\add_cpp_text{To make decoding the bitmask data structure easier, the ``::writeBinarySpikeRecording`` and ``::writeTextSpikeRecording`` helper functions can be used by including spikeRecorder.h in the user code.} \section Debugging Debugging suggestions \add_toggle_cpp From 6293e2d7ea20ff4d71567b7c14472f8675e7f7c3 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 11:17:38 +0100 Subject: [PATCH 06/12] hide irrelevant information in python documentation and add stuff about target variables --- doxygen/10_UserManual.dox | 9 ++++++++- 1 file changed, 8 insertions(+), 1 deletion(-) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 4d5941522c..7e6d536a19 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -155,13 +155,20 @@ where the arguments are \arg \add_cpp_python_text{`PostsynapticModel::VarValues postsynapticVarInitialisers`: The, `ps_var_space`: Dictionary containing the} initial values or initialisation snippets for variables for the postsynaptic model's state variables (see \ref sectVariableInitialisation) \arg \add_cpp_python_text{`InitSparseConnectivitySnippet::Init connectivityInitialiser`,`connectivity_initialiser`}: Optional argument, specifying the initialisation snippet for synapse population's sparse connectivity (see \ref sectSparseConnectivityInitialisation). 
+\add_toggle_cpp The ModelSpec::addSynapsePopulation() function returns a pointer to the newly created SynapseGroup object which can be further configured, namely with: - SynapseGroup::setMaxConnections() and SynapseGroup::setMaxSourceConnections() to configure the maximum number of rows and columns respectively allowed in the synaptic matrix - this can improve performance and reduce memory usage when using SynapseMatrixConnectivity::SPARSE connectivity (see \ref subsect34). \note When using a sparse connectivity initialisation snippet, these values are set automatically. - SynapseGroup::setMaxDendriticDelayTimesteps() sets the maximum dendritic delay (in terms of the simulation time step `DT`) allowed for synapses in this population. No values larger than this should be passed to the delay parameter of the `addToDenDelay` function in user code (see \ref sect34). -- SynapseGroup::setSpanType() sets how incoming spike processing is parallelised for this synapse group. The default SynapseGroup::SpanType::POSTSYNAPTIC is nearly always the best option, but SynapseGroup::SpanType::PRESYNAPTIC may perform better when there are large numbers of spikes every timestep or very few postsynaptic neurons. +- SynapseGroup::setSpanType() sets how incoming spike processing is parallelised for this synapse group. The default SynapseGroup::SpanType::POSTSYNAPTIC is nearly always the best option, but SynapseGroup::SpanType::PRESYNAPTIC may perform better when there are large numbers of spikes every timestep or very few postsynaptic neurons.} +- SynapseGroup::setPSTargetVar() sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). +\end_toggle +\add_toggle_python +The pygenn.GeNNModel.add_synapse_population function returns a pygenn.genn_groups.SynapseGroup object which can be further configured, namely with: +- pygenn.genn_groups.SynapseGroup.ps_target_var sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). +\end_toggle \note If the synapse matrix uses one of the "GLOBALG" types then the global From 95ba494a30548bbb5ea6b8f876e21778d9ac0b11 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 11:34:11 +0100 Subject: [PATCH 07/12] variable references to other custom updates --- doxygen/10_UserManual.dox | 9 +++++++-- 1 file changed, 7 insertions(+), 2 deletions(-) diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index 7e6d536a19..a29d577616 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -989,13 +989,14 @@ These modes can be set as a model default using ``ModelSpec::setDefaultVarLocati //---------------------------------------------------------------------------- /*! \page sectVariableReferences Variable references -As well as state variables, custom updates have variable references which are used to reference variables belonging to other neuron and synapse groups. +As well as state variables, custom updates have variable references which are used to reference variables belonging to other neuron and synapse groups or even other custom updates. 
\add_toggle_cpp The variable references required by a model called SetTime could be assigned to various types of variable using the following syntax: \code SetTime::VarReferences neuronVarReferences(createVarRef(ng, "V")); SetTime::VarReferences currentSourceVarReferences(createVarRef(cs, "V")); +SetTime::VarReferences customUpdateVarReferences(createVarRef(cu, "V")); SetTime::VarReferences postsynapticModelVarReferences(createPSMVarRef(sg, "V")); SetTime::VarReferences wuPreVarReferences(createWUPreVarRef(sg, "Pre")); SetTime::VarReferences wuPostVarReferences(createWUPostVarRef(sg, "Post")); @@ -1006,20 +1007,24 @@ A variable reference called R could be assigned to various types of variable usi \code neuron_var_ref = {"R": genn_model.create_var_ref(ng, "V")} current_source_var_ref = {"R": genn_model.create_var_ref(cs, "V")} +custom_update_var_ref = {"R": genn_model.create_var_ref(cu, "V")} postsynaptic_model_var_ref = {"R": genn_model.create_psm_var_ref(sg, "V")} wu_pre_var_ref = {"R": genn_model.create_wu_pre_var_ref(sg, "Pre")} wu_post_var_ref = {"R": genn_model.create_wu_post_var_ref(sg, "Post")} \endcode \end_toggle -where ng is a \add_cpp_python_text{NeuronGroup pointer (as returned by ModelSpec::addNeuronPopulation),pygenn.genn_groups.NeuronGroup (as returned by pygenn.genn_model.GeNNModel.add_neuron_population)}, cs is a \add_cpp_python_text{CurrentSource pointer (as returned by ModelSpec::addCurrentSource),pygenn.genn_groups.CurrentSource (as returned by pygenn.genn_model.GeNNModel.add_current_source)} and sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.genn_groups.SynapseGroup (as returned by pygenn.genn_model.GeNNModel.add_synapse_population)}. +where ng is a \add_cpp_python_text{NeuronGroup pointer (as returned by ModelSpec::addNeuronPopulation),pygenn.genn_groups.NeuronGroup (as returned by pygenn.genn_model.GeNNModel.add_neuron_population)}, cs is a \add_cpp_python_text{CurrentSource pointer (as returned by ModelSpec::addCurrentSource),pygenn.genn_groups.CurrentSource (as returned by pygenn.genn_model.GeNNModel.add_current_source)}, cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.genn_model.GeNNModel.add_custom_update)} and sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.genn_groups.SynapseGroup (as returned by pygenn.genn_model.GeNNModel.add_synapse_population)}. 
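From Python, one of these references might then be wired into a custom update along the following lines (a sketch only: it assumes the ``reset_model`` custom update model defined earlier in the manual, whose state variable and variable reference are named "v" and "r", and that ``add_custom_update`` takes a name, an update group name, the model, parameter values, variable values and variable references, in that order):
\code
from pygenn import genn_model

# Reference the membrane voltage of an existing neuron group "ng" so that the
# custom update can overwrite it with its own state variable
var_refs = {"r": genn_model.create_var_ref(ng, "V")}

reset = model.add_custom_update("reset_v", "Reset", reset_model,
                                {}, {"v": -60.0}, var_refs)

# Once the model is built and loaded, every custom update added to the
# "Reset" group can be launched together by name
model.custom_update("Reset")
\endcode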
While references of these types can be used interchangably in the same custom update, as long as all referenced variables have the same delays and belong to populations of the same size, per-synapse weight update model variables must be referenced with slightly different syntax: \add_toggle_code_cpp SetTime::WUVarReferences wuVarReferences(createWUVarRef(sg, "g")); +SetTime::WUVarReferences cuWUVarReferences(createWUVarRef(cu, "g")); \end_toggle_code \add_toggle_code_python wu_var_ref = {"R": create_wu_var_ref(sg, "g")} +cu_wu_var_ref = {"R": create_wu_var_ref(cu, "g")} \end_toggle_code +where sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.genn_groups.SynapseGroup (as returned by pygenn.genn_model.GeNNModel.add_synapse_population)} and cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.genn_model.GeNNModel.add_custom_update)} which operates on another synapse group's state variables. These 'weight update variable references' also have the additional feature that they can be used to define a link to a 'transpose' variable: \add_toggle_code_cpp From 4ce31b1e2cd45afa96b83c7193af568e6cc02466 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 11:36:32 +0100 Subject: [PATCH 08/12] WIP release notes --- doxygen/09_ReleaseNotes.dox | 29 +++++++++++++++++++++++++++++ 1 file changed, 29 insertions(+) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index a606a7d350..1deacbc7b9 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -1,4 +1,33 @@ /*! \page ReleaseNotes Release Notes + +Release Notes for GeNN v4.6.0 +==== +This release adds a number of significant new features to GeNN as well as several usability improvements for PyGeNN. +It also includes a number of bug fixes that have been identified since the 4.5.1 release. + +User Side Changes +---- +1. Batch reductions, NCCL multi-GPU reductions +2. Postsynaptic model target +3. Fuse pre and postsynaptic update +4. PyGeNN now shares a version with GeNN itself and this will be accessible via ``pygenn.__version__``. +5. Validate population and variable names +6. Setting spikes manually from PyGeNN +7. Expose spike-like events to PyGeNN +8. Additional useful PyGeNN errors +9. Automatically find Visual Studio +10. Variable references to custom update variables +11. Update google test + +Bug fixes: +---- +1. Use symbolic links in /tmp to fix "path name spaces problem" +2. Fix multiple issues with sparse synapse index narrowing +3. Fixes nasty bug when locale means , is used for decimal point +4. GCC 5 fix +5. Missing include breaks compilation on Visual C++ 2017 +6. Fixed a small problem with the MBody1 example + Release Notes for GeNN v4.5.1 (PyGeNN 0.4.6) ==== This release fixes several small issues found in the 4.5.0 release. 
From 07e9840dd5832938ccf8c9754b004dd7e59b1364 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 13:53:10 +0100 Subject: [PATCH 09/12] expose some more bits of PyGeNN at module level --- pygenn/__init__.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/pygenn/__init__.py b/pygenn/__init__.py index 9f3327786a..b070eb8834 100644 --- a/pygenn/__init__.py +++ b/pygenn/__init__.py @@ -1,3 +1,3 @@ # Import pygenn interface -from .genn_groups import SynapseGroup, NeuronGroup, CurrentSource +from .genn_groups import SynapseGroup, NeuronGroup, CurrentSource, CustomUpdate from .genn_model import GeNNModel From 3952972bfaf28a7bab740919aea5c0a5934bd184 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 14:14:19 +0100 Subject: [PATCH 10/12] more release notes --- doxygen/09_ReleaseNotes.dox | 37 +++++++++++++++++-------------------- 1 file changed, 17 insertions(+), 20 deletions(-) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 1deacbc7b9..532790cb62 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -11,22 +11,19 @@ User Side Changes 2. Postsynaptic model target 3. Fuse pre and postsynaptic update 4. PyGeNN now shares a version with GeNN itself and this will be accessible via ``pygenn.__version__``. -5. Validate population and variable names -6. Setting spikes manually from PyGeNN -7. Expose spike-like events to PyGeNN -8. Additional useful PyGeNN errors -9. Automatically find Visual Studio -10. Variable references to custom update variables -11. Update google test +5. The names of populations and variables are now validated to prevent code with invalid variable names being generated. +6. As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set +7. Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device and accessed via pygenn.NeuronGroup.current_spike_events. +8. Added additional error handling to prevent properties of pygenn.GeNNModel that can only be set before the model was built being set afterwards. +9. Variable references to custom update variables +10. Updated the default parameters used in the MBody1 example to be more sensible Bug fixes: ---- -1. Use symbolic links in /tmp to fix "path name spaces problem" +1. Fixed an issue that was preventing genn-buildmodel.sh correctly handling paths with spaces 2. Fix multiple issues with sparse synapse index narrowing -3. Fixes nasty bug when locale means , is used for decimal point -4. GCC 5 fix -5. Missing include breaks compilation on Visual C++ 2017 -6. Fixed a small problem with the MBody1 example +3. Fixed issue where, if GeNN is run in a locale where , is used for decimal point, some generated code was incorrectly formated. +4. Fixed several small issues preventing GeNN from building on GCC 5 Visual C++ 2017 Release Notes for GeNN v4.5.1 (PyGeNN 0.4.6) ==== @@ -45,7 +42,7 @@ It also includes a number of bug fixes that have been identified since the 4.4.0 User Side Changes ---- -1. When performing inference on datasets, batching helps fill the GPU and improve performance. This could be previously achieved using "master" and "slave" synapse populations but this didn't scale well. 
Models can now be automatically batched using ``ModelSpec::setBatchSize`` or ``pygenn.genn_model.GeNNModel.batch_size``. +1. When performing inference on datasets, batching helps fill the GPU and improve performance. This could be previously achieved using "master" and "slave" synapse populations but this didn't scale well. Models can now be automatically batched using ``ModelSpec::setBatchSize`` or ``pygenn.GeNNModel.batch_size``. 2. As well as more typical neuron, weight update, postsynaptic and current source models, you can now define custom update models which define a process which can be applied to any variable in the model. These can be used for e.g. resetting state variables or implementing optimisers for gradient-based learning (see \ref defining_custom_updates). 3. Model compilation and CUDA block size optimisation could be rather slow in previous versions. More work is still required in this area but, code will now only be re-generated if the model has actually changed and block sizes will only be re-optimised for modules which have changed. Rebuilding can be forced with the ``-f`` flag to ``genn-buildmodel`` or the ``force_rebuild`` flag to ``pygenn.GeNNModel.build``. 4. Binary PyGeNN wheels are now always built with Python 3. @@ -96,7 +93,7 @@ This release fixes several small issues found in the 4.3.2 release. Bug fixes: ---- 1. Fixed bug in bitmask connectivity and procedural connectivity kernels. -2. Fixed issues with setting model precision in PyGeNN. Time precision can now be set seperately using the ``time_precision`` option to the ``pygenn.genn_model.GeNNModel`` constructor. +2. Fixed issues with setting model precision in PyGeNN. Time precision can now be set seperately using the ``time_precision`` option to the ``pygenn.GeNNModel`` constructor. Release Notes for GeNN v4.3.2 (PyGeNN 0.4.2) ==== @@ -133,8 +130,8 @@ User Side Changes 1. Previously GeNN performed poorly with large numbers of populations. This version includes a new code generator which effectively solves this problem (see \cite Knight2020). 2. ``InitSparseConnectivitySnippet::Base`` row build state and ``NeuronModels::Base`` additional input variables could previously only be initialised with a numeric value. Now they can be initialised with a code string supporting substitutions etc. 3. Added GeNN implementation of cortical microcircuit model \cite Potjans2012 to userprojects (discussed further in \cite Knight2018). Also demonstrates how to dynamically load GeNN models rather than linking against them. -4. Previously one pushed states and spikes to and from device in PyGeNN using methods like ``pygenn.genn_model.GeNNModel.push_current_spikes_to_device`` which was somewhat cumbersome. These have now been wrapped in methods like ``pygenn.genn_groups.NeuronGroup.push_current_spikes_to_device`` which is somewhat nicer. -5. The ``CodeGenerator::generateAll`` function now returns memory estimates which are, in turn, returned from ``pygenn.genn_model.GeNNModel.build``. +4. Previously one pushed states and spikes to and from device in PyGeNN using methods like ``pygenn.GeNNModel.push_current_spikes_to_device`` which was somewhat cumbersome. These have now been wrapped in methods like ``pygenn.genn_groups.NeuronGroup.push_current_spikes_to_device`` which is somewhat nicer. +5. The ``CodeGenerator::generateAll`` function now returns memory estimates which are, in turn, returned from ``pygenn.GeNNModel.build``. 6. 
To better support batching of inputs into multiple instances of the same model, added ``ModelSpec::addSlaveSynapsePopulation`` to add synapse populations which share per-synapse state with a 'master' synapse group. 7. Added extra global parameters to variable initialisation snippets - can be used for lookup table style functionality. 8. Added support for host initialisation of sparse connectivity initialisation snippet extra global parameters. This allows host-based initialisation to be encapsulated within an ``InitSparseConnectivitySnippet::Base`` class. @@ -165,9 +162,9 @@ This release adds a number of new features to GeNN and its Python interface as w User Side Changes ---- -1. Kernel timings can now be enabled from python with ``pygenn.genn_model.GeNNModel.timing_enabled`` and subsequently accessed with ``pygenn.genn_model.GeNNModel.neuron_update_time``, ``pygenn.genn_model.GeNNModel.init_time``, ``pygenn.genn_model.GeNNModel.presynaptic_update_time``, ``pygenn.genn_model.GeNNModel.postsynaptic_update_time``, ``pygenn.genn_model.GeNNModel.synapse_dynamics_time`` and ``pygenn.genn_model.GeNNModel.init_sparse_time``. +1. Kernel timings can now be enabled from python with ``pygenn.GeNNModel.timing_enabled`` and subsequently accessed with ``pygenn.GeNNModel.neuron_update_time``, ``pygenn.GeNNModel.init_time``, ``pygenn.GeNNModel.presynaptic_update_time``, ``pygenn.GeNNModel.postsynaptic_update_time``, ``pygenn.GeNNModel.synapse_dynamics_time`` and ``pygenn.GeNNModel.init_sparse_time``. 2. Backends now generate ``getFreeDeviceMemBytes()`` function to allow free device memory to be queried from user simulation code. This is also exposed to Python via ``GeNNModel.free_device_mem_bytes`` property. -3. GeNN preferences are now fully exposed to PyGeNN by passing kwargs to ``pygenn.genn_model.GeNNModel.__init__``. +3. GeNN preferences are now fully exposed to PyGeNN by passing kwargs to ``pygenn.GeNNModel.__init__``. 4. Logging level can now be seperately specified for GeNN, the code generator, the SpineML generator and the backend and is accessible from PyGeNN. 5. ``CodeGenerator::PreferencesBase::enableBitmaskOptimisations`` flag enables an alternative algorithm for updating synaptic matrices implemented with ``SynapseMatrixConnectivity::BITMASK`` which performs better on smaller GPUs and CPUs. If you are manually initialising matrices this adds padding to align words to rows of the matrix. 6. ``SynapseMatrixConnectivity::PROCEDURAL`` and ``SynapseMatrixWeight::PROCEDURAL`` allow connectivity and synaptic weights to be generated on the fly rather than stored in memory. @@ -196,7 +193,7 @@ User Side Changes 8. Add ``CodeGenerator::CUDA::Preferences::generateLineInfo`` option to output CUDA line info for profiling. 9. CUDA backend supports ``half`` datatype allowing memory savings through reduced precision. Host C++ code does not support half-precision types so such state variables must have their location set to ``VarLocation::DEVICE``. 10. If ``ModelSpec::setDefaultNarrowSparseIndEnabled`` is set on a model or ``SynapseGroup::setNarrowSparseIndEnabled`` is set on an individual synapse population with sparse connectivity, 16-bit numbers will be used for postsynaptic indices, almost halving memory requirements. -11. Manual selection of CUDA devices is now exposed to PyGeNN via the ``pygenn.genn_model.GeNNModel.selected_gpu`` property. +11. Manual selection of CUDA devices is now exposed to PyGeNN via the ``pygenn.GeNNModel.selected_gpu`` property. 
Bug fixes: ---- @@ -226,7 +223,7 @@ User Side Changes Bug fixes: ---- -1. Fixed typo in ``pygenn.genn_model.GeNNModel.push_var_to_device`` function in PyGeNN. +1. Fixed typo in ``pygenn.GeNNModel.push_var_to_device`` function in PyGeNN. 2. Fixed broken support for Visual C++ 2013. 3. Fixed zero-copy mode. 4. Fixed typo in tutorial 2. From cfaeec98d76f8824e87207db76c9b212d9075e3a Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 14:19:49 +0100 Subject: [PATCH 11/12] use shorter-form PyGeNN class names --- doxygen/02_Quickstart.dox | 6 ++--- doxygen/07_PyGeNN.dox | 4 +-- doxygen/09_ReleaseNotes.dox | 8 +++--- doxygen/10_UserManual.dox | 26 +++++++++---------- doxygen/12_Tutorial_Python.dox | 14 +++++------ doxygen/14_Tutorial_Python.dox | 10 ++++---- doxygen/15_UserGuide.dox | 46 +++++++++++++++++----------------- 7 files changed, 57 insertions(+), 57 deletions(-) diff --git a/doxygen/02_Quickstart.dox b/doxygen/02_Quickstart.dox index 7475fe64dd..9cecd52dc2 100644 --- a/doxygen/02_Quickstart.dox +++ b/doxygen/02_Quickstart.dox @@ -243,7 +243,7 @@ There are several steps to be completed to define a neuronal network model. \ref sect_new_var_init section for more information on initialising these variables to hetererogenous values. - c) A pygenn.genn_model.GeNNModel object needs to be created and the floating point precision to use should be set (see \ref floatPrecision for more information on floating point precision), i.e. + c) A pygenn.GeNNModel object needs to be created and the floating point precision to use should be set (see \ref floatPrecision for more information on floating point precision), i.e. \code model = GeNNModel("float", "example") \endcode @@ -266,7 +266,7 @@ There are several steps to be completed to define a neuronal network model. model.load() \endcode \note - If the model isn't changed, pygenn.genn_model.GeNNModel.build doesn't need to be called. + If the model isn't changed, pygenn.GeNNModel.build doesn't need to be called. 4. Also, within the same script, the programmer defines their own "simulation" code. In this code, @@ -279,7 +279,7 @@ There are several steps to be completed to define a neuronal network model. \note The initial values or initialisation "snippets" specified when defining the model are automatically applied. - c) They use pygenn.genn_model.GeNNModel.step_time() to run one time step on either the CPU or GPU depending on the available hardware. + c) They use pygenn.GeNNModel.step_time() to run one time step on either the CPU or GPU depending on the available hardware. d) They use functions like pygenn.genn_groups.Group.pull_state_from_device etc to transfer the results from GPU calculations to the main memory of the host computer diff --git a/doxygen/07_PyGeNN.dox b/doxygen/07_PyGeNN.dox index a36f0bda96..719015c111 100644 --- a/doxygen/07_PyGeNN.dox +++ b/doxygen/07_PyGeNN.dox @@ -2,8 +2,8 @@ /*! \page PyGeNN Python interface (PyGeNN) As well as being able to build GeNN models and user code directly from C++, you can also access all GeNN features from Python. -The ``pygenn.genn_model.GeNNModel`` class provides a thin wrapper around ``ModelSpec`` as well as providing support for loading and running simulations; and accessing their state. -``SynapseGroup``, ``NeuronGroup`` and ``CurrentSource`` are similarly wrapped by the ``pygenn.genn_groups.SynapseGroup``, ``pygenn.genn_groups.NeuronGroup`` and ``pygenn.genn_groups.CurrentSource`` classes respectively. 
+The ``pygenn.GeNNModel`` class provides a thin wrapper around ``ModelSpec`` as well as providing support for loading and running simulations; and accessing their state. +``SynapseGroup``, ``NeuronGroup`` and ``CurrentSource`` are similarly wrapped by the ``pygenn.SynapseGroup``, ``pygenn.NeuronGroup`` and ``pygenn.CurrentSource`` classes respectively. Full installation instructions can be found in \ref pygenn. The following example shows how PyGeNN can be easily interfaced with standard Python packages such as numpy and matplotlib to plot 4 different Izhikevich neuron regimes: diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 532790cb62..10f1dc946b 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -49,9 +49,9 @@ User Side Changes 5. To aid debugging, debug versions of PyGeNN can now be built (see \ref Debugging). 6. OpenCL performance on AMD devices is improved - this has only been tested on a Radeon RX 5700 XT so any feedback from users with other devices would be much appreciated. 7. Exceptions raised by GeNN are now correctly passed through PyGeNN to Python. -8. Spike times (and spike-like event times) can now be accessed, pushed and pulled from PyGeNN (see ``pygenn.genn_groups.NeuronGroup.spike_times``, ``pygenn.genn_groups.NeuronGroup.push_spike_times_to_device`` and ``pygenn.genn_groups.NeuronGroup.pull_spike_times_from_device`` ) -9. On models where postsynaptic merging isn't enabled, the postsynaptic input current from a synapse group can now be accessed from PyGeNN via ``pygenn.genn_groups.SynapseGroup.in_syn``; and pushed and pulled with ``pygenn.genn_groups.SynapseGroup.push_in_syn_to_device`` and ``pygenn.genn_groups.SynapseGroup.pull_in_syn_from_device`` respectively. -10. Accessing extra global parameters from PyGeNN was previously rather cumbersome. Now, you don't need to manually pass a size to e.g. ``pygenn.genn_groups.NeuronGroup.pull_extra_global_param_from_device`` and, if you are using non-pointer extra global parameters, you no longer need to call e.g. ``pygenn.genn_groups.NeuronGroup.set_extra_global_param`` before loading your model. +8. Spike times (and spike-like event times) can now be accessed, pushed and pulled from PyGeNN (see ``pygenn.NeuronGroup.spike_times``, ``pygenn.NeuronGroup.push_spike_times_to_device`` and ``pygenn.NeuronGroup.pull_spike_times_from_device`` ) +9. On models where postsynaptic merging isn't enabled, the postsynaptic input current from a synapse group can now be accessed from PyGeNN via ``pygenn.SynapseGroup.in_syn``; and pushed and pulled with ``pygenn.SynapseGroup.push_in_syn_to_device`` and ``pygenn.SynapseGroup.pull_in_syn_from_device`` respectively. +10. Accessing extra global parameters from PyGeNN was previously rather cumbersome. Now, you don't need to manually pass a size to e.g. ``pygenn.NeuronGroup.pull_extra_global_param_from_device`` and, if you are using non-pointer extra global parameters, you no longer need to call e.g. ``pygenn.NeuronGroup.set_extra_global_param`` before loading your model. Bug fixes: ---- @@ -130,7 +130,7 @@ User Side Changes 1. Previously GeNN performed poorly with large numbers of populations. This version includes a new code generator which effectively solves this problem (see \cite Knight2020). 2. ``InitSparseConnectivitySnippet::Base`` row build state and ``NeuronModels::Base`` additional input variables could previously only be initialised with a numeric value. Now they can be initialised with a code string supporting substitutions etc. 3. 
Added GeNN implementation of cortical microcircuit model \cite Potjans2012 to userprojects (discussed further in \cite Knight2018). Also demonstrates how to dynamically load GeNN models rather than linking against them. -4. Previously one pushed states and spikes to and from device in PyGeNN using methods like ``pygenn.GeNNModel.push_current_spikes_to_device`` which was somewhat cumbersome. These have now been wrapped in methods like ``pygenn.genn_groups.NeuronGroup.push_current_spikes_to_device`` which is somewhat nicer. +4. Previously one pushed states and spikes to and from device in PyGeNN using methods like ``pygenn.GeNNModel.push_current_spikes_to_device`` which was somewhat cumbersome. These have now been wrapped in methods like ``pygenn.NeuronGroup.push_current_spikes_to_device`` which is somewhat nicer. 5. The ``CodeGenerator::generateAll`` function now returns memory estimates which are, in turn, returned from ``pygenn.GeNNModel.build``. 6. To better support batching of inputs into multiple instances of the same model, added ``ModelSpec::addSlaveSynapsePopulation`` to add synapse populations which share per-synapse state with a 'master' synapse group. 7. Added extra global parameters to variable initialisation snippets - can be used for lookup table style functionality. diff --git a/doxygen/10_UserManual.dox b/doxygen/10_UserManual.dox index a29d577616..ad9ac66d31 100644 --- a/doxygen/10_UserManual.dox +++ b/doxygen/10_UserManual.dox @@ -75,7 +75,7 @@ tasks must be completed: \add_toggle_python A network model is defined as follows: \end_toggle -1. \add_cpp_python_text{The name of the model must be defined,A pygenn.genn_model.GeNNModel must be created with a name and a default precision (see \ref floatPrecision)}: +1. \add_cpp_python_text{The name of the model must be defined,A pygenn.GeNNModel must be created with a name and a default precision (see \ref floatPrecision)}: \add_toggle_code_cpp model.setName("MyModel"); \end_toggle_code @@ -145,8 +145,8 @@ where the arguments are \arg \add_cpp_python_text{`const string &name`,`pop_name`}: The name of the synapse population \arg \add_cpp_python_text{`::SynapseMatrixType mType`: How,`matrix_type`: String specifying how} the synaptic matrix is stored. See \ref subsect34 for available options. \arg \add_cpp_python_text{`unsigned int delay`,`delay_steps`}: Homogeneous (axonal) delay for synapse population (in terms of the simulation time step `DT`). -\arg \add_cpp_python_text{`const string preName`: Name, `source`: pygenn.genn_groups.NeuronGroup or name} of the (existing!) presynaptic neuron population. -\arg \add_cpp_python_text{`const string postName`: Name, `target`: pygenn.genn_groups.NeuronGroup or name} of the (existing!) postsynaptic neuron population. +\arg \add_cpp_python_text{`const string preName`: Name, `source`: pygenn.NeuronGroup or name} of the (existing!) presynaptic neuron population. +\arg \add_cpp_python_text{`const string postName`: Name, `target`: pygenn.NeuronGroup or name} of the (existing!) postsynaptic neuron population. \arg \add_cpp_python_text{`WeightUpdateModel::ParamValues weightParamValues`: The, `wu_param_space`: Dictionary containing the} parameter values (common to all synapses of the population) for the weight update model. 
\arg \add_cpp_python_text{``WeightUpdateModel::VarValues weightVarInitialisers`: The, `wu_var_space`: Dictionary containing the} initial values or initialisation snippets for the weight update model's state variables (see \ref sectVariableInitialisation) \arg \add_cpp_python_text{`WeightUpdateModel::PreVarValues weightPreVarInitialisers`: The, `wu_pre_var_space`: Dictionary containing the} initial values or initialisation snippets for the weight update model's presynaptic state variables (see \ref sectVariableInitialisation) @@ -166,8 +166,8 @@ When using a sparse connectivity initialisation snippet, these values are set au - SynapseGroup::setPSTargetVar() sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). \end_toggle \add_toggle_python -The pygenn.GeNNModel.add_synapse_population function returns a pygenn.genn_groups.SynapseGroup object which can be further configured, namely with: -- pygenn.genn_groups.SynapseGroup.ps_target_var sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). +The pygenn.GeNNModel.add_synapse_population function returns a pygenn.SynapseGroup object which can be further configured, namely with: +- pygenn.SynapseGroup.ps_target_var sets the additional input variable (or standard "Isyn") on the postsynaptic neuron population where input from this synapse group is routed (see section \ref neuron_additional_input). \end_toggle \note @@ -233,7 +233,7 @@ Read-write state variables are duplicated for each batch and, by default, read-o //---------------------------------------------------------------------------- /*! \page sectNeuronModels Neuron models -There is a number of predefined models which can be used with the \add_cpp_python_text{ModelSpec::addNeuronPopulation,pygenn.genn_model.GeNNModel.add_neuron_population} method: +There is a number of predefined models which can be used with the \add_cpp_python_text{ModelSpec::addNeuronPopulation,pygenn.GeNNModel.add_neuron_population} method: - NeuronModels::RulkovMap - NeuronModels::Izhikevich - NeuronModels::IzhikevichVariable @@ -268,7 +268,7 @@ For convenience, \add_cpp_python_text{the methods this class should implement ca \add_toggle_cpp The length of this list should match the NUM_PARAM specified in DECLARE_MODEL. \end_toggle Parameters are assumed to be always of type double. - \add_cpp_python_text{SET_VARS(),`var_name_types`} defines the names, type strings (e.g. "float", "double", etc) and (optionally) access mode - of the neuron state variables. The type string "scalar" can be used for variables which should be implemented using the precision set globally for the model \add_cpp_python_text{with ModelSpec::setPrecision, from ``pygenn.genn_model.GeNNModel.__init__``}. + of the neuron state variables. The type string "scalar" can be used for variables which should be implemented using the precision set globally for the model \add_cpp_python_text{with ModelSpec::setPrecision, from ``pygenn.GeNNModel.__init__``}. The variables defined here as `NAME` can then be used in the syntax \$(NAME) in the code string. If the access mode is set to \add_cpp_python_text{``VarAccess::READ_ONLY``,``VarAccess_READ_ONLY``}, GeNN applies additional optimisations and models should not write to it. 
By default such read-only variables are shared across all batches (see section \ref batching). @@ -833,17 +833,17 @@ Weight update model variables associated with the sparsely connected synaptic po - SynapseMatrixConnectivity::BITMASK is an alternative sparse matrix implementation where which synapses within the matrix are present is specified as a binary array (see \ref ex_mbody). This structure is somewhat less efficient than the ``SynapseMatrixConnectivity::SPARSE`` format and doesn't allow individual weights per synapse. However it does require the smallest amount of GPU memory for large networks. - SynapseMatrixConnectivity::PROCEDURAL is a new approach where, rather than being stored in memory, connectivity described using \ref sectSparseConnectivityInitialisation is generated 'on the fly' as spikes are processed (see \cite Knight2020 for more information). Therefore, this approach offers very large memory savings for a small performance cost but does not currently support plasticity. -\add_python_text{In Python\, SynapseMatrixConnectivity::SPARSE connectivity can be manually initialised from lists of pre and postsynaptic indices using the pygenn.genn_groups.SynapseGroup.set_sparse_connections method.} +\add_python_text{In Python\, SynapseMatrixConnectivity::SPARSE connectivity can be manually initialised from lists of pre and postsynaptic indices using the pygenn.SynapseGroup.set_sparse_connections method.} Furthermore the SynapseMatrixWeight defines how - SynapseMatrixWeight::INDIVIDUAL allows each individual synapse to have unique weight update model variables. Their values must be initialised at runtime and, if running on the GPU, copied across from the user side code, using the \c pushXXXXXStateToDevice function, where XXXX is the name of the synapse population. - SynapseMatrixWeight::INDIVIDUAL_PSM allows each postsynapic neuron to have unique post synaptic model variables. Their values must be initialised at runtime and, if running on the GPU, copied across from the user side code, using the \c pushXXXXXStateToDevice function, where XXXX is the name of the synapse population. - SynapseMatrixWeight::GLOBAL saves memory by only maintaining one copy of the weight update model variables. -This is automatically initialized to the initial value passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.genn_model.GeNNModel.add_synapse_population}. +This is automatically initialized to the initial value passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}. - SynapseMatrixWeight::PROCEDURAL generates weight update model variable values described using \ref sectVariableInitialisation 'on the fly' as spikes are processed. This is typically used alongside SynapseMatrixConnectivity::PROCEDURAL for large models with static connectivity and weights/delays sampled from probability distributions (see \cite Knight2020 for an example). 
-Only certain combinations of SynapseMatrixConnectivity and SynapseMatrixWeight are sensible therefore, to reduce confusion, the SynapseMatrixType enumeration defines the following options which can be passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.genn_model.GeNNModel.add_synapse_population}: +Only certain combinations of SynapseMatrixConnectivity and SynapseMatrixWeight are sensible therefore, to reduce confusion, the SynapseMatrixType enumeration defines the following options which can be passed to \add_cpp_python_text{ModelSpec::addSynapsePopulation, pygenn.GeNNModel.add_synapse_population}: - SynapseMatrixType::SPARSE_GLOBALG - SynapseMatrixType::SPARSE_GLOBALG_INDIVIDUAL_PSM - SynapseMatrixType::SPARSE_INDIVIDUALG @@ -1013,7 +1013,7 @@ wu_pre_var_ref = {"R": genn_model.create_wu_pre_var_ref(sg, "Pre")} wu_post_var_ref = {"R": genn_model.create_wu_post_var_ref(sg, "Post")} \endcode \end_toggle -where ng is a \add_cpp_python_text{NeuronGroup pointer (as returned by ModelSpec::addNeuronPopulation),pygenn.genn_groups.NeuronGroup (as returned by pygenn.genn_model.GeNNModel.add_neuron_population)}, cs is a \add_cpp_python_text{CurrentSource pointer (as returned by ModelSpec::addCurrentSource),pygenn.genn_groups.CurrentSource (as returned by pygenn.genn_model.GeNNModel.add_current_source)}, cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.genn_model.GeNNModel.add_custom_update)} and sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.genn_groups.SynapseGroup (as returned by pygenn.genn_model.GeNNModel.add_synapse_population)}. +where ng is a \add_cpp_python_text{NeuronGroup pointer (as returned by ModelSpec::addNeuronPopulation),pygenn.NeuronGroup (as returned by pygenn.GeNNModel.add_neuron_population)}, cs is a \add_cpp_python_text{CurrentSource pointer (as returned by ModelSpec::addCurrentSource),pygenn.CurrentSource (as returned by pygenn.GeNNModel.add_current_source)}, cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.GeNNModel.add_custom_update)} and sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.SynapseGroup (as returned by pygenn.GeNNModel.add_synapse_population)}. While references of these types can be used interchangably in the same custom update, as long as all referenced variables have the same delays and belong to populations of the same size, per-synapse weight update model variables must be referenced with slightly different syntax: \add_toggle_code_cpp @@ -1024,7 +1024,7 @@ SetTime::WUVarReferences cuWUVarReferences(createWUVarRef(cu, "g")); wu_var_ref = {"R": create_wu_var_ref(sg, "g")} cu_wu_var_ref = {"R": create_wu_var_ref(cu, "g")} \end_toggle_code -where sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.genn_groups.SynapseGroup (as returned by pygenn.genn_model.GeNNModel.add_synapse_population)} and cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.genn_model.GeNNModel.add_custom_update)} which operates on another synapse group's state variables. 
+where sg is a \add_cpp_python_text{SynapseGroup pointer (as returned by ModelSpec::addSynapsePopulation),pygenn.SynapseGroup (as returned by pygenn.GeNNModel.add_synapse_population)} and cu is a \add_cpp_python_text{CustomUpdate pointer (as returned by ModelSpec::addCustomUpdate),pygenn.genn_groups.CustomUpdate (as returned by pygenn.GeNNModel.add_custom_update)} which operates on another synapse group's state variables. These 'weight update variable references' also have the additional feature that they can be used to define a link to a 'transpose' variable: \add_toggle_code_cpp @@ -1033,7 +1033,7 @@ SetTime::WUVarReferences wuTransposeVarReferences(createWUVarRef(sg, "g", backSG \add_toggle_code_python wu_transpose_var_ref = {"R": create_wu_var_ref(sg, "g", back_sg, "g")} \end_toggle_code -where \add_cpp_python_text{backSG is another SynapseGroup pointer,back_sg is another pygenn.genn_groups.SynapseGroup} with tranposed dimensions to sg i.e. its postsynaptic population has the same number of neurons as sg's presynaptic population and vice-versa. +where \add_cpp_python_text{backSG is another SynapseGroup pointer,back_sg is another pygenn.SynapseGroup} with tranposed dimensions to sg i.e. its postsynaptic population has the same number of neurons as sg's presynaptic population and vice-versa. After the update has run, any updates made to the 'forward' variable will also be applied to the tranpose variable. \note diff --git a/doxygen/12_Tutorial_Python.dox b/doxygen/12_Tutorial_Python.dox index 84cd0cb9be..83221b3f28 100644 --- a/doxygen/12_Tutorial_Python.dox +++ b/doxygen/12_Tutorial_Python.dox @@ -24,7 +24,7 @@ model.dT = 0.1 \note With this we have fixed the integration time step to `0.1` in the usual time units. The typical units in GeNN are `ms`, `mV`, `nF`, and `μS`. Therefore, this defines `DT= 0.1 ms`. -Making the actual model definition makes use of the pygenn.genn_model.GeNNModel.add_neuron_population and pygenn.genn_model.GeNNModel.add_synapse_population member functions of the pygenn.genn_model.GeNNModel object. The arguments to a call to pygenn.genn_model.GeNNModel.add_neuron_population are: +Making the actual model definition makes use of the pygenn.GeNNModel.add_neuron_population and pygenn.GeNNModel.add_synapse_population member functions of the pygenn.GeNNModel object. The arguments to a call to pygenn.GeNNModel.add_neuron_population are: \arg `pop_name`: Unique name of the neuron population \arg `num_neurons`: number of neurons in the population \arg `neuron`: The type of neuron model. This should either be a string containing the name of a built in model or user-defined neuron type returned by pygenn.genn_model.create_custom_neuron_class (see \ref sectNeuronModels). @@ -55,7 +55,7 @@ pop1 = model.add_neuron_population("Pop1", 10, "TraubMiles", p, ini) \endcode This model definition will generate code for simulating ten Hodgkin-Huxley neurons on the a GPU or CPU. The next stage is to write the code that sets up the simulation, does the data handling for input and output and generally defines the numerical experiment to be run. -To build your model description into simulation code, simply call pygenn.genn_model.GeNNModel.build +To build your model description into simulation code, simply call pygenn.GeNNModel.build \code model.build() \endcode @@ -92,7 +92,7 @@ model.load() \endcode For the purposes of this tutorial we will initially simply run the model for 200ms and print the final neuron variables. 
To do so, we add: \note -The pygenn.genn_model.GeNNModel.t property keeps track of the current simulation time in milliseconds. +The pygenn.GeNNModel.t property keeps track of the current simulation time in milliseconds. \code while model.t < 200.0 @@ -109,7 +109,7 @@ n_view = pop1.vars["n"].view for j in range(10): print("%f %f %f %f" % (v_view[j], m_view[j], h_view[j], n_view[j])) \endcode -pygenn.genn_groups.NeuronGroup.pull_state_from_device copies all relevant state variables of the neuron group from the GPU to the CPU main memory. We can then get direct access to the host-allocated memory using a 'view' and finally output the results to stdout by looping through all 10 neurons and outputting the state variables via their views. +pygenn.NeuronGroup.pull_state_from_device copies all relevant state variables of the neuron group from the GPU to the CPU main memory. We can then get direct access to the host-allocated memory using a 'view' and finally output the results to stdout by looping through all 10 neurons and outputting the state variables via their views. This completes the first version of the script. The complete `tenHH.py` file should now look like \code @@ -171,7 +171,7 @@ The output you obtain should look like \section Input Reading This is not particularly interesting as we are just observing the final value of the membrane potentials. To see what is going on in the meantime, we need to copy intermediate values from the device into a data structure and plot them. -This can be done in many ways but one sensible way of doing this is to replace the calls to pygenn.genn_model.GeNNModel.step_time in `tenHH.py` with something like this: +This can be done in many ways but one sensible way of doing this is to replace the calls to pygenn.GeNNModel.step_time in `tenHH.py` with something like this: \code v = np.empty((2000, 10)) v_view = pop1.vars["V"].view @@ -188,9 +188,9 @@ import numpy as np \endcode to the top of tenHH.py. \note -The pygenn.genn_model.GeNNModel.timestep property keeps track of the current simulation timestep count. This is updated at the end of pygenn.genn_model.GeNNModel.step_time so here, we subtract 1 from it to obtain indices into our array from 0 to 9999. +The pygenn.GeNNModel.timestep property keeps track of the current simulation timestep count. This is updated at the end of pygenn.GeNNModel.step_time so here, we subtract 1 from it to obtain indices into our array from 0 to 9999. \note -We switched from using pygenn.genn_groups.NeuronGroup.pull_state_from_device to pygenn.genn_group.NeuronGroup.pull_var_from_device as we are now only interested in the membrane voltage of the neuron. +We switched from using pygenn.NeuronGroup.pull_state_from_device to pygenn.genn_group.NeuronGroup.pull_var_from_device as we are now only interested in the membrane voltage of the neuron. Finally, if we add: \code diff --git a/doxygen/14_Tutorial_Python.dox b/doxygen/14_Tutorial_Python.dox index 316fe4b648..15bcddee95 100644 --- a/doxygen/14_Tutorial_Python.dox +++ b/doxygen/14_Tutorial_Python.dox @@ -57,12 +57,12 @@ model.add_synapse_population("Pop1self", "SPARSE_GLOBALG", 10, "ExpCond", ps_p, {}, init_connectivity(ring_model, {})) \endcode -The pygenn.genn_model.GeNNModel.add_synapse_population parameters are +The pygenn.GeNNModel.add_synapse_population parameters are \arg `pop_name`: The name of the synapse population \arg `matrix_type`: String specifying how the synaptic matrix is stored. See \ref subsect34 for available options. 
\arg `delay_steps`: Homogeneous (axonal) delay for synapse population (in terms of the simulation time step `DT`). -\arg `source`: pygenn.genn_groups.NeuronGroup or name of the (existing!) presynaptic neuron population. -\arg `target`: pygenn.genn_groups.NeuronGroup or name of the (existing!) postsynaptic neuron population. +\arg `source`: pygenn.NeuronGroup or name of the (existing!) presynaptic neuron population. +\arg `target`: pygenn.NeuronGroup or name of the (existing!) postsynaptic neuron population. \arg `w_update_model`: The type of weight update model. This should either be a string containing the name of a built-in model or a user-defined weight update model returned by pygenn.genn_model.create_custom_weight_update_class (see \ref sectSynapseModels). \arg `wu_param_space`: Dictionary containing the parameter values (common to all synapses of the population) for the weight update model. \arg `wu_var_space`: Dictionary containing the initial values or initialisation snippets for the weight update model's state variables @@ -73,7 +73,7 @@ The pygenn.genn_model.GeNNModel.add_synapse_population parameters are \arg `ps_var_space`: Dictionary containing the initial values or initialisation snippets for variables for the postsynaptic model's state variables \arg `connectivity_initialiser`: Optional argument, specifying the initialisation snippet for synapse population's sparse connectivity (see \ref sectSparseConnectivityInitialisation). -Adding the pygenn.genn_model.GeNNModel.add_synapse_population command to the model definition informs GeNN that there will be synapses between the named neuron populations, here between population `Pop1` and itself with a delay of 10 (0.1 ms) timesteps. +Adding the pygenn.GeNNModel.add_synapse_population command to the model definition informs GeNN that there will be synapses between the named neuron populations, here between population `Pop1` and itself with a delay of 10 timesteps (i.e. 1 ms at `DT` = 0.1 ms). At this point our script `tenHHRing.py` should look like this \code import matplotlib.pyplot as plt @@ -148,7 +148,7 @@ This is because none of the neurons are spiking so there are no spikes to propag \section initialConditions Providing initial stimuli We can use a NeuronModels::SpikeSourceArray to inject an initial spike into the first neuron in the ring during the first timestep to start spikes propagating. -We then need to add it to the network by adding the following before we call pygenn.genn_model.GeNNModel.build: +We then need to add it to the network by inserting the following before we call pygenn.GeNNModel.build: \code stim_ini = {"startSpike": [0], "endSpike": [1]} diff --git a/doxygen/15_UserGuide.dox b/doxygen/15_UserGuide.dox index 1139ef460d..9561172cb4 100644 --- a/doxygen/15_UserGuide.dox +++ b/doxygen/15_UserGuide.dox @@ -21,11 +21,11 @@ Core functions generated by GeNN to be included in the user code include: In order to correctly access neuron state and spikes for the current timestep, correctly accounting for delay buffering etc, you can use the ``getCurrent()``, ``getCurrentSpikes()`` and ``getCurrentSpikeCount()`` functions. Additionally, custom update groups (see \ref defining_custom_updates) can be simulated by calling ``update()``. \end_toggle \add_toggle_python -The pygenn.genn_model.GeNNModel.build method can then be used to generate code for your model. -Subsequently, the model can be loaded using pygenn.genn_model.GeNNModel.load and simulated with pygenn.genn_model.GeNNModel.step_time.
Additionally, custom update groups (see \ref defining_custom_updates) can be simulated with pygenn.genn_model.GeNNModel.custom_update. After calling pygenn.genn_model.GeNNModel.load, the pygenn.genn_model.GeNNModel.free_device_mem_bytes property can be used on supported hardware-accelerated backends to determine how much free device memory remains. +The pygenn.GeNNModel.build method can then be used to generate code for your model. +Subsequently, the model can be loaded using pygenn.GeNNModel.load and simulated with pygenn.GeNNModel.step_time. Additionally, custom update groups (see \ref defining_custom_updates) can be simulated with pygenn.GeNNModel.custom_update. After calling pygenn.GeNNModel.load, the pygenn.GeNNModel.free_device_mem_bytes property can be used on supported hardware-accelerated backends to determine how much free device memory remains. \end_toggle -By setting \add_cpp_python_text{``GENN_PREFERENCES::automaticCopy``, the `automaticCopy` keyword to pygenn.genn_model.GeNNModel.__init__}, GeNN can be used in a simple mode where CUDA automatically transfers data between the GPU and CPU when required (see https://devblogs.nvidia.com/unified-memory-cuda-beginners/). +By setting \add_cpp_python_text{``GENN_PREFERENCES::automaticCopy``, the `automaticCopy` keyword to pygenn.GeNNModel.__init__}, GeNN can be used in a simple mode where CUDA automatically transfers data between the GPU and CPU when required (see https://devblogs.nvidia.com/unified-memory-cuda-beginners/). However, copying elements between the GPU and the host memory is costly in terms of performance and the automatic copying operates on a fairly coarse grain (pages are approximately 4 bytes). Therefore, in order to maximise performance, we recommend you do not use automatic copying and instead manually call the following \add_cpp_python_text{functions,methods} when required: \add_toggle_cpp @@ -59,23 +59,23 @@ Therefore, in order to maximise performance, we recommend you do not use automat - pygenn.genn_groups.Group.pull_var_from_device - pygenn.genn_groups.Group.push_state_to_device - pygenn.genn_groups.Group.push_var_to_device -- pygenn.genn_groups.NeuronGroup.pull_spikes_from_device -- pygenn.genn_groups.NeuronGroup.pull_spike_events_from_device -- pygenn.genn_groups.NeuronGroup.pull_current_spikes_from_device -- pygenn.genn_groups.NeuronGroup.pull_current_spike_events_from_device -- pygenn.genn_groups.NeuronGroup.push_spikes_to_device -- pygenn.genn_groups.NeuronGroup.push_spike_events_to_device -- pygenn.genn_groups.NeuronGroup.push_current_spikes_to_device -- pygenn.genn_groups.NeuronGroup.push_current_spike_events_to_device -- pygenn.genn_groups.SynapseGroup.pull_connectivity_from_device -- pygenn.genn_groups.SynapseGroup.push_connectivity_to_device +- pygenn.NeuronGroup.pull_spikes_from_device +- pygenn.NeuronGroup.pull_spike_events_from_device +- pygenn.NeuronGroup.pull_current_spikes_from_device +- pygenn.NeuronGroup.pull_current_spike_events_from_device +- pygenn.NeuronGroup.push_spikes_to_device +- pygenn.NeuronGroup.push_spike_events_to_device +- pygenn.NeuronGroup.push_current_spikes_to_device +- pygenn.NeuronGroup.push_current_spike_events_to_device +- pygenn.SynapseGroup.pull_connectivity_from_device +- pygenn.SynapseGroup.push_connectivity_to_device \end_toggle You can use \add_cpp_python_text{``pushStateToDevice()``,pygenn.genn_groups.Group.push_state_to_device} to copy from the host to the GPU. 
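As an illustration of the recommended manual-copy pattern, the methods listed above might typically be combined in a Python simulation loop as in the following sketch. It is only a sketch: the `model` object, the population `pop` and its variable "V" are assumed names, not taken from a specific example.
\code
# Illustrative sketch: assumes a loaded GeNNModel `model` with a neuron
# population `pop` that has a state variable "V"
pop.vars["V"].view[:] = -60.0   # modify the variable on the host
pop.push_var_to_device("V")     # copy just this variable to the device

for _ in range(1000):
    model.step_time()

pop.pull_var_from_device("V")   # copy it back to the host for analysis
print(pop.vars["V"].view)
\endcode
Pushing or pulling only the variables that are actually needed, rather than the whole state, keeps the number of host-device transfers to a minimum.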
At the end of your simulation, if you want to access the variables you need to copy them back from the device using the \add_cpp_python_text{``pullStateFromDevice()`` function, pygenn.genn_groups.Group.pull_state_from_device method} or one of the more fine-grained functions listed above. \subsection extraGlobalParamSim Extra Global Parameters -If extra global parameters have a "scalar" type such as ``float`` they can be set directly from simulation code. For example the extra global parameter "reward" of \add_cpp_python_text{population "Pop" can be set with,pygenn.genn_groups.NeuronGroup "pop" should first be initialised before pygenn.genn_model.GeNNModel.load is called with}: +If extra global parameters have a "scalar" type such as ``float`` they can be set directly from simulation code. For example the extra global parameter "reward" of \add_cpp_python_text{population "Pop" can be set with,pygenn.NeuronGroup "pop" should first be initialised before pygenn.GeNNModel.load is called with}: \add_toggle_code_cpp rewardPop = 5.0f; \end_toggle_code @@ -97,7 +97,7 @@ However, if extra global parameters have a pointer type such as ``float*``, GeNN These operate in much the same manner as the functions for interacting with standard variables described above but the allocate, push and pull functions all take a "count" parameter specifying how many entries the extra global parameter array should be. \end_toggle \add_toggle_python -Extra global parameters with a pointer type such as ``float*`` should be initialised and updated in the same manner but, if their value is changed after pygenn.genn_model.GeNNModel.load is called, the updated values need to be pushed to the GPU: +Extra global parameters with a pointer type such as ``float*`` should be initialised and updated in the same manner but, if their value is changed after pygenn.GeNNModel.load is called, the updated values need to be pushed to the GPU: \code pop.extra_global_params["reward"].view[:] = [1,2,3,4] pop.push_extra_global_param_to_device("reward", 4) @@ -112,7 +112,7 @@ Like standard extra global parameters, GeNN generates additional functions to al - `pullFromDevice` \end_toggle \add_toggle_python -These extra global parameters must be initialised before pygenn.genn_model.GeNNModel.load is called: +These extra global parameters must be initialised before pygenn.GeNNModel.load is called: \code pop.vars["g"].set_extra_global_init_param("kernel", [1, 2, 3, 4]) \endcode @@ -122,7 +122,7 @@ pop.vars["g"].set_extra_global_init_param("kernel", [1, 2, 3, 4]) Double precision floating point numbers are supported by devices with compute capability 1.3 or higher. If you have an older GPU, you need to use single precision floating point in your models and simulation. Furthermore, GPUs are designed to work better with single precision while double precision is the standard for CPUs. This difference should be kept in mind while comparing performance. -Typically, variables in GeNN models are defined using the `scalar` type. This type is substituted with "float" or "double" during code generation, according to the model precision. This is specified \add_cpp_python_text{with ModelSpec::setPrecision() -- either `GENN_FLOAT` or `GENN_DOUBLE`. `GENN_FLOAT` is the default value,with the first parameter to pygenn.genn_model.GeNNModel.__init__ as a string e.g. "float"}. +Typically, variables in GeNN models are defined using the `scalar` type. This type is substituted with "float" or "double" during code generation, according to the model precision. 
This is specified \add_cpp_python_text{with ModelSpec::setPrecision() -- either `GENN_FLOAT` or `GENN_DOUBLE`. `GENN_FLOAT` is the default value,with the first parameter to pygenn.GeNNModel.__init__ as a string e.g. "float"}. There may be ambiguities in arithmetic operations using explicit numbers. Standard C compilers presume that any number defined as "X" is an integer and any number defined as "X.Y" is a double. Make sure to use the same precision in your operations in order to avoid performance loss. @@ -141,7 +141,7 @@ When a neuron or synapse population using this model is added to the model, the For example if we add a population called `Pop` using a model which contains our `V` variable, a variable `VPop` of type `scalar*` will be available in the global namespace of the simulation program. GeNN will pre-allocate this C array to the correct size of elements corresponding to the size of the neuron population. Users can otherwise manipulate these variable arrays as they wish. \end_toggle \add_toggle_python -When a neuron or synapse population using this model is added to the model, it is built (with pygenn.genn_model.GeNNModel.build) and loaded (with pygenn.genn_model.GeNNModel.load), it is available to Python code via a numpy memory view into the host memory: +Once a neuron or synapse population using this model has been added to the model, and the model has been built (with pygenn.GeNNModel.build) and loaded (with pygenn.GeNNModel.load), the variable is available to Python code via a numpy memory view into the host memory: \code pop.vars["V"].view[:] = 1.2 \endcode @@ -189,12 +189,12 @@ $(V)+= (-$(V)+$(Isyn))*DT In addition to these variables, neuron variables can be referred to in the synapse models by calling $(\_pre) for the presynaptic neuron population, and $(\_post) for the postsynaptic population. For example, \$(sT_pre), \$(sT_post), \$(V_pre), etc. \section spikeRecording Spike Recording -Especially in models simulated with small timesteps, very few spikes may be emitted every timestep, making calling \add_cpp_python_text{``pullCurrentSpikesFromDevice()`` or ``pullSpikesFromDevice()``, pygenn.genn_groups.NeuronGroup.pull_current_spikes_from_device} every timestep very inefficient. +Especially in models simulated with small timesteps, very few spikes may be emitted every timestep, making calling \add_cpp_python_text{``pullCurrentSpikesFromDevice()`` or ``pullSpikesFromDevice()``, pygenn.NeuronGroup.pull_current_spikes_from_device} every timestep very inefficient. Instead, the spike recording system allows spikes and spike-like events emitted over a number of timesteps to be collected in GPU memory before transferring to the host. -Spike recording can be enabled on chosen neuron groups with the \add_cpp_python_text{``NeuronGroup::setSpikeRecordingEnabled`` and ``NeuronGroup::setSpikeEventRecordingEnabled`` methods,pygenn.genn_groups.NeuronGroup.spike_recording_enabled and pygenn.genn_groups.NeuronGroup.spike_event_recording_enabled properties}. -Remaining GPU memory can then be allocated at runtime for spike recording by\add_cpp_python_text{calling ``allocateRecordingBuffers()`` from user code,using the `num_recording_timesteps` keyword argument to pygenn.genn_model.GeNNModel.load}.
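Putting the pieces just described together, a typical spike-recording workflow in Python might look like the following sketch. It is illustrative only: the `model` and `pop` objects are assumed, the buffer size of 1000 timesteps is arbitrary, and the exact structure returned by the spike recording data property should be checked against the API reference.
\code
# Illustrative sketch: enable recording before the model is built
pop.spike_recording_enabled = True
model.build()

# Reserve recording buffers for 1000 timesteps when loading the model
model.load(num_recording_timesteps=1000)

for _ in range(1000):
    model.step_time()

# Copy the recording buffers to the host and read the recorded spikes
model.pull_recording_buffers_from_device()
spike_times, spike_ids = pop.spike_recording_data  # assumed to yield spike times and neuron ids
\endcode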
-The data structures can then be copied from the GPU to the host using the \add_cpp_python_text{``pullRecordingBuffersFromDevice()`` function,pygenn.genn_model.GeNNModel.pull_recording_buffers_from_device method} and the spikes emitted by a population can be accessed \add_cpp_python_text{in bitmask form via the ``recordSpk`` variable,via the pygenn.genn_groups.NeuronGroup.spike_recording_data property} -Similarly, spike-like events emitted by a population can be accessed via the \add_cpp_python_text{``recordSpkEvent`` variable,pygenn.genn_groups.NeuronGroup.spike_event_recording_data property}. +Spike recording can be enabled on chosen neuron groups with the \add_cpp_python_text{``NeuronGroup::setSpikeRecordingEnabled`` and ``NeuronGroup::setSpikeEventRecordingEnabled`` methods,pygenn.NeuronGroup.spike_recording_enabled and pygenn.NeuronGroup.spike_event_recording_enabled properties}. +Remaining GPU memory can then be allocated at runtime for spike recording by \add_cpp_python_text{calling ``allocateRecordingBuffers()`` from user code,using the `num_recording_timesteps` keyword argument to pygenn.GeNNModel.load}. +The data structures can then be copied from the GPU to the host using the \add_cpp_python_text{``pullRecordingBuffersFromDevice()`` function,pygenn.GeNNModel.pull_recording_buffers_from_device method} and the spikes emitted by a population can be accessed \add_cpp_python_text{in bitmask form via the ``recordSpk`` variable,via the pygenn.NeuronGroup.spike_recording_data property}. +Similarly, spike-like events emitted by a population can be accessed via the \add_cpp_python_text{``recordSpkEvent`` variable,pygenn.NeuronGroup.spike_event_recording_data property}. \add_cpp_text{To make decoding the bitmask data structure easier, the ``::writeBinarySpikeRecording`` and ``::writeTextSpikeRecording`` helper functions can be used by including spikeRecorder.h in the user code.} \section Debugging Debugging suggestions From 4b4b3370245bdd825d55449463ea1753c2eb63d3 Mon Sep 17 00:00:00 2001 From: neworderofjamie Date: Tue, 26 Oct 2021 14:32:51 +0100 Subject: [PATCH 12/12] completed release notes --- doxygen/09_ReleaseNotes.dox | 14 +++++++------- 1 file changed, 7 insertions(+), 7 deletions(-) diff --git a/doxygen/09_ReleaseNotes.dox b/doxygen/09_ReleaseNotes.dox index 10f1dc946b..6f2a399583 100644 --- a/doxygen/09_ReleaseNotes.dox +++ b/doxygen/09_ReleaseNotes.dox @@ -7,16 +7,16 @@ It also includes a number of bug fixes that have been identified since the 4.5.1 User Side Changes ---- -1. Batch reductions, NCCL multi-GPU reductions -2. Postsynaptic model target -3. Fuse pre and postsynaptic update +1. As well as performing arbitrary updates and calculating transposes of weight update model variables, custom updates can now be used to implement 'reductions' so, for example, duplicated variables can be summed across model batches (see \ref custom_update_reduction). +2. Previously, to connect a synapse group to a postsynaptic neuron's additional input variable, a custom postsynaptic model had to be used. SynapseGroup::setPSTargetVar and pygenn.SynapseGroup.ps_target_var can now be used to set the target variable of any synapse group. +3. Previously, weight update model pre and postsynaptic updates and variables got duplicated in the neuron kernel. This was very inefficient and these can now be 'fused' together by setting ModelSpec::setFusePrePostWeightUpdateModels. 4. PyGeNN now shares a version with GeNN itself and this will be accessible via ``pygenn.__version__``. 5.
The names of populations and variables are now validated to prevent code with invalid variable names being generated. -6. As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set -7. Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device and accessed via pygenn.NeuronGroup.current_spike_events. +6. As well as being able to read the current spikes via the pygenn.NeuronGroup.current_spikes property, they can now also be set. +7. Spike-like events were previously not exposed to PyGeNN. These can now be pushed and pulled via pygenn.NeuronGroup.pull_spike_events_from_device, pygenn.NeuronGroup.push_spike_events_to_device, pygenn.NeuronGroup.pull_current_spike_events_from_device and pygenn.NeuronGroup.push_current_spike_events_to_device; and accessed via pygenn.NeuronGroup.current_spike_events. 8. Added additional error handling to prevent properties of pygenn.GeNNModel that can only be set before the model was built being set afterwards. -9. Variable references to custom update variables -10. Updated the default parameters used in the MBody1 example to be more sensible +9. Variable references can now reference custom update variables (see \ref sectVariableReferences). +10. Updated the default parameters used in the MBody1 example to be more sensible. Bug fixes: ----