GPU/CUDA

Brian has some experimental support for performing numerical integration on GPUs, using the PyCUDA package.

Note that only the numerical integration is performed on the GPU: variables that can be altered on the CPU (via synapses or user operations) have to be copied to and from the GPU each time step, as do variables used for thresholding and reset operations. This creates a memory bandwidth bottleneck, so for the moment the GPU code is only useful for complicated neuron models such as Hodgkin-Huxley type neurons (although in that case it can give very substantial speed improvements).

class brian.experimental.cuda.GPUNeuronGroup(N, model, threshold=None, reset=NoReset(), init=None, refractory=0.0 * second, level=0, clock=None, order=1, implicit=False, unit_checking=True, max_delay=0.0 * second, compile=False, freeze=False, method=None, precision='double', maxblocksize=512, forcesync=False, pagelocked_mem=True, gpu_to_cpu_vars=None, cpu_to_gpu_vars=None)

Neuron group which performs numerical integration on the GPU.

Warning

This class is still experimental, not supported and subject to change in future versions of Brian.

Initialised with arguments as for NeuronGroup and additionally:

precision='double'
The GPU scalar precision to use; older GPU models can only use precision='float'.
maxblocksize=512
If GPU compilation fails, reduce this value.
forcesync=False
Whether or not to force copying of state variables to and from the GPU each time step. This is slow, so it is better to specify precisely which variables should be copied using gpu_to_cpu_vars and cpu_to_gpu_vars.
pagelocked_mem=True
Whether to store state variables in pagelocked memory on the CPU, which makes copying data to/from the GPU twice as fast.
cpu_to_gpu_vars=None, gpu_to_cpu_vars=None
Which variables should be copied each time step from the CPU to the GPU (before state update) and from the GPU to the CPU (after state update).
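
As an illustration only (this sketch is not taken from the Brian documentation; it assumes a working PyCUDA installation, and the equations, threshold and reset values are invented), a group might be constructed as follows:

    from brian import *
    from brian.experimental.cuda import GPUNeuronGroup

    # Toy model for illustration; in practice the GPU code pays off for
    # complicated models such as Hodgkin-Huxley type neurons.
    eqs = Equations('''
    dv/dt = (ge - (v + 60*mV)) / (20*ms) : volt
    dge/dt = -ge / (5*ms)                : volt
    ''')

    G = GPUNeuronGroup(10000, eqs,
                       threshold=-50*mV,    # thresholding is done on the CPU
                       reset=-60*mV,
                       precision='float',   # 'double' needs a sufficiently recent GPU
                       maxblocksize=256)    # reduce this if GPU compilation fails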

The reason for copying variables to and from the GPU is that the GPU maintains its own memory, separate from the CPU's, so changes made on either the CPU or the GPU are not automatically reflected in the other. Since only numerical integration is done on the GPU, any state variable that is modified by incoming synapses, for example, should be copied to and from the GPU each time step. In addition, any variables used for thresholding or resetting need to be copied appropriately (GPU->CPU for thresholding, and both directions for resetting).
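
For example, continuing the sketch above (and noting that the exact form expected for cpu_to_gpu_vars and gpu_to_cpu_vars, here lists of variable names, is an assumption rather than documented behaviour): if ge is written to by synapses on the CPU and v is used for thresholding and resetting, the per-time-step copies can be restricted to just those variables:

    # ge is modified by synapses on the CPU -> copy CPU->GPU before the update.
    # v  is used for thresholding (GPU->CPU) and resetting (both directions).
    G = GPUNeuronGroup(10000, eqs,
                       threshold=-50*mV, reset=-60*mV,
                       cpu_to_gpu_vars=['v', 'ge'],
                       gpu_to_cpu_vars=['v'])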