
Abstract neuron models

Download the slides here


Introduction

So far this week we’ve seen a lot of biological detail about neurons. The topic of this section is how we abstract and simplify this into very simple models.

Artificial neuron

You’re probably familiar with the artificial neuron from machine learning.

Figure 1: Single artificial neuron.

This neuron has $n$ inputs, which have activations $x_1$ to $x_n$ with weights $w_1$ to $w_n$. Each of these activations and weights is a real number.

It performs a weighted sum of its inputs and then passes that through some nonlinear function $f$:

$$x_\mathrm{out} = f\left(\sum_{i=1}^n w_i x_i\right)$$

Various activation functions can be used here. In older work you’ll often see the sigmoid function, and that’s partly because it was considered a good model of real neurons. Nowadays, you’ll more often see the ReLU, which turns out to be much easier to work with and gives results that are often better.
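
To make this concrete, here is a minimal sketch of the model above in Python; the function names and example values are illustrative, not part of any standard library:

```python
import numpy as np

def sigmoid(a):
    # Squashes the weighted sum into (0, 1)
    return 1.0 / (1.0 + np.exp(-a))

def relu(a):
    # Zero below zero, linear above
    return np.maximum(0.0, a)

def artificial_neuron(x, w, f=relu):
    # Weighted sum of the inputs, passed through the nonlinearity f
    return f(np.dot(w, x))

x = np.array([0.5, -1.2, 2.0])   # input activations x_1..x_n
w = np.array([0.8, 0.1, 0.4])    # weights w_1..w_n
print(artificial_neuron(x, w, sigmoid), artificial_neuron(x, w, relu))
```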

Figure 2: Common activation functions for an artificial neuron.

Interestingly, papers are still being published on what the best model of real neurons’ activation functions is, like LaFosse et al. (2023), which finds an activation function that looks intermediate between a sigmoid and a ReLU.

Figure 3: Estimated activation function from real neurons (from LaFosse et al. (2023)).

The relationship between the model and the biological data is established by computing an input-output function, typically by counting the number of spikes coming into the neuron versus the number of spikes going out, averaged over multiple runs and some time period.

What this model leaves out is the temporal dynamics of biological neurons.

Biological neuron

So let’s talk about temporal dynamics. We saw in previous sections that when a neuron receives enough input current, its membrane potential is pushed above a threshold, which leads to an action potential, also called a spike.

The source of the input current is incoming spikes from other neurons, and we’ll talk more about that in next week’s sections on synapses.

For this week, here’s a simple, idealised picture of what this looks like. It’s from a model rather than real data, to more clearly illustrate the process.

Figure: Membrane potential of a model neuron receiving input spikes.

The curve shows the membrane potential of a neuron receiving input spikes at the times indicated by the red circles. Each incoming spike causes a transient rush of incoming current.

For the first one, it’s not enough to cause the neuron to spike, and after a while the membrane potential starts to decay back to its resting value.

And then, more come in and eventually the cumulative effect is enough to push the neuron above the threshold, and it fires a spike and resets.

Integrate-and-fire neuron

The simplest way to model this is using the “integrate-and-fire” neuron. Here’s a plot of how it behaves.

Figure: Behaviour of an integrate-and-fire neuron receiving regular input spikes.

Each time an incoming spike arrives - every 10 milliseconds here - the membrane potential instantaneously jumps by some fixed weight until it hits the threshold, at which point it fires a spike and resets.

We can write this down in a standard form as a series of event-based equations.

  1. When you receive an incoming spike, set the variable $v$ to $v + w_i$.
  2. If the threshold condition ($v > 1$) is true, fire a spike.
  3. After a spike, set $v$ to 0.

Table 1: Integrate-and-fire neuron definition

| Condition | Model behaviour |
| --- | --- |
| On presynaptic spike index $i$ | $v \leftarrow v + w_i$ |
| Threshold condition, fire a spike if | $v > 1$ |
| Reset event, after a spike set | $v \leftarrow 0$ |
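
Because nothing happens between inputs in this model, we can simulate it event by event. Here is a minimal sketch (the spike times and weights are arbitrary illustrative values):

```python
def integrate_and_fire(spike_times, weights, threshold=1.0):
    # Event-driven simulation: v only changes when an input spike arrives
    v = 0.0
    out_spikes = []
    for t, w in sorted(zip(spike_times, weights)):
        v += w                   # on presynaptic spike i: v <- v + w_i
        if v > threshold:        # threshold condition: fire a spike
            out_spikes.append(t)
            v = 0.0              # reset event: v <- 0
    return out_spikes

# Input spikes every 10 ms, each with weight 0.3: every 4th input
# pushes v above threshold, so the neuron fires at 40 ms and 80 ms
times = [0.01 * i for i in range(1, 10)]
print(integrate_and_fire(times, [0.3] * len(times)))  # [0.04, 0.08]
```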

We can see that already this captures part of what’s going on in a real neuron, but misses the fact that in the absence of new inputs, the membrane potential decays back to rest. Let’s add that.

Leaky integrate-and-fire (LIF) neuron

We can model this decay by treating the membrane as a capacitor with a leak in an electrical circuit. Turning this into mathematics, $v$ evolves over time according to the differential equation:

$$\tau \frac{\mathrm{d}v}{\mathrm{d}t} = -v$$

where $\tau$ is the membrane time constant discussed earlier. We can summarise this as a table as before:

Table 2: Leaky integrate-and-fire neuron definition

| Condition | Model behaviour |
| --- | --- |
| Continuous evolution over time | $\tau \frac{\mathrm{d}v}{\mathrm{d}t} = -v$ |
| On presynaptic spike index $i$ | $v \leftarrow v + w_i$ |
| Threshold condition, fire a spike if | $v > 1$ |
| Reset event, after a spike set | $v \leftarrow 0$ |

This differential equation can be solved to show that, in the absence of any input, $v$ decreases exponentially over time with time scale $\tau$:

$$v(t) = v(0) \exp(-t/\tau)$$

Here’s how that looks:

Figure: Membrane potential of a LIF neuron jumping on an input spike and then decaying exponentially back to rest.

We can see that after the instantaneous increase in membrane potential, it starts to decay back to the resting value. Although this doesn’t look like a great model, it turns out that very often it captures enough of what is going on in real neurons.
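
A plot like this can be produced with a simple Euler integration of the LIF equations. Here is a minimal sketch; the time constant, weight and spike times are arbitrary choices:

```python
import numpy as np

def simulate_lif(input_spikes, w=0.8, tau=0.02, dt=1e-4, t_max=0.1):
    # Euler integration of tau dv/dt = -v, with jumps v <- v + w on
    # input spikes, and reset v <- 0 whenever v crosses threshold 1
    steps = int(t_max / dt)
    v = np.zeros(steps)
    spike_steps = {int(round(t / dt)) for t in input_spikes}
    out_spikes = []
    for n in range(1, steps):
        v[n] = v[n - 1] * (1 - dt / tau)  # leak: decay towards rest (0)
        if n in spike_steps:
            v[n] += w                     # instantaneous jump on input
        if v[n] > 1.0:
            out_spikes.append(n * dt)     # threshold crossed: fire...
            v[n] = 0.0                    # ...and reset
    return v, out_spikes

# One input spike at 10 ms: v jumps to 0.8, then decays as exp(-t/tau)
v, spikes = simulate_lif([0.01])
```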

But there is another reason why adding this leak might be important.

Reliable spike timing

In experiments, if you inject a constant input current (horizontal line, top left) into a neuron and record what happens over a few repeated trials, you see that the membrane potentials (top left, above the input current) and the spike times (bottom left) differ substantially between trials. That’s because there’s a lot of noise in the brain, which adds up over time.

On the other hand, if you inject a fluctuating input current (top right) and do the same thing, both the membrane potentials (top right, above the fluctuating current) and the spikes (bottom right) tend to occur at the same times. The vertical lines on the bottom right plot show that on every repeat, the spikes happen at the same times.

Figure 4: Raster plots for constant and fluctuating input current (Mainen & Sejnowski, 1995).

We see the same thing with a LIF neuron. Here are the plots with semi-transparent membrane potentials so that you can clearly see that both the spike times and membrane potentials tend to overlap for the fluctuating current, but not the constant current.

Figure: LIF neuron membrane potentials and spike times over repeated trials, for constant and fluctuating input currents.

But now if we repeat that with a simple integrate-and-fire neuron without a leak, we get unreliable, random spike times for both the constant and the fluctuating current.

Figure: Non-leaky integrate-and-fire membrane potentials and spike times over repeated trials, for the same currents.

In other words, adding the leak made the neuron more robust to noise: an important property for the brain, and perhaps also for the low-power neuromorphic hardware that we’ll talk about later in the course.

So why does the leak make the neuron more robust to noise? That was answered by Brette & Guigon (2003). The mathematical analysis is complicated, but it boils down to the fact that if you have either a leak or a nonlinearity in the differential equations, the fluctuations due to internal noise don’t accumulate over time, whereas with a linear, non-leaky neuron they do.
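
A rough sketch of this experiment in simulation: the fluctuating drive is frozen (identical on every trial), while fresh intrinsic noise is added on each run. This is only an illustration, not a careful stochastic integration, and all parameter values are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, t_max, tau = 1e-4, 1.0, 0.02
steps = int(t_max / dt)
# Frozen fluctuating drive: exactly the same on every repeated trial
I_frozen = 1.5 + 0.5 * rng.standard_normal(steps)

def trial(leak=True, noise=0.5):
    # One trial: frozen drive plus fresh intrinsic noise each run
    v, spikes = 0.0, []
    for n in range(steps):
        drive = I_frozen[n] + noise * rng.standard_normal()
        dv = (drive - v) if leak else drive  # leaky vs non-leaky
        v += (dt / tau) * dv
        if v > 1.0:
            spikes.append(n * dt)
            v = 0.0
    return spikes

# With the leak, early spike times line up across repeats;
# without it, the noise accumulates and the times drift apart
for leak in (True, False):
    print("leak" if leak else "no leak",
          [np.round(trial(leak)[:3], 3) for _ in range(3)])
```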

LIF neuron with synapses

OK, so we have an LIF neuron which we’ve just seen has some nice properties that you’d want a biological neuron model to have. But still, when you look at the graph here, you can see that these instantaneous jumps in the model are not very realistic. So how can we improve that?

A simple answer is to change the model so that instead of having an instantaneous effect on the membrane potential, an incoming spike has an instantaneous effect on an input current, which is then provided as an input to the leaky integrate-and-fire neuron.

You can see on the bottom plot here that the input current is now behaving like the membrane potential before, with this exponential shape. You can think of that as a model of the internal processes of the synapses that we’ll talk more about next week.

Figure 5: Leaky integrate-and-fire with exponential synapses.

We model that by adding a differential equation for the current, with its own time constant $\tau_I$:

$$\tau_I \frac{\mathrm{d}I}{\mathrm{d}t} = -I$$

We also add that current $I$ to the differential equation for $v$, multiplied by a constant $R$. The rest of the model is as it was before:

Table 3: Leaky integrate-and-fire neuron with exponential synapses definition

| Condition | Model behaviour |
| --- | --- |
| Continuous evolution over time | $\tau \frac{\mathrm{d}v}{\mathrm{d}t} = RI - v$, $\quad \tau_I \frac{\mathrm{d}I}{\mathrm{d}t} = -I$ |
| On presynaptic spike index $i$ | $I \leftarrow I + w_i$ |
| Threshold condition, fire a spike if | $v > 1$ |
| Reset event, after a spike set | $v \leftarrow 0$ |

We can see that this gives a better approximation of the shape. There’s more that can be done here if you want, for example modelling changes in conductance rather than current, but we’ll get back to that next week.
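
In simulation, this just means integrating two coupled equations instead of one. A minimal Euler sketch, with arbitrary parameter values:

```python
import numpy as np

def lif_exp_synapse(input_spikes, w=0.6, tau=0.02, tau_I=0.005,
                    R=1.0, dt=1e-4, t_max=0.1):
    # tau dv/dt = R*I - v, tau_I dI/dt = -I; spikes now jump I, not v
    steps = int(t_max / dt)
    v, I = np.zeros(steps), np.zeros(steps)
    spike_steps = {int(round(t / dt)) for t in input_spikes}
    out_spikes = []
    for n in range(1, steps):
        v[n] = v[n - 1] + (dt / tau) * (R * I[n - 1] - v[n - 1])
        I[n] = I[n - 1] * (1 - dt / tau_I)  # synaptic current decays
        if n in spike_steps:
            I[n] += w                       # spike increments the current
        if v[n] > 1.0:
            out_spikes.append(n * dt)
            v[n] = 0.0                      # only v is reset
    return v, I, out_spikes

# v now rises and falls smoothly instead of jumping instantaneously
v, I, spikes = lif_exp_synapse([0.01, 0.03, 0.05])
```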

Spike frequency adaptation

If you feed a regular series of spikes into an LIF neuron (without any noise), you’ll see that the time between output spikes is always the same. It has to be, because the model is memoryless: everything resets after each spike.

However, real neurons have a memory of their recent activity. Here are some recordings from the Allen Institute cell types database that we’ll be coming back to later in the course.

Figure 6: Example of spike frequency adaptation (Salaj et al., 2021).

In these recordings (from Salaj et al. (2021)), a constant current was injected into three different neurons and their activity recorded. We can see that rather than a regularly spaced sequence of spikes, there’s an increasing gap between successive spikes. There are various mechanisms underlying this, but essentially it comes down to slower processes that aren’t reset by a spike. Let’s see how we can model that.

Actually, there are many, many different models of spike frequency adaptation. We’ll just show one very simple approach; if you want some more ideas, the topic is reviewed in Gutkin & Zeldenrust (2014).

In the simple model here, we just introduce a variable threshold, represented by the variable $v_T$. Each time the neuron spikes, the threshold increases, making it more difficult to fire the next spike, before slowly decaying back to its starting value.

Figure 7: Spike frequency adaptation model compared to standard LIF.

In the equations, we represent that with a new differential equation for the threshold, which decays exponentially back to its resting value of 1; we modify the threshold condition to $v > v_T$ instead of $v > 1$; and we specify that after a spike the threshold increases by $\delta v_T$. You can see that it does the job: the gap between earlier spikes is smaller than the gap between later spikes.

Table 4: Adaptive leaky integrate-and-fire neuron definition

| Condition | Model behaviour |
| --- | --- |
| Continuous evolution over time | $\tau \frac{\mathrm{d}v}{\mathrm{d}t} = RI - v$, $\quad \tau_T \frac{\mathrm{d}v_T}{\mathrm{d}t} = 1 - v_T$ |
| On presynaptic spike index $i$ | $v \leftarrow v + w_i$ |
| Threshold condition, fire a spike if | $v > v_T$ |
| Reset event, after a spike set | $v \leftarrow 0$, $\quad v_T \leftarrow v_T + \delta v_T$ |

In general, spike frequency adaptation can give really rich and powerful dynamics to neurons, but we won’t go into any more detail about that right now.
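
Here is a minimal sketch of this adaptive model driven by a constant current, again with arbitrary parameter values; the printed inter-spike intervals grow over time, which is exactly the adaptation we wanted:

```python
import numpy as np

def adaptive_lif(I=1.5, R=1.0, tau=0.02, tau_T=0.2, delta_vT=0.2,
                 dt=1e-4, t_max=0.3):
    # Threshold v_T rises by delta_vT at each spike and decays back to 1
    v, v_T, spikes = 0.0, 1.0, []
    for n in range(int(t_max / dt)):
        v += (dt / tau) * (R * I - v)        # tau dv/dt = R*I - v
        v_T += (dt / tau_T) * (1.0 - v_T)    # tau_T dv_T/dt = 1 - v_T
        if v > v_T:                          # adaptive threshold condition
            spikes.append(n * dt)
            v = 0.0                          # reset v
            v_T += delta_vT                  # raise the threshold
    return np.array(spikes)

print(np.diff(adaptive_lif()))  # inter-spike intervals get longer
```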

Other abstract neuron models

This has been a quick tour of some features of abstract neuron models. There’s a lot more to know about if you want to dive deeper.

Some of the behaviours of real neurons that we haven’t seen in our models so far include:

  • Phasic spiking, i.e. only spiking at the start of an input.
  • Bursting, either phasic (at the start of an input) or tonic (ongoing).
  • Post-inhibitory rebound, where a neuron fires a spike once an inhibitory or negative current is turned off.

Figure 8: Behaviours of real neurons.

You can capture a lot of these effects with two-variable models such as the adaptive exponential integrate-and-fire or Izhikevich neuron models.
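
For example, here is a sketch of the Izhikevich model, where changing the four parameters (a, b, c, d) switches between regular spiking, bursting and several other firing patterns; the values below are, to the best of my knowledge, the standard regular-spiking ones:

```python
import numpy as np

def izhikevich(a=0.02, b=0.2, c=-65.0, d=8.0, I=10.0, dt=0.25, t_max=300.0):
    # Two-variable model: v is the membrane potential (mV), u a slow
    # recovery variable; these (a, b, c, d) give regular spiking with
    # adaptation, while e.g. c=-50, d=2 gives bursting
    v, u = -65.0, b * -65.0
    vs = []
    for _ in range(int(t_max / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:        # spike: reset v, and bump the slow variable
            v, u = c, u + d
        vs.append(v)
    return np.array(vs)

trace = izhikevich()  # time step in ms; plot to see the firing pattern
```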

There are also stochastic neuron models, based either on a Gaussian noise current or on a probabilistic spiking process.

Taking that further, there are Markov chain models of neurons, where you model the probability of a neuron switching between discrete states.

References
  1. LaFosse, P. K., Zhou, Z., O’Rawe, J. F., Friedman, N. G., Scott, V. M., Deng, Y., & Histed, M. H. (2023). Single-cell optogenetics reveals attenuation-by-suppression in visual cortical neurons. 10.1101/2023.09.13.557650
  2. Mainen, Z. F., & Sejnowski, T. J. (1995). Reliability of Spike Timing in Neocortical Neurons. Science, 268(5216), 1503–1506. 10.1126/science.7770778
  3. Brette, R., & Guigon, E. (2003). Reliability of Spike Timing Is a General Property of Spiking Model Neurons. Neural Computation, 15(2), 279–308. 10.1162/089976603762552924
  4. Salaj, D., Subramoney, A., Kraisnikovic, C., Bellec, G., Legenstein, R., & Maass, W. (2021). Spike frequency adaptation supports network computations on temporally dispersed information. eLife, 10. 10.7554/elife.65459
  5. Gutkin, B., & Zeldenrust, F. (2014). Spike frequency adaptation. Scholarpedia, 9(2), 30643. 10.4249/scholarpedia.30643