From c7e0360c767363b08a4bb22f2f2fbf6148c16c09 Mon Sep 17 00:00:00 2001 From: danielmk Date: Fri, 11 Sep 2020 18:12:24 +0200 Subject: [PATCH 1/2] added synapses in 08-spiking_neural_systems --- html/models/08-spiking_neural_systems.html | 305 ++++++++++--- markdown/models/08-spiking_neural_systems.md | 405 +++++++++++++----- .../models/08-spiking_neural_systems.ipynb | 165 ++++++- script/models/08-spiking_neural_systems.jl | 100 ++++- .../models/08-spiking_neural_systems.jmd | 170 +++++++- 5 files changed, 953 insertions(+), 192 deletions(-) diff --git a/html/models/08-spiking_neural_systems.html b/html/models/08-spiking_neural_systems.html index 57595244..5d38bca4 100644 --- a/html/models/08-spiking_neural_systems.html +++ b/html/models/08-spiking_neural_systems.html @@ -662,10 +662,10 @@
Daniel Müller-Komorowska
-

This is an introduction to spiking neural systems with Julia's DifferentialEquations package. We will cover four different models: leaky integrate-and-fire, Izhikevich, adaptive exponential integrate-and-fire and Hodgkin-Huxley model. Let's get started with the leaky integrate-and-fire (LIF) model.

-

The Leaky-Integrate-and-Fire Model

-

The LIF model is an extension of the integrate-and-fire (IF) model. While the IF model simply integrates input until it fires, the LIF model integrates input but also decays towards an equilibrium potential. This means that inputs that arrive in quick succession have a much higher chance to make the cell spike as opposed to inputs that are further apart in time. The LIF is a more realistic neuron model than the IF because it is known from real neurons that the timing of inputs is extremely relevant for their spiking.

-

The LIF model has four parameters, gL, EL, C, Vth, I and we define it in the lif(u, p, t) function.

+

This is an introduction to spiking neural systems with Julia's DifferentialEquations package. We will cover three different models: leaky integrate-and-fire, Izhikevich, and Hodgkin-Huxley. Finally, we will also learn about two mechanisms that simulate synaptic inputs the way real neurons receive them: the alpha synapse and the Tsodyks-Markram synapse. Let's get started with the leaky integrate-and-fire (LIF) model.

+

The Leaky Integrate-and-Fire Model

+

The LIF model is an extension of the integrate-and-fire (IF) model. While the IF model simply integrates input until it fires, the LIF model integrates input but also decays towards an equilibrium potential. This means that inputs that arrive in quick succession have a much higher chance to make the cell spike as opposed to inputs that are further apart in time. The LIF is a more realistic neuron model than the IF, because it is known from real neurons that the timing of inputs is extremely relevant for their spiking.

+

The LIF model has five parameters, gL, EL, C, Vth, and I, and we define it in the lif(u, p, t) function.

@@ -686,7 +686,7 @@ 

The Leaky-Integrate-and-Fire Model

Our system is described by one differential equation: (-gL*(u-EL)+I)/C, where u is the voltage, I is the input, gL is the leak conductance, EL is the equilibrium potential of the leak conductance and C is the membrane capacitance. Generally, any change of the voltage is slowed down (filtered) by the membrane capacitance. That's why we divide the whole equation by C. Without any external input, the voltage always converges towards EL. If u is larger than EL, u decreases until it is at EL. If u is smaller than EL, u increases until it is at EL. The only other thing that can change the voltage is the external input I.
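As a sketch (with DifferentialEquations loaded as in the rest of the tutorial, and assuming the parameter order gL, EL, C, Vth, I; Vth is only used later by the spike callback), the right-hand side can be written as:

````julia
function lif(u, p, t)
    gL, EL, C, Vth, I = p
    # leak current pulls u toward EL, the input I pushes it up, and C filters the change
    return (-gL * (u - EL) + I) / C
end
````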

-

Our lif function requires a certain parameter structure because it will need to be compatible with the DifferentialEquations interface. The input signature is lif(u, p, t) where u is the voltage, p is the collection of the parameters that describe the equation and t is time. You might wonder why time does not show up in our equation, although we need to calculate the change in voltage with respect to time. The ODE solver will take care of time for us. One of the advantages of the ODE solver as opposed to calculating the change of u in a for loop is that many ODE solver algorithm can dynamically adjust the time step in a way that is efficient and accurate.

+

Our lif function requires a certain parameter structure because it will need to be compatible with the DifferentialEquations interface. The input signature is lif(u, p, t) where u is the voltage, p is the collection of the parameters that describe the equation and t is time. You might wonder why time does not show up in our equation, although we need to calculate the change in voltage with respect to time. The ODE solver will take care of time for us. One of the advantages of the ODE solver as opposed to calculating the change of u in a for loop is that many ODE solver algorithms can dynamically adjust the time step in a way that is efficient and accurate.

One crucial thing is still missing, however. This is supposed to be a model of neural spiking, right? So we need a mechanism that recognizes the spike and hyperpolarizes u in response. For this purpose we will use callbacks. They can make discontinuous changes to the model when certain conditions are met.
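A minimal sketch of the callbacks described in the next paragraph (assuming p = (gL, EL, C, Vth, I) as above, so p[2] = EL, p[4] = Vth and p[5] = I; the tutorial's actual definitions may differ slightly):

````julia
thr(u, t, integrator) = integrator.u > integrator.p[4]   # condition: voltage above Vth
reset!(integrator) = integrator.u = integrator.p[2]      # effect: hyperpolarize back to EL

threshold = DiscreteCallback(thr, reset!)
current_step = PresetTimeCallback([2, 15], integrator -> integrator.p[5] += 210.0)
cb = CallbackSet(current_step, threshold)
````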

@@ -727,7 +727,7 @@

The Leaky-Integrate-and-Fire Model

-

Our condition is thr(u,t,integrator) and the condition kicks in when integrator.u > integrator.p[4] where p[4] is our threshold parameter Vth. Our effect of the condition is reset!(integrator). It sets u back to the equilibrium potential p[2]. We then wrap both the condition and the effect into a DiscreteCallback called threshold. There is one more particularly handy callback called PresetTimeCallback. This one increases the input p[5] at t=2 and t=15 by 210.0. Both callbacks are then combined into a CallbackSet. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters.

+

Our condition is thr(u,t,integrator) and the condition kicks in when integrator.u > integrator.p[4] where p[4] is our threshold parameter Vth. The effect of the condition is reset!(integrator). It sets u back to the equilibrium potential p[2]. We then wrap both the condition and the effect into a DiscreteCallback called threshold. There is one more callback called PresetTimeCallback that is particularly useful. This one increases the input p[5] at t=2 and t=15 by 210.0. Both callbacks are then combined into a CallbackSet. We are almost done; to simulate our system, we just need to put numbers on our initial voltage and parameters.
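An illustrative setup (EL = -75, Vth ≈ -50.4, I = 0 and the time span match the solution shown below; the gL and C values here are placeholders, not necessarily the tutorial's):

````julia
u0 = -75.0                            # start at the equilibrium potential
p = [5.0, -75.0, 50.0, -50.4, 0.0]    # gL (placeholder), EL, C (placeholder), Vth, I
tspan = (0.0, 40.0)
prob = ODEProblem(lif, u0, tspan, p, callback=cb)
sol = solve(prob)
````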

@@ -755,55 +755,12 @@ 

The Leaky-Integrate-and-Fire Model

-
-retcode: Success
-Interpolation: automatic order switching interpolation
-t: 153-element Array{Float64,1}:
-  0.0
-  9.999999999999999e-5
-  0.0010999999999999998
-  0.011099999999999997
-  0.11109999999999996
-  1.1110999999999995
-  2.0
-  2.0
-  2.6300346673750097
-  2.9226049547524595
-  ⋮
- 38.34157935968204
- 38.78215179003683
- 38.78215179003683
- 39.222724173706894
- 39.222724173706894
- 39.6632965982261
- 39.6632965982261
- 40.0
- 40.0
-u: 153-element Array{Float64,1}:
- -75.0
- -75.0
- -75.0
- -75.0
- -75.0
- -75.0
- -75.0
- -75.0
- -59.978080111690375
- -57.32999167299642
-   ⋮
- -75.0
- -50.40489310815222
- -75.0
- -50.404894730067554
- -75.0
- -50.404893310891545
- -75.0
- -54.419318668318546
- -75.0
+
+ERROR: UndefVarError: top not defined
 
-

First of all the solve output tells us if solving the system generally worked. In this case we know it worked because the return code (retcode) say Success. Then we get the numbers for the timestep and the solution to u. The raw numbers are not super interesting to let's plot our solution.

+

First of all the solve output tells us if solving the system generally worked. In this case we know it worked because the return code (retcode) says Success. Then we get the numbers for the timestep and the solution to u. The raw numbers are not super interesting, so let's plot our solution.

@@ -811,7 +768,10 @@ 

The Leaky-Integrate-and-Fire Model

- +
+ERROR: UndefVarError: sol not defined
+
+

We see that the model is resting at -75 while there is no input. At t=2 the input increases by 210 and the model starts to spike. Spiking does not start immediately because the input first has to charge the membrane capacitance. Notice how once spiking starts it very quickly becomes extremely regular. Increasing the input again at t=15 increases firing as we would expect, but it is still extremely regular. This is one of the features of the LIF: the firing frequency is regular for constant input and increases with the input strength. There are ways to make LIF models less regular. For example, we could use certain noise types at the input. We could also simulate a large number of LIF models and connect them synaptically. Instead of going into those topics, we will move on to the Izhikevich model, which is known for its ability to generate a large variety of spiking dynamics during constant inputs.

The Izhikevich Model

@@ -837,7 +797,7 @@

The Izhikevich Model

-

This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter du. This is a sequence of differences. du[1] corresponds to the voltage (the first dimension of the system) and du[2]corresponds to the second dimension. This second dimension is called u in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and DifferentialEquations conventions as opposed to conventions of the specific models and du is commonly used. We will never define du ourselves outside of the function but the ODE solver will use it internally. The other change here is the ! after our function name. This signifies that du will be preallocated before integration, which saves a lot of allocation time. In other words, du will be changed in place. Now we just need our callbacks to take care of spikes and increase the input.

+

This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter du. This is the vector of derivatives: du[1] corresponds to the voltage (the first dimension of the system) and du[2] corresponds to the second dimension. This second dimension is called u in the original Izhikevich work and it makes the notation a little annoying. In this tutorial I will generally stick to Julia and DifferentialEquations conventions as opposed to conventions of the specific models, and du is commonly used. We will never define du ourselves outside of the function but the ODE solver will use it internally. The other change here is the ! after our function name. This signifies that du will be preallocated before integration and then updated in-place, which saves a lot of allocation time. Now we just need our callbacks to take care of spikes and increase the input.
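For reference, here is a sketch of the standard Izhikevich equations (the parameter order p = (a, b, c, d, I) is an assumption, but it is consistent with the text below, where p[3] and p[4] act as the reset parameters c and d):

````julia
function izh!(du, u, p, t)
    a, b, c, d, I = p
    du[1] = 0.04 * u[1]^2 + 5 * u[1] + 140 - u[2] + I   # membrane voltage
    du[2] = a * (b * u[1] - u[2])                        # recovery variable
end
````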

@@ -903,7 +863,10 @@ 

The Izhikevich Model

- +
+ERROR: UndefVarError: top not defined
+
+

This spiking type is called chattering. It fires with intermittent periods of silence. Note that the input starts at t=50 and remains constant for the duration of the simulation. One of the mechanisms that sustains this type of firing is the spike-induced hyperpolarization coming from our second dimension, so let's look at this variable.
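The spike handling behind this could look like the following sketch (the 30 mV cutoff is Izhikevich's convention; the parameter order is the assumption from above):

````julia
thr(u, t, integrator) = integrator.u[1] >= 30.0   # spike cutoff
function reset!(integrator)
    integrator.u[1] = integrator.p[3]    # reset v to c
    integrator.u[2] += integrator.p[4]   # increment u by d: the spike-induced hyperpolarization
end
threshold = DiscreteCallback(thr, reset!)
````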

@@ -913,9 +876,12 @@

The Izhikevich Model

- +
+ERROR: UndefVarError: sol not defined
+
+ -

Our second dimension u[2] increases with every spike. When it becomes too large and the system cannot generate another spike until u[2]has decayed to a value small enough that spiking can resume. This process repeats. In this model spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters.

+

Our second dimension u[2] increases with every spike. When it becomes too large, the system cannot generate another spike until u[2] has decayed to a value small enough that spiking can resume. This process repeats. In this model, spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters.

@@ -929,11 +895,14 @@ 

The Izhikevich Model

- +
+ERROR: UndefVarError: top not defined
+
+

This type is called regularly spiking and we created it just by lowering p[3] and increasing p[4]. Note that the type is called regularly spiking but it is not instantaneously regular. The instantaneous frequency is higher in the beginning. This is called spike frequency adaptation and is a common property of real neurons. There are many more spike types that can be generated. Check out the original Izhikevich work and create your own favorite neuron!

Hodgkin-Huxley Model

-

The Hodgkin-Huxley (HH) model is our first so called biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks.

+

The Hodgkin-Huxley (HH) model is our first biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks.

@@ -955,7 +924,7 @@ 

Hodgkin-Huxley Model

gK, gNa, gL, EK, ENa, EL, C, I = p v, n, m, h = u - du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I) / C + du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I) / C du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) @@ -968,8 +937,8 @@

Hodgkin-Huxley Model

-

We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conducance are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by gK * (n^4.0) * (EK - v). Here gK is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, n would equal 1 which is usually not the case. The transition from open state to closed state is modeled in alpha_n(v) while the transition from closed to open is in beta_n(v). Because potassium conductance is voltage gated, these transitions depend on v. The numbers in alpha_n; beta_n were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined, that n needs to be taken to the power of 4 to correctly model the amount of open channels.

-

The sodium current is not very different but it two gating variables, m; h instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameter. If you want all the details on the HH model you can find a great description here.

+

We have three different types of ionic conductances: potassium, sodium and the leak. The potassium and sodium conductances are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by gK * (n^4.0) * (EK - v). Here gK is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, n would equal 1, which is usually not the case. The transition from the closed to the open state is modeled in alpha_n(v) while the transition from open to closed is in beta_n(v). Because potassium conductance is voltage gated, these transitions depend on v. The numbers in alpha_n and beta_n were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined that n needs to be taken to the power of 4 to correctly model the fraction of open channels.

+

The sodium current is not very different, but it has two gating variables, m and h, instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameters. If you want all the details on the HH model you can find a great description here.
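The initial conditions used later in this section call steady-state helpers such as n_inf. One standard way to define them from the rate functions (an assumption about the tutorial's exact code) is:

````julia
n_inf(v) = alpha_n(v) / (alpha_n(v) + beta_n(v))
m_inf(v) = alpha_m(v) / (alpha_m(v) + beta_m(v))
h_inf(v) = alpha_h(v) / (alpha_h(v) + beta_h(v))
````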

@@ -1005,7 +974,10 @@ 

Hodgkin-Huxley Model

- +
+ERROR: UndefVarError: top not defined
+
+

That's some good regular voltage spiking. One of the cool things about a biophysically realistic model is that the gating variables tell us something about the mechanisms behind the action potential. You might have seen something like the following plot in a biology textbook.

@@ -1015,16 +987,217 @@

Hodgkin-Huxley Model

- +
+ERROR: UndefVarError: sol not defined
+
+ + +

So far we have only given our neurons simple step inputs by changing the number I. Actual neurons receive their inputs mostly from chemical synapses. They produce conductance changes with very complex structures. In the next chapter we will try to incorporate a synapse into our HH model.

+

Alpha Synapse

+

One of the simplest synaptic mechanisms used in computational neuroscience is the alpha synapse. When this mechanism is triggered, it causes an instantaneous rise in conductance followed by an exponential decay. Let's incorporate that into our HH model.

+ + +
+function gSyn(max_gsyn, tau, tf, t)
+    if t-tf >= 0
+        return max_gsyn * exp(-(t-tf)/tau)
+    else
+        return 0.0
+    end
+end
+function HH!(du, u, p, t)
+    gK, gNa, gL, EK, ENa, EL, C, I, max_gSyn, ESyn, tau, tf = p
+    v, n, m, h = u
+
+    ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)
+
+    du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I - ISyn) / C
+    du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)
+    du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)
+    du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)
+end
+
+ + +
+HH! (generic function with 1 method)
+
+ + +

gSyn models the step to the maximum conductance and the following exponential decay with time constant tau. Of course we only want to integrate the conductance at and after time tf, the onset of the synaptic response. Before tf, gSyn returns zero. To convert the conductance to a current, we multiply by the difference between the current voltage and the synapse's equilibrium voltage: ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn). Later we will set the parameter ESyn to 0, making this synapse an excitatory synapse. Excitatory synapses have equilibrium potentials far above the resting potential. Let's see what our synapse does to the voltage of the cell.
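For instance, with the values used below (max_gsyn = 0.008, tau = 20, tf = 100), the conductance half a time constant after onset is:

````julia
gSyn(0.008, 20, 100, 110)   # 0.008 * exp(-10/20) ≈ 0.00485
````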

+ + +
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.008, 0, 20, 100]
+tspan = (0.0, 200)
+prob = ODEProblem(HH!, u0, tspan, p)
+sol = solve(prob);
+plot(sol, vars=1)
+
+ + +
+ERROR: UndefVarError: top not defined
+
+ + +

What you see here is called an excitatory postsynaptic potential (EPSP). It is the voltage response to a synaptic current. While our synaptic conductance rises instantly, the voltage response rises at a slower time course that is given by the membrane capacitance C. This particular voltage response is not strong enough to evoke spiking, so we say it is subthreshold. To get a suprathreshold response that evokes spiking we simply increase the parameter max_gSyn to increase the maximum conductance.

+ + +
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.01, 0, 20, 100]
+tspan = (0.0, 200)
+prob = ODEProblem(HH!, u0, tspan, p)
+sol = solve(prob);
+plot!(sol, vars=1)
+
+ + +
+ERROR: UndefVarError: top not defined
+
+ + +

This plot shows both the subthreshold EPSP from above as well as the suprathreshold EPSP. Alpha synapses are nice because of their simplicity. Real synapses, however, are extremely complex structures. One of the most important features of real synapses is that their maximum conductance is not the same on every event. The number and frequency of synaptic events change the size of the maximum conductance in a dynamic way. While we usually avoid anatomical and biophysical details of real synapses, there is a widely used phenomenological way to capture those dynamics, called the Tsodyks-Markram synapse.

+

Tsodyks-Markram Synapse

+

The Tsodyks-Markram synapse (TMS) is a dynamic system that models the changes of maximum conductance that occur between EPSPs at different frequencies. The single response is similar to the alpha synapse in that it rises instantaneously and decays exponentially. The maximum conductance it reaches depends on the event history. To simulate the TMS we need to incorporate three more dimensions, u, R, and gsyn, into our system. u decays towards 0, R decays towards 1 and gsyn decays towards 0 as it did with the alpha synapse. The crucial part of the TMS is in epsp!, where we handle the discontinuities when a synaptic event occurs. Instead of just setting gsyn to the maximum conductance gmax, we increment gsyn by a fraction of gmax that depends on the other two dynamic variables. The frequency dependence comes from the size of the time constants tau_u and tau_R. Enough talk, let's simulate it.

+ + +
+function HH!(du, u, p, t)
+    gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R, u0, gmax, Esyn = p
+    v, n, m, h, u, R, gsyn = u
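+    # note: from here on, the local name u is the synaptic facilitation variable and shadows the state vector u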
+
+    du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I + gsyn * (Esyn - v)) / C
+    du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)
+    du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)
+    du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)
+
+    # Synaptic variables
+    du[5] = -(u/tau_u)
+    du[6] = (1-R)/tau_R
+    du[7] = -(gsyn/tau)
+end
+
+function epsp!(integrator)
+    integrator.u[5] += integrator.p[12] * (1 - integrator.u[5])
+    integrator.u[7] += integrator.p[13] * integrator.u[5] * integrator.u[6]
+    integrator.u[6] -= integrator.u[5] * integrator.u[6]
+
+end
+
+epsp_ts = PresetTimeCallback(100:100:500, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 1000, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=1)
+
+ + +
+ERROR: UndefVarError: top not defined
+
+ + + +
+plot(sol, vars=7)
+
+ + +
+ERROR: UndefVarError: sol not defined
+
+ + +

Both the voltage response and the conductance show what is called short-term facilitation: an increase in peak conductance over multiple synaptic events. Here the first event has a conductance of around 0.0025 and the last one of 0.004. We can plot the other two variables to see what underlies those dynamics.

+ + +
+plot(sol, vars=[5,6])
+
+ + +
+ERROR: UndefVarError: sol not defined
+
+ + +

Because of the time courses at play here, this facilitation is frequency dependent. If we increase the period between these events, facilitation does not occur.

+ + +
+epsp_ts = PresetTimeCallback(100:1000:5100, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 500, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 5300)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+
+ + +
+ERROR: UndefVarError: top not defined
+
+ + + +
+plot(sol, vars=[5,6])
+
+ + +
+ERROR: UndefVarError: sol not defined
+
+ + +

We can also change these time constants such that the dynamics show short-term depression instead of facilitation.

+ + +
+epsp_ts = PresetTimeCallback(100:100:500, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 100, 1000, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+
+ + +
+ERROR: UndefVarError: top not defined
+
+ + + +
+plot(sol, vars=[5,6])
+
+ + +
+ERROR: UndefVarError: sol not defined
+
+ -

That's it for the tutorial on spiking neural systems.

+

Just changing those two time constants has changed the dynamics to short-term depression. This is still frequency dependent. Changing these parameters can generate a variety of different short-term dynamics.

+

Summary

+

That's it for now. Thanks for making it this far. If you want to learn more about neuronal dynamics, this is a great resource. If you want to learn more about Julia check out the official website and to learn more about the DifferentialEquations package you are in the right place, because this chapter is part of a larger tutorial series about just that.


diff --git a/markdown/models/08-spiking_neural_systems.md b/markdown/models/08-spiking_neural_systems.md index e1718b03..7c85b449 100644 --- a/markdown/models/08-spiking_neural_systems.md +++ b/markdown/models/08-spiking_neural_systems.md @@ -5,19 +5,20 @@ title: "Spiking Neural Systems" This is an introduction to spiking neural systems with Julia's DifferentialEquations package. -We will cover four different models: leaky integrate-and-fire, Izhikevich, adaptive exponential - integrate-and-fire and Hodgkin-Huxley model. Let's get started with the leaky integrate-and-fire (LIF) model. - -## The Leaky-Integrate-and-Fire Model +We will cover three different models: leaky integrate-and-fire, Izhikevich, and Hodgkin-Huxley. +Finally we will also learn about two mechanisms that simulate synaptic inputs like +real neurons receive them. The alpha synapse and the Tsodyks-Markram synapse. Let's get started +with the leaky integrate-and-fire (LIF) model. +## The Leaky Integrate-and-Fire Model The LIF model is an extension of the integrate-and-fire (IF) model. While the IF model simply integrates input until it fires, the LIF model integrates input but also decays towards an equilibrium potential. This means that inputs that arrive in quick succession have a much higher chance to make the cell spike as opposed to inputs that are further apart in time. The LIF is a more realistic neuron -model than the IF because it is known from real neurons that the timing of +model than the IF, because it is known from real neurons that the timing of inputs is extremely relevant for their spiking. -The LIF model has four parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function. +The LIF model has five parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function. ````julia @@ -57,7 +58,7 @@ that describe the equation and `t` is time. You might wonder why time does not show up in our equation, although we need to calculate the change in voltage with respect to time. The ODE solver will take care of time for us. One of the advantages of the ODE solver as opposed to calculating the change of -u in a for loop is that many ODE solver algorithm can dynamically adjust the +`u` in a for loop is that many ODE solver algorithms can dynamically adjust the time step in a way that is efficient and accurate. One crucial thing is still missing however. 
This is supposed to be a model of @@ -84,21 +85,21 @@ cb = CallbackSet(current_step,threshold) ```` DiffEqBase.CallbackSet{Tuple{},Tuple{DiffEqBase.DiscreteCallback{DiffEqCall backs.var"#61#64"{Array{Int64,1}},DiffEqCallbacks.var"#62#65"{Main.##WeaveS -andBox#321.var"#1#2"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INITIAL -IZE_DEFAULT),Bool,Array{Int64,1},Main.##WeaveSandBox#321.var"#1#2"}},DiffEq -Base.DiscreteCallback{typeof(Main.##WeaveSandBox#321.thr),typeof(Main.##Wea -veSandBox#321.reset!),typeof(DiffEqBase.INITIALIZE_DEFAULT)}}}((), (DiffEqB +andBox#309.var"#1#2"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INITIAL +IZE_DEFAULT),Bool,Array{Int64,1},Main.##WeaveSandBox#309.var"#1#2"}},DiffEq +Base.DiscreteCallback{typeof(Main.##WeaveSandBox#309.thr),typeof(Main.##Wea +veSandBox#309.reset!),typeof(DiffEqBase.INITIALIZE_DEFAULT)}}}((), (DiffEqB ase.DiscreteCallback{DiffEqCallbacks.var"#61#64"{Array{Int64,1}},DiffEqCall -backs.var"#62#65"{Main.##WeaveSandBox#321.var"#1#2"},DiffEqCallbacks.var"#6 +backs.var"#62#65"{Main.##WeaveSandBox#309.var"#1#2"},DiffEqCallbacks.var"#6 3#66"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,Array{Int64,1},Main.##Weav -eSandBox#321.var"#1#2"}}(DiffEqCallbacks.var"#61#64"{Array{Int64,1}}([2, 15 -]), DiffEqCallbacks.var"#62#65"{Main.##WeaveSandBox#321.var"#1#2"}(Main.##W -eaveSandBox#321.var"#1#2"()), DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase -.INITIALIZE_DEFAULT),Bool,Array{Int64,1},Main.##WeaveSandBox#321.var"#1#2"} -(DiffEqBase.INITIALIZE_DEFAULT, true, [2, 15], Main.##WeaveSandBox#321.var" +eSandBox#309.var"#1#2"}}(DiffEqCallbacks.var"#61#64"{Array{Int64,1}}([2, 15 +]), DiffEqCallbacks.var"#62#65"{Main.##WeaveSandBox#309.var"#1#2"}(Main.##W +eaveSandBox#309.var"#1#2"()), DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase +.INITIALIZE_DEFAULT),Bool,Array{Int64,1},Main.##WeaveSandBox#309.var"#1#2"} +(DiffEqBase.INITIALIZE_DEFAULT, true, [2, 15], Main.##WeaveSandBox#309.var" #1#2"()), Bool[1, 1]), DiffEqBase.DiscreteCallback{typeof(Main.##WeaveSandB -ox#321.thr),typeof(Main.##WeaveSandBox#321.reset!),typeof(DiffEqBase.INITIA -LIZE_DEFAULT)}(Main.##WeaveSandBox#321.thr, Main.##WeaveSandBox#321.reset!, +ox#309.thr),typeof(Main.##WeaveSandBox#309.reset!),typeof(DiffEqBase.INITIA +LIZE_DEFAULT)}(Main.##WeaveSandBox#309.thr, Main.##WeaveSandBox#309.reset!, DiffEqBase.INITIALIZE_DEFAULT, Bool[1, 1]))) ```` @@ -106,7 +107,7 @@ LIZE_DEFAULT)}(Main.##WeaveSandBox#321.thr, Main.##WeaveSandBox#321.reset!, -Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. There is one more particularly handy callback called `PresetTimeCallback`. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters. +Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. 
There is one more callback called `PresetTimeCallback` that is particularly useful. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters. ````julia @@ -138,57 +139,14 @@ sol = solve(prob) ```` -retcode: Success -Interpolation: automatic order switching interpolation -t: 153-element Array{Float64,1}: - 0.0 - 9.999999999999999e-5 - 0.0010999999999999998 - 0.011099999999999997 - 0.11109999999999996 - 1.1110999999999995 - 2.0 - 2.0 - 2.6300346673750097 - 2.9226049547524595 - ⋮ - 38.34157935968204 - 38.78215179003683 - 38.78215179003683 - 39.222724173706894 - 39.222724173706894 - 39.6632965982261 - 39.6632965982261 - 40.0 - 40.0 -u: 153-element Array{Float64,1}: - -75.0 - -75.0 - -75.0 - -75.0 - -75.0 - -75.0 - -75.0 - -75.0 - -59.978080111690375 - -57.32999167299642 - ⋮ - -75.0 - -50.40489310815222 - -75.0 - -50.404894730067554 - -75.0 - -50.404893310891545 - -75.0 - -54.419318668318546 - -75.0 -```` - - - - - -First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) say `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting to let's plot our solution. +Error: UndefVarError: top not defined +```` + + + + + +First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) says `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting to let's plot our solution. ````julia @@ -196,7 +154,11 @@ plot(sol) ```` -![](figures/08-spiking_neural_systems_5_1.png) +```` +Error: UndefVarError: sol not defined +```` + + @@ -228,7 +190,7 @@ izh! (generic function with 1 method) -This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]`corresponds to the second dimension. This second dimension is called `u` in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that du will be preallocated before integration, which saves a lot of allocation time. In other words, du will be changed in place. Now we just need our callbacks to take care of spikes and increase the input. +This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]` corresponds to the second dimension. This second dimension is called `u` in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. 
We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that `du` will be preallocated before integration and then updated in-place, which saves a lot of allocation time. Now we just need our callbacks to take care of spikes and increase the input. ````julia @@ -249,22 +211,22 @@ cb = CallbackSet(current_step,threshold) ```` DiffEqBase.CallbackSet{Tuple{},Tuple{DiffEqBase.DiscreteCallback{DiffEqCall -backs.var"#61#64"{Int64},DiffEqCallbacks.var"#62#65"{Main.##WeaveSandBox#32 -1.var"#3#4"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INITIALIZE_DEFAU -LT),Bool,Int64,Main.##WeaveSandBox#321.var"#3#4"}},DiffEqBase.DiscreteCallb -ack{typeof(Main.##WeaveSandBox#321.thr),typeof(Main.##WeaveSandBox#321.rese +backs.var"#61#64"{Int64},DiffEqCallbacks.var"#62#65"{Main.##WeaveSandBox#30 +9.var"#3#4"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INITIALIZE_DEFAU +LT),Bool,Int64,Main.##WeaveSandBox#309.var"#3#4"}},DiffEqBase.DiscreteCallb +ack{typeof(Main.##WeaveSandBox#309.thr),typeof(Main.##WeaveSandBox#309.rese t!),typeof(DiffEqBase.INITIALIZE_DEFAULT)}}}((), (DiffEqBase.DiscreteCallba ck{DiffEqCallbacks.var"#61#64"{Int64},DiffEqCallbacks.var"#62#65"{Main.##We -aveSandBox#321.var"#3#4"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INI -TIALIZE_DEFAULT),Bool,Int64,Main.##WeaveSandBox#321.var"#3#4"}}(DiffEqCallb +aveSandBox#309.var"#3#4"},DiffEqCallbacks.var"#63#66"{typeof(DiffEqBase.INI +TIALIZE_DEFAULT),Bool,Int64,Main.##WeaveSandBox#309.var"#3#4"}}(DiffEqCallb acks.var"#61#64"{Int64}(50), DiffEqCallbacks.var"#62#65"{Main.##WeaveSandBo -x#321.var"#3#4"}(Main.##WeaveSandBox#321.var"#3#4"()), DiffEqCallbacks.var" +x#309.var"#3#4"}(Main.##WeaveSandBox#309.var"#3#4"()), DiffEqCallbacks.var" #63#66"{typeof(DiffEqBase.INITIALIZE_DEFAULT),Bool,Int64,Main.##WeaveSandBo -x#321.var"#3#4"}(DiffEqBase.INITIALIZE_DEFAULT, true, 50, Main.##WeaveSandB -ox#321.var"#3#4"()), Bool[1, 1]), DiffEqBase.DiscreteCallback{typeof(Main.# -#WeaveSandBox#321.thr),typeof(Main.##WeaveSandBox#321.reset!),typeof(DiffEq -Base.INITIALIZE_DEFAULT)}(Main.##WeaveSandBox#321.thr, Main.##WeaveSandBox# -321.reset!, DiffEqBase.INITIALIZE_DEFAULT, Bool[1, 1]))) +x#309.var"#3#4"}(DiffEqBase.INITIALIZE_DEFAULT, true, 50, Main.##WeaveSandB +ox#309.var"#3#4"()), Bool[1, 1]), DiffEqBase.DiscreteCallback{typeof(Main.# +#WeaveSandBox#309.thr),typeof(Main.##WeaveSandBox#309.reset!),typeof(DiffEq +Base.INITIALIZE_DEFAULT)}(Main.##WeaveSandBox#309.thr, Main.##WeaveSandBox# +309.reset!, DiffEqBase.INITIALIZE_DEFAULT, Bool[1, 1]))) ```` @@ -298,7 +260,11 @@ plot(sol, vars=1) ```` -![](figures/08-spiking_neural_systems_9_1.png) +```` +Error: UndefVarError: top not defined +```` + + @@ -310,11 +276,15 @@ plot(sol, vars=2) ```` -![](figures/08-spiking_neural_systems_10_1.png) +```` +Error: UndefVarError: sol not defined +```` + + -Our second dimension `u[2]` increases with every spike. When it becomes too large and the system cannot generate another spike until `u[2]`has decayed to a value small enough that spiking can resume. This process repeats. In this model spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters. 
+Our second dimension `u[2]` increases with every spike. When it becomes too large, the system cannot generate another spike until `u[2]` has decayed to a value small enough that spiking can resume. This process repeats. In this model, spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters. ````julia @@ -328,14 +298,18 @@ plot(sol, vars=1) ```` -![](figures/08-spiking_neural_systems_11_1.png) +```` +Error: UndefVarError: top not defined +```` + + This type is called regularly spiking and we created it just by lowering `p[3]` and increasing `p[4]`. Note that the type is called regularly spiking but it is not instantaneously regular. The instantenous frequency is higher in the beginning. This is called spike frequency adaptation and is a common property of real neurons. There are many more spike types that can be generated. Check out the [original Izhikevich work](https://www.izhikevich.org/publications/spikes.htm) and create your own favorite neuron! ## Hodgkin-Huxley Model -The Hodgkin-Huxley (HH) model is our first so called biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks. +The Hodgkin-Huxley (HH) model is our first biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks. ````julia @@ -357,7 +331,7 @@ function HH!(du,u,p,t); gK, gNa, gL, EK, ENa, EL, C, I = p v, n, m, h = u - du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I) / C + du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I) / C du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) @@ -373,9 +347,9 @@ HH! (generic function with 1 method) -We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conducance are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here gK is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. 
They also determined, that `n` needs to be taken to the power of 4 to correctly model the amount of open channels. +We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conducance are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here `gK` is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined, that `n` needs to be taken to the power of 4 to correctly model the amount of open channels. -The sodium current is not very different but it two gating variables, `m; h` instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameter. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html). +The sodium current is not very different but it has two gating variables, `m, h` instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameters. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html). ````julia @@ -414,7 +388,11 @@ plot(sol, vars=1) ```` -![](figures/08-spiking_neural_systems_14_1.png) +```` +Error: UndefVarError: top not defined +```` + + @@ -426,8 +404,243 @@ plot(sol, vars=[2,3,4], tspan=(105.0,130.0)) ```` -![](figures/08-spiking_neural_systems_15_1.png) +```` +Error: UndefVarError: sol not defined +```` + + + + + +So far we have only given our neurons very simple step inputs by simply changing +the number `I`. Actual neurons recieve their inputs mostly from chemical synapses. +They produce conductance changes with very complex structures. In the next +chapter we will try to incorporate a synapse into our HH model. + +## Alpha Synapse +One of the most simple synaptic mechanisms used in computational neuroscience +is the alpha synapse. When this mechanism is triggered, it causes an +instantanouse rise in conductance followed by an exponential decay. Let's +incorporate that into our HH model. + +````julia + +function gSyn(max_gsyn, tau, tf, t); + if t-tf >= 0 + return max_gsyn * exp(-(t-tf)/tau) + else + return 0.0 + end +end +function HH!(du,u,p,t); + gK, gNa, gL, EK, ENa, EL, C, I, max_gSyn, ESyn, tau, tf = p + v, n, m, h = u + + ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn) + + du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I - ISyn) / C + du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) + du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) + du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) +end +```` + + +```` +HH! (generic function with 1 method) +```` + + + + + +`gSyn` models the step to the maximum conductance and the following exponential decay with time constant `tau`. 
Of course we only want to integrate the conductance at and after time `tf`, the onset of the synaptic response. Before `tf`, `gSyn` returns zero. To convert the conductance to a current, we multiply by the difference between the current voltage and the synapses equilibrium voltage: `ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)`. Later we will set the parameter `ESyn` to 0, making this synapse an excitatory synapse. Excitatory synapses have equilibrium potentials far above the resting potential. Let's see what our synapse does to the voltage of the cell. + +````julia + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.008, 0, 20, 100] +tspan = (0.0, 200) +prob = ODEProblem(HH!, u0, tspan, p) +sol = solve(prob); +plot(sol, vars=1) +```` + + +```` +Error: UndefVarError: top not defined +```` + + + + + +What you see here is called an excitatory postsynaptic potential (EPSP). It is the voltage response to a synaptic current. While our synaptic conductance rises instantly, the voltage response rises at a slower time course that is given by the membrane capacitance `C`. This particular voltage response is not strong enough to evoke spiking, so we say it is subthreshold. To get a suprathreshold response that evokes spiking we simply increase the parameter `max_gSyn` to increase the maximum conductance. + +````julia + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.01, 0, 20, 100] +tspan = (0.0, 200) +prob = ODEProblem(HH!, u0, tspan, p) +sol = solve(prob); +plot!(sol, vars=1) +```` + + +```` +Error: UndefVarError: top not defined +```` + + + + + +This plot shows both the subthreshold EPSP from above as well as the suprathreshold EPSP. Alpha synapses are nice because of their simplicity. Real synapses however, are extremely complex structures. One of the most important features of real synapses is that their maximum conductance is not the same on every event. The number and frequency of synaptic events changes the size of the maximum conductance in a dynamic way. While we usually avoid anatomical and biophysical details of real synapses, there is a widely used phenomenological way to capture those dynamics called the Tsodyks-Markram synapse. + +## Tsodyks-Markram Synapse +The Tsodyks-Markram synapse (TMS) is a dynamic system that models the changes of maximum conductance that occur between EPSPs at different frequencies. The single response is similar to the alpha synapse in that it rises instantaneously and decays exponentially. The maximum conductance it reaches depends on the event history. To simulate the TMS we need to incorporate three more dimensions, `u, R, gsyn` into our system. `u` decays towards 0, R decays towards 1 and gsyn decays towards 0 as it did with the alpha synapse. The crucial part of the TMS is in `epsp!`, where we handle the discontinuities when a synaptic event occurs. Instead of just setting `gsyn` to the maximum conductance `gmax`, we increment `gsyn` by a fraction of gmax that depends on the other two dynamic parameters. The frequency dependence comes from the size of the time constants `tau_u` and `tau_R`. Enough talk, let's simulate it. 
+ +````julia + +function HH!(du,u,p,t); + gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R, u0, gmax, Esyn = p + v, n, m, h, u, R, gsyn = u + + du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I + gsyn * (Esyn - v)) / C + du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) + du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) + du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) + + # Synaptic variables + du[5] = -(u/tau_u) + du[6] = (1-R)/tau_R + du[7] = -(gsyn/tau) +end + +function epsp!(integrator); + integrator.u[5] += integrator.p[12] * (1 - integrator.u[5]) + integrator.u[7] += integrator.p[13] * integrator.u[5] * integrator.u[6] + integrator.u[6] -= integrator.u[5] * integrator.u[6] + +end + +epsp_ts= PresetTimeCallback(100:100:500, epsp!) + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 1000, 50, 0.5, 0.005, 0] +u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0] +tspan = (0.0, 700) +prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts) +sol = solve(prob); +plot(sol, vars=1) +```` + + +```` +Error: UndefVarError: top not defined +```` + + + +````julia + +plot(sol, vars=7) +```` + + +```` +Error: UndefVarError: sol not defined +```` + + + + + +Both the voltage response as well as the conductances show what is called short-term facilitation. An increase in peak conductance over multiple synaptic events. Here the first event has a conductance of around 0.0025 and the last one of 0.004. We can plot the other two varialbes to see what underlies those dynamics + +````julia + +plot(sol, vars=[5,6]) +```` + + +```` +Error: UndefVarError: sol not defined +```` + + + + + +Because of the time courses at play here, this facilitation is frequency dependent. If we increase the period between these events, facilitation does not occur. + +````julia + +epsp_ts= PresetTimeCallback(100:1000:5100, epsp!) + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 500, 50, 0.5, 0.005, 0] +u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0] +tspan = (0.0, 5300) +prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts) +sol = solve(prob); +plot(sol, vars=7) +```` + + +```` +Error: UndefVarError: top not defined +```` + + + +````julia + +plot(sol, vars=[5,6]) +```` + + +```` +Error: UndefVarError: sol not defined +```` + + + + + +We can also change these time constants such that the dynamics show short-term depression instead of facilitation. + +````julia + +epsp_ts= PresetTimeCallback(100:100:500, epsp!) + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 100, 1000, 0.5, 0.005, 0] +u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0] +tspan = (0.0, 700) +prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts) +sol = solve(prob); +plot(sol, vars=7) +```` + + +```` +Error: UndefVarError: top not defined +```` + + + +````julia + +plot(sol, vars=[5,6]) +```` + + +```` +Error: UndefVarError: sol not defined +```` + + + +Just changing those two time constants has changed the dynamics to short-term depression. This is still frequency dependent. Changing these parameters can generate a variety of different short-term dynamics. -That's it for the tutorial on spiking neural systems. +## Summary +That's it for now. Thanks for making it this far. If you want to learn more about neuronal dynamics, [this is a great resource](https://neuronaldynamics.epfl.ch/online/index.html). 
If you want to learn more about Julia check out the [official website](https://julialang.org/) and to learn more about the DifferentialEquations package you are in the right place, because this chapter is part of a [larger tutorial series about just that](https://github.com/SciML/SciMLTutorials.jl). diff --git a/notebook/models/08-spiking_neural_systems.ipynb b/notebook/models/08-spiking_neural_systems.ipynb index 6f2b59e8..abdc2d78 100644 --- a/notebook/models/08-spiking_neural_systems.ipynb +++ b/notebook/models/08-spiking_neural_systems.ipynb @@ -3,7 +3,7 @@ { "cell_type": "markdown", "source": [ - "This is an introduction to spiking neural systems with Julia's DifferentialEquations package.\nWe will cover four different models: leaky integrate-and-fire, Izhikevich, adaptive exponential\n integrate-and-fire and Hodgkin-Huxley model. Let's get started with the leaky integrate-and-fire (LIF) model.\n\n## The Leaky-Integrate-and-Fire Model\nThe LIF model is an extension of the integrate-and-fire (IF) model. While the IF\nmodel simply integrates input until it fires, the LIF model integrates input but\nalso decays towards an equilibrium potential. This means that inputs that arrive\nin quick succession have a much higher chance to make the cell spike as opposed\nto inputs that are further apart in time. The LIF is a more realistic neuron\nmodel than the IF because it is known from real neurons that the timing of\ninputs is extremely relevant for their spiking.\n\nThe LIF model has four parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function." + "This is an introduction to spiking neural systems with Julia's DifferentialEquations package.\nWe will cover three different models: leaky integrate-and-fire, Izhikevich, and Hodgkin-Huxley.\nFinally we will also learn about two mechanisms that simulate synaptic inputs like\nreal neurons receive them. The alpha synapse and the Tsodyks-Markram synapse. Let's get started\nwith the leaky integrate-and-fire (LIF) model.\n## The Leaky Integrate-and-Fire Model\nThe LIF model is an extension of the integrate-and-fire (IF) model. While the IF\nmodel simply integrates input until it fires, the LIF model integrates input but\nalso decays towards an equilibrium potential. This means that inputs that arrive\nin quick succession have a much higher chance to make the cell spike as opposed\nto inputs that are further apart in time. The LIF is a more realistic neuron\nmodel than the IF, because it is known from real neurons that the timing of\ninputs is extremely relevant for their spiking.\n\nThe LIF model has five parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function." ], "metadata": {} }, @@ -19,7 +19,7 @@ { "cell_type": "markdown", "source": [ - "Our system is described by one differential equation: `(-gL*(u-EL)+I)/C`, where\n`u` is the voltage, `I` is the input, `gL` is the leak conductance, `EL` is the\nequilibrium potential of the leak conductance and `C` is the membrane capacitance.\nGenerally, any change of the voltage is slowed down (filtered) by the membrane\ncapacitance. That's why we divide the whole equation by `C`. Without any external\ninput, the voltage always converges towards `EL`. If `u` is larger than `EL`,\n`u` decreases until it is at `EL`. If `u` is smaller than `EL`, `u` increases\nuntil it is at `EL`. 
The only other thing that can change the voltage is the\nexternal input `I`.\n\nOur `lif` function requires a certain parameter structure because it will need\nto be compatible with the `DifferentialEquations` interface. The input signature\nis `lif(u, p, t)` where `u` is the voltage, `p` is the collection of the parameters\nthat describe the equation and `t` is time. You might wonder why time does not\nshow up in our equation, although we need to calculate the change in voltage\nwith respect to time. The ODE solver will take care of time for us. One of\nthe advantages of the ODE solver as opposed to calculating the change of\nu in a for loop is that many ODE solver algorithm can dynamically adjust the\ntime step in a way that is efficient and accurate.\n\nOne crucial thing is still missing however. This is supposed to be a model of\nneural spiking, right? So we need a mechanism that recognizes the spike and\nhyperpolarizes `u` in response. For this purpose we will use callbacks.\nThey can make discontinuous changes to the model when certain conditions are met." + "Our system is described by one differential equation: `(-gL*(u-EL)+I)/C`, where\n`u` is the voltage, `I` is the input, `gL` is the leak conductance, `EL` is the\nequilibrium potential of the leak conductance and `C` is the membrane capacitance.\nGenerally, any change of the voltage is slowed down (filtered) by the membrane\ncapacitance. That's why we divide the whole equation by `C`. Without any external\ninput, the voltage always converges towards `EL`. If `u` is larger than `EL`,\n`u` decreases until it is at `EL`. If `u` is smaller than `EL`, `u` increases\nuntil it is at `EL`. The only other thing that can change the voltage is the\nexternal input `I`.\n\nOur `lif` function requires a certain parameter structure because it will need\nto be compatible with the `DifferentialEquations` interface. The input signature\nis `lif(u, p, t)` where `u` is the voltage, `p` is the collection of the parameters\nthat describe the equation and `t` is time. You might wonder why time does not\nshow up in our equation, although we need to calculate the change in voltage\nwith respect to time. The ODE solver will take care of time for us. One of\nthe advantages of the ODE solver as opposed to calculating the change of\n`u` in a for loop is that many ODE solver algorithms can dynamically adjust the\ntime step in a way that is efficient and accurate.\n\nOne crucial thing is still missing however. This is supposed to be a model of\nneural spiking, right? So we need a mechanism that recognizes the spike and\nhyperpolarizes `u` in response. For this purpose we will use callbacks.\nThey can make discontinuous changes to the model when certain conditions are met." ], "metadata": {} }, @@ -35,7 +35,7 @@ { "cell_type": "markdown", "source": [ - "Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. There is one more particularly handy callback called `PresetTimeCallback`. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters." 
+ "Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. There is one more callback called `PresetTimeCallback` that is particularly useful. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters." ], "metadata": {} }, @@ -67,7 +67,7 @@ { "cell_type": "markdown", "source": [ - "First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) say `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting to let's plot our solution." + "First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) says `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting to let's plot our solution." ], "metadata": {} }, @@ -99,7 +99,7 @@ { "cell_type": "markdown", "source": [ - "This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]`corresponds to the second dimension. This second dimension is called `u` in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that du will be preallocated before integration, which saves a lot of allocation time. In other words, du will be changed in place. Now we just need our callbacks to take care of spikes and increase the input." + "This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]` corresponds to the second dimension. This second dimension is called `u` in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that `du` will be preallocated before integration and then updated in-place, which saves a lot of allocation time. Now we just need our callbacks to take care of spikes and increase the input." ], "metadata": {} }, @@ -156,7 +156,7 @@ { "cell_type": "markdown", "source": [ - "Our second dimension `u[2]` increases with every spike. 
When it becomes too large and the system cannot generate another spike until `u[2]`has decayed to a value small enough that spiking can resume. This process repeats. In this model spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters."
+ "Our second dimension `u[2]` increases with every spike. When it becomes too large, the system cannot generate another spike until `u[2]` has decayed to a value small enough that spiking can resume. This process repeats. In this model, spiking is no longer regular like it was in the LIF. Here we have two frequencies: the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters."
 ],
 "metadata": {}
 },
@@ -172,7 +172,7 @@
 {
 "cell_type": "markdown",
 "source": [
- "This type is called regularly spiking and we created it just by lowering `p[3]` and increasing `p[4]`. Note that the type is called regularly spiking but it is not instantaneously regular. The instantenous frequency is higher in the beginning. This is called spike frequency adaptation and is a common property of real neurons. There are many more spike types that can be generated. Check out the [original Izhikevich work](https://www.izhikevich.org/publications/spikes.htm) and create your own favorite neuron!\n\n## Hodgkin-Huxley Model\nThe Hodgkin-Huxley (HH) model is our first so called biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks."
+ "This type is called regularly spiking and we created it just by lowering `p[3]` and increasing `p[4]`. Note that the type is called regularly spiking but it is not instantaneously regular. The instantaneous frequency is higher in the beginning. This is called spike frequency adaptation and is a common property of real neurons. There are many more spike types that can be generated. Check out the [original Izhikevich work](https://www.izhikevich.org/publications/spikes.htm) and create your own favorite neuron!\n\n## Hodgkin-Huxley Model\nThe Hodgkin-Huxley (HH) model is our first biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks."
 ],
 "metadata": {}
 },
@@ -180,7 +180,7 @@
 "outputs": [],
 "cell_type": "code",
 "source": [
- "using DifferentialEquations\nusing Plots\n\n# Potassium ion-channel rate functions\nalpha_n(v) = (0.02 * (v - 25.0)) / (1.0 - exp((-1.0 * (v - 25.0)) / 9.0))\nbeta_n(v) = (-0.002 * (v - 25.0)) / (1.0 - exp((v - 25.0) / 9.0))\n\n# Sodium ion-channel rate functions\nalpha_m(v) = (0.182*(v + 35.0)) / (1.0 - exp((-1.0 * (v + 35.0)) / 9.0))\nbeta_m(v) = (-0.124 * (v + 35.0)) / (1.0 - exp((v + 35.0) / 9.0))\n\nalpha_h(v) = 0.25 * exp((-1.0 * (v + 90.0)) / 12.0)\nbeta_h(v) = (0.25 * exp((v + 62.0) / 6.0)) / exp((v + 90.0) / 12.0)\n\nfunction HH!(du,u,p,t);\n gK, gNa, gL, EK, ENa, EL, C, I = p\n v, n, m, h = u\n\n du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I) / C\n du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)\n du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)\n du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)\nend"
+ "using DifferentialEquations\nusing Plots\n\n# Potassium ion-channel rate functions\nalpha_n(v) = (0.02 * (v - 25.0)) / (1.0 - exp((-1.0 * (v - 25.0)) / 9.0))\nbeta_n(v) = (-0.002 * (v - 25.0)) / (1.0 - exp((v - 25.0) / 9.0))\n\n# Sodium ion-channel rate functions\nalpha_m(v) = (0.182*(v + 35.0)) / (1.0 - exp((-1.0 * (v + 35.0)) / 9.0))\nbeta_m(v) = (-0.124 * (v + 35.0)) / (1.0 - exp((v + 35.0) / 9.0))\n\nalpha_h(v) = 0.25 * exp((-1.0 * (v + 90.0)) / 12.0)\nbeta_h(v) = (0.25 * exp((v + 62.0) / 6.0)) / exp((v + 90.0) / 12.0)\n\nfunction HH!(du,u,p,t);\n gK, gNa, gL, EK, ENa, EL, C, I = p\n v, n, m, h = u\n\n du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I) / C\n du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)\n du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)\n du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)\nend"
 ],
 "metadata": {},
 "execution_count": null
 },
 {
 "cell_type": "markdown",
 "source": [
- "We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conducance are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here gK is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined, that `n` needs to be taken to the power of 4 to correctly model the amount of open channels.\n\nThe sodium current is not very different but it two gating variables, `m; h` instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameter. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html)."
+ "We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conductances are voltage gated. They increase or decrease depending on the voltage. 
In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here `gK` is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined that `n` needs to be taken to the power of 4 to correctly model the number of open channels.\n\nThe sodium current is not very different but it has two gating variables, `m, h` instead of one. The leak conductance `gL` has no gating variables because it is not voltage gated. Let's move on to the parameters. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html)."
 ],
 "metadata": {}
 },
@@ -236,7 +236,146 @@
 {
 "cell_type": "markdown",
 "source": [
- "That's it for the tutorial on spiking neural systems."
+ "So far we have only given our neurons very simple step inputs by simply changing\nthe number `I`. Actual neurons receive their inputs mostly from chemical synapses.\nThey produce conductance changes with very complex time courses. In the next\nsection we will try to incorporate a synapse into our HH model.\n\n## Alpha Synapse\nOne of the simplest synaptic mechanisms used in computational neuroscience\nis the alpha synapse. When this mechanism is triggered, it causes an\ninstantaneous rise in conductance followed by an exponential decay. Let's\nincorporate that into our HH model."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "function gSyn(max_gsyn, tau, tf, t);\n if t-tf >= 0\n return max_gsyn * exp(-(t-tf)/tau)\n else\n return 0.0\n end\nend\nfunction HH!(du,u,p,t);\n gK, gNa, gL, EK, ENa, EL, C, I, max_gSyn, ESyn, tau, tf = p\n v, n, m, h = u\n\n ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)\n\n du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I - ISyn) / C\n du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)\n du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)\n du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)\nend"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "`gSyn` models the step to the maximum conductance and the following exponential decay with time constant `tau`. Of course we only want to integrate the conductance at and after time `tf`, the onset of the synaptic response. Before `tf`, `gSyn` returns zero. To convert the conductance to a current, we multiply by the difference between the current voltage and the synapse's equilibrium voltage: `ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)`. Later we will set the parameter `ESyn` to 0, making this synapse an excitatory synapse. Excitatory synapses have equilibrium potentials far above the resting potential. Let's see what our synapse does to the voltage of the cell."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.008, 0, 20, 100]\ntspan = (0.0, 200)\nprob = ODEProblem(HH!, u0, tspan, p)\nsol = solve(prob);\nplot(sol, vars=1)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "What you see here is called an excitatory postsynaptic potential (EPSP). It is the voltage response to a synaptic current. While our synaptic conductance rises instantly, the voltage response rises with a slower time course that is given by the membrane capacitance `C`. This particular voltage response is not strong enough to evoke spiking, so we say it is subthreshold. To get a suprathreshold response that evokes spiking, we simply increase the parameter `max_gSyn` to increase the maximum conductance."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.01, 0, 20, 100]\ntspan = (0.0, 200)\nprob = ODEProblem(HH!, u0, tspan, p)\nsol = solve(prob);\nplot!(sol, vars=1)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "This plot shows both the subthreshold EPSP from above as well as the suprathreshold EPSP. Alpha synapses are nice because of their simplicity. Real synapses, however, are extremely complex structures. One of the most important features of real synapses is that their maximum conductance is not the same on every event. The number and frequency of synaptic events changes the size of the maximum conductance in a dynamic way. While we usually avoid anatomical and biophysical details of real synapses, there is a widely used phenomenological way to capture those dynamics called the Tsodyks-Markram synapse.\n\n## Tsodyks-Markram Synapse\nThe Tsodyks-Markram synapse (TMS) is a dynamic system that models the changes of maximum conductance that occur between EPSPs at different frequencies. The single response is similar to the alpha synapse in that it rises instantaneously and decays exponentially. The maximum conductance it reaches depends on the event history. To simulate the TMS we need to incorporate three more dimensions, `u, R, gsyn`, into our system. `u` decays towards 0, `R` decays towards 1 and `gsyn` decays towards 0 as it did with the alpha synapse. The crucial part of the TMS is in `epsp!`, where we handle the discontinuities when a synaptic event occurs. Instead of just setting `gsyn` to the maximum conductance `gmax`, we increment `gsyn` by a fraction of `gmax` that depends on the other two dynamic variables. The frequency dependence comes from the size of the time constants `tau_u` and `tau_R`. Enough talk, let's simulate it."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "function HH!(du,u,p,t);\n gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R, u0, gmax, Esyn = p\n v, n, m, h, u, R, gsyn = u\n\n du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I + gsyn * (Esyn - v)) / C\n du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)\n du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)\n du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)\n\n # Synaptic variables\n du[5] = -(u/tau_u)\n du[6] = (1-R)/tau_R\n du[7] = -(gsyn/tau)\nend\n\nfunction epsp!(integrator);\n integrator.u[5] += integrator.p[12] * (1 - integrator.u[5])\n integrator.u[7] += integrator.p[13] * integrator.u[5] * integrator.u[6]\n integrator.u[6] -= integrator.u[5] * integrator.u[6]\n\nend\n\nepsp_ts= PresetTimeCallback(100:100:500, epsp!)\n\np = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 1000, 50, 0.5, 0.005, 0]\nu0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]\ntspan = (0.0, 700)\nprob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)\nsol = solve(prob);\nplot(sol, vars=1)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "plot(sol, vars=7)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Both the voltage response and the conductance show what is called short-term facilitation: an increase in peak conductance over multiple synaptic events. Here the first event has a conductance of around 0.0025 and the last one of 0.004. We can plot the other two variables to see what underlies those dynamics."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "plot(sol, vars=[5,6])"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Because of the time courses at play here, this facilitation is frequency dependent. If we increase the period between these events, facilitation does not occur."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "epsp_ts= PresetTimeCallback(100:1000:5100, epsp!)\n\np = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 500, 50, 0.5, 0.005, 0]\nu0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]\ntspan = (0.0, 5300)\nprob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)\nsol = solve(prob);\nplot(sol, vars=7)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "plot(sol, vars=[5,6])"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "We can also change these time constants such that the dynamics show short-term depression instead of facilitation."
+ ],
+ "metadata": {}
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "epsp_ts= PresetTimeCallback(100:100:500, epsp!)\n\np = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 100, 1000, 0.5, 0.005, 0]\nu0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]\ntspan = (0.0, 700)\nprob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)\nsol = solve(prob);\nplot(sol, vars=7)"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "outputs": [],
+ "cell_type": "code",
+ "source": [
+ "plot(sol, vars=[5,6])"
+ ],
+ "metadata": {},
+ "execution_count": null
+ },
+ {
+ "cell_type": "markdown",
+ "source": [
+ "Just changing those two time constants has changed the dynamics to short-term depression. 
This is still frequency dependent. Changing these parameters can generate a variety of different short-term dynamics.\n\n## Summary\nThat's it for now. Thanks for making it this far. If you want to learn more about neuronal dynamics, [this is a great resource](https://neuronaldynamics.epfl.ch/online/index.html). If you want to learn more about Julia check out the [official website](https://julialang.org/) and to learn more about the DifferentialEquations package you are in the right place, because this chapter is part of a [larger tutorial series about just that](https://github.com/SciML/SciMLTutorials.jl)." ], "metadata": {} } @@ -247,11 +386,11 @@ "file_extension": ".jl", "mimetype": "application/julia", "name": "julia", - "version": "1.4.2" + "version": "1.5.1" }, "kernelspec": { - "name": "julia-1.4", - "display_name": "Julia 1.4.2", + "name": "julia-1.5", + "display_name": "Julia 1.5.1", "language": "julia" } }, diff --git a/script/models/08-spiking_neural_systems.jl b/script/models/08-spiking_neural_systems.jl index d45fb92b..7d33603e 100644 --- a/script/models/08-spiking_neural_systems.jl +++ b/script/models/08-spiking_neural_systems.jl @@ -103,7 +103,7 @@ function HH!(du,u,p,t); gK, gNa, gL, EK, ENa, EL, C, I = p v, n, m, h = u - du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I) / C + du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I) / C du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) @@ -130,3 +130,101 @@ plot(sol, vars=1) plot(sol, vars=[2,3,4], tspan=(105.0,130.0)) + +function gSyn(max_gsyn, tau, tf, t); + if t-tf >= 0 + return max_gsyn * exp(-(t-tf)/tau) + else + return 0.0 + end +end +function HH!(du,u,p,t); + gK, gNa, gL, EK, ENa, EL, C, I, max_gSyn, ESyn, tau, tf = p + v, n, m, h = u + + ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn) + + du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I - ISyn) / C + du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) + du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) + du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) +end + + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.008, 0, 20, 100] +tspan = (0.0, 200) +prob = ODEProblem(HH!, u0, tspan, p) +sol = solve(prob); +plot(sol, vars=1) + + +p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.01, 0, 20, 100] +tspan = (0.0, 200) +prob = ODEProblem(HH!, u0, tspan, p) +sol = solve(prob); +plot!(sol, vars=1) + + +function HH!(du,u,p,t); + gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R, u0, gmax, Esyn = p + v, n, m, h, u, R, gsyn = u + + du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I + gsyn * (Esyn - v)) / C + du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n) + du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m) + du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h) + + # Synaptic variables + du[5] = -(u/tau_u) + du[6] = (1-R)/tau_R + du[7] = -(gsyn/tau) +end + +function epsp!(integrator); + integrator.u[5] += integrator.p[12] * (1 - integrator.u[5]) + integrator.u[7] += integrator.p[13] * integrator.u[5] * integrator.u[6] + integrator.u[6] -= integrator.u[5] * integrator.u[6] + +end + +epsp_ts= PresetTimeCallback(100:100:500, epsp!) 
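+# Parameter layout of p for this TMS version of HH!, matching the
+# destructuring above: gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R,
+# u0, gmax, Esyn. In epsp!, p[12] (u0) scales the jump in the utilization
+# variable u and p[13] (gmax) scales the jump in the conductance gsyn.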
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 1000, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=1)
+
+
+plot(sol, vars=7)
+
+
+plot(sol, vars=[5,6])
+
+
+epsp_ts= PresetTimeCallback(100:1000:5100, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 500, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 5300)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+
+
+plot(sol, vars=[5,6])
+
+
+epsp_ts= PresetTimeCallback(100:100:500, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 100, 1000, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+
+
+plot(sol, vars=[5,6])
+
diff --git a/tutorials/models/08-spiking_neural_systems.jmd b/tutorials/models/08-spiking_neural_systems.jmd
index e87ae724..b598ef59 100644
--- a/tutorials/models/08-spiking_neural_systems.jmd
+++ b/tutorials/models/08-spiking_neural_systems.jmd
@@ -4,19 +4,20 @@ author: Daniel Müller-Komorowska
 ---
 
 This is an introduction to spiking neural systems with Julia's DifferentialEquations package.
-We will cover four different models: leaky integrate-and-fire, Izhikevich, adaptive exponential
- integrate-and-fire and Hodgkin-Huxley model. Let's get started with the leaky integrate-and-fire (LIF) model.
-
-## The Leaky-Integrate-and-Fire Model
+We will cover three different models: leaky integrate-and-fire, Izhikevich, and Hodgkin-Huxley.
+Finally, we will also learn about two mechanisms that simulate synaptic inputs the way
+real neurons receive them: the alpha synapse and the Tsodyks-Markram synapse. Let's get started
+with the leaky integrate-and-fire (LIF) model.
+## The Leaky Integrate-and-Fire Model
 The LIF model is an extension of the integrate-and-fire (IF) model. While the IF
 model simply integrates input until it fires, the LIF model integrates input but
 also decays towards an equilibrium potential. This means that inputs that arrive
 in quick succession have a much higher chance to make the cell spike as opposed
 to inputs that are further apart in time. The LIF is a more realistic neuron
-model than the IF because it is known from real neurons that the timing of
+model than the IF, because it is known from real neurons that the timing of
 inputs is extremely relevant for their spiking.
 
-The LIF model has four parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function.
+The LIF model has five parameters, `gL, EL, C, Vth, I` and we define it in the `lif(u, p, t)` function.
 
 ```julia
 using DifferentialEquations
@@ -46,7 +47,7 @@ that describe the equation and `t` is time. You might wonder why time does not
 show up in our equation, although we need to calculate the change in voltage
 with respect to time. The ODE solver will take care of time for us. One of
 the advantages of the ODE solver as opposed to calculating the change of
-u in a for loop is that many ODE solver algorithm can dynamically adjust the
+`u` in a for loop is that many ODE solver algorithms can dynamically adjust the
 time step in a way that is efficient and accurate.
 
 One crucial thing is still missing however. 
This is supposed to be a model of
@@ -68,7 +69,7 @@ current_step= PresetTimeCallback([2,15],integrator -> integrator.p[5] += 210.0)
 
 cb = CallbackSet(current_step,threshold)
 ```
 
-Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. There is one more particularly handy callback called `PresetTimeCallback`. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done to simulate our system we just need to put numbers on our initial voltage and parameters.
+Our condition is `thr(u,t,integrator)` and the condition kicks in when `integrator.u > integrator.p[4]` where `p[4]` is our threshold parameter `Vth`. Our effect of the condition is `reset!(integrator)`. It sets `u` back to the equilibrium potential `p[2]`. We then wrap both the condition and the effect into a `DiscreteCallback` called threshold. There is one more callback called `PresetTimeCallback` that is particularly useful. This one increases the input `p[5]` at `t=2` and `t=15` by `210.0`. Both callbacks are then combined into a `CallbackSet`. We are almost done; to simulate our system, we just need to put numbers on our initial voltage and parameters.
 
 ```julia
 u0 = -75
@@ -85,7 +86,7 @@ Our initial voltage is `u0 = - 75`, which will be the same as our equilibrium po
 sol = solve(prob)
 ```
 
-First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) say `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting to let's plot our solution.
+First of all the `solve` output tells us if solving the system generally worked. In this case we know it worked because the return code (`retcode`) says `Success`. Then we get the numbers for the timestep and the solution to `u`. The raw numbers are not super interesting, so let's plot our solution.
 
 ```julia
 plot(sol)
@@ -109,7 +110,7 @@ function izh!(du,u,p,t);
 end
 ```
 
-This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]`corresponds to the second dimension. This second dimension is called `u` in the original Izhikevich work amnd it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that du will be preallocated before integration, which saves a lot of allocation time. In other words, du will be changed in place. Now we just need our callbacks to take care of spikes and increase the input.
+This is our Izhikevich model. There are two important changes here. First of all, note the additional input parameter `du`. This is a sequence of differences. `du[1]` corresponds to the voltage (the first dimension of the system) and `du[2]` corresponds to the second dimension. 
This second dimension is called `u` in the original Izhikevich work and it makes the notation a little annoying. In this tutorial I will generally stick to Julia and `DifferentialEquations` conventions as opposed to conventions of the specific models and `du` is commonly used. We will never define `du` ourselves outside of the function but the ODE solver will use it internally. The other change here is the `!` after our function name. This signifies that `du` will be preallocated before integration and then updated in-place, which saves a lot of allocation time. Now we just need our callbacks to take care of spikes and increase the input.
 
 ```julia
 function thr(u,t,integrator)
@@ -147,7 +148,7 @@ This spiking type is called chattering. It fires with intermittent periods of si
 plot(sol, vars=2)
 ```
 
-Our second dimension `u[2]` increases with every spike. When it becomes too large and the system cannot generate another spike until `u[2]`has decayed to a value small enough that spiking can resume. This process repeats. In this model spiking is no longer regular like it was in the LIF. Here we have two frequencies, the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters.
+Our second dimension `u[2]` increases with every spike. When it becomes too large, the system cannot generate another spike until `u[2]` has decayed to a value small enough that spiking can resume. This process repeats. In this model, spiking is no longer regular like it was in the LIF. Here we have two frequencies: the frequency during the spiking state and the frequency between spiking states. The LIF model was dominated by one single frequency that was a function of the input strength. Let's see if we can generate another spiking type by changing the parameters.
 
 ```julia
 p = [0.02, 0.2, -65, 8, 0]
@@ -162,7 +163,7 @@ plot(sol, vars=1)
 
 This type is called regularly spiking and we created it just by lowering `p[3]` and increasing `p[4]`. Note that the type is called regularly spiking but it is not instantaneously regular. The instantaneous frequency is higher in the beginning. This is called spike frequency adaptation and is a common property of real neurons. There are many more spike types that can be generated. Check out the [original Izhikevich work](https://www.izhikevich.org/publications/spikes.htm) and create your own favorite neuron!
 
 ## Hodgkin-Huxley Model
-The Hodgkin-Huxley (HH) model is our first so called biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks.
+The Hodgkin-Huxley (HH) model is our first biophysically realistic model. This means that all parameters and mechanisms of the model represent biological mechanisms. Specifically, the HH model simulates the ionic currents that depolarize and hyperpolarize a neuron during an action potential. This makes the HH model four-dimensional. Let's see how it looks.
 
```julia
 using DifferentialEquations
@@ -183,16 +184,16 @@ function HH!(du,u,p,t);
 gK, gNa, gL, EK, ENa, EL, C, I = p
 v, n, m, h = u
 
-du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I) / C
+du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I) / C
 du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)
 du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)
 du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)
 end
 ```
 
-We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conducance are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here gK is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined, that `n` needs to be taken to the power of 4 to correctly model the amount of open channels.
+We have three different types of ionic conductances. Potassium, sodium and the leak. The potassium and sodium conductances are voltage gated. They increase or decrease depending on the voltage. In ion channel terms, open channels can transition to the closed state and closed channels can transition to the open state. It's probably easiest to start with the potassium current described by `gK * (n^4.0) * (EK - v)`. Here `gK` is the total possible conductance that we could reach if all potassium channels were open. If all channels were open, `n` would equal 1 which is usually not the case. The transition from open state to closed state is modeled in `alpha_n(v)` while the transition from closed to open is in `beta_n(v)`. Because potassium conductance is voltage gated, these transitions depend on `v`. The numbers in `alpha_n; beta_n` were calculated by Hodgkin and Huxley based on their extensive experiments on the squid giant axon. They also determined that `n` needs to be taken to the power of 4 to correctly model the number of open channels.
 
-The sodium current is not very different but it two gating variables, `m; h` instead of one. The leak conductance gL has no gating variables because it is not voltage gated. Let's move on to the parameter. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html).
+The sodium current is not very different but it has two gating variables, `m, h` instead of one. The leak conductance `gL` has no gating variables because it is not voltage gated. Let's move on to the parameters. If you want all the details on the HH model you can find a great description [here](https://neuronaldynamics.epfl.ch/online/Ch2.S2.html).
 
 ```julia
 current_step= PresetTimeCallback(100,integrator -> integrator.p[8] += 1)
@@ -222,4 +223,141 @@ That's some good regular voltage spiking. 
One of the cool things about a biophys
 plot(sol, vars=[2,3,4], tspan=(105.0,130.0))
 ```
 
-That's it for the tutorial on spiking neural systems.
+So far we have only given our neurons very simple step inputs by simply changing
+the number `I`. Actual neurons receive their inputs mostly from chemical synapses.
+They produce conductance changes with very complex time courses. In the next
+section we will try to incorporate a synapse into our HH model.
+
+## Alpha Synapse
+One of the simplest synaptic mechanisms used in computational neuroscience
+is the alpha synapse. When this mechanism is triggered, it causes an
+instantaneous rise in conductance followed by an exponential decay. Let's
+incorporate that into our HH model.
+
+```julia
+function gSyn(max_gsyn, tau, tf, t);
+    if t-tf >= 0
+        return max_gsyn * exp(-(t-tf)/tau)
+    else
+        return 0.0
+    end
+end
+function HH!(du,u,p,t);
+    gK, gNa, gL, EK, ENa, EL, C, I, max_gSyn, ESyn, tau, tf = p
+    v, n, m, h = u
+
+    ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)
+
+    du[1] = (-(gK * (n^4.0) * (v - EK)) - (gNa * (m ^ 3.0) * h * (v - ENa)) - (gL * (v - EL)) + I - ISyn) / C
+    du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)
+    du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)
+    du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)
+end
+```
+
+`gSyn` models the step to the maximum conductance and the following exponential decay with time constant `tau`. Of course we only want to integrate the conductance at and after time `tf`, the onset of the synaptic response. Before `tf`, `gSyn` returns zero. To convert the conductance to a current, we multiply by the difference between the current voltage and the synapse's equilibrium voltage: `ISyn = gSyn(max_gSyn, tau, tf, t) * (v - ESyn)`. Later we will set the parameter `ESyn` to 0, making this synapse an excitatory synapse. Excitatory synapses have equilibrium potentials far above the resting potential. Let's see what our synapse does to the voltage of the cell.
+
+```julia
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.008, 0, 20, 100]
+tspan = (0.0, 200)
+prob = ODEProblem(HH!, u0, tspan, p)
+sol = solve(prob);
+plot(sol, vars=1)
+```
+
+What you see here is called an excitatory postsynaptic potential (EPSP). It is the voltage response to a synaptic current. While our synaptic conductance rises instantly, the voltage response rises with a slower time course that is given by the membrane capacitance `C`. This particular voltage response is not strong enough to evoke spiking, so we say it is subthreshold. To get a suprathreshold response that evokes spiking, we simply increase the parameter `max_gSyn` to increase the maximum conductance.
+
+```julia
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 0.01, 0, 20, 100]
+tspan = (0.0, 200)
+prob = ODEProblem(HH!, u0, tspan, p)
+sol = solve(prob);
+plot!(sol, vars=1)
+```
+
+This plot shows both the subthreshold EPSP from above as well as the suprathreshold EPSP. Alpha synapses are nice because of their simplicity. Real synapses, however, are extremely complex structures. One of the most important features of real synapses is that their maximum conductance is not the same on every event. The number and frequency of synaptic events changes the size of the maximum conductance in a dynamic way. While we usually avoid anatomical and biophysical details of real synapses, there is a widely used phenomenological way to capture those dynamics called the Tsodyks-Markram synapse.
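+
+Before we move on, it can help to look at the conductance waveform in isolation. This is just a minimal sketch that reuses the `gSyn` we defined above, broadcasting it over a time vector so we can see the waveform without the neuron attached:
+
+```julia
+# The alpha synapse conductance on its own: an instantaneous step to
+# max_gsyn at tf = 100 followed by exponential decay with tau = 20.
+t = 0:0.1:200
+plot(t, gSyn.(0.008, 20, 100, t), xlabel="t", ylabel="gSyn")
+```
+
+Note how `max_gsyn` only scales the step while `tau` sets the decay; the slower rise of the EPSPs above comes from the membrane capacitance, not from the conductance itself.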
+
+## Tsodyks-Markram Synapse
+The Tsodyks-Markram synapse (TMS) is a dynamic system that models the changes of maximum conductance that occur between EPSPs at different frequencies. The single response is similar to the alpha synapse in that it rises instantaneously and decays exponentially. The maximum conductance it reaches depends on the event history. To simulate the TMS we need to incorporate three more dimensions, `u, R, gsyn`, into our system. `u` decays towards 0, `R` decays towards 1 and `gsyn` decays towards 0 as it did with the alpha synapse. The crucial part of the TMS is in `epsp!`, where we handle the discontinuities when a synaptic event occurs. Instead of just setting `gsyn` to the maximum conductance `gmax`, we increment `gsyn` by a fraction of `gmax` that depends on the other two dynamic variables. The frequency dependence comes from the size of the time constants `tau_u` and `tau_R`. Enough talk, let's simulate it.
+
+```julia
+function HH!(du,u,p,t);
+    gK, gNa, gL, EK, ENa, EL, C, I, tau, tau_u, tau_R, u0, gmax, Esyn = p
+    v, n, m, h, u, R, gsyn = u
+
+    du[1] = ((gK * (n^4.0) * (EK - v)) + (gNa * (m ^ 3.0) * h * (ENa - v)) + (gL * (EL - v)) + I + gsyn * (Esyn - v)) / C
+    du[2] = (alpha_n(v) * (1.0 - n)) - (beta_n(v) * n)
+    du[3] = (alpha_m(v) * (1.0 - m)) - (beta_m(v) * m)
+    du[4] = (alpha_h(v) * (1.0 - h)) - (beta_h(v) * h)
+
+    # Synaptic variables
+    du[5] = -(u/tau_u)
+    du[6] = (1-R)/tau_R
+    du[7] = -(gsyn/tau)
+end
+
+function epsp!(integrator);
+    integrator.u[5] += integrator.p[12] * (1 - integrator.u[5])
+    integrator.u[7] += integrator.p[13] * integrator.u[5] * integrator.u[6]
+    integrator.u[6] -= integrator.u[5] * integrator.u[6]
+
+end
+
+epsp_ts= PresetTimeCallback(100:100:500, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 1000, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=1)
+```
+
+```julia
+plot(sol, vars=7)
+```
+
+Both the voltage response and the conductance show what is called short-term facilitation: an increase in peak conductance over multiple synaptic events. Here the first event has a conductance of around 0.0025 and the last one of 0.004. We can plot the other two variables to see what underlies those dynamics.
+
+```julia
+plot(sol, vars=[5,6])
+```
+
+Because of the time courses at play here, this facilitation is frequency dependent. If we increase the period between these events, facilitation does not occur.
+
+```julia
+epsp_ts= PresetTimeCallback(100:1000:5100, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 500, 50, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 5300)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+```
+
+```julia
+plot(sol, vars=[5,6])
+```
+
+We can also change these time constants such that the dynamics show short-term depression instead of facilitation.
+
+```julia
+epsp_ts= PresetTimeCallback(100:100:500, epsp!)
+
+p = [35.0, 40.0, 0.3, -77.0, 55.0, -65.0, 1, 0, 30, 100, 1000, 0.5, 0.005, 0]
+u0 = [-60, n_inf(-60), m_inf(-60), h_inf(-60), 0.0, 1.0, 0.0]
+tspan = (0.0, 700)
+prob = ODEProblem(HH!, u0, tspan, p, callback=epsp_ts)
+sol = solve(prob);
+plot(sol, vars=7)
+```
+
+```julia
+plot(sol, vars=[5,6])
+```
+
+Just changing those two time constants has changed the dynamics to short-term depression. This is still frequency dependent. 
Changing these parameters can generate a variety of different short-term dynamics. + +## Summary +That's it for now. Thanks for making it this far. If you want to learn more about neuronal dynamics, [this is a great resource](https://neuronaldynamics.epfl.ch/online/index.html). If you want to learn more about Julia check out the [official website](https://julialang.org/) and to learn more about the DifferentialEquations package you are in the right place, because this chapter is part of a [larger tutorial series about just that](https://github.com/SciML/SciMLTutorials.jl). From 9492dd4a0ef082180c8287262b9d94c152396439 Mon Sep 17 00:00:00 2001 From: Christopher Rackauckas Date: Fri, 11 Sep 2020 13:23:52 -0400 Subject: [PATCH 2/2] Update 08-spiking_neural_systems.jmd --- tutorials/models/08-spiking_neural_systems.jmd | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/tutorials/models/08-spiking_neural_systems.jmd b/tutorials/models/08-spiking_neural_systems.jmd index b598ef59..d76dba39 100644 --- a/tutorials/models/08-spiking_neural_systems.jmd +++ b/tutorials/models/08-spiking_neural_systems.jmd @@ -361,3 +361,8 @@ Just changing those two time constants has changed the dynamics to short-term de ## Summary That's it for now. Thanks for making it this far. If you want to learn more about neuronal dynamics, [this is a great resource](https://neuronaldynamics.epfl.ch/online/index.html). If you want to learn more about Julia check out the [official website](https://julialang.org/) and to learn more about the DifferentialEquations package you are in the right place, because this chapter is part of a [larger tutorial series about just that](https://github.com/SciML/SciMLTutorials.jl). + +```{julia; echo=false; skip="notebook"} +using SciMLTutorials +SciMLTutorials.tutorial_footer(WEAVE_ARGS[:folder],WEAVE_ARGS[:file]) +```