[00:00:10] [unintelligible pre-talk remarks] [00:00:35] Thank you very much for inviting me. I've already had some great conversations this morning and I'm looking forward to more; you really have an amazing department, and it's amazing how many people were on the list, and how many people I'm not even going to get a chance to meet with whom I would love to see. Today I'm going to talk about some efforts to tease apart microcircuits of the brain, mostly in the eye movement system, and the talk has an element of some general modeling issues: even if you care nothing about the particular microcircuits, I think there are some more general conclusions about problems that are fairly fundamental to thinking about brain computations. So here's where I'll start. In the first half of the talk I want to discuss a problem that is really simple to pose but, it turns out, very difficult to answer, and it's the following. In many memory and decision-making circuits (this is a cartoon) we see neurons that accumulate and maintain signals on the one-to-ten-second timescale. What you see here in the cartoon is a stimulus which is presented transiently; this is my cartoon of neuronal activity whose firing rate accumulates, in the mathematical sense, that is, mathematically integrates, that signal, and then, in the absence of the stimulus, that activity is maintained. Of course this is the essence of short-term memory: to have the neuronal firing rates maintained in the absence of a stimulus. And I should
[00:02:04] comment that everything I'll talk about is actually analog short-term memory, in the sense that if this stimulus were larger or wider, the accumulation would have gone up to a different level, and the circuits I'll be talking about can all maintain that analog level of activity. So where's the puzzle here? I mentioned the timescale of 1 to 10 seconds. Well, that's much longer than the classic timescales we think of for single-neuron dynamics. Our classical sort of model of an integrate-and-fire neuron would be that a stimulus comes in, the firing rate goes up, and then when the stimulus turns off there's a decay of activity on some biophysical timescale; I'm just calling it tau_neuron. It might be in the neuron, it might be in the synapse, something like 10 to 100 milliseconds, and in the circuits that I'll be showing you, at least intracellular current injection suggests timescales like these for the decay of currents. So the question is: how do these systems bridge these timescales and develop the long timescales of short-term memory, and can we figure this out in a real circuit? The question today will be focused at the circuit level: what circuit architectures or motifs generate such persistent activity beyond the timescales intrinsic to the neurons? Let me just start with the standard toy model of how these systems work. In the standard toy model, a command input comes into the neuron shown here, the firing rate goes up and then decays on this neuronal timescale. Typically, again, this is a short timescale, but that's for a neuron in isolation. So imagine that this neuron is part of a circuit which has self-excitation (and think of this really as a population of neurons), and then mutual inhibition between these two neurons. In that case the story can be: the input comes in, it activates these neurons, and then, through either the self-excitatory connections or a double-negative form of positive feedback, these
neurons can maintain their activity. Basically they start firing due to the command input, and then they either excite themselves or they inhibit the neurons that were inhibiting them; in that case activity can be maintained over long timescales. [00:04:09] So this is the basic idea. This is actually from a paper looking at persistent activity in monkey prefrontal cortex, but I'm going to switch to showing you a model system for this, and what I want to point out, because you'll see it in the model system, is the two different possible pathways for positive feedback: one, shown here, which is recurrent excitation, and the second being this double-negative form of recurrent disinhibition, where this population inhibits the population that was inhibiting it. [00:04:42] OK, so now switching to a model system. I want to show you a similar circuit that exists in the eye movement system of vertebrate animals, and in this talk I happen to be showing you experiments from goldfish; you can ask me later about some zebrafish work that we've been doing. I should say "we" refers to the fact that these data are from David Tank and Emre Aksay, my experimental collaborators; I'm the theory part. OK, so here's the behavior, and hopefully this will play. You're going to see it zoom in on the fish's eye, and the fish's eye will move very much like your eyes or my eyes move as well.
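The toy model above can be made concrete with a minimal rate-model sketch. This is my own illustration, not the speaker's code: a single leaky rate unit whose recurrent excitation, when tuned to exactly cancel the leak, turns a roughly 100 ms biophysical decay into persistent activity that outlasts the stimulus.

```python
import numpy as np

def simulate(w, tau=0.1, dt=0.001, t_end=3.0):
    """Single leaky rate unit: tau * dr/dt = -r + w*r + input(t).
    A transient input pulse is on for the first 0.5 s."""
    n = int(t_end / dt)
    r = np.zeros(n)
    for i in range(1, n):
        inp = 1.0 if i * dt < 0.5 else 0.0
        r[i] = r[i - 1] + dt * (-r[i - 1] + w * r[i - 1] + inp) / tau
    return r

decay = simulate(w=0.0)    # no feedback: activity decays on the ~100 ms timescale
persist = simulate(w=1.0)  # feedback tuned to cancel the leak: activity persists
print(decay[-1] < 1e-3, persist[-1] > 4.0)  # True True
```

With w = 1 the net decay term vanishes, so the unit also integrates its input while the pulse is on, which is the analog character the talk emphasizes: a wider or larger pulse is stored as a higher persistent level.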
[00:05:31] So this is the typical way we move our eyes when our head is still and there's nothing smoothly moving in the world to track: we don't move smoothly, we instead have fixations and saccades, the rapid eye movements between the different fixations. The question here is going to be how the fish keeps its eyes still, because the muscles of the fish's eye would tend to make its eyes relax back to a central position, so it needs persistent neural commands to hold the eyes still at a given position. So here's a cartoon picture of that, and these are actual neural recordings. This is a neural recording of eye position in the fish, and you can see the fish here making a nice set of saccades and fixations: saccade, fixation, and so on. In terms of the circuitry, these rapid saccades are driven by eye-velocity-coding command neurons. They weren't recorded in what I'm showing you, so I'm just showing you a cartoon, but there are excitatory commands, brief bursts of neural activity, that drive the eyes one way, and then inhibitory neurons that burst and drive the eyes the other way. And again, what I was trying to convey previously was that if the eye muscles just received a brief command, the eyes would move, but then, because of the spring-like muscles, the eyes would relax back to a central position and the fixation wouldn't be maintained. This immediately suggests that to hold the eyes at a given position, against the spring-like (Hooke's law) restoring force, they need a position command, and that in turn suggests there must be a mathematical integral taken from the velocity-coding command neurons to a position-coding command. That's done by a devoted brain circuit called the oculomotor neural integrator, and you can see the recording of one of the neurons in that brain circuit here: a pulse comes in, the firing rate goes up, and it persistently maintains an elevated firing rate; another pulse comes in
[00:07:32] and it persistently maintains a higher firing rate. So in this case there is persistent activity that stores a running total of the input commands, and this is the memory I was referring to earlier: it's really a memory of the previous set of eye movement commands. In this case it's an integrator, so what it's storing in memory is a running total, and you can see that when there's an inhibitory pulse it will step down. In general I won't be showing you raw traces; I'll be focusing on tuning curves, so I'll summarize these data by the firing rate as a function of eye position during one of these fixations. I should note that all the recordings are done in the dark, so there's no visual feedback; this activity is maintained without visual feedback. Each of the dots in this graph refers to one fixation and what the firing rate during that fixation was, and you can see this nice threshold-linear relationship between firing rate and eye position. Just to show you what these data really look like, this is actually a movie of the eye. [00:08:34] Unfortunately I don't have the audio plugged in right now; otherwise you would be able to hear the action potentials crackling, but that's OK. What you're going to see is that every time the animal's eye moves, this trace tracks the eye position and the corresponding firing rate, and you can see the firing rate being maintained during a fixation. You see a saccade come in, that's the deviation from the line, and you can see the firing rate tracking the eye position and holding during each fixation along this diagonal red line here; again, that's the tuning curve of that neuron.
[00:09:14] OK, so that was just showing you one neuron; really there's a large population of neurons. That was one neuron recorded on the right side of the brain. If you looked at all the neurons, you could look at their firing rates as a function of eye position as the animal holds different eye positions from left to right, and what you see is that these neurons all have threshold-linear relationships, but different neurons have different slopes in their relationship and different thresholds. When we get into the details, I'll refer to neurons as having a low threshold if they are recruited early as the eyes move from one side to the other, and a high threshold if they are recruited late: these would be high-threshold neurons and these would be low-threshold neurons. In motor systems it's very, very common to have different thresholds and recruitment patterns. And I should say there's also a population on the opposite side of the brain, and statistically they are a mirror image; in fact, in this case I literally just created a mirror image.
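The threshold-linear tuning curves just described can be sketched in a few lines; the slopes and thresholds below are hypothetical values chosen only for illustration, not fitted parameters from the goldfish data.

```python
import numpy as np

def tuning_curve(eye_pos, slope, threshold):
    """Threshold-linear fit to fixation data: rate = slope*(E - E_th), clipped at 0."""
    return np.maximum(0.0, slope * (np.asarray(eye_pos) - threshold))

E = np.linspace(-20, 20, 9)                          # eye positions in degrees
low = tuning_curve(E, slope=2.0, threshold=-15.0)    # low threshold: recruited early
high = tuning_curve(E, slope=2.0, threshold=5.0)     # high threshold: recruited late
left = tuning_curve(-E, slope=2.0, threshold=-15.0)  # mirror-image left-side neuron
print(low)
print(high)
```

The left-side neuron is generated simply by flipping the sign of eye position, which reproduces the mirror-image statistics mentioned above.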
[00:10:12] That's because the dataset we have has some neurons from the right and some from the left, and we just pretend that there's one of each on each side; but statistically they are mirror images. The nice thing about the fish system is that the neurons whose rates increase as the eyes move from left to right all appear on the right side of the midline of the brain, and those with a decreasing slope all appear on the left side of the brain, so it's beautifully organized. There are two populations on each side: an excitatory population shown in green and an inhibitory population shown in red, and we believe the gross architecture is primarily as follows: the excitatory neurons project ipsilaterally, that means to the same side, and the inhibitory neurons project contralaterally, that means to the opposite side of the midline. What does that mean functionally, in terms of the feedback loops I was talking about earlier? It means that here we have the recurrent excitatory loops that sit in all of the generic models of persistent, memory-storing activity, and we also have the double- [00:11:14] negative form of positive feedback, because for neurons on this side of the brain to get to the other side they have to go through an inhibitory neuron, and to get back they need to go through another inhibitory neuron. Basically you can play this game: any path from a neuron back to itself will go either through excitatory neurons or through an even number of inhibitory connections, making a double negative.
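That parity argument can be checked mechanically. Here is a small sketch of my own (the population labels are hypothetical) that enumerates closed walks in the four-population motif, with ipsilateral excitation as +1 and contralateral inhibition as -1, and verifies that every loop's sign product is positive.

```python
# Four populations: RE/RI = right-side excitatory/inhibitory, LE/LI = left side.
# Excitatory neurons project ipsilaterally (+1); inhibitory neurons project
# contralaterally (-1), so every -1 link crosses the midline.
edges = {
    "RE": {"RE": +1, "RI": +1},
    "LE": {"LE": +1, "LI": +1},
    "RI": {"LE": -1, "LI": -1},
    "LI": {"RE": -1, "RI": -1},
}

def loop_signs(start, node, sign, depth, max_depth=4):
    """Sign (product of connection signs) of every closed walk of length
    <= max_depth that returns from `node` to `start`."""
    signs = [sign] if depth > 0 and node == start else []
    if depth < max_depth:
        for nxt, s in edges[node].items():
            signs += loop_signs(start, nxt, sign * s, depth + 1)
    return signs

# Any closed walk crosses the midline an even number of times, and only
# inhibitory links cross, so every feedback loop is net positive.
print(all(s == +1 for s in loop_signs("RE", "RE", +1, 0)))  # True
```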
[00:11:38] OK, so we want to build a model of this, and here's our model. It's actually going to be a spiking model, but it's a lot easier to understand as a rate model, so I'll explain the rate version. We basically broke each of these populations into 25 neurons each, because we think that in the fish there are about 100 neurons participating in this circuit, and what we fundamentally want to model is how the firing rate changes: what drives a change in the firing rate. In the absence of input there are certain dynamics of leak; that's characterized experimentally and captured by a function f(r), a function of the firing rate. There's same-side excitation, and we're modeling that as the sum of inputs from all of the neighbors: this is neuron i, it receives input from neurons j, and each input passes through what we assume is some synaptic or dendritic transfer nonlinearity s. This is s(r): basically neuron j projects to this neuron, its rate goes through a nonlinearity that is then weighted by the synaptic weight, and you sum up all of the inputs; that's the total excitation, and the nice thing is the excitation comes from the same side. Likewise for inhibition: there's a different set of weights and, in general, a different nonlinearity. And then there are also some tonic background inputs and burst commands; those are the saccadic commands that move the eyes from one place to another. So our question is: what does it take to have persistent activity? Persistent activity is when dr/dt = 0, and I should say persistent activity occurs when the burst command isn't there, during a fixation, so that basically says that for persistent activity these terms must all sum to zero. [00:13:18] Beautifully, if those sum to zero we get more than persistent activity, we actually get mathematical integration: if dr/dt is proportional to the burst command input, then r
is proportional to the integral of the burst commands, and this is how we fundamentally think integrators come about, whether it is accumulation of evidence in a cognitive task or whether, in this case, it's a velocity-to-position integration. Basically the same mechanism causes the persistent activity and the integration, and what it really is is a balancing act: the intrinsic leak, which would tend to make the firing rates decay, is offset by the synaptic inputs. OK, I'm not going to get into the fitting methods in detail; I'm giving a reference here, but I'll give you the essence of how we fit these models to actual data. We built the conductance-based spiking model by constructing a cost function that simultaneously enforced a whole set of experiments. Our goal was basically to take a whole suite of experiments that had been done over a decade on this system and put them all together into one cost function that says the model needs to try to match each of those experiments. So there was an intracellular current-injection experiment, which was basically trying to get the intrinsic responses of neurons, the f(r), how they respond at the somatic level to input currents; then we had a database of single-neuron tuning curves that gives us the firing rate as a function of any given eye position the animal is holding; and then, as I'll show you in a moment, we have firing-rate drift patterns following focal lesions. We make a different cost function term, which for the experts is just a quadratic, squared-error type of term, that enforces each of these experiments.
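The balance condition can be illustrated with a linear caricature. This is a line-attractor sketch of my own, not the fitted conductance-based model: with uniform weights whose rows sum to one, the recurrent input exactly cancels the leak along the uniform mode, and the population rate becomes a running total of its burst commands.

```python
import numpy as np

# Linear caricature: tau * dr/dt = -r + W r + burst(t). Rows of W sum to 1,
# so recurrent input cancels the leak along the uniform mode and the
# population rate integrates the saccadic burst commands.
N, tau, dt = 20, 0.1, 0.001
W = np.ones((N, N)) / N
r = np.zeros(N)
trace = []
for step in range(int(4.0 / dt)):
    t = step * dt
    burst = 5.0 if (1.0 <= t < 1.05 or 2.5 <= t < 2.55) else 0.0  # two saccades
    r += dt / tau * (-r + W @ r + burst)
    trace.append(r[0])

# The rate steps up at each burst, then holds: a running total of the inputs.
print(round(trace[2000], 2), round(trace[-1], 2))
```

Each 50 ms burst of amplitude 5 adds 2.5 to the stored rate, and between bursts dr/dt = 0, which is the persistent activity during a fixation.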
[00:15:08] The unknowns are really the unknowns in most neuroscience experiments: very rarely do we know what the actual synaptic weights in the circuit are, or the weights coming from external inputs, so that's where we hope the model can help us say something; and we also don't know the form of the synaptic or dendritic nonlinearities. So what we do, it turns out, is assume a form of synaptic nonlinearity, pull out the weights through a regression procedure, and then try different synaptic nonlinearities. We basically have a two-parameter function for the synaptic nonlinearity that can be saturating, sigmoidal, superlinear, or linear. [00:15:50] OK, so here's what comes out when you do this. This is an example from one of the models; again, this was actually a spiking conductance-based model. Here's the model: you can see the saccadic inputs coming in here, causing a brief burst of firing, but what I want you to focus on is that during this fixation period you see this nice persistent activity; another saccadic input comes in and gets integrated, and now you see higher-rate persistent activity. So this is the integration. Usually we'll just plot firing rate, and that's what's being shown here: green is just to guide your eye as to what a perfect integral of the saccadic inputs would be, gray is the raw model output, and black is what you'd see in a paper, after putting a little bit of smoothing on it. What you see, in the firing rate of one neuron as a function of time, is that the firing rate is almost perfectly following the integral: each of these vertical pieces is where a saccade came in, in the model, and the flat pieces are where the fixation was. Just to summarize the data, this is showing you four neurons, but it worked for every single neuron. It's a little bit backwards from what you might think: the solid
line is actually the tuning curve from the experimental database, and what you see in the boxes is the responses of the model at different eye positions. We were also trying to fit some noise distributions, so that's why you see box plots; they convey the noise in the model. So here are four neurons, two on the left side of the animal and two on the right side; the blue and the green are what we call low-threshold neurons, [00:17:24] and the red and the black are high-threshold neurons. The main point is, first, that we have a whole circuit with a large number of tuning curves, and this is one of the very few models of a vertebrate system where the model recapitulates the data at a neuron-by-neuron level, so we can make a one-to-one mapping between neurons in the model and neurons in the system. So we were happy. [00:17:49] But what do we learn about the system from this? This was really the key experiment that I think provided the most insight, and it's the following. My experimental collaborator Emre Aksay inactivated the left side of the circuit and recorded from neurons in the right side of the circuit, and now you need to remember a little bit about the circuit: the crossing connections between the two sides are inhibitory. What I argued was that to get persistent activity you needed this balance of currents, and in some sense we've now undone that balance, because we've lost inhibition. So naively what you'd expect is that activity would start drifting upwards: we've unbalanced things to the point of having not enough inhibition. And that's indeed what you saw in the recorded neurons at low firing rates; you see very sharp upward drift in the firing rates, seen here. The mystery and the surprise was that at high firing rates the neurons remained stable; their activities didn't drift upwards. And we could
recapitulate that in the model. You see again here that the model drifts sharply upwards at low rates, with persistent activity at high rates; we could recapitulate that if there was some sort of threshold. So let me just summarize: persistence was maintained at the high firing rates experimentally. [00:19:12] To understand these high firing rates we now need to think a little bit about those tuning curves. Remember, one side of the brain's tuning curves increased as the eyes moved from left to right, and the other side's decreased. That means that when these neurons on the right side are at high firing rates, the neurons on the other side are at low firing rates: the high firing rates occur when the inactivated side would have been firing at low rates. So what this suggests is that these low rates were below a threshold for contributing effectively. What the network is doing is really that the high-firing-rate neurons are driving the network, and the low-firing-rate neurons are below a threshold for influencing the circuit. I've left out a longer story; out of this came two possible mechanisms for such a threshold. One was the actual firing threshold, a neuronal threshold; the other possibility was that there's a synaptic threshold. Biophysically, that could be synaptic facilitation, where low rates don't facilitate the synapses and so don't cause much transmission. On excitatory synapses it could also be something like an NMDA spike; there's known to be a lot of NMDA at the excitatory synapses here, so it may be that low firing rates aren't enough to trigger an NMDA spike and NMDA
plateau. Again, you can look at the published work if you want to get into the biophysical implications. I want to step back and talk about what this says macroscopically about the mechanisms for maintaining memory in the circuit, because I presented to you two possible mechanisms by which persistent activity could be maintained: one was recurrent excitation, and the other was recurrent disinhibition, the double-negative form of positive feedback loop. So now let's just reason through this and look at the network [00:21:13] activity when the eyes are directed rightward. When the eyes are directed rightward, the right-side neurons are at high firing rates, and there's a positive feedback loop here due to recurrent excitation, these neurons exciting each other. But now let's think about the inhibitory feedback loop. The left-side neurons are at low firing rates, and we just said they appear to be below threshold for transmitting. That means we've broken the loop: the right side would be transmitting inhibition to the left side, but the left side (and that's why I'm showing dashed lines here) is below threshold for transmitting back to the right side. So, due to thresholds, there is no mutual inhibitory feedback loop, and I think in the era of connectomics
this is really, really important, because this is a case where the old theories of how the circuit worked posited that the primary positive feedback loop should be disinhibition. That was purely on theoretical grounds: it wasn't that they had empirical evidence, but they just thought, here's this double-negative loop, we think that's how it works. But here we have a case where functionally there's really one-way inhibition, from the high-rate side to the low-rate side; and then, when the animal moves its eyes to the other side of the head and the left side becomes active, the inhibition goes from the left side to the right side. [00:22:37] And that belies the fact that anatomically it looks like there is a double-negative feedback loop: really there is no feedback loop here, and the physiology, because of thresholds, belies the anatomy. So excitation, not inhibition, maintains the persistent activity, and inhibition is anatomically recurrent but functionally feedforward. [00:23:04] OK, so that's really just a gross-level statement of the key contributors to positive feedback, to maintaining persistent activity and integration in the circuit. I now want to delve down into the microcircuitry, and here's the problem I want to address. I said we fit a weight matrix, but even in a 100-neuron [00:23:26] circuit there are 100 squared, that is 10,000, potential connections. It turns out that because
excitation only goes to the same side and inhibition to the opposite side, we're down to 5,000; only half of them are possible. But how do you even say anything about what the connectivity is? It's like the worst statistics problem ever: you're trying to figure out 5,000 parameters underlying this behavior. So what I want to do, and it really follows on a history of work on what are called sloppy models coming from the physics literature, is to show how one can work through this problem and try to figure out at least what is important and maybe what isn't. I hope this is a generally valuable lesson for whatever system you work on. So here's my cartoon of the model cost function. Remember we had a cost function for fitting the data, and here I'm going to make it really, really simple: imagine that we just look at two individual synaptic weights, w1 and w2. In this case w1 is really sensitive: if it's tuned correctly you get the right model behavior, and if it's mistuned it's going to incur a lot of cost and your model is not going to fit. [00:24:40] On the other hand, for w
2, it really doesn't matter what you tune it to: you're basically moving along the trough of this cost surface, and that's a very insensitive direction. Now, in general, it will not be individual synapses but linear combinations of synaptic weights that the model fitting is sensitive or insensitive to. So what do you do? The cost function's curvature is described by its matrix of second derivatives, the Hessian matrix, d²(cost)/dw_i dw_j, and to figure out the steepest and least steep directions you just do principal components analysis on this matrix; that will identify the patterns of weight changes to which the system is most sensitive. When we did this, what we found was that there were only four most sensitive components; those are the ones that jumped out in the principal components analysis and corresponded to the directions of steepest curvature, and the others were largely insensitive. So I'll show you the first three of these, and I'll work through this somewhat crowded slide. These are principal components 1, 2, and 3, and what I'm plotting here are the components of the principal eigenvectors. Each neuron in this model received input from 50 other neurons: the first 25 are its excitatory inputs from the same side, and the next 25 are its inhibitory inputs from the other side. Here I happen to order the inputs from the lowest-threshold to the highest-threshold neurons, but for this first principal component you don't need to know much: not surprisingly, the most sensitive thing you can do to screw up the model is to make all the connections more excitatory. A positive direction here means either excitatory connections getting more excitatory or inhibitory connections getting less inhibitory, hence effectively more excitatory. If you make a perturbation of a fixed vector length along this direction it's a complete disaster: the neurons just go
skyrocketing until they saturate, [00:26:40] because you made everything more excitatory. And I should say eigenvectors only indicate a direction, so you could equivalently make everything less excitatory; I just had to pick one sign for illustration. The next most important component is either strengthening or weakening all of the synapses, excitatory and inhibitory. Here I'm showing you making the excitatory connections less excitatory and the inhibitory connections less inhibitory, so I'm just turning down the absolute value of all the weights. You make that perturbation and you get an exponential decay to an intermediate fixed point; you can see everything decaying to a fixed point around here. The next component, which I won't get into, has to do with the balance of low- and high-threshold neurons, and so on and so forth. What I want to show you here is eigenvector component 10. You can see that there are groups of neurons that are either made more excitatory or less excitatory, but look what happens when you make the same-magnitude perturbation along this pattern: basically nothing. You still have beautiful integration, where you see the driven steps and the maintained persistent activity during the intervening intervals. I'm only showing you one neuron, but this holds across all the neurons. [00:27:59] So now you can actually step back.
Let's push this idea of insensitive directions further. Here I'm showing you the weight matrix for one of the models. Again I've broken down the 100 neurons the same way: the first 50 on the left side, the second 50 on the right side; within each side the first 25 are the excitatory neurons and the second 25 are the inhibitory neurons. So, for example, this block would be the left-side excitatory-to-excitatory connections. And here, somewhat anticipating some work we've done in the zebrafish, I've actually given the neurons spatial locations, ordered from rostral to caudal. This one would be a fairly all-to-all pattern of connectivity, and here is another circuit with what I'd call a topographic pattern: you can see the strength is mostly along the diagonal, so that's very spatially local connectivity. Now here's your psychophysics test. These are the outputs; this is the firing rate of one neuron, but I could have shown it to you for any neuron from the left side or the right side. Again, the bright neon colors are just to guide your eyes to what a perfect integral would be; black is showing the average of the right-side activity, and gray the average of the left-side activity, though I could have shown you individual neurons and they're very similar.
[00:29:25] They are just identical. I would love to do psychophysics on having someone distinguish between these, yet these are completely different circuit architectures, one with all-to-all excitatory connectivity and one with local excitatory connectivity. So is there a formal way we can see that, in some sense, these circuits were identical in what was important, in the sensitive directions? Well, what we can do is take these weight matrices and project them onto the sensitive and insensitive eigenvectors of the cost function, and that's what we've done here. What I'm showing you is in log units, so these are exponentially different: when you see minus 10, that's 10 to the minus 10. Here are the eigenvector components: eigenvector component 1 was the most sensitive one, and the higher-numbered eigenvectors are the insensitive ones. And, sorry, I didn't say something: I took the difference between these two weight matrices, [00:30:33] to ask where they differ, and projected that difference onto the sensitive and insensitive eigenvectors. What you see is that the difference between the two matrices is
nearly zero in the four sensitive components: the difference there is like 10 to the minus 10, essentially zero. So from the point of view of the sensitive components these matrices are identical, and all of their differences are in the insensitive components. And I should have made clear that the insensitive components are basically components where you have excitation and inhibition that cancel each other: either you're getting more excitation from one cell and less excitation from another, or more excitation from one cell and more inhibition from another cell, so that the net change in input is zero. So hopefully, we think, this is the way we're going to have to look at circuits: we're not going to know exactly what the connectivity is, but hopefully we can figure out what the important features of the connectivity are for making the circuit work, what really is driving the circuit, and how we say what's sensitive and what's insensitive, what's important and what's not. [00:32:00] [In response to an audience question] Right, so because we used a quadratic cost function, it turns out the Hessian is identical everywhere in the space; that was an up-front modeling trick that we did because we knew where we were going and what we wanted to get out of it, so the Hessian here is the same everywhere. There are some subtleties because of signs: if you have synapses which are truly excitatory or truly inhibitory, weights can't flip from negative to positive, so we do have to look at the components of the gradient when things are on the boundary. It turns out the directions of the gradient basically cover the same space as those first four eigenvectors, so there's nothing new really to learn from those. But yes, good question.
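The projection analysis can be mimicked on a toy quadratic cost. This sketch uses a made-up "sloppy" Hessian with four stiff eigenvalues and many nearly flat ones, not the actual model's Hessian: two parameter sets that differ only along flat directions project to essentially zero on the stiff eigenvectors, just as the two weight matrices above did.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Toy "sloppy" cost: 4 stiff (sensitive) directions, the rest nearly flat.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))   # orthonormal eigenvectors
eigvals = np.concatenate([np.full(4, 1e3), np.full(n - 4, 1e-3)])
H = Q @ np.diag(eigvals) @ Q.T                     # Hessian of the cost

# Two equally good "fits" that differ only along the flat directions.
w1 = rng.standard_normal(n)
w2 = w1 + Q[:, 4:] @ rng.standard_normal(n - 4)

proj = Q.T @ (w1 - w2)            # project the difference onto the eigenvectors
print(np.abs(proj[:4]).max())     # essentially zero on the sensitive components
print(np.abs(proj[4:]).max())     # all the difference lives in the sloppy modes
```

Because the flat eigenvalues are tiny, the cost difference between w1 and w2 is negligible even though the parameter vectors themselves are far apart, which is exactly the degeneracy the talk describes.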
[00:32:47] I kind of expected that question, so thank you for asking. OK, so, a summary of this part. As I briefly mentioned, neural integrators are circuits that accumulate and store a running total of their inputs. They're important both in motor control, as you saw here, as well as in more cognitive tasks like accumulation of evidence for decision making. A model system for those is the oculomotor neural integrator, and there the positive feedback appears to be due to excitation, not mutual inhibition, and that's what appears to be critical to memory storage. The modeling issue I brought up, which I'm going to continue into the second half of the talk, is this degeneracy of model-fitting solutions, and I think this is just inherent when we talk about something like fitting the weights of a connectivity matrix. And one key question is: does this model degeneracy reflect lack of experimental constraints, or patterns of connectivity that may genuinely differ from animal to animal? In a given animal at a given time there is a connectivity matrix, but I think this brings up the question: even if we measured the exact connectivity matrix in one animal, which we're actually trying to do, how important would that be, versus still needing to figure out what was important about that connectivity matrix and what wasn't? And I'm guessing that if you look from animal to animal, what's going to be preserved is the sensitive directions, and what's not going to be preserved is the insensitive directions. And if you think about reinforcement learning, reinforcement learning would naturally pick up and try to correct the sensitive directions, but would basically let the insensitive directions go free, without any punishment, because there'd be no feedback to the animal if those had somehow diffused to other parts of the parameter space, if you wish.
[00:34:29] OK, Part 2, different background color, sorry about that: model degeneracy. Now I'm going to step to plasticity and show you how some of the same nasty issues can come up, and I think also push on an interesting debate from the field of cerebellar plasticity: model degeneracy in identifying sites and signs of plasticity. So what I now want to talk about is another eye movement: it's called the vestibulo-ocular reflex (VOR), and basically the punch line is that the vestibular input that comes from head turns can also drive eye movements. So here is a video basically showing an animal doing the VOR, the vestibulo-ocular reflex, and this is all based on monkey experiments, so that's supposed to be a monkey. This is a reflex you have: it's your gaze stabilization reflex. When you turn your head, automatically, through a rapid reflex, your eyes will counter-rotate; that's how you keep images of the world stable against your head movements. So the sensory input here was this back-and-forth sinusoidal head velocity, and again, what you see here is the motor output counteracting that. [00:35:41] And I should say that the gain of the reflex is the ratio of the eye velocity to the head velocity; I'm going to use that a lot. I'm going to talk of the gain as being the ratio of the eye velocity to the head velocity, and a perfect gain for a normal animal would be one: for every degree of head movement, you move your eyes one degree to offset that head movement and keep your eyes looking in the same direction. So here's the nice thing about this system: even though it's a really simple reflex, you can train the gain to go up and down; you can fool animals into adjusting their VOR. So this is how you do gain training, to train the VOR
so that you turn your eyes more. What you're going to see right here is that you have the visual image move oppositely to the head, so now the monkey needs to move its eyes more in order to keep its focus on that banana. Likewise, gain-down: now we basically tell the animal, hey, you shouldn't counter-rotate your eyes; you move the banana with the monkey's head, and now the right thing to do to stabilize vision is to not move your eyes at all. So what happens if you train with this gain-up or this gain-down paradigm for long periods of time? [00:36:57] Sorry. OK, so this is for gain-up. What you see is: here's the head velocity, here was the eye velocity before training, and then after training what you see is that the VOR gain goes up; the eye velocity output is increased. And the testing here is in the dark, so we're just moving the animal back and forth in the dark, and you see the gain goes up. Likewise, for the gain-down case, the VOR gain, which started out at that level, decreases. OK, so those are the basic paradigms, and now I want to talk about the circuitry underlying this. So here's the basic circuitry. Again, this is a very simple reflex, and there's a direct reflex pathway; I'm simplifying it a little bit, but to connect to the previous part of the talk: your input is head rotation. Basically, from your ear canals come velocity commands telling you what your head velocity is; that goes through the same neural integrator that I was telling you about earlier, to send out position commands that then lead to pulling the eye muscles and making the eye rotation. So that's the basic direct pathway, but in terms of adjusting this reflex, it's known that if you get rid of the cerebellum you can't adjust it. So there's also a side pathway that goes through the cerebellum, into the granule cells (I'm drawing a cartoon here), and at the
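Since the VOR gain is just the ratio of eye-velocity amplitude to head-velocity amplitude, here is a minimal sketch of how one might estimate it from velocity traces. The signals and the post-training gains below are invented for illustration; they are not the experimental conditions from the talk.

```python
import numpy as np

# Hypothetical sinusoidal head-velocity trace (deg/s); 0.5 Hz is an assumed
# test frequency, not one of the actual experimental conditions.
t = np.linspace(0.0, 2.0, 2000)
head_vel = 10.0 * np.sin(2 * np.pi * 0.5 * t)

def vor_gain(head_vel, eye_vel):
    """Least-squares gain of the compensatory (counter-rotating) eye response."""
    # The eyes oppose the head, so negate before regressing eye on head.
    return -np.dot(eye_vel, head_vel) / np.dot(head_vel, head_vel)

normal_eye = -1.0 * head_vel   # perfect compensation: gain 1
gain_up    = -1.6 * head_vel   # after hypothetical gain-up training
gain_down  = -0.4 * head_vel   # after hypothetical gain-down training

for label, eye in [("normal", normal_eye), ("gain-up", gain_up),
                   ("gain-down", gain_down)]:
    print(label, round(vor_gain(head_vel, eye), 2))
```

A regression like this (rather than a peak-to-peak ratio) is a common choice because it is robust to noise on real traces.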
output of the cerebellum are the Purkinje cells, and they actually provide inhibitory output back to this basic direct circuit for the reflex. So the question I want to ask is: what plasticity mechanisms, specifically, what sites and signs of plasticity within this circuit, enable the velocity command to be increased or decreased? [00:38:47] And here's the key issue, and again, hopefully stepping this back, if you work in learning and plasticity, to very basic issues that come up in kind of any system: the key issue is identifying the sites of plasticity, and how that's a challenge in neural circuits that have feedback loops. So here is a toy circuit, and we have an upstream neuron and a downstream neuron. I kept the signs: the Purkinje cells are inhibitory, and there is a minus sign because when you move your head one way the eyes move the other way, but we don't really need to get into those details. Suppose you're doing a recording and you see that this neuron right here decreased its activity. If it's just a simple feedforward circuit, you know exactly where the site of plasticity was: it was at this synapse, or someplace further upstream of that synapse. So that's pretty easy: there was some functional depression of the synapse. And I should say, for this talk I'm going to refer, just because it's easy to remember, to LTD and LTP, long-term depression and long-term potentiation. I really don't know here whether it is, for example, depression of an excitatory input or potentiation of an inhibitory input, so I'm really talking about functional depression or functional potentiation of pathways. But anyway, in the feedforward case it's really easy: you see the firing rate go down, you know something upstream of it had depression. But now let's consider the trickier case where we add a feedback loop. So now we have a feedback loop from the motor command back to the neuron, and we see decreased activity. Well, here's another
possibility: what if this synapse right here underwent LTP? That would increase the activity of this neuron, and through this inhibitory connection then decrease the activity of that neuron, and then you have no idea what happened at this synapse: it could have been LTP or it could have been LTD. So this is the problem, once you've got a feedback loop, with identifying sites and signs of plasticity. [00:40:47] Well, it turns out that the cartoon I just gave you is the essence of a 35-year-old debate in the cerebellum, between the Marr-Albus-Ito model of the cerebellum and the Miles-Lisberger model of the cerebellum, and you can find these names in Kandel, you know, in any basic neuroscience text, but this is, I've boiled down, the essence of the argument. So let me just step through it really, really carefully. First, the physiology. The physiology is: when you do [00:41:16] training to increase the amplitude of the VOR, to increase the gain of the VOR, you see that Purkinje cells, like I showed you in the previous cartoon, decrease their firing rates, the cerebellar nuclei firing rates increase, and the eye velocity increases, OK, in amplitude; it's still going in the opposite direction. So now the Marr-Albus-Ito theory says this is really simple to explain: there was LTD here that decreased these cells' firing rates; they inhibit these cells, and that's what increased their firing rates. Nice and simple. So Miles and Lisberger come back and they say: we're going to do some behavioral recordings. I'll show you what the essence of it was, but they did some behavioral recordings to try to isolate this pathway, the vestibular pathway, and they said: functionally, when we try to isolate it, it looks like there was LTP at this synapse. And if there's LTP at this synapse, they said, that's backwards from what we would have expected, because that couldn't have made these firing rates go down. So it must have been that there was LTP
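The ambiguity in this cartoon can be made concrete with a two-neuron linear steady-state model. Everything here is hypothetical: the weights are chosen only to show that an "LTD of the upstream synapse" story and an "LTP in the pathway feeding the inhibitory feedback" story can produce exactly the same decrease in the recorded cell.

```python
# Toy steady-state version of the cartoon circuit (all weights hypothetical).
# Purkinje-like cell:  p = w_x * x - w_fb * b   (inhibitory feedback from b)
# Brainstem-like cell: b = w_d * x - w_pb * p   (Purkinje output is inhibitory)
def rates(w_x, w_d, w_fb=0.5, w_pb=0.5, x=1.0):
    # Solve the two equations simultaneously; the double negative makes the
    # loop positive feedback (denominator < 1 amplifies).
    p = (w_x - w_fb * w_d) * x / (1.0 - w_fb * w_pb)
    b = w_d * x - w_pb * p
    return p, b

p0, b0 = rates(w_x=1.0, w_d=1.0)   # before learning
pA, bA = rates(w_x=0.7, w_d=1.0)   # story A: LTD of the upstream synapse
pB, bB = rates(w_x=1.0, w_d=1.6)   # story B: LTP of the direct pathway

# The recorded cell decreases identically in both stories, even though the
# site and sign of plasticity differ; only the other cell tells them apart.
print(p0, pA, pB)
print(b0, bA, bB)
```

Note that the two stories do diverge in the unrecorded brainstem-like rate, which is why recording (or perturbing) more of the loop can break the degeneracy.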
down here in the direct pathway, and then, due to an inhibitory synapse, that comes back up and causes the decrease in their firing. [00:42:33] And this debate has raged on for 35 years. The one thing that neither of these theories focused heavily on is that there's another feedback loop which they're not really paying attention to carefully, which is the eye velocity itself. What's the point of moving your eyes? It's to keep the world stable. Do they really keep the world stable? Well, they're always slipping a little, so there's also a visual feedback pathway which comes into these Purkinje cells, and they haven't really considered that. So we set about trying to go back and figure out the fundamental question: [00:43:09] is this efference copy of the motor command, the copy that's sent back, necessary? Because the presence of this efference-copy positive feedback loop here is fundamentally the difference between these theories. So Miles and Lisberger said: absolutely you need it; our experiment says that there's LTP here, and therefore the only explanation has to be that the main plasticity driver was in the direct pathway. So what did we do? We built a simple model of the circuit. It's all based on linear filters. Basically, here are your different sites, the Purkinje cells, the brainstem neurons, and the eye velocity input, and we just put linear filters, which we're going to fit, at each of these different sites, and we're going to attempt to fit the data. The key comparison is here, with and without this feedback pathway, and actually with graded levels of feedback in between as well.
[00:44:01] What data do we fit this to? It's actually data from Jennifer Raymond's recordings in Steve Lisberger's lab. She had 23 different conditions: combinations of head movements and visual input at different frequencies. So for example, the cancellation one, or the ones I showed you for increasing the gain or decreasing the gain, where either the visual stimulus is moving with the head or the visual stimulus is moving opposite the head: that would be a condition, for example. And after learning there were 15 conditions; mostly it was behavior, with fewer neural recordings. OK, so here's the surprising thing. I'm just going to show you now 4 of the 23 conditions. This is going to be the stimulus: this one is one where only the head was moving, in the dark, and this is one where vision was on and the head and the vision move the same, so this is where basically the animal is having the visual world move with its head; and then the same thing with steps rather than sinusoids. These are Purkinje recordings during that, and these are eye velocity recordings during that, simultaneously. So we tried the no-positive-feedback model, and it remarkably well, almost perfectly, fits the eye velocity traces and the neural traces; I mean, this is neural data, so it fits remarkably well. And then you try the with-positive-feedback model, and it fits almost identically. So here's another complete degeneracy between these two models, despite the fact that each of these camps has been saying it has to be this one or it has to be the other one. And I should say that for the Marr-Albus-Ito model, most of the molecular studies suggest that there is depression at that site, so in some sense we have two different cases where there is seemingly experimental evidence on each side of the debate.
[00:45:47] OK, so we want now to look under the hood and say: OK, so they both can fit the data, but I'm guessing they're going to fit the data in different ways, so let's see how they fit the data. So first, the punch line is going to be that the two models predict different patterns of weight changes during learning. First I want to show you something that the models agree on. In both models, if we look at this brainstem site of plasticity right here, this is looking at the linear filters: in blue is the feedback-equals-zero model, and in orange is the feedback-strength-one, Miles-Lisberger model. Almost identical filters get fit in both cases, so both models agree that there does appear to be a change at this site. So far so good; the Miles-Lisberger group would be very happy. But now we get to the difference. So now let's look at this site of plasticity. If you assume that there is a feedback strength of one here, then basically the model can only fit the data if there is an LTP, potentiation-direction, change at this site. If you assume there is no feedback, then you can only get the model to fit, and I'm not showing you that, but trust me, if there is LTD, if there is depression, functional depression, at that synapse. And this really is what Marr-Albus-Ito and Miles and Lisberger said. So our model fits to the data confirm these two different possibilities, but importantly say either one of them could work, whereas people previously were saying, no, it can only be one or the other. OK, so let's delve a little deeper. [00:47:33] So now the key question is: how do these different synaptic changes lead to the same neural activity? How is it that these really both work? So now imagine what's happened is we've done gain-up training, so the animal's already been trained to increase its gain; we're now going to turn off the lights, and the animal is going to do the VOR
in the dark, and because it's been trained gain-up, its eyes are going to move more than they did previously. And what I told you previously is that when you do that gain-up training, the activity of the Purkinje cell decreases with the increase in VOR gain. How does that occur in both models? This is really a review of what I said before. In the simple Marr-Albus-Ito model, the no-positive-feedback model, that decrease in firing is driven by there being LTD of this pathway. In the strong-positive-feedback model, where we assume there's a strong positive feedback pathway, that decrease in firing is driven by there being LTP of the direct brainstem pathway, and then that pathway, through an inhibitory synapse onto the Purkinje cell, decreases the firing, and in fact this neuron actually has LTP. OK, so that's the first one: that's how the two different models produce the same output through very, very different sites of plasticity and very, very different mechanisms. Let's do one more. Now let's consider this motor cancellation task. This is the one where the animal's head and the visual stimulus move together, and I should say the animal's already been trained gain-up, so this isn't a training paradigm, this is now a testing paradigm: they've been trained to increase their gain, but just to test, we're now going to have the animal [00:49:07] move its head, with vision, making it try to not move its eyes too much. And I should say this is what Miles and Lisberger used, and they did this because they thought: OK, this feedback pathway, we can now intentionally make it small; this is a motor output pathway, and the animals aren't going to be moving their eyes very much, so they did this to try to isolate the head input to the Purkinje cells while functionally, through a clever experimental behavioral trick, making this pathway relatively minimal, because the eyes aren't moving much. The trick is, part of the reason they're not moving their eyes much is because they're making micro-corrections all the time through the external negative feedback pathway, and that's what Miles and Lisberger didn't consider as much. OK, so again, you do this experiment, and what you see is that after training the activity increases with the increase in VOR gain, and this is why Miles and Lisberger said: that's direct evidence; we removed the feedback pathway, and so that's direct evidence that there was LTP at this site; that's what caused the increase in firing. Well, how does Marr-Albus-Ito explain this? Marr-Albus-Ito, who said that there's LTD at this site: how can you get the increased firing? It's because the increased firing is due to the visual slip actually driving these neurons. [00:50:37] So again, two very different mechanisms of getting increased firing: one is LTP here, one is LTD there, but with a strong visual-slip pathway driving the cells. OK, finally: what this basically said is we can't separate out this old controversy, so what experiment might someone do to separate it? So the kicker is in the Miles-Lisberger model, that feedback pathway of strength one. What I didn't tell you is there's a double negative here, so it goes back to the first half of the talk: the Purkinje cells inhibit the brainstem nuclei, and the brainstem nuclei inhibit [00:51:16] some of the motor neurons, or actually neurons one synapse upstream of the motor neurons, and then that sends excitation back. That's a double negative; that's a positive feedback loop. And in the Miles-Lisberger model, what I really meant by strong feedback was that this feedback was so strong in their model that it actually formed an integrator. If it forms an integrator, what you should be able to do is
[00:51:36] electrically microstimulate the Purkinje cells, and you should see, in response to that microstimulation, that just like the saccadic input, you have a burst of input and it gets integrated and the activity is maintained. And that's actually what they said: they said this provides an inertia for the system, and they said this might be useful for being able to pursue things through an occluder and keep your eyes moving. By contrast, in the Marr-Albus-Ito model there's no feedback loop, so if you stimulate the Purkinje cells you should cause a rapid change in eye velocity, and then it decays right back. [00:52:13] So in fact, Lisberger did this experiment in monkeys: he did the stimulation during this gray period and then stopped it, and it looks like the activity decayed back quite quickly. So it could be somewhere in the weakly amplifying regime, but it looks like that experiment suggests it's not in this strong-feedback zone, that it's closer to the weak one, and this, we'll now conclude, may help us resolve this controversy. So just stepping back to the big problem, studying learning in closed-loop tasks: drawing conclusions about the site, even the sign, of plasticity, forget about the strength, but even the sign of plasticity, depends critically on assumptions about internal circuit feedback and external feedback through the environment. A possible resolution of the 35-year-old VOR circuit controversy: maybe there's LTP in the direct brainstem pathway, both models agreed on that, and that's what Miles and Lisberger said, but you can still have LTD, which fits with a lot of the molecular studies, in the cerebellar cortical pathway, if appropriate visual feedback signals are present. And finally, putting together both parts of the talk: breaking degeneracy between models is highly challenging, but may be addressable with causal perturbations and recordings. OK, thank you.
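The two predictions for the microstimulation experiment fall out of a one-line rate equation. This is a minimal sketch: the time constant, feedback strengths, and pulse below are illustrative numbers, not values fitted to any data, and "weak" versus "strong" stand in for the Marr-Albus-Ito-like and integrator-like regimes respectively.

```python
import numpy as np

# One-population rate model: tau * dr/dt = -r + w*r + input(t).
# With positive feedback strength w, the effective decay time is tau / (1 - w),
# so w near 1 turns the intrinsic ~100 ms decay into seconds-long persistence.
def simulate(w, tau=0.1, dt=0.001, t_end=2.0):
    steps = int(t_end / dt)
    r = np.zeros(steps)
    pulse = np.zeros(steps)
    pulse[:100] = 1.0                      # 100 ms "microstimulation" burst
    for i in range(1, steps):
        dr = (-r[i - 1] + w * r[i - 1] + pulse[i - 1]) / tau
        r[i] = r[i - 1] + dt * dr
    return r

weak = simulate(w=0.2)      # no effective integrator: activity decays right back
strong = simulate(w=0.999)  # near-perfect integrator: the burst is held
print(weak[-1], strong[-1])
```

This is the logic behind the experimental test: if stimulation-evoked activity decays quickly, the circuit cannot be sitting in the strong-feedback, integrator regime.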
[00:53:37] And this is just acknowledging the people who really did this work, who really do this work, but I know I'm running short on time. [Inaudible audience question.] Yes, that one was done with pharmacological inactivation; it was done two different ways, once with muscimol and once with lidocaine, and there you can look over time at recovery. [00:54:36] The punch line: I'm not sure you can recover from an entire half of the circuit being inactivated; more subtle inactivations will recover on more like an hour time scale, probably cerebellar-driven. In the dark there doesn't tend to be much recovery, maybe a little tiny bit; in the light there does appear to be recovery, and it appears that that is actually cerebellar-driven. So now we're actually trying to study how the integration is learned, as another hook, a simple task for understanding how the cerebellum [00:55:11] tunes a circuit. It's the simplest thing: how do you keep your eyes still, with no movement stimuli? But hopefully that helps. [Inaudible audience question.] Yes, so in principle, yes; now we can just get philosophical about our guesses of how different animals solve it. My guess is that animals solve it the same way, but what seems to be interesting is actually that there seems to be a break between gain-up and gain-down. So Jennifer Raymond's lab has shown that it looks like gain-up may be dependent on cerebellar cortex LTD, but not gain-down. So this presents the possibility that it's actually even more paradigm-specific which mechanism is used: it may be that one mechanism is for one and another mechanism is for the other. And there are even subtleties when you get to sinusoidal frequencies: certain frequencies tend to be more dependent on cerebellar cortical LTD than others. So I think there are a lot of subtleties to work out. [00:56:22] There are some other subtleties we could get into later, but
I don't know; my guess is it's going to be, you know, the usual thing: the mean is here, most animals are there, and there's this much scatter, and I'm not sure whether the scatter would be enough to totally flip into two classes of animals solving the problem. But at least different paradigms, it seems, might be solved in different ways, like gain-up versus gain-down, and we don't have a good normative model of why that would be, although we have some thoughts. [00:56:51]