% ERPs and repetition suppression - a demonstration
%__________________________________________________________________________
clear G

% 1) Create a canonical generative model. This model will be used to create
% stimuli that are more or less familiar (generated by changing the
% parameters to a greater or lesser degree). The same model is used as a
% neuronal network to simulate event-related responses, both in terms of
% LFPs and the PSTH of unit activity. For simplicity, we will assume the
% generators of LFPs (and PSTHs) are the superficial pyramidal cells that
% encode prediction error.
%==========================================================================

% The generative [likelihood] model: This model is very simple and
% comprises two levels. The second level has a single state and can be
% regarded as the cause of stimuli. The first level defines how that
% stimulus is expressed in the sensorium. This level has a dynamic aspect,
% in the sense that the cause perturbs a dynamical system (with two state
% variables). These [hidden] states are then mixed linearly to provide 4
% data or sensory channels. The parameters of the model are therefore the
% Jacobian of the hidden states (controlling the damped oscillations
% induced by a cause) and the parameters linking the hidden states to the
% data.
%==========================================================================

% set dimensions for generalised coordinates
%--------------------------------------------------------------------------
G(1).E.d = 3;                          % approximation order
G(1).E.n = 8;                          % embedding order
G(1).E.s = 1/2;                        % temporal smoothness - s.d. of kernel

% level 1
%--------------------------------------------------------------------------
G(1).m  = 1;                           % 1 input or cause
G(1).n  = 2;                           % 2 hidden states
G(1).l  = 4;                           % 4 outputs

Pf      = [-1  4;                      % the Jacobian for the
           -2 -1]/4;                   % hidden states
Pg      = spm_dctmtx(4,2)/4;           % the mixing parameters
pE      = [Pf(:); Pg(:)];

G(1).f  = inline('reshape(P(1:4),2,2)*x + [v; 0]','x','v','P');
G(1).g  = inline('reshape(P(5:12),4,2)*x','x','v','P');
G(1).pE = pE;                          % the prior expectation
G(1).pC = blkdiag(speye(length(pE)));  % the prior covariance
G(1).V  = speye(G(1).l,G(1).l)*exp(8); % error variances (response)
G(1).W  = speye(G(1).n,G(1).n)*exp(8); % error variances (states)
                                       % with a low level of noise

% level 2
%--------------------------------------------------------------------------
G(2).l  = 1;                           % 1 output
G(2).V  = 1;                           % with unit variance as a prior
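
% Sanity check (an illustrative aside, not part of the original demo): the
% eigenvalues of the Jacobian Pf have negative real parts and nonzero
% imaginary parts (about -0.25 +/- 0.71i), consistent with the damped
% oscillations that a cause induces in the hidden states, as described
% above
%--------------------------------------------------------------------------
disp(eig(Pf))                          % display eigenvalues of the Jacobian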

% 2) The data: Data [stimuli] are created by integrating the model for some
% input. The input here is simply a bump [Gaussian] function. To create
% different stimuli, one can add some random values to the parameters of
% the canonical form before integrating. The larger these additions, the
% more 'unfamiliar' the stimulus will be.
%==========================================================================
M       = G;                           % make M the canonical model
                                       % and use G to generate data

% time-series specification: add random values to the parameters of G. A
% standard deviation of 1/32 will give a familiar stimulus. A standard
% deviation of 1/4 will give an unfamiliar stimulus that can be learned
% within a few presentations (i.e. iterations)
%--------------------------------------------------------------------------
P       = pE + randn(size(pE))/4;
N       = 32;                          % length of data sequence

% create innovations & add causes
%--------------------------------------------------------------------------
c       = exp(-([1:N] - 8).^2/(1.^2)); % this is the Gaussian cause

% integrate G to obtain the causal (v) and hidden (x) states
%--------------------------------------------------------------------------
DEM     = spm_DEM_generate(G,c,P,{[] 16},{16 []});
Y       = DEM.pU.v{1};                 % data  = output
C       = DEM.pU.v{end};               % cause = input

% plot causal and hidden states
%--------------------------------------------------------------------------
spm_DEM_qU(DEM.pU)

% 3) DEM estimation: Here we expose the model M to the data and record its
% responses. The DEM scheme is essentially a form of variational learning
% that provides an upper bound on perceptual inference and learning. We use
% this bound to simulate neuronal responses, under the assumption that they
% are near-optimal.
%==========================================================================
M(1).E.linear = 1;                     % tell spm_DEM this is a linear model
M(1).E.nD     = 1;                     % D-steps per time bin (1 for dynamic systems)
M(1).E.nE     = 1;                     % E-steps per iteration
M(1).E.nM     = 1;                     % M-steps per iteration
DEM.M   = M;

% Because we want to record the response of the model over time, to each
% stimulus, we proceed one iteration at a time and replace the starting
% values of the parameters (initialised with the prior expectation) with
% the conditional estimates of the previous trial. We will consider 8
% presentations.
%--------------------------------------------------------------------------
for i = 1:8
    DEM         = spm_DEM(DEM);        % compute conditional densities
    DEM.M(1).pE = DEM.qP.P{1};         % update parameter estimates
    qU{i}       = DEM.qU;              % record states
    F(i)        = DEM.F;               % record the free energy attained
end

% plot the simulated ERPs (see spm_dem_ERP)
%--------------------------------------------------------------------------
spm_figure('GetWin','Figure 1');
R = spm_dem_ERP(qU{:});

% plot the mean responses (over time bins), over repetitions, for both
% cortical levels
%--------------------------------------------------------------------------
spm_figure('GetWin','Figure 2');
subplot(3,1,1), bar(-F),       title('Free energy'), axis square
subplot(3,1,2), bar(R{1} - 1), title('Lower area'),  axis square
subplot(3,1,3), bar(R{2} - 1), title('Higher area'), axis square
xlabel('repetition')

% plot the stimulus and the percept (i.e. the prediction at the lowest
% level) before and after learning (i.e., for the first and last
% presentation)
%--------------------------------------------------------------------------
spm_figure('GetWin','Figure 3');
Y = qU{1}.v{1} + qU{1}.z{1};
subplot(3,1,1), imagesc(Y),            title('Sensation'),     axis image
subplot(3,1,2), imagesc(qU{1}.v{1}),   title('First percept'), axis image
subplot(3,1,3), imagesc(qU{end}.v{1}), title('Last percept'),  axis image
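
% An optional comparison (a minimal sketch, not part of the original demo):
% as noted in section 2, a parameter perturbation with s.d. 1/32 yields a
% 'familiar' stimulus. Repeating the generation and repetition loop with
% this smaller perturbation lets one compare the free energy attained over
% repetitions for familiar and unfamiliar stimuli. The names Pfam, DEMf,
% Ff and the use of 'Figure 4' are illustrative choices, not part of the
% demo above
%--------------------------------------------------------------------------
Pfam    = pE + randn(size(pE))/32;     % weakly perturbed (familiar) stimulus
DEMf    = spm_DEM_generate(G,c,Pfam,{[] 16},{16 []});
DEMf.M  = M;
for i = 1:8
    DEMf          = spm_DEM(DEMf);     % compute conditional densities
    DEMf.M(1).pE  = DEMf.qP.P{1};      % update parameter estimates
    Ff(i)         = DEMf.F;            % record the free energy attained
end

spm_figure('GetWin','Figure 4');
bar([-F(:) -Ff(:)])
title('Free energy: unfamiliar vs familiar')
legend('unfamiliar (s.d. 1/4)','familiar (s.d. 1/32)')
xlabel('repetition'), axis square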