## Carpenter plots of the Bayesian estimator

>> t_reaccion=bayestim_medias_nosave(10^5,10,1000,2,0,1,10^-3,.995);
>> hist(t_reaccion,0:1000)

>> t_reaccion2histog_carpenter(t_reaccion,20,'r.-');
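
As a reminder of what `t_reaccion2histog_carpenter` presumably does: a Carpenter (reciprobit) plot shows the cumulative probability of the reaction times on a probit scale against the (negative) reciprocal of time, so that recinormal times fall on a straight line. A minimal, hypothetical sketch (not the actual function):

```matlab
% Sketch of a Carpenter (reciprobit) plot, assuming t_reaccion
% holds the reaction times from the previous command
t = sort(t_reaccion(:));                 % reaction times, seconds
p = ((1:numel(t))' - 0.5) / numel(t);    % empirical cumulative probability
z = sqrt(2) * erfinv(2*p - 1);           % probit scale (inverse normal CDF)
plot(-1./t, z, 'r.-')
xlabel('-1/t'); ylabel('probit(P)')
```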

## Mistake creating the system matrix

In previous posts I made a mistake: I used the transpose of the chemical connectivity matrix. Fortunately, it seems there is no fundamental difference:

Transposed,

>> solucion=matrizsistema2solucion(M,V0);
>> plot(solucion.autovalores,'.')

Not transposed:

>> solucion=matrizsistema2solucion(M,V0);
>> plot(solucion.autovalores,'.')
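
That the transpose makes no fundamental difference to this plot is expected: a matrix and its transpose share the same characteristic polynomial, hence the same eigenvalues (the eigenvectors, however, do differ). A quick check:

```matlab
% A matrix and its transpose have identical eigenvalues
M = randn(5);            % any square system matrix
ev1 = sort(eig(M));
ev2 = sort(eig(M'));
max(abs(ev1 - ev2))      % essentially zero (numerical noise)
```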

## Modelling reaction times with two overlapping processes

It turns out that the sum of normally distributed and exponentially distributed random numbers gives rise to Carpenter’s distribution:

>> t=.2+.02*randn(1,10000) + exprnd(.05,[1 10000]);
>> hist(t,30)

>> hist(1./t,30)

>> t_reaccion2histog_carpenter(t,20,'r.-');
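
The construction above is the so-called ex-Gaussian distribution (a Gaussian plus an exponential). A self-contained version of the same simulation, in case `exprnd` (Statistics Toolbox) is not available, using the inverse-CDF method for the exponential part:

```matlab
% Ex-Gaussian simulation without the Statistics Toolbox
n = 10000;
mu = .2; sigma = .02; tau = .05;                  % same parameters as above
t = mu + sigma*randn(1,n) - tau*log(rand(1,n));   % -tau*log(U) ~ exprnd(tau)
hist(t,30)        % skewed to the right
figure
hist(1./t,30)     % roughly Gaussian-looking, as in a Carpenter plot
```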

## Model for reaction times with distractions

First, with all distractions having the same duration:

>> estimulo=generaestimulo(10000,[1 2]);
>> [t_reaccion,mom_despistes,t_reaccion_intrinseco]=estimulo2reaccion(estimulo,[.2 .03],[3 .2]);
>> hist(t_reaccion,30)

>> hist(1./t_reaccion,30)

>> t_reaccion2histog_carpenter(t_reaccion,20,'r.-');

The problem is that these distractions produce a plateau, which does not look very natural. Let’s see a very clear case:

>> [t_reaccion,mom_despistes,t_reaccion_intrinseco]=estimulo2reaccion(estimulo,[.2 .03],[1 .2]);
>> hist(t_reaccion,30)

A more natural situation is that in which distractions have different durations:
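
A sketch of what I mean (hypothetical, not the actual `estimulo2reaccion` code): on each trial a distraction occurs with some probability, and its duration is drawn from an exponential distribution instead of being fixed, which smears out the plateau:

```matlab
% Distractions with exponentially distributed durations (made-up model)
n = 10000;
t_intrinseco = .2 + .03*randn(1,n);   % intrinsic reaction time
despistado = rand(1,n) < .3;          % probability of being distracted
dur = -.2*log(rand(1,n));             % exponential durations, mean 0.2 s
t_reaccion = t_intrinseco + despistado .* dur;
hist(t_reaccion,30)                   % smooth tail instead of a plateau
```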

## Second experiment with fixed time

Subject: Alfonso

More statistics (1000 trials), and better focused on the relevant region.

>> save APE_1000_01jun09_1524
>> cd ..
>> [prob,bins]=atiempo2probcorrecto(atiempo,20,1);

Conclusions:

– Rests are needed.
– The relevant region seems to be between 0.15 and 0.3 seconds.
– Looks nice, but in order to see the shape of the function, MUCH MORE data are needed.

There are two possible ways of modelling this:

1. One can only identify the side when the decision signal reaches the threshold in time. In that case, the curve above should be related to the cumulative distribution of latency times.
2. One guesses the side from the available information, so the curve we get is closely related to the decision signal itself.
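
Under hypothesis 1, the probability of answering correctly at delay t would be chance plus the probability that the decision signal has reached threshold by then, i.e. P(correct) = 0.5 + 0.5·F(t), with F the cumulative distribution of latencies. A sketch, assuming (made-up) recinormal latencies:

```matlab
% Predicted accuracy under hypothesis 1: chance level plus the
% fraction of trials whose latency is below the available time t
lat = 1 ./ (5 + randn(1,10000));       % recinormal latencies (assumed)
t = linspace(.05,.3,50);
F = arrayfun(@(x) mean(lat <= x), t);  % empirical CDF of latencies
plot(t, .5 + .5*F)
xlabel('available time (s)'); ylabel('P(correct)')
```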

From my experience, I would say that number 1 is more likely. When I have to guess, I rely more on the information from previous trials (I tend to repeat if I got it right, and to change if I missed it).

And this brings up the following idea: it seems to me that the estimation of the prior (right/left) is made in real time, rather than by realizing that stimuli are independent of each other (at least in this case, where they are 50%-50%). I find it inevitable to think that if a choice failed in the previous trial, I should change my guess (and also, more weakly, that if a choice has been successful for a while, it is time to change). Does anybody take this into account? Maybe we could try to develop a “dynamic prior” model.
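
A first stab at such a dynamic prior (entirely hypothetical): after each trial, the estimated probability of “right” is updated with exponential forgetting, so a recent failure pulls the guess towards the other side.

```matlab
% Dynamic prior with exponential forgetting (hypothetical model)
n = 1000;
lado = rand(1,n) < .5;     % true side of each stimulus (1 = right)
prior = .5; lambda = .8;   % forgetting factor
priors = zeros(1,n);
for k = 1:n
    priors(k) = prior;
    prior = lambda*prior + (1-lambda)*lado(k);  % update after feedback
end
plot(priors)    % fluctuates around .5 for independent 50/50 stimuli
```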

## First experiment with fixed time

Experiment design: the subject must press a button at a fixed time (indicated by the moment a bar reaches a mark). Some time before (uniformly random between 0.05 and 0.3 seconds), a circle appears either on the right or on the left. The subject must indicate, at the fixed time, the side on which the circle appeared. If the subject fails to press any button at the right moment, within an interval of 0.05 seconds, the measurement is rejected. Then we count how many times the subject correctly guessed the position of the circle, as a function of the time between the appearance of the circle and the key press.

First experiment, subject Alfonso:

>> cd ..
>> [prob,bins]=atiempo2probcorrecto(atiempo,10,1);
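
For reference, `atiempo2probcorrecto` presumably does something like the following: bin the trials by the time between circle onset and the key press, and compute the fraction of correct answers in each bin. A hypothetical sketch (the variable names `t_disp` and `acierto` are assumptions, not the actual data structure):

```matlab
% Hypothetical sketch: fraction of correct answers per time bin
% (assumes t_disp = available time per trial, acierto = 0/1 correct)
nbins = 10;
bordes = linspace(min(t_disp), max(t_disp), nbins+1);
prob = zeros(1,nbins); bins = zeros(1,nbins);
for k = 1:nbins
    en_bin = t_disp >= bordes(k) & t_disp < bordes(k+1);
    prob(k) = mean(acierto(en_bin));
    bins(k) = (bordes(k)+bordes(k+1))/2;
end
plot(bins, prob, '.-')
```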

## Re-analysis of Carpenter plots

Sara:

>> cd ..
>> t_reaccion2histog_carpenter(correctos.t_reaccion(~correctos.dcha,1),20,'r.-');
>> hold on
>> t_reaccion2histog_carpenter(correctos.t_reaccion(correctos.dcha,1),20,'.-');

Agh!

I take only the last 250 trials (see post “Learning for Sara”):
>> buenos=false(1,size(correctos.t_reaccion,1));
>> buenos(250:end)=true;
>> t_reaccion2histog_carpenter(correctos.t_reaccion(~correctos.dcha & buenos',1),20,'r.-');
>> hold on
>> t_reaccion2histog_carpenter(correctos.t_reaccion(correctos.dcha & buenos',1),20,'.-');

## Amount of data needed to get a clean Carpenter plot

It seems that about 100 stimuli are enough to get a clean plot.

>> a=randn(1,50)+5;
>> t_reaccion2histog_carpenter(1./a,10,'k.-');

>> a=randn(1,100)+5;
>> t_reaccion2histog_carpenter(1./a,10,'k.-');

>> a=randn(1,200)+5;
>> t_reaccion2histog_carpenter(1./a,10,'k.-');

>> a=randn(1,500)+5;
>> t_reaccion2histog_carpenter(1./a,10,'k.-');

>> a=randn(1,1000)+5;
>> t_reaccion2histog_carpenter(1./a,10,'k.-');
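
The five cases above can be run in a single loop:

```matlab
% Same test, looping over sample sizes
for n = [50 100 200 500 1000]
    a = randn(1,n) + 5;
    figure
    t_reaccion2histog_carpenter(1./a, 10, 'k.-');
    title(sprintf('n = %d', n))
end
```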

## Learning for Sara

>> cd ..
>> aprendizaje=correctos2aprendizaje(correctos,50,1);

Seems stable in the last 250 trials.

## Analysis of Eloisa’s 1000 trials with feedback

>> aprendizaje=correctos2aprendizaje(corr_filt,50,1);

A clear bias due to previous knowledge: faster to the right from the beginning.

At least, it seems quite stable.

>> [histog,bins]=hist_norm(corr_filt.t_reaccion(corr_filt.dcha,1),30);
>> close all
>> plot(bins,histog)
>> hold on
>> [histog,bins]=hist_norm(corr_filt.t_reaccion(~corr_filt.dcha,1),20);
>> plot(bins,histog,'r')
>> legend('Right','Left')

>> t_reaccion2histog_carpenter(corr_filt.t_reaccion(corr_filt.dcha,1),20);
>> hold on
>> t_reaccion2histog_carpenter(corr_filt.t_reaccion(~corr_filt.dcha,1),20,'r.-');