More statistics (1000 trials) and better focused in the relevant region.
>> salida=estimulo_tiempofijo(1000,.5,[.05 .3],.05,'Sujeto:Alfonso. Ordenador: Alfonso');
>> save APE_1000_01jun09_1524
>> cd ..
– Rests are needed.
– The relevant region seems to be between 0.15 and 0.3.
– Looks nice, but in order to see the form of the function, MUCH MORE data are needed.
There are two possible ways of modelling this:
1. One can only identify the side when the decision signal reaches the threshold within the stimulus duration. Thus, I would think that the curve above should be related to the cumulative distribution of latency times.
2. One guesses the side from the available information. So the curve that we get is something closely related to the decision signal.
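Neither possibility is spelled out quantitatively in the notes above, so here is a rough sketch (all distributions and parameters are invented for illustration, not fitted to the data) of how the two models predict different accuracy-vs-duration curves: model 1 gives 0.5 + 0.5·F(T), with F the latency CDF, while model 2 could be something diffusion-like, with sensitivity growing as the square root of stimulus duration T.

```python
import math

def model1_accuracy(T, mu=0.2, sigma=0.05):
    """Model 1: correct only if the decision signal crosses threshold
    within T; otherwise guess at chance. Accuracy = 0.5 + 0.5*F(T),
    with F the latency CDF (assumed Gaussian here, purely illustrative)."""
    F = 0.5 * (1 + math.erf((T - mu) / (sigma * math.sqrt(2))))
    return 0.5 + 0.5 * F

def model2_accuracy(T, dprime_rate=5.0):
    """Model 2: guess from whatever evidence has accumulated; accuracy
    tracks the decision signal, e.g. Phi(d'(T)/sqrt(2)) with d'(T)
    growing as sqrt(T) (diffusion-like assumption)."""
    d = dprime_rate * math.sqrt(T)
    # Phi(d/sqrt(2)) written via the error function
    return 0.5 * (1 + math.erf(d / 2))

# Probe the range explored in the session above (0.05 s to 0.3 s)
for T in [0.05, 0.15, 0.3]:
    print(T, round(model1_accuracy(T), 3), round(model2_accuracy(T), 3))
```

The qualitative difference is that model 1 starts flat at chance until the latency distribution kicks in, whereas model 2 climbs smoothly from the shortest durations; with enough data in the 0.15-0.3 region, the two shapes should be distinguishable.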
From my experience, I would say that number 1 is more likely. When I have to guess, I rely more on information from previous trials (I tend to repeat if I got it right, and to change if I missed).
And this brings up the following idea: it seems to me that the estimation of the prior (right/left) is made in real time, rather than by realizing that the stimuli are independent of each other (at least in this case, where they are 50%-50%). I find it inevitable to think that if a choice failed in the previous trial, I should change my guess (and also, though more weakly, that if a choice has been successful for a while, it is time to change). Does anybody take this into account? Maybe we could try to develop a “dynamic prior” model.
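As a minimal sketch of the strategy described above (win-stay / lose-shift; this is my own toy formulation, not code from the experiment), one can check that against truly independent 50%-50% stimuli such a "dynamic prior" cannot beat chance, so its value would be in fitting the human sequential dependencies, not in improving accuracy:

```python
import random

def simulate(n_trials=10000, seed=0):
    """Win-stay/lose-shift guesser on independent 50-50 stimuli.
    Repeats the previous guess after a hit, switches after a miss.
    Returns the fraction of correct guesses."""
    rng = random.Random(seed)
    guess = rng.choice(["L", "R"])
    hits = 0
    for _ in range(n_trials):
        side = rng.choice(["L", "R"])  # independent 50-50 stimulus
        correct = (guess == side)
        hits += correct
        # dynamic-prior update: stay if correct, shift if not
        guess = guess if correct else ("L" if guess == "R" else "R")
    return hits / n_trials

print(simulate())
```

A fuller "dynamic prior" model could replace the hard stay/shift rule with a leaky running estimate of P(right), which would also capture the weaker "it is time to change" intuition after a run of successes.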