Generalized omega

16 Feb 09

 

We use a new “generalized omega”, defined as the distance between the optimum and the furthest point at which the cost has increased by less than x%. For x=10% we get the following (a good prediction means all points lie above the line):

>> alfa=10^-1.1; beta=10^-.9; pot=.5;
>> [pos_cm,omega_general]=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,pot,.1);
>> plot(abs(todas.pos_real-pos_cm),omega_general,'.')
>> hold on
>> plot([0 .7],[0 .7],'k')
>> xlabel('Deviation')
>> ylabel('Predicted maximum deviation (10%)')

for c=1:279 % label each point with its neuron number
text(abs(todas.pos_real(c)-pos_cm(c)),omega_general(c),num2str(c))
end
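
As a quick check of the prediction, the fraction of points on or above the identity line can be read off directly (a minimal sketch; desviacion and frac_encima are new variable names, not from the original session):

>> desviacion=abs(todas.pos_real-pos_cm); % actual deviation of each neuron
>> frac_encima=mean(omega_general(:)>=desviacion(:)) % fraction of well-predicted points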

Some points below the line still make some sense, for example neuron 157:

Others not even that (neuron 209):

All the above is WRONG. The mistake is that we normalize each neuron separately. We must define the generalized omega as the maximum distance to a point where the cost has increased by less than a certain threshold, and that threshold must be the same for all neurons. Doing it correctly (threshold 20%), we get
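
A minimal sketch of this corrected computation, assuming a vector posiciones of candidate positions and a matrix costes with one row per neuron and one column per candidate position (hypothetical names), and treating the threshold as the same absolute cost increase for every neuron (also an assumption):

umbral=.2; % the same threshold for every neuron (assumed: absolute cost increase over the optimum)
for c=1:size(costes,1)
[coste_min,ind_opt]=min(costes(c,:)); % optimum of this neuron's cost curve
dentro=find(costes(c,:)<coste_min+umbral); % positions whose cost increase stays below the threshold
omega_general(c)=max(abs(posiciones(dentro)-posiciones(ind_opt))); % furthest such position
end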

What happens with the above neurons is that their scale is meaningless out of context. Neurons 7, 157 and 209:

>> subplot(1,3,1)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,7)
Computing cost matrices…Done. 0.190727 seconds elapsed.
Mostrando neurona 7,
>> subplot(1,3,2)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,157)
Computing cost matrices…Done. 0.194888 seconds elapsed.
Mostrando neurona 157,
>> axis([0 1 0 4])
>> subplot(1,3,3)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,209)
Computing cost matrices…Done. 0.192309 seconds elapsed.
Mostrando neurona 209,
>> axis([0 1 0 4])

We plot the cost for each neuron, in the same way as in coli:

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,0);
>> xlabel('Deviation')

Values in the colorscale are (cost increment)/(cost of the actual configuration).
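
In code, this normalization would be something like the following (costes and coste_real are hypothetical names for the raw costs and the cost of the actual configuration):

matriz=(costes-coste_real)./coste_real; % relative cost increment shown in the colorscale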

Rescaling the colorbar, so that small values are highlighted:
>> caxis([0 .015])

17 Feb 09

I change the way neurons are sorted, so that the right-hand contour is smooth.

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,[.005],0);

The same thing, without the contour and with rescaled caxis:

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,[.005],0);
>> caxis([0 .01])

Answer to first comment by Gonzalo:

potencias=[.1 .25 .5 .75 1 1.5 2 2.5 3 3.5 4 5 6 7 8 9 10];
alfa=1/29; beta=1/29;
for c_pot=1:length(potencias)
pos_cm(1:279,c_pot)=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,potencias(c_pot));
end
>> error=mean(abs(pos_cm-repmat(todas.pos_real,[1 length(potencias)])));
>> close all
>> plot(potencias,error,'.-')

I do it with the alfa and beta given by Bayes:

potencias=[.1 .25 .5 .75 1 1.5 2 2.5 3 3.5 4 5 6 7 8 9 10];
alfa=10^-1.1; beta=10^-.9;
for c_pot=1:length(potencias)
pos_cm(1:279,c_pot)=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,potencias(c_pot));
end
error=mean(abs(pos_cm-repmat(todas.pos_real,[1 length(potencias)])));
close all
plot(potencias,error,'.-')
xlabel('Cost exponent')
ylabel('Mean error')
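
To read the optimal exponent directly off the sweep, for instance:

>> [error_min,ind_min]=min(error);
>> potencias(ind_min) % exponent with the smallest mean error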

CURRENT POINT: coste2graficamolona AND coste2pos_num_ruido ARE NOT USING THE SAME COST FOR NEURON 37. THERE MUST BE SOME MISTAKE THERE.

9 Responses to “Generalized omega”

  1. gonzalopolavieja Says:

    Could you compute the following? Let alfa and beta be the theoretical ones. When one computes the average deviation with global calculations against the exponent, the minimum came out near exponent 2.
    Do the same, please, but for local calculations (all neurons fixed at the experimental value except the one under study).

    This is to see whether the local optimization already gets closer to 0.5, and it is not just the shape of the probability.

    One more thing. There is the paper by the Germans that we did not like at all, and another by Buzsaki, both arguing that there should be long-range connections. Although the Germans did not convince me, and Buzsaki hands it to us on a plate, what we get is exactly that, with this 0.5.

    More later.

  2. al. Says:

    I am updating the post with the results of this. For Dmitri's alfa and beta the optimum stays at 1.5; for the Bayesian alfa and beta the optimum is between 1 and 1.5.

  3. al. Says:

    And I'm off home. More tomorrow.

  4. Carlos D Says:

    Guys, have you stopped to think that Google indexes this, that it stays there for years, and that sooner or later the Germans or Buzsaki or the referees are going to read it?
    Besides, any fool can come along and post comments at you…
    I would go for a private blog…

  5. alperezescudero Says:

    Actually, this blog should not be indexed by Google, but that is the least of it. You may be right that we should take some care not to offend anyone, since it is public. The referees reading it is not a problem; in fact, we would even consider telling them about it ourselves. The idea is precisely 100% transparency. Fools coming in to post comments could indeed be a problem, but I think they will generally prefer to spend their valuable time spamming much juicier blogs.

  6. gonzalopolavieja Says:

    Any luck in trying to find out why so many alpha and beta values have a significant metric?

  7. alperezescudero Says:

    Nop. Some permutations look better than the original in the omega-desv plot. I’ll keep trying tomorrow, in case I survive to the suicide trial I am in the point of commiting.

  8. gonzalopolavieja Says:

    What do you mean? That for the alpha and beta there are more permutations with lower cost than the original 0.033%? Or do you mean for the new alpha and beta? In any case, r u implying that the metric calculation was wrong before?

    I guess you do not imply any of that, but something only in the omega-deviations plot. But that a permutation is better in this plot is not that horrible, because this plot is only approximate, especially for xi<2. What really counts is the metric, because it is the one taking the full cost into account.

    BTW, I think it is ‘at the point of commiting’.

    Anyway, more tomorrow.

  9. alperezescudero Says:

    Yeah, it’s just that the plot looks better for the permutations. I still think the metric is working fine. New day, here I come!

