Generalized omega

16 Feb 09

 

We use a new “generalized omega”, defined as the distance between the optimum and the furthest point for which the cost has increased by less than x%. For x = 10%, we get the following (a good prediction corresponds to all points lying above the line):

>> alfa=10^-1.1; beta=10^-.9; pot=.5;
>> [pos_cm,omega_general]=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,pot,.1);
>> plot(abs(todas.pos_real-pos_cm),omega_general,'.')
>> hold on
>> plot([0 .7],[0 .7],'k')
>> xlabel('Deviation')
>> ylabel('Predicted maximum deviation (10%)')

for c=1:279
    text(abs(todas.pos_real(c)-pos_cm(c)),omega_general(c),num2str(c))
end
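For reference, this first definition can be sketched in Python/numpy. This is an illustrative stand-in, not the actual coste2pos_restofijas routine, and the toy cost curve is made up:

```python
import numpy as np

def generalized_omega(positions, cost, x=0.10):
    """Distance from the optimum to the furthest position whose cost
    has increased by less than a fraction x over the minimum."""
    i_opt = np.argmin(cost)
    within = cost < cost[i_opt] * (1.0 + x)  # less than x% increase
    return np.max(np.abs(positions[within] - positions[i_opt]))

# toy 1-D cost curve with its optimum at 0.3
pos = np.linspace(0.0, 1.0, 101)
cost = 1.0 + (pos - 0.3) ** 2
omega = generalized_omega(pos, cost, x=0.10)  # ~0.31 on this grid
```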

Some points below the line still make some sense, for example neuron 157:

Others not even that (neuron 209):

All the above is WRONG. The mistake is that we normalize each neuron separately. Generalized omega must instead be defined as the maximum distance to a point where the cost has increased by less than a certain threshold, with that threshold being the same for all neurons. Doing it correctly (threshold 20%), we get
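The difference can be sketched in Python/numpy, interpreting "same threshold for all neurons" as one absolute cost increment on a common scale (the cost landscapes below are made up):

```python
import numpy as np

def omega_shared_threshold(positions, cost, delta):
    """Furthest distance from the optimum among positions whose cost
    increment over the minimum is below delta -- the SAME delta for
    every neuron, with no per-neuron normalization."""
    i_opt = np.argmin(cost)
    within = (cost - cost[i_opt]) < delta
    return np.max(np.abs(positions[within] - positions[i_opt]))

pos = np.linspace(0.0, 1.0, 101)
cost_shallow = 0.01 * (pos - 0.5) ** 2  # cheap neuron: flat landscape
cost_steep = 10.0 * (pos - 0.5) ** 2    # expensive neuron: sharp landscape
delta = 0.02                            # one threshold for both neurons
omega_shallow = omega_shared_threshold(pos, cost_shallow, delta)  # ~0.5
omega_steep = omega_shared_threshold(pos, cost_steep, delta)      # ~0.04
```

With per-neuron normalization both neurons would get similar omegas, even though moving the shallow one is almost free; that is the artifact seen above.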

What happens with the neurons above is that each one's scale is shown out of context. Neurons 7, 157 and 209:

>> subplot(1,3,1)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,7)
Computing cost matrices…Done. 0.190727 seconds elapsed.
Mostrando neurona 7,
>> subplot(1,3,2)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,157)
Computing cost matrices…Done. 0.194888 seconds elapsed.
Mostrando neurona 157,
>> axis([0 1 0 4])
>> subplot(1,3,3)
>> miracostes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,.5,alfa,beta,pos_cm,209)
Computing cost matrices…Done. 0.192309 seconds elapsed.
Mostrando neurona 209,
>> axis([0 1 0 4])

We plot the cost for each neuron, in the same way as in coli:

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,0);
>> xlabel('Deviation')

Values in the colorscale are (cost increment)/(cost of actual configuration).
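That normalization, (C(i, x) − C(i, x_real)) / C(i, x_real), can be checked with a toy numpy example (the cost matrix here is invented):

```python
import numpy as np

# cost[i, j]: cost when neuron i is placed at candidate position j
cost = np.array([[1.2, 1.0, 1.5],
                 [3.0, 2.0, 2.2]])
cost_actual = np.array([1.0, 2.0])  # cost at each neuron's real position

# relative cost increment, as shown by the colorscale
matriz = (cost - cost_actual[:, None]) / cost_actual[:, None]
```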

Rescaling the colorbar, so that small values are highlighted:
>> caxis([0 .015])

17 Feb 09

I change the way neurons are sorted, so that the right-hand contour is smooth.

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,[.005],0);

The same thing, without the contour and with rescaled caxis:

matriz=coste2graficamolona(todas.A*alfa,todas.M*beta+todas.S,todas.f,pot,todas.pos_real,[.005],0);
>> caxis([0 .01])

Answer to first comment by Gonzalo:

potencias=[.1 .25 .5 .75 1 1.5 2 2.5 3 3.5 4 5 6 7 8 9 10];
alfa=1/29; beta=1/29;
for c_pot=1:length(potencias)
pos_cm(1:279,c_pot)=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,potencias(c_pot));
end
>> error=mean(abs(pos_cm-repmat(todas.pos_real,[1 length(potencias)])));
>> close all
>> plot(potencias,error,'.-')

I do it with the alfa and beta given by Bayes:

potencias=[.1 .25 .5 .75 1 1.5 2 2.5 3 3.5 4 5 6 7 8 9 10];
alfa=10^-1.1; beta=10^-.9;
for c_pot=1:length(potencias)
pos_cm(1:279,c_pot)=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,potencias(c_pot));
end
error=mean(abs(pos_cm-repmat(todas.pos_real,[1 length(potencias)])));
close all
plot(potencias,error,'.-')
xlabel('Cost exponent')
ylabel('Mean error')
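The exponent sweep amounts to re-optimizing each neuron's position under a cost ∝ Σ_j w_j |x − x_j|^p and measuring the deviation from the real position. A grid-search sketch in Python, with toy weights and positions rather than the worm data:

```python
import numpy as np

def best_position(grid, neighbours, weights, p):
    """Grid-search stand-in for coste2pos_restofijas: position x on the
    grid minimizing sum_j weights[j] * |x - neighbours[j]|**p."""
    cost = (weights[None, :]
            * np.abs(grid[:, None] - neighbours[None, :]) ** p).sum(axis=1)
    return grid[np.argmin(cost)]

grid = np.linspace(0.0, 1.0, 201)
neighbours = np.array([0.1, 0.4, 0.9])  # fixed positions of the rest
weights = np.array([1.0, 2.0, 1.0])     # connection strengths
pos_real = 0.45
potencias = [0.5, 1.0, 2.0, 4.0]
errors = [abs(best_position(grid, neighbours, weights, p) - pos_real)
          for p in potencias]
# p = 1 recovers the weighted median (0.4); p = 2 the weighted mean (0.45)
```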

CURRENT POINT: coste2graficamolona AND coste2pos_num_ruido ARE NOT USING THE SAME COST FOR NEURON 37. THERE MUST BE SOME MISTAKE THERE.
