Metric for different values of alfa and beta

18 Feb 09

>> m=mapametrica(todas.A,todas.M,todas.S,todas.f,todas.pos_real,10.^(-3:1:3),10.^(-3:1:3),2,1000);
>> imagesc(-3:3,-3:3,log10(m))
>> xlabel('log10(beta)')
>> ylabel('log10(alfa)')

19 Feb 09

Assuming quadratic cost and independent neurons, the increment in cost when interchanging the deviations of two neurons is

DeltaW = (omega1 - omega2)*(Deltax2^2 - Deltax1^2). This formula is used in efectoperms.m.
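As a sanity check, the swap formula can be verified numerically. A minimal sketch in Python (the notebook's own code is MATLAB; the omega and Deltax values here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-neuron weights (omega) and deviations from the
# optimal position (Deltax), for illustration only.
omega = rng.uniform(0.5, 2.0, size=5)
dx = rng.uniform(-1.0, 1.0, size=5)

def total_cost(omega, dx):
    # Quadratic cost with independent neurons: W = sum_i omega_i * Deltax_i^2
    return np.sum(omega * dx**2)

# Interchange the deviations of neurons 1 and 2 (indices 0 and 1).
dx_swapped = dx.copy()
dx_swapped[[0, 1]] = dx[[1, 0]]

dW_numeric = total_cost(omega, dx_swapped) - total_cost(omega, dx)
dW_formula = (omega[0] - omega[1]) * (dx[1]**2 - dx[0]**2)

# The direct cost difference matches the closed-form expression.
print(np.isclose(dW_numeric, dW_formula))
```

Only the two swapped neurons contribute to DeltaW, since the cost is a sum over independent neurons.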

For an ideal simple system, this works fine:

>> A=zeros(100);
>> B=rand(100,2);
>> f=[0 0];
>> pos=rand(100,1);
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(A,B,f,pos,2,[]);

When alfa is small, it also works great for elegans:

>> alfa=0.0001; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

Good agreement for the alfa and beta of the PNAS paper:

>> alfa=0.05; beta=1.5;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

For alfa=10, the agreement is fuzzier but still reasonable:

>> alfa=10; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

We try permutations of multiple neurons:

n_neur=[2 3 5 10 20 50];
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(A,B,f,pos,2,n_neur(c),1000,[]);
end

n_neur=[2 3 5 10 20 50];
alfa=.00001; beta=10;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

n_neur=[2 3 5 10 20 50];
alfa=.05; beta=1.5;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

For large alfa and a large number of permuted neurons, there is a shift:

n_neur=[2 3 5 10 20 50];
alfa=10; beta=10;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

With permutations as in the metric:

>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(A,B,f,pos,2,1000,[]);

>> alfa=.05; beta=1.5;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[]);

>> alfa=10; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[]);

Study of the effect neuron by neuron

>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_neuraneur(A,B,f,pos,2,1000,[]);

>> alfa=10; beta=10;
>> [m,Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_neuraneur(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,10000,[]);

Oh, AVAL and AVAR!

Oh, it's the opposite: AVAL and AVAR are not producing the shift; their effect actually works against the shift, as we see when removing them from the calculation:

>> [m,Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[],[54 55]);


Comparison of costs and probabilities for different exponents

>> alfa=10^-1.1; beta=10^-.9;
>> analizaBayes(todas.A,todas.M,todas.S,todas.f,todas.pos_real,[.5 1],alfa,beta,infoarchivos,[],[],prob)
Computing global optimum…Done.
Computing cost matrices…Done. 0.186333 seconds elapsed.
ind_lambda =
2
ind_mu =
8
ind_tipo =
4
ind_lambda =
12
ind_mu =
9
ind_tipo =
3
tipo =
2.2000 1.0000
2.1000 2.0000
lambda =
2.9126 0.8483
mu =
0.4771 0.0856
Mostrando neurona 1,??? Operation terminated by user during ==> analizaBayes at 138

Bayes and "Center-of-mass" calculations with neurons fixed in their actual positions vs fixed in their optimal positions

From yesterday: CURRENT POINT: coste2graficamolona AND coste2pos_num_ruido ARE NOT USING THE SAME COST FOR NEURON 37. THERE MUST BE SOME MISTAKE THERE.

This is because the optimizer does not really converge; in the end it oscillates between two states of similar cost. So when we compute the cost surface starting from one of these states, we get a surface that leads to the other.
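A toy illustration of this failure mode, assuming synchronous coordinate updates (a deliberately simplified stand-in for the real optimizer; the actual cost functions are not reproduced here): each coordinate jumps to its conditional optimum given the other's previous value, so the joint state flips forever between two configurations of identical cost.

```python
def cost(x, y):
    # Toy coupled cost: minimal whenever the two coordinates agree.
    return (x - y) ** 2

# Synchronous update: each coordinate moves to its conditional optimum
# given the OTHER coordinate's PREVIOUS value:
# argmin_x cost(x, y) = y   and   argmin_y cost(x, y) = x.
x, y = 0.0, 1.0
states = []
for _ in range(6):
    x, y = y, x
    states.append((x, y))

# The iteration never converges: it alternates between (1, 0) and (0, 1),
# two distinct states with exactly the same cost.
print(states)
```

Updating the coordinates sequentially instead of synchronously would break the cycle, which is one way to diagnose this kind of oscillation.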

Crap.