## Calibration of the metric for E. coli: Preliminary results

I use the growth model used in Figure 4.

fig_pap_coli_11(1,0)

(and I use its internal variables from now on)

New metric:

p=calibrametrica(model_growth,flujosexp_finales,1000,100,Delta);
K>> hist(p)

K>> hist(log10(p))

K>> sum(p<.05)
ans =
2

Looks good, but I will repeat it with more repetitions.
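The internals of calibrametrica are not shown here, but the idea behind this check — a well-calibrated p-value should be uniform under the null hypothesis, so about 5% of null runs should fall below 0.05 — can be sketched with a generic permutation test (Python/NumPy stand-in; the statistic, sample sizes, and repetition counts below are arbitrary, not the real metric):

```python
import numpy as np

rng = np.random.default_rng(0)

def perm_pvalue(x, y, n_perm=200):
    """Permutation p-value for a difference in means (two-sided)."""
    obs = abs(x.mean() - y.mean())
    pooled = np.concatenate([x, y])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        stat = abs(pooled[:x.size].mean() - pooled[x.size:].mean())
        count += stat >= obs
    # add-one estimator: never report exactly p = 0 from finite permutations
    return (count + 1) / (n_perm + 1)

# Calibration: both samples drawn from the same null distribution,
# so the resulting p-values should be roughly uniform on (0, 1)
p = np.array([perm_pvalue(rng.normal(size=30), rng.normal(size=30))
              for _ in range(500)])
print(np.mean(p < 0.05))  # should land near 0.05
```

The add-one estimator `(count+1)/(n_perm+1)` avoids reporting p = 0 from a finite number of permutations.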

Now I test the old metric:

p2=calibrametrica(model_growth,flujosexp_finales,1000,100,Delta,2);
K>> hist(p2)

K>> hist(log10(p2))

K>> sum(p2<.05)
ans =
0

K>> sum(p2<.1)

ans =
3

It seems the old metric was biased (too conservative): no p-value falls below 0.05. I will also repeat it with more repetitions.

## New Figure 4

Growth 0.1. New maintenance. Old data. New metric.

>> fig_pap_coli_11(1,1)

Growth 0.4:

>> fig_pap_coli_11(1,1)

## Figures 1, 2 and 3

This is the current form of these figures:

>> fig_pap_presentacion_03(1,[0 0 0])
>> fig_pap_elegans_13(2,zeros(100))
>> fig_pap_bayes_05(3,zeros(100))

## Metric for linear and 0.5 cases

If I try to compute more than 10^7 permutations, MATLAB runs out of memory. This can be solved by saving the results to files as they are computed. But for now, I post results for 10^7 permutations (run on remotón).
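The fix suggested above — compute the permutations in fixed-size chunks, optionally saving each chunk to disk, and keep only a running count in memory — can be sketched like this (Python stand-in; the cost function here is a hypothetical toy, not the real wiring-cost computation):

```python
import numpy as np

rng = np.random.default_rng(1)

def count_better_perms(cost_fn, pos, n_perm, chunk=10**5):
    """Count permutations whose cost beats the real placement, working in
    fixed-size chunks so memory stays bounded regardless of n_perm.
    Each finished chunk could also be persisted to disk."""
    real_cost = cost_fn(pos)
    count = 0
    done = 0
    while done < n_perm:
        n = min(chunk, n_perm - done)
        costs = np.array([cost_fn(rng.permutation(pos)) for _ in range(n)])
        # np.save(f"coste_perm_{done}.npy", costs)  # optional persistence
        count += int((costs < real_cost).sum())
        done += n
    return count

# Toy cost: squared distance to a fixed target ordering (hypothetical)
target = np.arange(20) / 20.0
cost = lambda q: float(((q - target) ** 2).sum())
m = count_better_perms(cost, target, 2000, chunk=500)
print(m)  # 0: no permutation beats the zero-cost real placement
```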

Calculations for the alfa and beta of our PNAS paper, exponent 1:

pos_opt=coste2pos_num_ruido(todas.A*.05,todas.M*1.5+todas.S,todas.f,1,0);
[m,coste_real,coste_perm]=metrica_pot(todas.A*.05,todas.M*1.5+todas.S,todas.f,todas.pos_real,1,10^7,pos_opt,4);
hist(coste_perm,100)
>> hold on
>> ejes=axis;
>> plot(coste_real*[1 1],ejes(3:4),'k')
>> save metrica_alfabetapnas_lineal m coste_perm coste_real

m is 0: none of the 10^7 permutations has lower cost than the real configuration. So p < 10^-7.

Calculations for alfa and beta predicted by Bayes, and exponent 0.5:

>> [m,coste_real,coste_perm]=metrica_pot(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,todas.pos_real,0.5,10^7,datos.pos_opt,4);
>> cd ..
>> cd metricaelegans
>> save metrica_alfabetabayes_cerocinco m coste_perm coste_real
>> m
m =
0
>> close all
>> hist(coste_perm,100)
>> hold on
>> ejes=axis;
>> plot(coste_real*[1 1],ejes(3:4),'k')

## Comparison of deviations for the cases quadratic and 0.5

It seems that the most deviated neurons are the same in both cases:

>> alfa=0.05; beta=1.5;
>> alfa=10^-1.1; beta=10^-.9; pot=.5;
>> [pos_cm,omega_general]=coste2pos_restofijas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,pot,.2);
>> desv_cerocinco=abs(pos_cm-todas.pos_real);
ans =
0.0879
>> mean(desv_cerocinco)
ans =
0.0911
>> figure

Some neurons are quite deviated in one case but not in the other. However, these neurons seem to have shallow costs in both cases, as we see when we use the predictions of the quadratic case for the deviations of the 0.5 case (note that beta is very different in the two cases):

>> alfa=0.05; beta=1.5;
>> omega=sum([todas.A*alfa todas.M*beta+todas.S],2);
>> figure
>> plot(desv_cerocinco,omega,'.')

## Metric for different values of alfa and beta

18 Feb 09

m=mapametrica(todas.A,todas.M,todas.S,todas.f,todas.pos_real,10.^(-3:1:3),10.^(-3:1:3),2,1000);
>> imagesc(-3:3,-3:3,log10(m))
>> xlabel('log10(beta)')
>> ylabel('log10(alfa)')
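The grid scan inside mapametrica is not shown, but the pattern — evaluate the metric at every (alfa, beta) pair of a log-spaced grid and image log10 of the result — can be sketched as follows (Python stand-in; metric_fn is an arbitrary placeholder, not the real p-value computation):

```python
import numpy as np

def metric_map(metric_fn, alfas, betas):
    """Evaluate a scalar metric at every (alfa, beta) grid point;
    rows index alfa, columns index beta, matching imagesc(x, y, C)."""
    m = np.empty((len(alfas), len(betas)))
    for i, a in enumerate(alfas):
        for j, b in enumerate(betas):
            m[i, j] = metric_fn(a, b)
    return m

alfas = 10.0 ** np.arange(-3, 4)   # 10^-3 ... 10^3
betas = 10.0 ** np.arange(-3, 4)
# hypothetical stand-in for the real p-value metric
m = metric_map(lambda a, b: 1.0 / (1.0 + a * b), alfas, betas)
print(m.shape)  # (7, 7)
```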

19 Feb 09

Assuming quadratic cost and independent neurons, the increment of cost when interchanging the deviations of two neurons is

DeltaW = (omega1 - omega2)*(Deltax2^2 - Deltax1^2). This is used in efectoperms.m.
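The formula follows because, under quadratic cost with independent neurons, each neuron contributes omega_i*Deltax_i^2 to the total, so swapping the deviations of neurons 1 and 2 changes the cost by exactly this DeltaW. A quick numeric check (Python/NumPy, arbitrary values):

```python
import numpy as np

rng = np.random.default_rng(2)
omega = rng.uniform(0.1, 5.0, size=2)   # per-neuron cost weights
dx = rng.uniform(-1.0, 1.0, size=2)     # per-neuron deviations

def total_cost(dev):
    # quadratic cost, neurons independent: sum_i omega_i * dev_i^2
    return float((omega * dev ** 2).sum())

delta_direct = total_cost(dx[::-1]) - total_cost(dx)   # swap the two deviations
delta_formula = (omega[0] - omega[1]) * (dx[1] ** 2 - dx[0] ** 2)
print(np.isclose(delta_direct, delta_formula))  # True
```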

For an ideal simple system, this works fine:

>> A=zeros(100);
>> B=rand(100,2);
>> f=[0 0];
>> pos=rand(100,1);
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(A,B,f,pos,2,[]);

When alfa is small, it also works well for elegans:

>> alfa=0.0001; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

Good agreement for the alfa and beta of the PNAS paper:

>> alfa=0.05; beta=1.5;
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

For alfa=10, the agreement is fuzzier but still reasonable:

>> alfa=10; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,[]);

We try permutations of multiple neurons:

n_neur=[2 3 5 10 20 50];
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(A,B,f,pos,2,n_neur(c),1000,[]);
end

n_neur=[2 3 5 10 20 50];
alfa=.00001; beta=10;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

n_neur=[2 3 5 10 20 50];
alfa=.05; beta=1.5;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

For large alfa and permutations involving many neurons, there is a shift:

n_neur=[2 3 5 10 20 50];
alfa=10; beta=10;
for c=1:6
subplot(2,3,c)
[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_multiples(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,n_neur(c),1000,[]);
end

With permutations as in the metric:

>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(A,B,f,pos,2,1000,[]);

>> alfa=.05; beta=1.5;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[]);

>> alfa=10; beta=10;
>> [Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[]);

Study of the effect neuron by neuron:

[Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_neuraneur(A,B,f,pos,2,1000,[]);

alfa=10; beta=10;
>> [m,Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_neuraneur(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,10000,[]);

Oh, AVAL and AVAR!

Oh, it's the opposite: AVAL and AVAR are not producing the shift; their effect actually works against the shift, as we see when removing them from the calculation:

>> [m,Deltacostes_perm,Deltacostes_perm_teor]=efectoperms_completas(todas.A*alfa,todas.M*beta+todas.S,todas.f,todas.pos_real,2,1000,[],[54 55]);