Class on stochastic optimization

Figures for parabolic costs in 1D:

>> x=-10:.1:10;
>> plot(x,x.^2)
>> close all
>> plot(x,x.^2)
>> plot(x,x.^2+randn(1,length(x)))
>> plot(x,x.^2+randn(1,length(x))*10)
>> hold on
>> plot(x,x.^2,'r','LineWidth',3)
>> set(gca,'FontSize',15)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> figure
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> axis([-10 10 0 120])
>> axis([-10 10 0 120])
>> axis auto
>> axis([-10 10 -20 120])
>> hold on
>> plot(x,5*x.^2,'r','LineWidth',3)
>> set(gca,'FontSize',15)
>> xlabel('x_2','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)


Figures for the cost of two different neurons:

x=0:.01:1;
>> plot(x,2*x.^2,'b','LineWidth',2)
>> set(gca,'FontSize',15,'Box','off','LineWidth',2)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> plot(x,2*x.^2,'b','LineWidth',3)
>> set(gca,'FontSize',15,'Box','off','LineWidth',3)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)



Refinement of the metric for the linear and 0.5 cases

See post on 25 Feb 09 for the calculations of permutations.

Linear case:

load metrica_alfabetapnas_lineal
>> m=aproximametrica(coste_real,coste_perm,1)
m =
6.7742e-012

 

Exponent 0.5:

>> load metrica_alfabetabayes_cerocinco
>> m=aproximametrica(coste_real,coste_perm,1)
m =
8.6886e-025

Calibration of Bayes

fig_pap_calibracionbayesglobal_trocitos_01(1,0)

It works fine for most cases (except simulation 3); I will repeat that simulation with better binning. In any case, simulations with xi=1 always seem to work well, so we can still be sure that xi<1.

Test of isocost lines for the cumulative distribution in elegans

>> fig_pap_controles_wvsdesv_elegans_01(1,0)

Figure of the metabolic network

Only the core metabolism is shown (no oxygen). Colors are used only for reactions that involve ATP and for metabolites that contribute to biomass.

New calibration of Bayes: Exponent 0.5

Calculations run on remoton.

I explore different values of the noise:

ruidos2=[.01 .05 .1 .5 1 5];
matlabpool open local 4
parfor (c=1:length(ruidos2))
[pos{c},coste_hist{c},error_hist{c}]=simulaelegans_saltopeque_ruidonorm(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,.5,.001,ruidos2(c),5*10^6,0);
end
pos_opt=NaN(279,100)

parfor (c=1:100)

pos_opt(:,c)=coste2pos_num_ruido(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,.5,0);

end
for c=1:100

costes(c)=pos2coste(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,pos_opt(:,c),.5);

end
[m,ind]=min(costes);
for c=1:length(pos)

errores(c)=mean(abs(pos{c}-pos_reopt));

end

I would say that below 0.5 the system gets stuck in local minima, and above 0.5 the standard deviation increases due to the noise. I take the point with noise 0.5, which has a conveniently low mean error.
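For reference, pos_reopt in the loop above is presumably the best re-optimized position, pos_opt(:,ind); that assignment does not appear in the transcript. A minimal sketch of how I would visualize the choice of noise level (hypothetical plotting code; errores(c) above corresponds to noise ruidos2(c)):

% Mean error of each noisy run against the re-optimized reference, vs noise level
semilogx(ruidos2,errores,'o-')
xlabel('Noise','FontSize',15)
ylabel('Mean error','FontSize',15)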

I run Bayes:

infoarchivos=Bayes_alfabeta(todas.A,todas.M,todas.S,todas.f,pos{4},0:.02:1,0:.25:4,10.^(-2:.3:1),10.^(-2:.3:1),10.^(-2:6/6:4),10.^(-11:26/6:15),[],[2 2],[10 10],'Calibracionueva_predichoporbayes',[],4,1);

On my desktop in the lab:
load('c:\hipertec\optimesto\bayes\resultados\info_Calibracionueva_predichoporbayes.mat')
prob=infoarchivos2prob(infoarchivos,[1 2 3]);
>> plot(infoarchivos.pot,sum(sum(prob,2),3),'r')
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> close all
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> hold on
>> plot(-.9,-1.1,'w.')

This second figure looked different when plotted on remotón: there was a long diagonal towards the bottom-right corner. But the maximum was in the same place, and the right value was inside the high-probability area.

Anyway, it works fine. I now run it with a binning that includes the exact values of alfa and beta…

>> load('c:\hipertec\optimesto\bayes\resultados\info_Calibracionueva_predichoporbayes_binbuenos.mat')
>> prob=infoarchivos2prob(infoarchivos,[1 2 3]);
>> close all
>> plot(infoarchivos.pot,sum(sum(prob,2),3),'r')
>> figure
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> hold on
>> plot(-.9,-1.1,'w.')

Good enough. If we have time after running controls for other values of alpha, beta and the exponent, we should run a few more controls in this situation to see the dispersion. Also, the lower sd observed in the real data might be because local minima give rise to very specific configurations. We might try with lower noise and see what happens.
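A sketch of that lower-noise follow-up (not run), reusing the same call as above; the noise values and output names (ruidos_low, pos_low, etc.) are hypothetical:

% Hypothetical lower-noise runs: same parameters as above, smaller noise levels
ruidos_low=[.05 .1 .25];
parfor (c=1:length(ruidos_low))
[pos_low{c},coste_hist_low{c},error_hist_low{c}]=simulaelegans_saltopeque_ruidonorm(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,.5,.001,ruidos_low(c),5*10^6,0);
end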

Supplementary Figure about Bayes

fig_pap_bayes_suppl_01(2,[0 0])

>> fig_pap_bayes_suppl_01(2,[0 0])

Corrections to the labels of box c:

>> fig_pap_bayes_suppl_01(2,[0 0])

New Figure 3 (Bayes)

>> fig_pap_bayes_06(1,[0 0])

expectedvalue_pot =
0.4883
sd_pot =
0.0666
expectedvalue_logalfa =
-1.1050
sd_logalfa =
0.1020
expectedvalue_logbeta =
-0.9361
sd_logbeta =
0.1333

New:

Expected value and sd of the exponent for the alfa and beta values from PNAS

Data come from the calculations in the post “Merging Bayes results from coarse and detailed calculations”.

 

>> [m,ind_alfa]=min(abs(log10(.05)-logalfa_merge))
m =
0.0010
ind_alfa =
18
>> [m,ind_beta]=min(abs(log10(1.5)-logbeta_merge))
m =
0.0761
ind_beta =
16
>> close all
>> plot(pot_merge,prob_merge(:,ind_alfa,ind_beta))
>> figure
>> plot(pot_merge,log10(prob_merge(:,ind_alfa,ind_beta)))


>> prob_linea=prob_merge(:,ind_alfa,ind_beta)/sum(prob_merge(:,ind_alfa,ind_beta));
>> expected=sum(prob_linea'.*pot_merge)
expected =
1.3382
>> sd=sqrt(sum(prob_linea'.*(pot_merge-expected).^2))
sd =
0.0866

Merging Bayes results from coarse and detailed calculations

>> load info_posreal_zonapeque
infoarchivos_zonapeque=infoarchivos;
prob_zonapeque=infoarchivos2prob(infoarchivos_zonapeque,1:3);
load info_posreal_bingrueso
infoarchivos_bingrueso=infoarchivos;
prob_bingrueso=infoarchivos2prob(infoarchivos_bingrueso,1:3);
>> imagesc(log10(infoarchivos_bingrueso.beta),log10(infoarchivos_bingrueso.alfa),log10(squeeze(sum(prob_bingrueso)))); set(gca,'YDir','normal')
>> hold on
>> imagesc(log10(infoarchivos_bingrueso.beta),log10(infoarchivos_bingrueso.alfa),log10(squeeze(sum(prob_bingrueso)))); set(gca,'YDir','normal')
>> imagesc(log10(infoarchivos_zonapeque.beta),log10(infoarchivos_zonapeque.alfa),log10(squeeze(sum(prob_zonapeque)))); set(gca,'YDir','normal')
figure
>> plot(infoarchivos_bingrueso.pot,log10(sum(sum(prob_bingrueso,2),3)))
>> hold on
>> plot(infoarchivos_zonapeque.pot,log10(sum(sum(prob_zonapeque,2),3)),'r')

 

Interpolate the bingrueso results onto a tighter binning:

>> logalfa_merge=log10(infoarchivos_bingrueso.alfa(1)):diff(log10(infoarchivos_zonapeque.alfa(1:2))):log10(infoarchivos_bingrueso.alfa(end));
>> logbeta_merge=log10(infoarchivos_bingrueso.beta(1)):diff(log10(infoarchivos_zonapeque.beta(1:2))):log10(infoarchivos_bingrueso.beta(end));
>> pot_merge=infoarchivos_bingrueso.pot(1):diff(infoarchivos_zonapeque.pot(1:2)):infoarchivos_bingrueso.pot(end);
>> [x,y,z]=meshgrid(log10(infoarchivos_bingrueso.alfa),infoarchivos_bingrueso.pot,log10(infoarchivos_bingrueso.beta));
>> [xi,yi,zi]=meshgrid(logalfa_merge,pot_merge,logbeta_merge);
>> prob_bingrueso_interp=interp3(x,y,z,prob_bingrueso,xi,yi,zi,'linear');
>> plot(pot_merge,log10(sum(sum(prob_bingrueso_interp,2),3)),'g')
>> figure
>> imagesc(logbeta_merge,logalfa_merge,log10(squeeze(sum(prob_bingrueso_interp)))); set(gca,'YDir','normal')

Not very nice… But everything is working OK: the interpolation works in linear scale, while we are displaying logarithms.
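A possible alternative (not what I did above) would be to interpolate the logarithm of the probability, so that the log-scale maps look smooth. A minimal sketch, reusing the grids defined above and assuming prob_bingrueso has no exact zeros; the names logprob_interp and prob_bingrueso_interp2 are hypothetical:

% Hypothetical alternative: interpolate log10(prob), then go back to linear scale
logprob_interp=interp3(x,y,z,log10(prob_bingrueso),xi,yi,zi,'linear');
prob_bingrueso_interp2=10.^logprob_interp; % back to linear scale before marginalizing
figure
imagesc(logbeta_merge,logalfa_merge,log10(squeeze(sum(prob_bingrueso_interp2)))); set(gca,'YDir','normal')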

Re-normalization: I assign to the zonapeque dataset the probability that the bingrueso dataset assigns to the same region.

The whole process:

% Generate common bins
logalfa_merge=log10(infoarchivos_bingrueso.alfa(1)):diff(log10(infoarchivos_zonapeque.alfa(1:2))):log10(infoarchivos_bingrueso.alfa(end));
logbeta_merge=log10(infoarchivos_bingrueso.beta(1))+.1:diff(log10(infoarchivos_zonapeque.beta(1:2))):log10(infoarchivos_bingrueso.beta(end));
pot_merge=infoarchivos_bingrueso.pot(1):diff(infoarchivos_zonapeque.pot(1:2)):infoarchivos_bingrueso.pot(end);
logalfa_merge=round(logalfa_merge*10^10)/10^10; % Prevent numerical errors of the order 10^-15
logbeta_merge=round(logbeta_merge*10^10)/10^10;
pot_merge=round(pot_merge*10^10)/10^10;
% Find places where zonapeque dataset inserts in bingrueso dataset
insercion_alfa=find(logalfa_merge==log10(infoarchivos_zonapeque.alfa(1))):find(logalfa_merge==log10(infoarchivos_zonapeque.alfa(end)));
insercion_beta=find(logbeta_merge==log10(infoarchivos_zonapeque.beta(1))):find(logbeta_merge==log10(infoarchivos_zonapeque.beta(end)));
insercion_pot=find(pot_merge==infoarchivos_zonapeque.pot(1)):find(pot_merge==infoarchivos_zonapeque.pot(end));
% Interpolate bingrueso dataset
[x,y,z]=meshgrid(log10(infoarchivos_bingrueso.alfa),infoarchivos_bingrueso.pot,log10(infoarchivos_bingrueso.beta));
[xi,yi,zi]=meshgrid(logalfa_merge,pot_merge,logbeta_merge);
prob_merge=interp3(x,y,z,prob_bingrueso,xi,yi,zi,'linear');
% Renormalize both datasets
prob_merge=prob_merge/sum(prob_merge(:));
prob_zonapeque=prob_zonapeque/sum(prob_zonapeque(:)) * sum(sum(sum(prob_merge(insercion_pot,insercion_alfa,insercion_beta))));
prob_merge(insercion_pot,insercion_alfa,insercion_beta)=prob_zonapeque;
>> figure
>> imagesc(logbeta_merge,logalfa_merge,log10(squeeze(sum(prob_merge)))); set(gca,'YDir','normal')
plot(pot_merge,log10(sum(sum(prob_merge,2),3)),'m.-')

The results are not very nice, because the probabilities are actually different: in the regions of low probability, it matters whether or not we take exponent 3 into account.
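A sanity check I could run (hypothetical; the names prob_grueso_interp, marg_fino and marg_grueso are mine): compare, over the overlapping region only, the pot marginal of the zonapeque probabilities with that of the interpolated bingrueso probabilities, to see how much the low-probability region around exponent 3 changes the picture.

% Hypothetical check: re-interpolate the bingrueso probabilities (the overlap region
% of prob_merge was overwritten above) and compare pot marginals over the overlap
prob_grueso_interp=interp3(x,y,z,prob_bingrueso,xi,yi,zi,'linear');
marg_fino=squeeze(sum(sum(prob_zonapeque,2),3));
marg_grueso=squeeze(sum(sum(prob_grueso_interp(insercion_pot,insercion_alfa,insercion_beta),2),3));
figure
plot(pot_merge(insercion_pot),log10(marg_fino/sum(marg_fino)),'r')
hold on
plot(pot_merge(insercion_pot),log10(marg_grueso/sum(marg_grueso)),'g')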