Class on stochastic optimization

Figures for parabolic costs in 1D:

>> x=-10:.1:10;
>> plot(x,x.^2)
>> close all
>> plot(x,x.^2)
>> plot(x,x.^2+randn(1,length(x)))
>> plot(x,x.^2+randn(1,length(x))*10)
>> hold on
>> plot(x,x.^2,'r','LineWidth',3)
>> set(gca,'FontSize',15)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> figure
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> axis([-10 10 0 120])
>> axis auto
>> axis([-10 10 -20 120])
>> hold on
>> plot(x,5*x.^2,'r','LineWidth',3)
>> set(gca,'FontSize',15)
>> xlabel('x_2','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
>> plot(x,5*x.^2+randn(1,length(x))*10)
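For reference, the construction in the session above — the true parabolic cost with additive Gaussian observation noise of constant standard deviation — can be sketched in Python/NumPy (hypothetical equivalent, not part of the original session):

```python
import numpy as np

# NumPy sketch mirroring the MATLAB session: the true parabolic cost x^2
# plus additive Gaussian noise of sd 10, the same at every x.
x = np.linspace(-10, 10, 201)                  # same grid as -10:.1:10
true_cost = x**2
rng = np.random.default_rng(0)
noisy_cost = true_cost + 10 * rng.standard_normal(x.size)

# Because the noise level is constant, near the minimum the noise dominates
# the signal: the residual sd is ~10 regardless of x.
residual_sd = (noisy_cost - true_cost).std()
```

The second figure (5*x.^2) just steepens the parabola while keeping the same noise, which is why its minimum is easier to locate by eye.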


Figures for the cost of two different neurons:

>> x=0:.01:1;
>> plot(x,2*x.^2,'b','LineWidth',2)
>> set(gca,'FontSize',15,'Box','off','LineWidth',2)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)
>> plot(x,2*x.^2,'b','LineWidth',3)
>> set(gca,'FontSize',15,'Box','off','LineWidth',3)
>> xlabel('x_1','FontSize',15)
>> ylabel('Cost','FontSize',15)



Refinement of the metric for the linear and 0.5 cases

See post on 25 Feb 09 for the calculations of permutations.

Linear case:

>> load metrica_alfabetapnas_lineal
>> m=aproximametrica(coste_real,coste_perm,1)
m =
6.7742e-012

Exponent 0.5:

>> load metrica_alfabetabayes_cerocinco
>> m=aproximametrica(coste_real,coste_perm,1)
m =
8.6886e-025

Calibration of Bayes

fig_pap_calibracionbayesglobal_trocitos_01(1,0)

It works fine in most cases (except simulation 3; I will repeat that one with better binning). In any case, simulations with xi=1 always seem to work well, so we can still be sure that xi<1.

Test of isocost lines for the cumulative distribution in elegans

>> fig_pap_controles_wvsdesv_elegans_01(1,0)

Figure of the metabolic network

Only the core metabolism (no oxygen). Colors only for reactions that involve ATP and for metabolites that contribute to biomass.

New calibration of Bayes: Exponent 0.5

Calculations made in remoton

I explore different values of the noise:

ruidos2=[.01 .05 .1 .5 1 5];
matlabpool open local 4
parfor (c=1:length(ruidos2))
[pos{c},coste_hist{c},error_hist{c}]=simulaelegans_saltopeque_ruidonorm(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,.5,.001,ruidos2(c),5*10^6,0);
end
pos_opt=NaN(279,100);
parfor (c=1:100)
pos_opt(:,c)=coste2pos_num_ruido(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,.5,0);
end
for c=1:100
costes(c)=pos2coste(todas.A*10^-1.1,todas.M*10^-.9+todas.S,todas.f,pos_opt(:,c),.5);
end
[m,ind]=min(costes);
for c=1:length(pos)
errores(c)=mean(abs(pos{c}-pos_reopt));
end
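The pattern above is plain multi-start optimization: re-optimize from many starting points, then keep the lowest-cost solution. A hedged Python sketch of the same logic (the real code uses coste2pos_num_ruido / pos2coste on the metabolic network; here a made-up 1-D double-well cost stands in):

```python
import numpy as np

def cost(x):
    # tilted double well: local minimum near x = +0.96,
    # deeper global minimum near x = -1.03
    return (x**2 - 1)**2 + 0.3 * x

def grad(x):
    # derivative of the cost above
    return 4 * x * (x**2 - 1) + 0.3

def descend(x0, lr=0.01, steps=2000):
    # plain gradient descent from one starting point
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

rng = np.random.default_rng(0)
starts = rng.uniform(-2, 2, size=100)      # 100 independent restarts
finals = np.array([descend(x0) for x0 in starts])
costs = cost(finals)
best = finals[np.argmin(costs)]            # keep the lowest cost, as in [m,ind]=min(costes)
```

Each restart only finds the minimum of the basin it started in, so the min over restarts is what approximates the global optimum.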

I would say that below noise 0.5 the system gets stuck in local minima, and above 0.5 the standard deviation increases due to the noise. I take the point with noise 0.5, which has a conveniently low mean error.
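This trade-off can be illustrated with a hedged sketch (not the paper's code): gradient descent plus Gaussian kicks on the same kind of double-well cost, f(x) = (x^2 - 1)^2 + 0.3 x, which has a local minimum near x = +0.96 and a deeper global minimum near x = -1.03.

```python
import numpy as np

def grad(x):
    # derivative of f(x) = (x^2 - 1)^2 + 0.3 x
    return 4 * x * (x**2 - 1) + 0.3

def noisy_descent(noise, x0=1.0, lr=0.01, steps=5000, seed=0):
    # gradient descent with additive Gaussian noise of the given amplitude
    rng = np.random.default_rng(seed)
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x) + np.sqrt(lr) * noise * rng.standard_normal()
    return x

x_quiet = noisy_descent(noise=0.0)   # stays in the basin it started in (local min near +0.96)
x_noisy = noisy_descent(noise=0.7)   # moderate noise can hop the barrier toward the global min
```

With zero noise the walker never leaves the local basin; with moderate noise it can cross the barrier; with large noise the final positions just spread out, which matches the increase in standard deviation above 0.5.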

I run Bayes:

infoarchivos=Bayes_alfabeta(todas.A,todas.M,todas.S,todas.f,pos{4},0:.02:1,0:.25:4,10.^(-2:.3:1),10.^(-2:.3:1),10.^(-2:6/6:4),10.^(-11:26/6:15),[],[2 2],[10 10],'Calibracionueva_predichoporbayes',[],4,1);

On my desktop in the lab:
>> load('c:\hipertec\optimesto\bayes\resultados\info_Calibracionueva_predichoporbayes.mat')
>> prob=infoarchivos2prob(infoarchivos,[1 2 3]);
>> plot(infoarchivos.pot,sum(sum(prob,2),3),'r')
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> close all
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> hold on
>> plot(-.9,-1.1,'w.')

This second figure looked different when plotted on remotón: there was a long diagonal towards the bottom-right corner. But the maximum was in the same place, and the right value fell inside the high-probability area.

Anyway, it works fine. I rerun it with a binning that includes the exact values of alfa and beta:

>> load('c:\hipertec\optimesto\bayes\resultados\info_Calibracionueva_predichoporbayes_binbuenos.mat')
>> prob=infoarchivos2prob(infoarchivos,[1 2 3]);
>> close all
>> plot(infoarchivos.pot,sum(sum(prob,2),3),'r')
>> figure
>> imagesc(log10(infoarchivos.beta),log10(infoarchivos.alfa),squeeze(sum(prob)))
>> hold on
>> plot(-.9,-1.1,'w.')
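The marginalisation behind these plots — summing the joint posterior grid over the axes not shown — can be sketched in Python (hypothetical grid shapes, mirroring the MATLAB variable names; the toy posterior is peaked at the "true" values alfa=10^-1.1, beta=10^-0.9):

```python
import numpy as np

pot  = np.linspace(0, 1, 51)       # exponent grid, as in 0:.02:1
alfa = np.logspace(-2, 1, 11)      # as in 10.^(-2:.3:1)
beta = np.logspace(-2, 1, 11)

# toy joint posterior over (pot, alfa, beta), peaked at the true values
la, lb = np.log10(alfa), np.log10(beta)
joint = np.exp(-((la[None, :, None] + 1.1)**2
                 + (lb[None, None, :] + 0.9)**2
                 + (pot[:, None, None] - 0.5)**2))
joint /= joint.sum()               # normalise to a probability grid

marg_pot = joint.sum(axis=(1, 2))  # like sum(sum(prob,2),3) in MATLAB
marg_ab  = joint.sum(axis=0)       # like squeeze(sum(prob)): rows=alfa, cols=beta
```

marg_ab is what imagesc displays, with the white dot marking the true (log10 beta, log10 alfa) point.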

Good enough. If we have time after running controls for other values of alpha, beta and the exponent, we should run a few more controls in this situation to see the dispersion. Also, the lower sd observed in the real data might be because local minima give rise to very specific configurations. We might try lower noise and see what happens.

Supplementary Figure about Bayes

>> fig_pap_bayes_suppl_01(2,[0 0])

Corrections in the labels of box c:

>> fig_pap_bayes_suppl_01(2,[0 0])