Table 2 Information metrics of environmental complexity.

From: Guided evolution of in silico microbial populations in complex environments accelerates evolutionary rates through a step-wise adaptation

| Metric | Adjusted $R^2$ |
| --- | --- |
| $\mathrm{MI}(S_1; S_2; N_{\text{shifted}})$ | 0.984 |
| $\mathrm{MI}(S_1; S_2 \mid N_{\text{shifted}})$ | 0.981 |
| $H(S_1, S_2, N_{\text{shifted}})$ | 0.028 |
| $\mathrm{MI}(S_1; S_2; N)$ | -0.249 |
| $[\mathrm{Corr.}(S_1, N_{\text{shifted}}) + \mathrm{Corr.}(S_2, N_{\text{shifted}})]/2$ | -0.332 |
| $[\text{K-L Div.}(S_1 \| N_{\text{shifted}}) + \text{K-L Div.}(S_2 \| N_{\text{shifted}})]/2$ | -0.106 |

  1. We evaluated the following measures on their correlation with the rate of evolution measured through experimental runs. From top to bottom: multivariate mutual information (slope $-0.169 \pm 0.011$, intercept $0.302 \pm 0.031$), multivariate conditional mutual information (slope $0.181 \pm 0.012$, intercept $-0.12 \pm 0.037$), joint entropy, mutual information of time-shifted environments, average pair-wise Pearson correlation, and average Kullback–Leibler divergence. Information-theoretic metrics were calculated for probability distributions (using 64 bins) of the possible values of two signals $S_1$ and $S_2$ and a nutrient signal. The nutrient signal was taken either as a time-delayed function $N$ of the input signals (see Figure 3), or as the nutrient signal shifted by $-500$ time steps, $N_{\text{shifted}}$ (i.e., with the time delay relative to the input signals eliminated). Information-theoretic metrics were calculated as $\mathrm{MI}(S_1; S_2; N) = H(S_1) + H(S_2) + H(N) - H(S_1, S_2) - H(S_1, N) - H(S_2, N) + H(S_1, S_2, N)$ and $\mathrm{MI}(S_1; S_2 \mid N) = \mathrm{MI}(S_1; S_2) - \mathrm{MI}(S_1; S_2; N)$. Joint entropy is defined as $H(X_1, \ldots, X_n) = -\sum_{x_1 \in X_1} \cdots \sum_{x_n \in X_n} p(x_1, \ldots, x_n) \log p(x_1, \ldots, x_n)$ and mutual information is defined as $\mathrm{MI}(S_1; S_2) = \sum_{s_1 \in S_1} \sum_{s_2 \in S_2} p(s_1, s_2) \log \frac{p(s_1, s_2)}{p_1(s_1)\, p_2(s_2)}$, where $p(x_1, \ldots, x_n)$ and $p_i(x)$ are the joint and marginal probability distribution functions, respectively. The Kullback–Leibler divergence is calculated as $\text{K-L Div.}(S \,\|\, N) = \sum_i p(S_i) \log \frac{p(S_i)}{p(N_i)}$ for probability distributions $S$ and $N$.
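The paper does not provide code for these estimators; the following is a minimal NumPy sketch of how the binned estimates described above might be computed. Function names, the logarithm base (base 2), and the handling of empty bins in the K-L divergence are our assumptions, not details from the paper.

```python
import numpy as np

def entropy(*signals, bins=64):
    """Joint Shannon entropy H(X1, ..., Xn) from a binned histogram (base-2 log assumed)."""
    hist, _ = np.histogramdd(np.column_stack(signals), bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]  # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

def multivariate_mi(s1, s2, n, bins=64):
    """MI(S1; S2; N) via the inclusion-exclusion expansion over joint entropies."""
    return (entropy(s1, bins=bins) + entropy(s2, bins=bins) + entropy(n, bins=bins)
            - entropy(s1, s2, bins=bins) - entropy(s1, n, bins=bins)
            - entropy(s2, n, bins=bins) + entropy(s1, s2, n, bins=bins))

def conditional_mi(s1, s2, n, bins=64):
    """MI(S1; S2 | N) = MI(S1; S2) - MI(S1; S2; N)."""
    mi_s1s2 = entropy(s1, bins=bins) + entropy(s2, bins=bins) - entropy(s1, s2, bins=bins)
    return mi_s1s2 - multivariate_mi(s1, s2, n, bins=bins)

def kl_divergence(s, n, bins=64):
    """K-L Div.(S || N) between the binned marginals of two signals over a shared range."""
    lo, hi = min(s.min(), n.min()), max(s.max(), n.max())
    p, _ = np.histogram(s, bins=bins, range=(lo, hi))
    q, _ = np.histogram(n, bins=bins, range=(lo, hi))
    p = p / p.sum()
    q = q / q.sum()
    # Strict K-L is infinite if q = 0 where p > 0; dropping such bins is our assumption.
    mask = (p > 0) & (q > 0)
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))
```

Note that binned entropy estimates carry a positive finite-sample bias that grows with the bin count, so the 64-bin values in the table are estimates whose bias largely cancels in the MI differences.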