The influence of different types of translational inaccuracies on the genetic code structure

Background: The standard genetic code (SGC) is a recipe for unambiguously assigning 21 labels, i.e. 20 amino acids and the stop translation signal, to 64 codons. However, at early stages of the development of the translational machinery, codons did not have to be read unambiguously, and the early genetic codes could have contained some ambiguous assignments of codons to amino acids. Therefore, the goal of this work was to obtain the genetic code structures which could have evolved, assuming different types of inaccuracy of the translational machinery, starting from ambiguous assignments of codons to amino acids.

Results: We developed a theoretical model assuming that the level of uncertainty of codon assignments can gradually decrease during the simulations. Since it is postulated that the standard code has evolved to be robust against point mutations and mistranslations, we developed three simulation scenarios assuming that such errors can influence one, two or three codon positions. The simulated codes were selected using the evolutionary algorithm methodology to decrease coding ambiguity and increase their robustness against mistranslation.

Conclusions: The results indicate that the typical codon block structure of the genetic code could have evolved to decrease the ambiguity of amino acid to codon assignments and to increase the fidelity of reading the genetic information. However, the robustness to errors was not the decisive factor that influenced the genetic code evolution because it is possible to find theoretical codes that minimize the reading errors better than the standard genetic code.

formation of the SGC assumes that peptides and RNAs coevolved and were mutual stimulators for the whole system [27]. In this model, a big role was played by tRNA dimers, which directed the initial protein synthesis and showed peptidyl-transferase activity in the creation of peptide bonds.
The models assuming a gradual addition of amino acids to the code postulated that this incorporation was: (i) associated with the minimization of disturbance in already synthesized proteins [28], (ii) favoured to promote the diversity of amino acids in proteins [5,8,28,29], (iii) initially driven by the catalytic propensity of amino acids functioning in ribozymes [30], (iv) carried out according to biosynthetic pathways [31][32][33][34][35][36][37][38][39][40], or (v) a consequence of duplications of genes coding for tRNAs and aminoacyl-tRNA synthetases (aaRS) [6,8,[41][42][43][44][45][46][47]. The latter proposition, however, was recently criticized in favour of the coevolution theory assuming that the structure of the genetic code was determined by biosynthetic relationships between amino acids [48], although other authors believe that there was a coevolution between the aaRS and the anticodon code as well as an operational code [49]. Thus, the coevolution theory does not necessarily discard the proposition that aaRS and tRNAs played a major role in the formation of the SGC [39].
Considering many factors together, the evolution of the code was probably a combination of adaptation and frozen accident, although contributions of metabolic pathways and weak affinities between amino acids and nucleotide triplets cannot be ruled out [50,51].
The optimality of the SGC can be reformulated as an attractive problem from the computational and mathematical points of view. For example, a general method of constructing error-correcting binary group codes, represented by channels transmitting binary information, was proposed [52]. Moreover, the analysis of the structure and symmetry of the genetic code using binary dichotomy algorithms also showed its immunity to noise in terms of error-detection and error-correction [53][54][55]. The code can also be described as a single- or multi-objective optimization problem using the Evolutionary Algorithms (EA) technique to find optimal genetic codes under various criteria [11,50,[56][57][58]. Such an approach revealed that it is possible to find theoretical codes much better optimized than the SGC.
The properties of the genetic code can also be tested using techniques borrowed from graph theory [59,60]. The analysis of the SGC as a partition of an undirected and unweighted graph showed that the majority of codon blocks are optimal in terms of the conductance measure, which is the ratio of non-synonymous substitutions between the codons in a group to all possible single nucleotide substitutions affecting these codons [60]. Therefore, this parameter can be interpreted as a measure of robustness against the potential changes in protein-coding sequences generated by point mutations. The SGC turned out to be far from the optimum according to the conductance, but many codon groups in this code reached the minimum conductance for their size [60].
The unique features of the SGC indicate that the structure of this coding system is not fully random and must have evolved under some mechanisms. It is obvious that if we assume that 64 codons encode 20 amino acids and the stop coding signal in a potential genetic code, then this code must be redundant, i.e. there must exist an amino acid which is encoded by more than one codon. In consequence, such a code can be represented as a partition of the set of 64 codons into 21 disjoint subsets (codon groups) so that each codon group encodes unambiguously a respective amino acid or the stop signal. Interestingly, these codon groups are generally characterized by a very specific structure in the SGC, namely, the codons belonging to the same group usually differ in the same, single codon position. Most often the third codon position differs, whereas the first and the second ones stay the same. To explain this specific pattern, Crick developed the wobble rule, which states that the first nucleotide of the tRNA anticodon can interact with one of several possible nucleotides in the third codon position of a transcript (mRNA) [61]. This non-standard base pairing is often associated with post-transcriptional modifications of the nucleotide at the first position of the anticodon in the tRNA [62]. The weakened specificity in the base interaction has many consequences. In particular, it reduces the number of different tRNA molecules which have to recognize codons during the protein synthesis process. Moreover, single point mutations in the third codon position can be synonymous, i.e. do not change the coded amino acid. The wobble base pairing also plays a role in the adoption of the proper structure by the tRNA and determines whether the tRNA will be aminoacylated with a specific amino acid.
Our approach to the study of the origins and the possible evolution of the specific structure of the SGC assumes that the early translational machinery was not perfect and codons could be translated ambiguously. Such an assumption is in agreement with a hypothesis that protoribosomes could form spontaneously and were able to produce a variety of random peptides, whose sequences depended on the distribution of various amino acids in their vicinity, without the need of a code [63,64]. Our model also concerns the evolvability of the genetic code, as shown in the case of the alternative variants of the genetic code [5,[65][66][67][68][69][70]. The evolutionary models of these codes postulate the presence of ambiguous assignments of codons to amino acids [71,72]. Indeed, such assignments were found in Condylostoma, Blastocrithidia and Karyorelict nuclear codes [73][74][75] as well as in Bacillus subtilis and Candida [76][77][78]. For these reasons we assumed that the genetic code structure went through intermediate stages in which a particular codon could be translated into more than one amino acid. Obviously, such a property of the genetic code is directly related to the level of inaccuracy of the translational machinery. Therefore, the goal of our work was to learn which structures of the genetic code can evolve assuming different types of inaccuracy in codon reading, in comparison to the structure of the SGC.
Using the approach based on an evolutionary algorithm [79,80], we analysed a population of randomly generated genetic codes whose codons ambiguously encoded more than one amino acid. The population evolved under conditions which preferred unambiguous encoding. The scenario run under an assumption similar to the wobble rule produced, very quickly, coding systems that are more unambiguous and robust to errors in comparison to the other scenarios.

Methods
In this section we give a brief overview of the technical aspects of our work. First, we set up the notation and the terminology necessary to present the crucial steps of our simulation procedure. Then, we introduce a detailed description of the fitness function F, which was used during the selection process. Finally, we describe several measures to study the properties of the optimal genetic codes extracted from the simulations.

Evolutionary algorithm
To simulate the process of the genetic code emergence, we applied an adapted version of an EA-class algorithm. This technique is widely used in many optimization tasks, especially when analytical solutions do not exist or are computationally infeasible [80].
The simulation starts with a population of 1000 candidate solutions (individuals). Each candidate represents a random assignment of 64 codons c to 21 labels l corresponding to 20 amino acids and the stop translation signal. For simplicity of notation, we use the labels l = 1, 2, 3, ..., 21 and denote the codons c = 1, 2, 3, ..., 64. Therefore, P = (p_{cl}) is a matrix with 64 rows and 21 columns. Each entry p_{cl} in the matrix P is the probability that a given codon c encodes a given label l, and every row sums up to one. At the beginning of our simulations, we used genetic code matrices whose rows were generated according to the uniform distribution. These codes create an unbiased starting population with high volatility.
The simulation process is divided into consecutive steps called generations. During each step, two important operators, i.e. mutation and selection, are applied to the population. The mutation is a classical genetic operator used in all EA algorithms because it is responsible for random modifications of selected individuals, thus creating new solutions. Here this operator is realized by changing the probability that a selected codon encodes one of the 21 possible labels. All changes are introduced using random values generated from the normal distribution and normalized to obtain a probability function in each row. The selection operator requires a fitness function F which allows for assessing the quality of solutions, i.e. the fitness value. Candidate solutions with greater fitness values (scores) are more likely to be selected to survive and reproduce in the next generation. In this case, we applied a random process of drawing candidate solutions to the next generation with the probability proportional to their fitness. We ran the simulations for up to 50,000 steps and repeated them 50 times using different seeds.
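The generation step described above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the toy population size, the mutation width sigma and the random fitness values used in the demonstration are arbitrary assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def random_codes(n_codes, n_codons=64, n_labels=21):
    """Starting population: each codon row is drawn uniformly and normalized."""
    pop = rng.random((n_codes, n_codons, n_labels))
    return pop / pop.sum(axis=2, keepdims=True)

def mutate(code, sigma=0.05):
    """Perturb one randomly chosen codon row with Gaussian noise,
    then renormalize it back into a probability distribution."""
    new = code.copy()
    c = rng.integers(new.shape[0])
    row = np.abs(new[c] + rng.normal(0.0, sigma, new.shape[1]))
    new[c] = row / row.sum()
    return new

def select(pop, fitness):
    """Fitness-proportional (roulette-wheel) drawing with replacement."""
    idx = rng.choice(len(pop), size=len(pop), p=fitness / fitness.sum())
    return pop[idx]

# one toy generation over 10 candidate codes with random fitness values
pop = random_codes(10)
pop = select(np.array([mutate(c) for c in pop]), rng.random(10))
assert pop.shape == (10, 64, 21)
assert np.allclose(pop.sum(axis=2), 1.0)
```

Normalizing after each Gaussian perturbation keeps every row of every candidate a valid probability distribution throughout the run.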

Fitness function
The fitness function F plays the decisive role in the procedure of genetic code selection. As a fitness measure, we used a modified version of the total probability function, i.e. the probability that a given genetic code encodes 20 amino acids and the stop translation signal. This measure assumes some restrictions on the structure of the codon group assigned to a specific label, e.g. the size of the potential codon group. Moreover, it favours a greater probability of encoding a selected label, which reduces the ambiguity in coding. Below we present a detailed description of F in three consecutive steps:

1. Let L = (l_1, l_2, ..., l_21) be the sequence of all labels and let C = (c_{r_1}, c_{r_2}, ..., c_{r_21}), r_i ∈ {1, 2, ..., 64}, be a sequence of random codons in which every codon c_{r_i} encodes the respective label l_i. Each codon c_{r_i} ∈ C is drawn randomly from the set of all possible codons {c_1, c_2, ..., c_64} according to the probability:

   P(c_j | l_i) = P(l_i | c_j) / Σ_{j=1}^{64} P(l_i | c_j),   (1)

   where P(l_i | c_j) = p_{c_j l_i} is the element from the c_j-th row and the l_i-th column of the matrix P. It is evident that Σ_{j=1}^{64} P(l_i | c_j) is the sum of all elements in the column l_i of the matrix P. Therefore, Eq. (1) is an application of the Bayes rule under the assumption that the a priori probability of choosing a given codon c_j is uniform, i.e. P(c_j) = 1/64.

2. For each codon c_{r_i} belonging to C, we define a codon neighbourhood N(c_{r_i}): the set of codons that contains the original codon c_{r_i} and the codons differing from c_{r_i} in one nucleotide. Which positions may differ depends on the simulation assumptions. We considered three possible scenarios: in M_1 only the third codon position can be erroneously read, in M_2 the first and the second positions, and in M_3 all three positions. For example, the neighbourhood of the codon GGG is:
   • GGG, GGA, GGC, GGT for the scenario M_1;
   • GGG, AGG, CGG, TGG, GAG, GCG, GTG for the scenario M_2;
   • GGG, AGG, CGG, TGG, GAG, GCG, GTG, GGA, GGC, GGT for the scenario M_3.
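The three neighbourhood types can be generated mechanically. A short sketch, with the position sets inferred from the GGG examples above (third position for M_1; first and second for M_2; all three for M_3):

```python
from itertools import product

NUCS = "ACGT"
CODONS = ["".join(t) for t in product(NUCS, repeat=3)]  # all 64 codons

# 0-based codon positions that may be misread under each scenario
SCENARIO_POSITIONS = {"M1": (2,), "M2": (0, 1), "M3": (0, 1, 2)}

def neighbourhood(codon, scenario):
    """The codon itself plus every codon differing in one nucleotide
    at a position allowed by the given scenario."""
    hood = {codon}
    for pos in SCENARIO_POSITIONS[scenario]:
        for n in NUCS:
            if n != codon[pos]:
                hood.add(codon[:pos] + n + codon[pos + 1:])
    return sorted(hood)

assert len(neighbourhood("GGG", "M1")) == 4
assert len(neighbourhood("GGG", "M2")) == 7
assert len(neighbourhood("GGG", "M3")) == 10
assert set(neighbourhood("GGG", "M1")) == {"GGG", "GGA", "GGC", "GGT"}
```

The neighbourhood sizes 4, 7 and 10 hold for every codon, since each allowed position contributes exactly three alternative nucleotides.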
Thus, the size of the neighbourhood |N(c_{r_i})| equals 4, 7 or 10 under the scenarios M_1, M_2 and M_3, respectively.

3. Finally, F is the total probability of generating the label sequence L over all codon sequences drawn from the respective neighbourhoods:

   F = Σ_{c'_1 ∈ N(c_{r_1})} ... Σ_{c'_21 ∈ N(c_{r_21})} Π_{i=1}^{21} P(l_i | c'_i) P(c'_i).   (2)

It is evident that, assuming P(c'_i) = 1/64 for every codon and the independence of the factors P(l_i | c'_i) in the formula (2), we obtain the following equality:

   F = Π_{i=1}^{21} (1/64) Σ_{c ∈ N(c_{r_i})} P(l_i | c),

which is the total probability that a given genetic code generates the sequence of labels L. Therefore, a high value of F suggests that a given genetic code is more likely to encode the 20 amino acids and the stop coding signal unambiguously.
It should be noted that computing F from the formula (2) directly involves on the order of O(|N(c_r)|^21) calculations [81]. Therefore, fast calculation of the fitness values for many candidate solutions becomes a problem because the "direct" method is computationally infeasible even for small sizes of N(c_r). To deal with this, we incorporated a modified version of the forward algorithm [81], which computes the exact fitness values much more efficiently than the direct approach. The procedure follows from a simple observation. Let us consider α_l(c) defined inductively as:

   α_1(c) = (1/64) P(l_1 | c) for c ∈ N(c_{r_1}),
   α_{l+1}(c') = (1/64) P(l_{l+1} | c') Σ_{c ∈ N(c_{r_l})} α_l(c) for c' ∈ N(c_{r_{l+1}}).

From this definition we can deduce that F = Σ_{c ∈ N(c_{r_21})} α_21(c). Taking into account the computational effort required to calculate α_l(c) for c ∈ N(c_{r_l}) and then compute the fitness value, we need on the order of 21 · |N(c_{r_l})|^2 calculations. Thereby, assuming |N(c_{r_l})| = 10, which is the maximum size of the codon neighbourhood in the M_3 model, we need about 2100 computations for the modified forward method in comparison to about 10^21 computations for the "direct" approach. This forward procedure allowed us to calculate the fitness values quickly and effectively, which is essential when many individuals are constantly modified during the simulations.
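A toy comparison of the direct and the forward-style evaluation of F. Since the original formulas are only partially legible here, this sketch assumes the factorized reading in which each neighbour codon contributes P(l_i | c)/64 independently, and it uses hypothetical random neighbourhoods with 6 labels instead of 21 so that the direct enumeration stays tractable.

```python
import numpy as np
from itertools import product as iproduct

rng = np.random.default_rng(1)
n_labels = 6                         # toy size; the actual model uses 21 labels
P = rng.random((64, n_labels))
P /= P.sum(axis=1, keepdims=True)    # P[c, l]: probability that codon c encodes label l

# hypothetical neighbourhoods: 4 codon indices per label (as in the M1 scenario)
hoods = [rng.choice(64, size=4, replace=False) for _ in range(n_labels)]

def fitness_direct(P, hoods):
    """Sum over every codon sequence drawn from the neighbourhoods:
    O(|N|^n_labels) terms, feasible only for toy sizes."""
    return sum(
        np.prod([P[c, l] / 64 for l, c in enumerate(seq)])
        for seq in iproduct(*hoods)
    )

def fitness_forward(P, hoods):
    """Forward-style recursion carrying the partial sums alpha_l(c)
    label by label instead of enumerating codon sequences."""
    alpha = P[hoods[0], 0] / 64
    for l in range(1, len(hoods)):
        alpha = P[hoods[l], l] / 64 * alpha.sum()
    return float(alpha.sum())

assert np.isclose(fitness_direct(P, hoods), fitness_forward(P, hoods))
```

Both routines return the same value; the forward version simply exploits distributivity, which is what makes the full 21-label computation cheap.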
There is also another important feature of the fitness function, namely, F is non-deterministic. This is because the fitness value depends on a randomly generated codon sequence C. Therefore, F is a random variable and, in consequence, genetic codes are rated according to their randomly generated fitness values during the selection process. However, the chance to be selected to the next generation is not only a matter of luck because the selection of the sequence C prefers the codons that have relatively high probabilities of encoding their respective labels (see Eq. (1)). Thereby, the distribution of F prefers larger values. These are compared during the selection process and, finally, the method of codon selection is crucial for the convergence of genetic codes to stable solutions. We observed such convergence of the fitness values to a stable solution during the simulation steps. An example of the variation in the fitness function values calculated for 50 independent simulations under the same parameters but different seeds is presented in Fig. 1.

Measures of the properties of genetic codes
Because of the large amount of data to analyse, we introduced some definitions to test in detail the properties of the obtained genetic codes. One of the most important questions which arose in our investigations was how to measure the level of the genetic code ambiguity at the global scale, because the fitness function delivered only partial information about the probability of encoding the 21 labels. To test the quality of a given genetic code, we defined the genetic code entropy.

Definition 1 The entropy of a genetic code with the matrix representation P = (p_{cl}) is defined as:

   H(P) = − Σ_{c=1}^{64} Σ_{l=1}^{21} p_{cl} log p_{cl}.

It should be noted that H(P) is in fact the sum of the Shannon entropies calculated for each row of the matrix P separately. Therefore, H(P) corresponds to the multidimensional entropy of independent distributions. Definition 1 is useful in testing the general properties of genetic codes in terms of changes in their ambiguity. Moreover, it allows us to make more detailed comparisons between the results obtained under the different scenarios M_1, M_2 and M_3. In our analyses we also calculated the average genetic code entropy H_av(P), which is the arithmetic mean of the genetic code entropy H(P) evaluated over all candidate solutions.
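The row-wise Shannon entropy translates directly into code. A minimal sketch; the logarithm base (log2 here) is a convention choice not fixed by the text:

```python
import numpy as np

def code_entropy(P, eps=1e-12):
    """H(P): sum of the Shannon entropies of the 64 codon rows of P.
    eps guards against log(0) for degenerate rows."""
    P = np.asarray(P)
    return float(-(P * np.log2(P + eps)).sum())

# a perfectly unambiguous code (one certain label per codon) has entropy 0
det = np.zeros((64, 21))
det[:, 0] = 1.0
assert abs(code_entropy(det)) < 1e-6

# the uniform code is maximally ambiguous: 64 * log2(21) bits
uni = np.full((64, 21), 1 / 21)
assert abs(code_entropy(uni) - 64 * np.log2(21)) < 1e-3
```

The two extreme cases bracket every simulated code: the starting population sits near the uniform bound and selection drives the entropy toward zero.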
Furthermore, we used a graph representation of the genetic code. This approach was effectively applied by [59] and [60]. The authors considered a graph G(V, E) with 64 nodes (codons) V and a set of edges E representing point mutations between codons. In this approach, every genetic code C is a partition of V into 21 disjoint subsets S_l, l = 1, 2, ..., 21, i.e. groups of codons. To investigate further the properties of a given graph clustering, [60] introduced the set conductance, which turned out to be a very useful measure in testing the properties of codon groups. The definition of the set conductance is as follows:

Definition 2 For a given graph G, let S be a subset of V. The conductance of S is defined as:

   φ(S) = E(S, S̄) / vol(S),

where E(S, S̄) is the number of edges of G crossing from S to its complement S̄, and vol(S) is the sum of the degrees of all vertices belonging to S.
The set conductance has a useful interpretation from the biological point of view because, for a given codon group S, φ(S) is the ratio of non-synonymous codon changes to all possible changes concerning the codons belonging to this set. Therefore, it is interesting to find the optimal codon blocks in terms of φ(S). To do so, we used the k-size-conductance φ_k(G), described as the minimal set conductance over all subsets of V with a fixed size k.
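A sketch of the set conductance on the codon graph. Every codon has exactly 9 single-substitution neighbours, so vol(S) = 9|S|; the brute-force edge count below is for illustration only:

```python
from itertools import product

NUCS = "ACGT"
CODONS = ["".join(t) for t in product(NUCS, repeat=3)]

def differ_by_one(a, b):
    """True if two codons differ in exactly one nucleotide position."""
    return sum(x != y for x, y in zip(a, b)) == 1

def conductance(S):
    """phi(S) = edges crossing from S to its complement / vol(S)."""
    S = set(S)
    crossing = sum(
        1 for c in S for d in CODONS
        if d not in S and differ_by_one(c, d)
    )
    return crossing / (9 * len(S))

# a four-codon block varying only in the third position has 6 internal edges,
# so 24 of its 36 edge endpoints cross outside: phi = 24/36
assert abs(conductance({"GGA", "GGC", "GGG", "GGT"}) - 24 / 36) < 1e-12

# a single codon has all 9 of its edges crossing: phi = 1
assert conductance({"AAA"}) == 1.0
```

The four-codon example reaches the minimum conductance for its size, which is exactly the property the SGC's four-fold degenerate blocks exhibit.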

Definition 3 The k-size-conductance of the graph G, for k ≥ 1, is defined as:

   φ_k(G) = min_{S ⊂ V, |S| = k} φ(S).

Moreover, the properties of a given genetic code C can be expressed as the average code conductance Φ(C), which is the arithmetic mean calculated from the set conductances of all codon groups. The detailed definition of the average code conductance is given as follows:

Definition 4 The average conductance of a genetic code C = {S_1, S_2, ..., S_21} is defined as:

   Φ(C) = (1/21) Σ_{l=1}^{21} φ(S_l).

The relationship between matrix and graph representation of the genetic code
As mentioned in the previous section, we used two different representations of the genetic code. The first describes a genetic code as a matrix, whereas the other presents it as a partition of graph nodes into 21 non-empty disjoint clusters. It is evident that for every graph representation we can directly construct a unique matrix. In that case, each row c of the matrix P contains a degenerate probability distribution, i.e. p_{cl} = 1 for the label l encoded by the codon c and 0 elsewhere. On the other hand, without additional assumptions, it is impossible to obtain a unique graph partition from a selected matrix representation. Therefore, we have to assume that each row of the matrix P contains a unimodal probability distribution.
Only in such a case can we transform P unambiguously into an equivalent graph representation. To do so, we introduced the maximum likelihood graph partition (MLGP) approach.
Definition 5 Let P = (p_{cl}) be a matrix representation of a genetic code, where each row contains a unimodal discrete probability distribution. Assume also that for every label l there exists a codon c such that:

   p_{cl} = max_{l'} p_{cl'}.

Then the maximum likelihood graph partition is a partition of the set of the graph G nodes into 21 non-empty disjoint subsets S_1, S_2, ..., S_21 according to the following formula:

   S_l = { c : p_{cl} = max_{l'} p_{cl'} }.

To measure the quality of a codon block S_l, l = 1, 2, ..., 21, created according to Definition 5, we defined the coding strength of the set S_l.
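The MLGP construction reduces to a row-wise argmax. A sketch; a random matrix is used here for illustration, so the non-emptiness of all 21 groups required by the definition is not enforced:

```python
import numpy as np

def mlgp(P):
    """Maximum likelihood graph partition: each codon c joins the group of
    the label with the highest probability in its row (rows assumed unimodal)."""
    best = np.argmax(P, axis=1)
    return {l: np.flatnonzero(best == l) for l in range(P.shape[1])}

rng = np.random.default_rng(2)
P = rng.random((64, 21))
P /= P.sum(axis=1, keepdims=True)

groups = mlgp(P)
# the groups are pairwise disjoint and together cover all 64 codons
assert sum(len(g) for g in groups.values()) == 64
```

Because argmax assigns each row to exactly one column, the resulting subsets always form a partition of the 64 codons.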

Definition 6 Let P = (p_{cl}) be a matrix representation of a genetic code, where each row contains a unimodal discrete probability distribution, and let C = {S_1, S_2, ..., S_21} be its respective MLGP representation. Then for every S_l we define ψ(S_l), the coding strength of the set S_l, as:

   ψ(S_l) = (1/|S_l|) Σ_{c ∈ S_l} p_{cl}.

Following Definition 6, we can also consider the average coding strength Ψ(C) of a genetic code C, which is defined as the arithmetic mean of the coding strengths ψ(S_l) computed over all S_l belonging to the graph representation of the genetic code C:

   Ψ(C) = (1/21) Σ_{l=1}^{21} ψ(S_l).

Results

The uncertainty level of simulated genetic codes
The aim of these simulations was to learn which structures of the genetic codes can evolve assuming different inaccuracy of the translational machinery. We simulated three scenarios of the genetic code evolution that started from an ambiguous coding state. The scenarios M_1, M_2 and M_3 assumed that respectively one, two or three codon positions can be mutated or erroneously read during the translation process. We started our analysis by looking at the differences between the average entropy value H_av(P) of the genetic codes calculated for the three scenarios. A high value of the entropy means that a code is characterized by a high level of coding ambiguity, i.e. an individual codon can be translated into various amino acids, while low values indicate that the coding is more unambiguous. A code with perfect unambiguity should be characterized by H_av(P) = 0. The changes in the coding ambiguity during the simulation time are presented in Fig. 2. In contrast to the entropy measure, which includes in the calculation the probabilities of all possible assignments of amino acids to codons, the average coding strength Ψ takes into account only the maximum probability of these assignments. Large values of Ψ indicate that the assignments are highly unambiguous in a given code, while small values mean that many amino acids can be encoded by many codons with a comparable probability. A code with no ambiguous assignment of amino acids to codons ought to have the value Ψ = 1. Similarly to the entropy, the highest unambiguity and the largest values of Ψ are observed in the case of M_1, but the values of Ψ do not show the same relationship with the size of N(c_r) as H_av(P) does (Fig. 3). We could expect that a decrease in the neighbourhood size would result in an increase of the coding signal. However, this is not fully fulfilled because Ψ for M_2 is slightly smaller than for M_3 (Fig. 3).
This observation suggests that the MLGP graph representations of the genetic codes computed under the M_2 scenario are composed of codon blocks characterized by a weaker coding signal in comparison to the other simulation scenarios.

The robustness level of simulated genetic codes
To describe the robustness of the structure of the genetic code to mutations and mistranslations, we applied the average code conductance Φ. Its large value indicates that a code is not robust against point mutations. The Φ values were calculated for the MLGP representations of the codes obtained at the end of each simulation run. It is interesting that the Φ values for each simulation run under the M_1 assumption are smaller than the average code conductance computed for the standard genetic code, i.e. Φ(SGC) = 0.8112 (Fig. 4). Moreover, the M_1-type optimal genetic codes are closer to the best (minimum) possible value Φ = 0.7724 for any code assigning 21 labels to 64 codons. These results strongly suggest that the M_1 scenario of code evolution is able to create genetic codes quite robust to mutations and mistranslations. In contrast, the genetic codes obtained under the M_2 and M_3 assumptions are characterized by much larger values of the average code conductance than the SGC (Fig. 4). Thereby their structures are less robust against point mutations. The genetic codes obtained in the M_2 type of simulations generally show the worst Φ in comparison to the other simulation types.

The types of codon groups in simulated genetic codes
The genetic codes obtained under the M_1, M_2 and M_3 scenarios differ in the codon group distribution (Fig. 5). In the genetic codes produced at the end of 50 independent simulations in the M_1 scenario, there are two most frequent types of groups, consisting of two and four codons (Fig. 5b), similarly to the SGC (Fig. 5a). They constitute in total over 87% of all codon groups in the M_1 codes and 71% in the case of the SGC. The groups of one, three, five and six codons are in the minority, constituting in total less than 13% of the codon groups in the M_1 codes. However, there are also some differences in comparison to the SGC. In the SGC the contribution of two-codon groups is greater than that of the four-codon groups, while in the M_1 codes the opposite is true. Moreover, there are no groups of five codons in the SGC, which do occur in the M_1 codes. The codes produced by the M_2 model show a definitely different distribution of the codon groups and are characterized by a greater variability in codon group sizes, ranging from 1 to 16 (Fig. 5c). However, the codon groups of the sizes from 1 to 6 have a joint frequency of over 95%. The most frequent are two-codon groups, as in the SGC; they constitute 38% of the groups in the M_2 codes and 43% in the SGC, respectively.
What is more, an intriguing kind of symmetry is present in the distribution of codon groups in the genetic codes simulated under the M_3 scenario (Fig. 5d). The most frequently observed codon group consists of three codons and constitutes about 60% of all groups. The frequencies of the other codon groups are nearly symmetrically arranged around this most frequent group. The next most common groups (about 20%) include two and four codons. This type of code is the most different from the SGC in the distribution of the codon groups because in the SGC the three-codon groups are poorly represented.
The presence of codon groups with numbers of codons different from those in the SGC may seem intriguing and artificial for the simulated codes. However, such groups have actually evolved in some alternative variants of the SGC. In total, these codes include five pentacodonic amino acids, four heptacodonic amino acids and five octacodonic amino acids (https://www.ncbi.nlm.nih.gov/Taxonomy/Utils/wprintgc.cgi). For example, in the alternative yeast nuclear code, serine is additionally encoded by a seventh codon, CUG, which was taken over from leucine, encoded in consequence by five codons.

The properties of the best genetic codes
In this section, we discuss the properties of the best genetic codes, selected according to their maximum fitness values from all simulation runs for all types of scenarios. In Fig. 6, we present four heatmaps depicting selected matrix representations of the genetic codes at the beginning as well as at the end of the simulations under the M_1, M_2 and M_3 scenarios.
As expected, the random code at the start of a simulation is highly ambiguous (Fig. 6a), while the code that emerged under the M_1 scenario is characterized by a very high unambiguity and is filled mainly with the codon blocks consisting of two and four codons (Fig. 6b). The codons in each of such groups differ in pairwise comparison in only one nucleotide (Fig. 7). The graph representation of this code following Definition 5 is also optimal in terms of the k-size conductance φ_k(G), k = 2, 4. All the codon groups show the minimum possible conductance for their size. Therefore, these groups are the most robust against single non-synonymous nucleotide mutations. In consequence, this genetic code reaches the average code conductance Φ(C) = 0.7725, which is the minimum value over all possible genetic codes and is smaller than the conductance of the standard genetic code (Table 1). The best codes produced under the M_3 scenario (Fig. 6d) show a completely different composition of codon groups in comparison to the best code of the M_1 scenario. The M_3-type code is composed of codon groups of the size k = 2, 3, 4 with the domination of three-codon groups (Table 2). This code is also less robust against point mutations because its average code conductance equals 0.8457, which is slightly greater than the conductance of the standard genetic code, Φ(SGC) = 0.8113. This is caused by the presence of as many as twelve codon groups that are non-optimal in terms of the k-size conductance (Table 2). The code also shows a higher ambiguity than that of the M_1 scenario because its average coding strength Ψ equals 0.8023. Only four codon groups consisting of two codons are perfectly unambiguous and robust to non-synonymous mutations.
The best genetic code evaluated under the M_2 model (Fig. 6c) is characterized by the most diversified sizes of codon groups in comparison to the M_1 and M_3 cases because it is composed of codon groups of the size k = 1, 2, 3, 4, 6 (Table 3). These groups are also characterized by generally smaller coding strength values ψ. Therefore, the average coding strength calculated in this case equals 0.7996. Moreover, thirteen codon blocks are not optimal in terms of the set conductance φ(S). In consequence, the average code conductance is relatively high and equals 0.8580. Therefore, it is the least robust genetic code structure against point mutations in comparison to the M_1- and M_3-type codes. The M_2 code contains no codon group of at least two codons that simultaneously encodes one label unambiguously and is the most robust to single point mutations. On the other hand, the two largest groups of six codons in this code are optimal in terms of the k-size conductance φ_k(G) (Fig. 7) and are characterized by quite high coding strength values, over 0.98.

Table 1 The codon groups of the best genetic code in terms of the fitness function F extracted from 50 independent simulations under the M_1 scenario. The groups S are characterized by: the size k, the coding strength ψ(S), the conductance φ(S) and the minimal conductance φ_k(G) of a codon group of size k.

Discussion
We carried out a simulation study to find out how the structure of the genetic code could have evolved under different types of inaccuracy of the translational machinery. Moreover, the models M_1 and M_2 match the stages of the genetic code evolution postulated by the 2-1-3 model [44,84] and the four-column theory [28]. They assume that at the beginning of the genetic code evolution the second codon position decided about the encoded amino acids, whereas the other positions were not important. Next, the coding specificity occurred in the first codon position and then, to some extent, in the third position.
The initial ambiguity in the assignments of amino acids to codons disappeared the fastest under the M_1 model. The codes generated under this scenario are characterized by the highest unambiguity of coding and the most effective minimization of mutations changing encoded amino acids or the stop translation signal. On the other hand, the genetic codes simulated under the M_3 assumptions maintained the highest level of ambiguity, and the M_2-type codes produced the largest number of amino acid changes due to point mutations in codons.
It is interesting to consider which of the simulated codes is the most similar to the SGC in terms of unambiguity, minimization of point mutation effects and structure.
According to the unambiguity measured by entropy or coding strength, the most similar are the codes obtained under the M 1 scenario. They show almost unambiguous assignments of amino acids to codons. However, they are not perfect. Similarly, the SGC is usually presented as a table with unambiguous assignments, but the translation process is not ideal and some errors can occur. It was estimated that one mistranslation occurs at the rate of 10 −3 to 10 −6 per codon [85] or 10 −3 to 10 −5 per amino acid [86]. Moreover, errors associated with the replication and transcription processes can also change the encoded amino acid. If the initial genetic codes had been characterized by a much bigger ambiguity of the assignments of amino acids to codons, such highly ambiguous codes would have been quickly eliminated by selection, which resembles the rapid decrease in entropy during the simulation of the M 1 codes. The entropy in the other models was also reduced but stabilized at a much higher level. This indicates that the assumption of imprecise recognition of only one fixed codon position is necessary to reduce the initial ambiguity, which corresponds to the wobble rule characterizing the current process of translation.
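The coding ambiguity discussed above can be quantified per codon as the Shannon entropy of its assignment probabilities, so that an unambiguous codon scores zero. A minimal sketch, with illustrative probability values rather than values taken from the simulations:

```python
import math

def codon_entropy(probs):
    # Shannon entropy (in bits) of one codon's distribution over labels;
    # 0 means the codon encodes a single label unambiguously
    return sum(-p * math.log2(p) for p in probs if p > 0)

# illustrative codons: one unambiguous, one split between two amino acids
print(codon_entropy([1.0]))       # 0.0
print(codon_entropy([0.5, 0.5]))  # 1.0
```

Averaging this quantity over all 64 codons gives a single number whose decrease during a simulation tracks the disappearance of ambiguous assignments.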
In terms of minimization of amino acid replacements resulting from point mutations in codons, measured here by the average conductance, the SGC is placed between the M 1 codes, characterized by the lowest conductance, and the codes from the M 2 and M 3 models. In agreement with our simulation study, other analyses also showed that the SGC is not perfectly optimized in this respect and better codes can be found [11,44,50,57,58,[87][88][89]. Therefore, it is possible that the assignments of amino acids to codons occurred in accordance with other mechanisms, while the minimization of mutation errors was adjusted by the direct optimization of the mutational pressure around the established genetic code [90][91][92][93][94]. Moreover, some minimization properties of the SGC could have evolved as a by-product of the duplication of genes for tRNAs and aminoacyl-tRNA synthetases charging similar amino acids [6,8,[41][42][43][44][45][46][47]. It is also possible that new amino acids were added into the code in an order that ensured the minimal disturbance of already synthesized proteins but the code itself was not directly optimized [28].
When we compare the structure of the SGC with the structure of the codes produced by the three models, the standard code is the most similar to the M 1 codes because they are also characterized by the domination of amino acids encoded by groups of two and four codons. All these codon groups are also optimal in terms of the conductance, both in the simulated codes and in the SGC. However, four-codon groups are the most numerous in the M 1 codes, while in the SGC the most frequent are two-codon groups, which also dominate in the M 2 codes. The degeneracy of the SGC is usually associated with the presence of codons encoding the same amino acid and differing in the third codon position. This corresponds to the M 1 model, in which the codons for a given amino acid are identical in two fixed positions and differ in one. However, the SGC also contains two six-codon groups, encoding arginine and leucine, which resemble the codon groups in the M 2 model, where all the codons in a group are identical in one fixed codon position and differ by one nucleotide in one of the other two positions. The three codons recognized as the stop translation signal also show this property in the SGC. Therefore, the SGC is a mixture of the M 1 and M 2 models in this respect.
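The codon block sizes of the SGC referred to above can be tallied directly from the standard codon table. A short sketch, encoding the standard genetic code as the usual 64-character translation string:

```python
from collections import Counter
from itertools import product

# standard genetic code as a 64-character string; codons are enumerated
# with bases in the order T, C, A, G and '*' marks the stop signal
AMINO = "FFLLSSSSYY**CC*WLLLLPPPPHHQQRRRRIIIMTTTTNNKKSSRRVVVVAAAADDEEGGGG"
CODONS = ["".join(c) for c in product("TCAG", repeat=3)]
code = dict(zip(CODONS, AMINO))

# number of codons assigned to each label (20 amino acids + stop)
block_sizes = Counter(code.values())
# distribution of block sizes: size -> number of labels with that size
size_dist = Counter(block_sizes.values())
print(sorted(size_dist.items()))  # [(1, 2), (2, 9), (3, 2), (4, 5), (6, 3)]
```

The tally confirms the statement in the text: two-codon blocks are the most frequent in the SGC (9 labels), followed by four-codon blocks (5 labels), with three six-codon groups (leucine, serine, arginine).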
The models M 3 and M 2 can represent the initial stages of evolution when the translational apparatus did not read codons perfectly. Therefore, there was selection to improve the translation process and to develop a stable form of the genetic code. The fixing of two codon positions, represented by the M 1 model, was crucial and sufficient to encode unambiguously 20 amino acids and the stop translation signal by 64 codons. The wobble base pairing could be a relic of the initial ambiguity.
Since the SGC turned out to be most similar to the codes evolved under the M 1 and M 2 models, we may assume that at a certain stage it could have evolved according to the theories proposed by [44,84] and [28], which means that in the beginning the translation machinery could have recognised only the second codon position, and then the first and the third positions. It would be interesting to combine our models with others or enrich them with additional biological assumptions to obtain a more accurate model of the evolution of the standard genetic code.

Conclusions
The initial evolution of the standard genetic code could have started from imperfect reading of the genetic information associated with ambiguous assignments of amino acids to codons. Then selection favoured codes that improved the fidelity of the translation process. An important step was the fixation of two codon positions, which generated the typical codon block structure of the genetic code. According to this hypothesis, the wobble base pairing in the third codon position could be a relic of an early ambiguity. However, the selection for the minimization of translational errors could not have been the only factor influencing the genetic code evolution because its current level of optimization is not perfect; some of the simulated codes outperformed the standard genetic code in robustness against mistranslations.