
Table 4 A comparison of structural error for the suboptimal learning algorithm and the optimal learning algorithm

From: Empirical evaluation of scoring functions for Bayesian network model selection

   

GHC = greedy hill climbing; Opt = optimal learning algorithm.

| GoldNet | Size | Score | GHC Add | GHC Delete | GHC Rev | GHC Mis | GHC Total | Opt Add | Opt Delete | Opt Rev | Opt Mis | Opt Total |
|---------|------|-------|---------|------------|---------|---------|-----------|---------|------------|---------|---------|-----------|
| Austr | 200 | AIC | 16 | 14 | 1 | 1 | 32 | 11 | 6 | 2 | 2 | 21 |
|  | 200 | MDL | 9 | 17 | 0 | 0 | 26 | 0 | 8 | 1 | 4 | 13 |
|  | 200 | fNML | 11 | 16 | 0 | 1 | 28 | 20 | 7 | 0 | 4 | 31 |
|  | 200 | 0.1 | 7 | 17 | 0 | 1 | 25 | 0 | 10 | 0 | 4 | 14 |
|  | 200 | 0.5 | 9 | 17 | 0 | 0 | 26 | 1 | 9 | 1 | 3 | 14 |
|  | 200 | 1 | 9 | 17 | 0 | 0 | 26 | 1 | 9 | 1 | 3 | 14 |
|  | 200 | 5 | 11 | 12 | 2 | 2 | 27 | 5 | 6 | 1 | 6 | 18 |
|  | 200 | 10 | 14 | 14 | 0 | 2 | 30 | 8 | 7 | 2 | 4 | 21 |
|  | 600 | AIC | 18 | 15 | 1 | 0 | 34 | 7 | 1 | 0 | 0 | 8 |
|  | 600 | MDL | 13 | 15 | 1 | 0 | 29 | 0 | 2 | 0 | 0 | 2 |
|  | 600 | fNML | 13 | 15 | 2 | 0 | 30 | 1 | 3 | 0 | 7 | 11 |
|  | 600 | 0.1 | 11 | 15 | 1 | 1 | 28 | 0 | 4 | 0 | 1 | 5 |
|  | 600 | 0.5 | 12 | 15 | 1 | 1 | 29 | 0 | 3 | 0 | 1 | 4 |
|  | 600 | 1 | 12 | 15 | 1 | 1 | 29 | 0 | 3 | 0 | 1 | 4 |
|  | 600 | 5 | 14 | 14 | 1 | 4 | 33 | 1 | 2 | 0 | 0 | 3 |
|  | 600 | 10 | 15 | 15 | 0 | 3 | 33 | 4 | 3 | 1 | 9 | 17 |
|  | 1000 | AIC | 18 | 13 | 1 | 0 | 32 | 7 | 0 | 1 | 0 | 8 |
|  | 1000 | MDL | 15 | 15 | 1 | 0 | 31 | 0 | 0 | 0 | 0 | 0 |
|  | 1000 | fNML | 16 | 15 | 0 | 3 | 34 | 2 | 1 | 1 | 8 | 12 |
|  | 1000 | 0.1 | 15 | 15 | 1 | 0 | 31 | 0 | 0 | 0 | 0 | 0 |
|  | 1000 | 0.5 | 15 | 15 | 1 | 0 | 31 | 0 | 0 | 0 | 0 | 0 |
|  | 1000 | 1 | 15 | 15 | 1 | 0 | 31 | 0 | 0 | 0 | 0 | 0 |
|  | 1000 | 5 | 17 | 15 | 2 | 1 | 35 | 2 | 0 | 4 | 6 | 12 |
|  | 1000 | 10 | 18 | 15 | 2 | 1 | 36 | 4 | 1 | 1 | 8 | 14 |
| Crx | 200 | AIC | 20 | 14 | 0 | 2 | 36 | 9 | 2 | 4 | 3 | 18 |
|  | 200 | MDL | 9 | 16 | 0 | 3 | 28 | 1 | 8 | 0 | 9 | 18 |
|  | 200 | fNML | 16 | 15 | 1 | 1 | 33 | 19 | 5 | 6 | 4 | 34 |
|  | 200 | 0.1 | 6 | 16 | 0 | 3 | 25 | 1 | 11 | 0 | 6 | 18 |
|  | 200 | 0.5 | 10 | 16 | 0 | 3 | 29 | 1 | 8 | 0 | 9 | 18 |
|  | 200 | 1 | 9 | 15 | 0 | 4 | 28 | 1 | 7 | 0 | 10 | 18 |
|  | 200 | 5 | 13 | 14 | 1 | 2 | 30 | 5 | 6 | 3 | 5 | 19 |
|  | 200 | 10 | 19 | 14 | 2 | 0 | 35 | 9 | 4 | 3 | 3 | 19 |
|  | 600 | AIC | 21 | 14 | 0 | 0 | 35 | 8 | 1 | 2 | 0 | 11 |
|  | 600 | MDL | 14 | 16 | 0 | 0 | 30 | 1 | 3 | 1 | 0 | 5 |
|  | 600 | fNML | 14 | 14 | 0 | 4 | 32 | 3 | 3 | 1 | 7 | 14 |
|  | 600 | 0.1 | 11 | 15 | 0 | 1 | 27 | 2 | 6 | 2 | 1 | 11 |
|  | 600 | 0.5 | 13 | 15 | 0 | 0 | 28 | 1 | 3 | 1 | 0 | 5 |
|  | 600 | 1 | 13 | 15 | 0 | 0 | 28 | 1 | 3 | 1 | 0 | 5 |
|  | 600 | 5 | 17 | 13 | 2 | 3 | 35 | 6 | 2 | 2 | 7 | 17 |
|  | 600 | 10 | 18 | 13 | 0 | 3 | 34 | 8 | 3 | 2 | 6 | 19 |
|  | 1000 | AIC | 21 | 15 | 0 | 0 | 36 | 7 | 1 | 1 | 0 | 9 |
|  | 1000 | MDL | 14 | 15 | 1 | 0 | 30 | 1 | 2 | 1 | 1 | 5 |
|  | 1000 | fNML | 17 | 15 | 0 | 4 | 36 | 2 | 2 | 0 | 9 | 13 |
|  | 1000 | 0.1 | 14 | 15 | 0 | 0 | 29 | 1 | 3 | 1 | 1 | 6 |
|  | 1000 | 0.5 | 13 | 15 | 0 | 0 | 28 | 1 | 3 | 1 | 1 | 6 |
|  | 1000 | 1 | 13 | 15 | 0 | 0 | 28 | 1 | 3 | 1 | 1 | 6 |
|  | 1000 | 5 | 17 | 15 | 0 | 0 | 32 | 4 | 2 | 0 | 11 | 17 |
|  | 1000 | 10 | 18 | 14 | 2 | 4 | 38 | 6 | 2 | 1 | 8 | 17 |
1. This table gives detailed information about the structural differences between the learned and gold standard networks for the Statlog (Australian Credit Approval) and Credit Approval datasets, for both the greedy hill climbing and the optimal learning algorithm. GoldNet gives the name of the network. Size gives the sample size. Score gives the scoring function; when only a number is shown, the scoring function is BDeu with that value of α. Add gives the number of edges present in the learned network but not in the gold standard network. Delete gives the number of edges present in the gold standard network but not in the learned network. Rev gives the number of edges that are oriented in the opposite direction in the equivalence class of the learned network compared to that of the gold standard network; that is, the number of reversed edges. Mis gives the number of edges that are directed in the equivalence class of the learned network and undirected in that of the gold standard network, or vice versa; that is, the number of mis-directed edges. Total is the sum of Add, Delete, Rev, and Mis.
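The counts in this table compare the equivalence classes (CPDAGs) of the learned and gold standard networks. Below is a minimal sketch, not the authors' code, of how Add, Delete, Rev, Mis, and Total can be computed, assuming each network is already given as a CPDAG, i.e. a set of compelled (directed) edges plus a set of reversible (undirected) edges; the function name `structural_errors` and the toy networks are hypothetical.

```python
def structural_errors(learned, gold):
    """Count Add, Delete, Rev, Mis, and Total between two CPDAGs.

    Each CPDAG is a pair (directed, undirected):
      directed   -- set of (u, v) tuples for compelled edges u -> v
      undirected -- set of frozensets {u, v} for reversible edges
    """
    l_dir, l_und = learned
    g_dir, g_und = gold

    # Skeletons ignore orientation.
    l_skel = {frozenset(e) for e in l_dir} | set(l_und)
    g_skel = {frozenset(e) for e in g_dir} | set(g_und)

    add = len(l_skel - g_skel)     # edges only in the learned network
    delete = len(g_skel - l_skel)  # edges only in the gold standard network

    rev = mis = 0
    for edge in l_skel & g_skel:   # edges present in both skeletons
        in_l_dir = any(frozenset(e) == edge for e in l_dir)
        in_g_dir = any(frozenset(e) == edge for e in g_dir)
        if in_l_dir and in_g_dir:
            # Directed in both equivalence classes: reversed if orientations differ.
            l_edge = next(e for e in l_dir if frozenset(e) == edge)
            if l_edge not in g_dir:
                rev += 1
        elif in_l_dir != in_g_dir:
            # Directed in one equivalence class, undirected in the other.
            mis += 1

    total = add + delete + rev + mis
    return {"Add": add, "Delete": delete, "Rev": rev, "Mis": mis, "Total": total}


# Toy usage (hypothetical networks): gold has A -> B and B - C; the learned
# CPDAG reverses A -> B and adds an extra undirected edge A - C.
gold = ({("A", "B")}, {frozenset({"B", "C"})})
learned = ({("B", "A")}, {frozenset({"B", "C"}), frozenset({"A", "C"})})
print(structural_errors(learned, gold))
# {'Add': 1, 'Delete': 0, 'Rev': 1, 'Mis': 0, 'Total': 2}
```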