Datasets used for classification: comparison of results 
Method | Accuracy % | Reference
PVM (logical rules) | 89.6 | Weiss, Kapouleas
C-MLP2LN (logical rules) | 89.6±? | our
kNN, stand. Manhattan, k=8,9,22-25; k=4,5, stand. Euclidean, f2+f4 removed | 88.7 | our (WD/KG)
9NN, stand. Euclidean | 87.7 | our (KG)
RIAC (prob. inductive) | 86.9 | Hamilton et al.
1NN, stand. Euclidean, f2+f4 removed | 86.8 | our (WD/KG)
MLP + backpropagation | 85.8 | Weiss, Kapouleas
CART, C4.5 (decision trees) | 84.9 | Weiss, Kapouleas
FSM | 84.9 | our (RA)
Bayes rule (statistical) | 83.0 | Weiss, Kapouleas
Method | Accuracy % | Reference
NBC+WX+G(WX) | ??.5±7.7 | TM/GM
NBC+G(WX) | ??.2±6.7 | TM/GM
kNN auto+G(WX), Euclidean | ??.2±6.7 | TM/GM
C-MLP2LN | 89.6 | our logical rules
20NN, stand. Euclidean, features 4,1,7 | 89.3±8.6 | our (KG); feature sel. from CV on the whole data set
SSV beam, leaves | 88.7±8.5 | WD
SVM linear, C=1 | 88.1±8.6 | WD
6NN, stand. Euclidean | 88.0±7.9 | WD
SSV default | 87.8±8.7 | WD
SSV beam, pruning | 86.9±9.8 | WD
kNN, k=auto, Euclidean | 86.7±6.6 | WD
FSM, a=0.9, Gauss, cluster | 86.1±8.8 | WD/GM
NBC | 85.9±10.2 | TM/GM
VSS, 1 neuron, 4 it. | 84.9±7.4 | WD/MK
SVM Gauss, C=32, s=0.1 | 84.4±8.2 | WD
MLP+BP (Tooldiag) | 83.9 | Rafał Adamczak
RBF (Tooldiag) | 80.2 | Rafał Adamczak
Method | Accuracy % | Reference
FSM | 98.3 | our (RA)
3NN, stand. Manhattan | 97.1 | our (KG)
21NN, stand. Euclidean | 96.9 | our (KG)
C4.5 (decision tree) | 96.0 | Hamilton et al.
RIAC (prob. inductive) | 95.0 | Hamilton et al.
Method | Accuracy % | Reference
Naive MFT | 97.1 | Opper, Winther; leave-one-out est. 97.3
SVM Gauss, C=1, s=0.1 | 97.0±2.3 | WD/GM
SVM (10xCV) | 96.9 | Opper, Winther
SVM linear, opt. C | 96.9±2.2 | WD/GM; same with Minkowski kernel
Cluster means, 2 prototypes | 96.5±2.2 | MB
Default, majority | 65.5 | -
Method | Accuracy % | Reference
NB + kernel est. | 97.5±1.8 | WD, WEKA, 10x10CV
SVM (5xCV) | 97.2 | Bennet and Blue
kNN with DVDM distance | 97.1 | our (KG)
GM kNN, k=3, raw, Manhattan | 97.0±2.1 | WD, 10x10CV
GM kNN, k=opt, raw, Manhattan | 97.0±1.7 | WD, 10CV only
VSS, 8 it./2 neurons | 96.9±1.8 | WD/MK; 98.1% train
FSM (Feature Space Mapping) | 96.9±1.4 | RA/WD, a=.99, Gaussian
Fisher linear discriminant analysis | 96.8 | Ster & Dobnikar
MLP+BP | 96.7 | Ster & Dobnikar
MLP+BP (Tooldiag) | 96.6 | Rafał Adamczak
LVQ | 96.6 | Ster & Dobnikar
kNN, Euclidean/Manhattan f. | 96.6 | Ster & Dobnikar
SNB, semi-naive Bayes (pairwise dependent) | 96.6 | Ster & Dobnikar
SVM linear, opt. C | 96.4±1.2 | WD/GM, 16 missing with 10
VSS, 8 it./1 neuron | 96.4±2.0 | WD/MK, train 98.0%
GM IncNet | 96.4±2.1 | NJ/WD; FKF, max. 3 neurons
NB, naive Bayes (completely independent) | 96.4 | Ster & Dobnikar
SSV, opt. nodes, 3CV int. | 96.3±2.2 | WD/GM; training 96.6±0.5
IB1 | 96.3±1.9 | Zarndt
DBCART (decision tree) | 96.2 | Shang, Breiman
GM SSV tree, opt. nodes, BFS | 96.0±2.9 | WD/KG (beam search 94.0)
LDA, linear discriminant analysis | 96.0 | Ster & Dobnikar
OC1 DT (5xCV) | 95.9 | Bennet and Blue
RBF (Tooldiag) | 95.9 | Rafał Adamczak
GTO DT (5xCV) | 95.7 | Bennet and Blue
ASI, Assistant-I tree | 95.6 | Ster & Dobnikar
MLP+BP (WEKA) | 95.4±0.2 | TW/WD
OCN2 | 95.2±2.1 | Zarndt
IB3 | 95.0±4.0 | Zarndt
MML tree | 94.8±1.8 | Zarndt
ASR, Assistant-R (RELIEF criterion) tree | 94.7 | Ster & Dobnikar
C4.5 tree | 94.7±2.0 | Zarndt
LFC, Lookahead Feature Construction binary tree | 94.4 | Ster & Dobnikar
CART tree | 94.4±2.4 | Zarndt
ID3 | 94.3±2.6 | Zarndt
C4.5 (5xCV) | 93.4 | Bennet and Blue
C4.5 rules | 86.7±5.9 | Zarndt
Default, majority | 65.5 | -
QDA, quadratic discriminant analysis | 34.5 | Ster & Dobnikar
Method | Accuracy % | Reference
Weighted 9NN | 92.9±? | Karol Grudziński
18NN, stand. Manhattan | 90.2±0.7 | Karol Grudziński
FSM with rotations | 89.7±? | Rafał Adamczak
15NN, stand. Euclidean | 89.0±0.5 | Karol Grudziński
VSS, 4 neurons, 5 it. | 86.5±8.8 | WD/MK, train 97.1
FSM without rotations | 88.5 | Rafał Adamczak
LDA, linear discriminant analysis | 86.4 | Ster & Dobnikar
Naive Bayes and SemiNB | 86.3 | Ster & Dobnikar
IncNet | 86.0 | Norbert Jankowski
QDA, quadratic discriminant analysis | 85.8 | Ster & Dobnikar
1NN | 85.3±5.4 | Ster & Dobnikar, std added by WD
VSS, 2 neurons, 5 it. | 85.1±7.4 | WD/MK, train 95.0
ASR | 85.0 | Ster & Dobnikar
Fisher discriminant analysis | 84.5 | Ster & Dobnikar
LVQ | 83.2 | Ster & Dobnikar
CART (decision tree) | 82.7 | Ster & Dobnikar
MLP with BP | 82.1 | Ster & Dobnikar
ASI | 82.0 | Ster & Dobnikar
LFC | 81.9 | Ster & Dobnikar
RBF (Tooldiag) | 79.0 | Rafał Adamczak
MLP+BP (Tooldiag) | 77.4 | Rafał Adamczak
1. age 
2. sex 
3. chest pain type (4 values) 
4. resting blood pressure 
5. serum cholesterol in mg/dl 
6. fasting blood sugar > 120 mg/dl 
7. resting electrocardiographic results (values 0, 1, 2) 
8. maximum heart rate achieved 
9. exercise-induced angina 
10. oldpeak = ST depression induced by exercise relative to rest 
11. the slope of the peak exercise ST segment 
12. number of major vessels (0-3) colored by fluoroscopy 
13. thal: 3 = normal; 6 = fixed defect; 7 = reversible defect 
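Many of the kNN entries in these tables combine z-score standardization of the continuous attributes (such as 1, 4, 5, 8, 10 above) with a Manhattan or Euclidean metric. A minimal sketch of that pipeline, using a tiny made-up sample in place of the real data (the rows and labels below are hypothetical, for illustration only):

```python
import math
from collections import Counter

def standardize(X):
    """Z-score each column: (x - mean) / std."""
    stats = []
    for col in zip(*X):
        mu = sum(col) / len(col)
        sd = math.sqrt(sum((v - mu) ** 2 for v in col) / len(col)) or 1.0
        stats.append((mu, sd))
    return [[(v - mu) / sd for v, (mu, sd) in zip(row, stats)] for row in X]

def knn_predict(train_X, train_y, x, k=3, metric="manhattan"):
    """Classify x by majority vote among its k nearest training points."""
    def dist(a, b):
        if metric == "manhattan":
            return sum(abs(u - v) for u, v in zip(a, b))
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    nearest = sorted(zip(train_X, train_y), key=lambda p: dist(p[0], x))[:k]
    return Counter(y for _, y in nearest).most_common(1)[0][0]

# Hypothetical rows: (age, resting BP, max heart rate) -> disease present?
X = [[63, 145, 150], [41, 130, 172], [57, 120, 148], [67, 160, 108],
     [44, 118, 178], [62, 140, 144]]
y = [1, 0, 1, 1, 0, 1]
Xs = standardize(X)  # sketch only: real use standardizes on training data alone
print(knn_predict(Xs[:-1], y[:-1], Xs[-1], k=3))  # -> 1
```

Feature removal ("f2+f4 removed" and similar notes in the tables) amounts to dropping the corresponding columns before this step.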
Method | Accuracy % | Reference
Lin SVM, 2D QCP | 85.9±5.5 | MG, 10xCV
kNN auto+WX | ??.8±5.6 | TM/GM, 10xCV
SVM Gauss+WX+G(WX), C=1, s=25 | ??.8±6.4 | TM/GM, 10xCV
SVM linear, C=0.01 | 84.9±7.9 | WD, GM 10x(9xCV)
SFM, G(WX), default C=1 | ??±5.1 | TM/GM, 10xCV
Naive Bayes | 84.5±6.3 | TM/GM, 10xCV
Naive Bayes | 83.6 | RA, WEKA
SVML, default C=1 | 82.5±6.4 | TM/GM, 10xCV
K* | 76.7 | WEKA, RA
IB1c | 74.0 | WEKA, RA
1R | 71.4 | WEKA, RA
T2 | 68.1 | WEKA, RA
MLP+BP | 65.6 | ToolDiag, RA
FOIL | 64.0 | WEKA, RA
RBF | 60.0 | ToolDiag, RA
InductH | 58.5 | WEKA, RA
Base rate (majority classifier) | 55.7 | -
IB14 | 50.0 | ToolDiag, RA
Method | Accuracy % | Reference
LDA | 84.5 | Weiss?
25NN, stand., Euclidean | 83.6±0.5 | WD/KG, repeat??
C-MLP2LN | 82.5 | RA, estimated?
FSM | 82.2 | Rafał Adamczak
MLP+backprop | 81.3 | Weiss?
CART | 80.8 | Weiss?
Method | Accuracy % | Reference
IncNet + transformations | 90.0 | Norbert Jankowski; check again!
28NN, stand., Euclidean, 7 features | 85.1±0.5 | WD/KG
LDA | 84.5 | Ster & Dobnikar
Fisher discriminant analysis | 84.2 | Ster & Dobnikar
k=7, Euclidean, std | 84.2±6.6 | WD, GhostMiner
16NN, stand., Euclidean | 84.0±0.6 | WD/KG
FSM, 82.484% on test only | 84.0 | Rafał Adamczak
k=1:10, Manhattan, std | 83.8±5.3 | WD, GhostMiner
Naive Bayes | 82.5 / 83.4 | Rafał; Ster & Dobnikar
SNB | 83.1 | Ster & Dobnikar
LVQ | 82.9 | Ster & Dobnikar
GTO DT (5xCV) | 82.5 | Bennet and Blue
kNN, k=19, Euclidean | 82.1±0.8 | Karol Grudziński
k=7, Manhattan, std | 81.8±10.0 | WD, GhostMiner
SVM (5xCV) | 81.5 | Bennet and Blue
kNN (k=1? raw data?) | 81.5 | Ster & Dobnikar
MLP+BP (standardized) | 81.3 | Ster, Dobnikar, Rafał Adamczak
Cluster means, 2 prototypes | 80.8±6.4 | MB
CART | 80.8 | Ster & Dobnikar
RBF (Tooldiag, standardized) | 79.1 | Rafał Adamczak
Gaussian EM, 60 units | 78.6 | Stensmo & Sejnowski
ASR | 78.4 | Ster & Dobnikar
C4.5 (5xCV) | 77.8 | Bennet and Blue
IB1c (WEKA) | 77.6 | Rafał Adamczak
QDA | 75.4 | Ster & Dobnikar
LFC | 75.1 | Ster & Dobnikar
ASI | 74.4 | Ster & Dobnikar
K* (WEKA) | 74.2 | Rafał Adamczak
OC1 DT (5xCV) | 71.7 | Bennet and Blue
1R (WEKA) | 71.0 | Rafał Adamczak
T2 (WEKA) | 69.0 | Rafał Adamczak
FOIL (WEKA) | 66.4 | Rafał Adamczak
InductH (WEKA) | 61.3 | Rafał Adamczak
Default, majority | 54.1 | base rate
C4.5 rules | 53.8±5.9 | Zarndt
IB14 (WEKA) | 46.2 | Rafał Adamczak
Method | Accuracy % | Reference
kNN, Value Difference Metric (VDM) | 82.6 | D. Wettschereck
kNN, Euclidean | 82.4±0.8 | D. Wettschereck
kNN, Variable Similarity Metric | 82.4 | D. Wettschereck
kNN, Modified VDM | 83.1 | D. Wettschereck
Other kNN variants | < 82.4 | D. Wettschereck
kNN, Mutual Information | 81.8 | D. Wettschereck
CLASSIT (hierarchical clustering) | 78.9 | Gennari, Langley, Fisher
NTgrowth (instance-based) | 77.0 | Aha & Kibler
C4 | 74.8 | Aha & Kibler
Naive Bayes | 82.8±1.3 | Friedman et al., 5xCV, 296 vectors
Method | Accuracy % | Reference
Logdisc | 77.7 | Statlog
IncNet | 77.6 | Norbert Jankowski
DIPOL92 | 77.6 | Statlog
Linear discriminant analysis | 77.5 / 77.2 | Statlog; Ster & Dobnikar
SVM, linear, C=0.01 | 77.5±4.2 | WD/GM, 10xCV averaged 10x
SVM, Gauss, C, sigma opt. | 77.4±4.3 | WD/GM, 10xCV averaged 10x
SMART | 76.8 | Statlog
GTO DT (5xCV) | 76.8 | Bennet and Blue
kNN, k=23, Manhattan, raw, W | 76.7±4.0 | WD/GM, feature weighting, 3CV
kNN, k=1:25, Manhattan, raw | 76.6±3.4 | WD/GM, most cases k=23
ASI | 76.6 | Ster & Dobnikar
Fisher discriminant analysis | 76.5 | Ster & Dobnikar
MLP+BP | 76.4 | Ster & Dobnikar
MLP+BP | 75.8±6.2 | Zarndt
LVQ | 75.8 | Ster & Dobnikar
LFC | 75.8 | Ster & Dobnikar
RBF | 75.7 | Statlog
NB | 75.5 / 73.8 | Ster & Dobnikar; Statlog
kNN, k=22, Manhattan | 75.5 | Karol Grudziński
MML | 75.5±6.3 | Zarndt
SNB | 75.4 | Ster & Dobnikar
BP | 75.2 | Statlog
SSV DT | 75.0±3.6 | WD/GM, SSV BS, node 5CV MC
kNN, k=18, Euclidean, raw | 74.8±4.8 | WD/GM
CART DT | 74.7±5.4 | Zarndt
CART DT | 74.5 | Statlog
DBCART | 74.4 | Shang & Breiman
ASR | 74.3 | Ster & Dobnikar
ODT, dyadic trees | 74.0±2.3 | Blanchard
Cluster means, 2 prototypes | 73.7±3.7 | MB
SSV DT | 73.7±4.7 | WD/GM, SSV BS, node 10CV strat.
SFC, stacking filters | 73.3±1.9 | Porter
C4.5 DT | 73.0 | Statlog
C4.5 DT | 72.7±6.6 | Zarndt
Bayes | 72.2±6.9 | Zarndt
C4.5 (5xCV) | 72.0 | Bennet and Blue
CART | 72.8 | Ster & Dobnikar
Kohonen | 72.7 | Statlog
C4.5 DT | 72.1±2.6 | Blanchard (averaged over 100 runs)
kNN | 71.9 | Ster & Dobnikar
ID3 | 71.7±6.6 | Zarndt
IB3 | 71.7±5.0 | Zarndt
IB1 | 70.4±6.2 | Zarndt
kNN, k=1, Euclidean, raw | 69.4±4.4 | WD/GM
kNN | 67.6 | Statlog
C4.5 rules | 67.0±2.9 | Zarndt
OCN2 | 65.1±1.1 | Zarndt
Default, majority | 65.1 | -
QDA | 59.5 | Ster & Dobnikar
Method | Accuracy % | Reference
SVM (5xCV) | 77.6 | Bennet and Blue
C4.5 | 76.0±0.9 | Friedman, 5xCV
Semi-naive Bayes | 76.0±0.8 | Friedman, 5xCV
Naive Bayes | 74.5±0.9 | Friedman, 5xCV
Default, majority | 65.1 | -
1 age: continuous 
2 sex: {M, F} 
3 on thyroxine: logical 
4 maybe on thyroxine: logical 
5 on antithyroid medication: logical 
6 sick (patient reports malaise): logical 
7 pregnant: logical 
8 thyroid surgery: logical 
9 I131 treatment: logical 
10 test hypothyroid: logical 
11 test hyperthyroid: logical 
12 on lithium: logical 
13 has goitre: logical 
14 has tumor: logical 
15 hypopituitary: logical 
16 psychological symptoms: logical 
17 TSH: continuous 
18 T3: continuous 
19 TT4: continuous 
20 T4U: continuous 
21 FTI: continuous 
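Several of the tables quote a "Default, majority" baseline: the accuracy obtained by always predicting the most frequent class. A quick sketch of that computation (the label list below is made up to reproduce a 92.7%-style majority class, not the real counts):

```python
from collections import Counter

def majority_baseline(labels):
    """Accuracy of always predicting the most common class."""
    counts = Counter(labels)
    return counts.most_common(1)[0][1] / len(labels)

# Hypothetical labels with a dominant "negative" class
labels = ["negative"] * 927 + ["hypothyroid"] * 50 + ["subclinical"] * 23
print(round(majority_baseline(labels) * 100, 1))  # -> 92.7
```

Any classifier scoring near or below this figure has learned essentially nothing beyond the class prior.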
Method | % training | % test | Reference
C-MLP2LN rules + ASA | 99.90 | 99.36 | Rafał/Krzysztof/Grzegorz
CART | 99.80 | 99.36 | Weiss
PVM | 99.80 | 99.33 | Weiss
SSV beam search | 99.80 | 99.33 | WD
IncNet | 99.68 | 99.24 | Norbert
SSV, opt. leaves or pruning | 99.7 | 99.1 | WD
MLP init + a,b opt. | 99.5 | 99.1 | Rafał
C-MLP2LN rules | 99.7 | 99.0 | Rafał/Krzysztof
Cascade correlation | 100.0 | 98.5 | Schiffmann
Local adapt. rates | 99.6 | 98.5 | Schiffmann
BP + genetic opt. | 99.4 | 98.4 | Schiffmann
Quickprop | 99.6 | 98.3 | Schiffmann
RPROP | 99.6 | 98.0 | Schiffmann
3NN, Euclidean, with 3 features | 98.7 | 97.9 | W.D./Karol
1NN, Euclidean, with 3 features | 98.4 | 97.7 | W.D./Karol
Best backpropagation | 99.1 | 97.6 | Schiffmann
1NN, Euclidean, 8 features used | - | 97.3 | Karol/W.D.
SVM Gauss, C=8, s=0.1 | 98.3 | 96.1 | WD
Bayesian classif. | 97.0 | 96.1 | Weiss?
SVM Gauss, C=1, s=0.1 | 95.4 | 94.7 | WD
BP + conj. gradient | 94.6 | 93.8 | Schiffmann
1NN, Manhattan, std data | - | 93.8 | Karol G./WD
SVM linear, C=1 | 94.1 | 93.3 | WD
SVM Gauss, C=8, s=5 | 100 | 92.8 | WD
Default, majority (250 test errors) | - | 92.7 | -
1NN, Manhattan, raw data | - | 92.2 | Karol G./WD
Method | Training set | Test set | Reference
IB2-IB4 | 81.2-85.5 | 43.6-44.6 | WEKA, our calculation
Naive Bayes | - | 46.6 | WEKA, our calculation
1R (rules) | 58.4 | 50.3 | WEKA, our calculation
T2 (rules from decision tree) | 67.5 | 53.3 | WEKA, our calculation
FOIL (inductive logic) | 99 | 60.1 | WEKA, our calculation
FSM, initial 49 crisp logical rules | 83.5 | 63.2 | FSM, our calculation
LDA (statistical) | 68.4 | 65.0 | our calculation
DLVQ (38 nodes) | 100 | 66.0 | our calculation
C4.5 decision rules | 64.5 | 66.3 | our calculation
Best fuzzy MLP model | 75.5 | 66.3 | Mitra et al.
MLP with RPROP | - | 68.0 | our calculation
Cascade Correlation | - | 71.0 | our calculation
Fuzzy neural network | 100 | 75.5 | Hayashi
C4.5 decision tree | 94.4 | 75.5 | our calculation
FSM, Gaussian functions | 93 | 75.6 | our calculation
FSM, 60 triangular functions | 93 | 75.8 | our calculation
IB1c (instance-based) | - | 76.7 | WEKA, our calculation
kNN, k=1, Canberra, raw | 76.1 | 80.4 | WD/SBL
K* method | - | 78.5 | WEKA, our calculation
1NN, 4 features removed, Manhattan | 76.9 | 80.4 | our calculation, KG
1NN, Canberra, raw, removed f2, 6, 8, 9 | 77.2 | 83.4 | our calculation, KG
Method | % training | % test | Time train / notes | Time test / notes
MLP+SCG | 96.0 | 91.0 | reg. alpha=0.5, 36 hidden nodes, 1400 it. | fast; WD
kNN | - | 90.9 | auto k=3, Manhattan, std data | GM 2.0
kNN | 91.1 | 90.6 | 2105, Statlog | 944; parameters?
kNN | - | 90.4 | auto k=5, Euclidean, std data | GM 2.0
kNN | - | 90.0 | k=1, Manhattan, std data, no training | fast, GM 2.0
FSM | 95.1 | 89.7 | std data, a=0.95 | fast, GM 2.0; best NN result
LVQ | 95.2 | 89.5 | 1273 | 44
kNN | - | 89.4 | k=1, Euclidean, std data, no training | fast, GM 2.0
Dipol92 | 94.9 | 88.9 | 746 | 111
MLP+SCG | 94.4 | 88.5 | 5000 it.; active learning + reg. a=0.5, 8-12 hidden | fast; WD
SVM | 91.6 | 88.4 | std data, Gaussian kernel | fast, GM 2.0; unclassified 4.3%
Radial | 88.9 | 87.9 | 564 | 74
Alloc80 | 96.4 | 86.8 | 63840 | 28757
IndCart | 97.7 | 86.2 | 2109 | 9
CART | 92.1 | 86.2 | 330 | 14
MLP+BP | 88.8 | 86.1 | 72495 | 53
Bayesian Tree | 98.0 | 85.3 | 248 | 10
C4.5 | 96.0 | 85.0 | 434 | 1
New ID | 93.3 | 85.0 | 226 | 53
QuaDisc | 89.4 | 84.5 | 157 | 53
SSV | 90.9 | 84.3 | default par. | very fast, GM 2.0
Cascade | 88.8 | 83.7 | 7180 | 1
Log DA, Disc | 88.1 | 83.7 | 4414 | 41
LDA, Discrim | 85.1 | 82.9 | 68 | 12
Kohonen | 89.9 | 82.1 | 12627 | 129
Bayes | 69.2 | 71.3 | 75 | 17
N | Description | Train | Test
1 | red soil | 1072 (24.17%) | 461 (23.05%)
2 | cotton crop | 479 (10.80%) | 224 (11.20%)
3 | grey soil | 961 (21.67%) | 397 (19.85%)
4 | damp grey soil | 415 (9.36%) | 211 (10.55%)
5 | vegetation stubble | 470 (10.60%) | 237 (11.85%)
6 | mixture class | 0 | 0
7 | very damp grey soil | 1038 (23.40%) | 470 (23.50%)
Method | Accuracy % | Reference
3NN + simplex | 98.7 | our own weighted kNN
VSS, 2 epochs | 96.7 | MLP with numerical gradient
3NN | 96.7 | KG, GM, with or without weights
IB3 | 96.7 | Aha, 5 errors on test
1NN, Manhattan | 96.0 | GM kNN (our)
MLP+BP | 96.0 | Sigillito
SVM Gaussian | 94.9±2.6 | GM (our), defaults, similar for C=1-100
C4.5 | 94.9 | Hamilton
3NN, Canberra | 94.7 | GM kNN (our)
RIAC | 94.6 | Hamilton
C4 (no windowing) | 94.0 | Aha
C4.5 | 93.7 | Bennet and Blue
SVM | 93.2 | Bennet and Blue
Nonlinear perceptron | 92.0 | Sigillito
FSM + rotation | 92.8 | our
1NN, Euclidean | 92.1 | Aha, GM kNN (our)
DBCART | 91.3 | Shang, Breiman
Linear perceptron | 90.7 | Sigillito
OC1 DT | 89.5 | Bennet and Blue
CART | 88.9 | Shang, Breiman
SVM linear | 87.1±3.9 | GM (our), defaults
GTO DT | 86.0 | Bennet and Blue
Method | Accuracy % | Reference
SFM+G+G(WX) | ??±2.6 | GM (our), C=1, s=25
kNN auto+WX+G(WX) | ??.4±3.6 | GM (our)
SVM Gaussian | 94.6±4.3 | GM (our), C=1, s=25
VSSMKNN | 91.5±4.3 | MK, 12 neurons (similar 8-17)
SVM linear | 89.5±3.8 | GM (our), C=1, s=25
SSV tree | 87.8±4.5 | GM (our), default
1NN | 85.8±4.9 | GM, std, Euclidean
3NN | 84.0±5.4 | GM, std, Euclidean
Method | Train % | Test % | Reference
1NN, 5D from MDS, Euclidean, std | - | 97.1 | our, GM (WD)
1NN, Manhattan, std | - | 97.1 | our, GM (WD)
1NN, Euclidean, std | - | 96.2 | our, GM (WD)
TAP MFT Bayesian | - | 92.3 | Opper, Winther
Naive MFT Bayesian | - | 90.4 | Opper, Winther
SVM | - | 90.4 | Opper, Winther
MLP+BP, 12 hidden, best MLP | - | 90.4 | Gorman, Sejnowski
1NN, Manhattan, raw | - | 92.3 | our, GM (WD)
1NN, Euclidean, raw | - | 91.3 | our, GM (WD)
FSM, methodology? | - | 83.6 | our (RA)
1NN, Euclidean, on 5D MDS input | - | 87.5±0.8 | our, GM (WD)
1NN, Euclidean, std data | - | 86.8±1.2 | our, GM (WD)
1NN, Manhattan, std data | - | 86.3±0.3 | our, GM (WD)
MLP+BP, 12 hidden | 99.8±0.1 | 84.7±5.7 | Gorman, Sejnowski
1NN, Manhattan, raw data | - | 84.5±0.4 | our, GM (WD)
MLP+BP, 24 hidden | 99.8±0.1 | 84.5±5.7 | Gorman, Sejnowski
MLP+BP, 6 hidden | 99.7±0.2 | 83.5±5.6 | Gorman, Sejnowski
SVM linear, C=0.1 | - | 82.7±8.5 | our, GM (WD), std data
1NN, Euclidean, raw data | - | 82.1±0.9 | our, GM (WD)
SVM Gauss, C=1, s=0.1 | - | 77.4±10.1 | our, GM (WD), std data
SVM linear, C=1 | - | 76.9±11.9 | our, GM (WD), raw data
SVM linear, C=1 | - | 76.0±9.8 | our, GM (WD), std data
DBCART, 10xCV | - | 81.8 | Shang, Breiman
CART, 10xCV | - | 67.9 | Shang, Breiman
Discriminant Adaptive NN, DANN | - | 92.3 | -
Adaptive metric NN | - | 90.9 | -
kNN | - | 87.5 | -
SVM Gauss, C=1 | - | 78.8 | -
C4.5 | - | 76.9 | -
SVM linear, C=1 | - | 75.0 | -
Method | Train | Test | Reference
DBCART, 10xCV on total set(!) | - | 90.0 | Shang, Breiman
CART, 10xCV on total set | - | 78.2 | Shang, Breiman
Method | Train | Test | Reference
Square node network, 88 units | - | 54.8 | UCI
Gaussian node network, 528 units | - | 54.6 | UCI
1NN, Euclidean, raw | 99.24 | 56.3 | WD/KG
Radial Basis Function, 528 units | - | 53.5 | UCI
Gaussian node network, 88 units | - | 53.5 | UCI
FSM Gauss, 10CV on the training set | 92.60 | 51.94 | our (RA)
Square node network, 22 units | - | 51.1 | UCI
Multilayer perceptron, 88 hidden | - | 50.6 | UCI
Modified Kanerva Model, 528 units | - | 50.0 | UCI
Radial Basis Function, 88 units | - | 47.6 | UCI
Single-layer perceptron, 88 hidden | - | 33.3 | UCI
Method | Test | Reference
10xCV tests below:
3NN, Manhattan | 87.8±4.0 | Kosice
3NN, Canberra | 87.8±4.2 | WD/GM
FSM, 65 Gaussian nodes | 87.4±4.5 | Kosice
3NN, Euclidean | 87.3±3.9 | WD/GM
SSV dec. tree, 22 rules | 86.0±?? | Kosice
SVM Gauss, opt. C~1000, s~1 | 85.0±4.0 | WD, GhostMiner
SVM Gauss, C=1000, s=1 | 83.5±4.1 | WD, GhostMiner
SVM Gauss, C=1, s=0.1 | 76.6±2.5 | WD, GhostMiner
2xCV tests below:
3NN, Euclidean | 86.1±0.6 | Kosice
FSM, 40 Gaussian nodes | 85.2±1.2 | Kosice
MLP | 84.6 | Pal
Fuzzy MLP | 84.2 | Pal
SSV dec. tree, beam search | 83.3±0.9 | Kosice
SSV dec. tree, best first | 83.0±1.0 | Kosice
Bayes classifier | 79.2 | Pal
Fuzzy SOM | 73.5 | Pal
Method | Test | Reference
Leave-one-out test results:
RDA | 100 | [1]
QDA | 99.4 | [1]
LDA | 98.9 | [1]
kNN, Manhattan, k=1 | 98.7 | GM/WD, std data
1NN | 96.1 | [1], z-transformed data
kNN, Euclidean, k=1 | 95.5 | GM/WD, std data
kNN, Chebyshev, k=1 | 93.3 | GM/WD, std data
10xCV tests below:
kNN, Manhattan, auto k=1-10 | 98.9±2.3 | GM/WD, 2D data after MDS/PCA
IncNet, 10CV, def., Gauss | 98.9±2.4 | GM/WD, std data, up to 3 neurons
10CV SSV, opt. prune | 98.3±2.7 | GM/WD, 2D data after MDS/PCA
10CV SSV, node count 7 | 98.3±2.7 | GM/WD, 2D data after MDS/PCA
kNN, Euclidean, k=1 | 97.8±2.8 | GM/WD, 2D data after MDS/PCA
kNN, Manhattan, k=1 | 97.8±2.9 | GM/WD, 2D data after MDS/PCA
kNN, Manhattan, auto k=1-10 | 97.8±3.9 | GM/WD
kNN, Euclidean, k=3, weighted features | 97.8±4.7 | GM/WD
IncNet, 10CV, def., bicentral | 97.2±2.9 | GM/WD, std data, up to 3 neurons
kNN, Euclidean, auto k=1-10 | 97.2±4.0 | GM/WD
10CV SSV, opt. node | 97.2±5.4 | GM/WD, 2D data after MDS/PCA
FSM, a=.99, def. | 96.1±3.7 | GM/WD, 2D data after MDS/PCA
FSM 10CV, Gauss, a=.999 | 96.1±4.7 | GM/WD, std data, 8-11 neurons
FSM 10CV, triang., a=.99 | 96.1±5.9 | GM/WD, raw data
kNN, Euclidean, k=1 | 95.5±4.4 | GM/WD
10CV SSV, opt. node, BFS | 92.8±3.7 | GM/WD
10CV SSV, opt. node, BS | 91.6±6.5 | GM/WD
10CV SSV, opt. prune, BFS | 90.4±6.1 | GM/WD
Adaptive metric NN | 75.2 | -
Discriminant Adaptive NN, DANN | 72.9 | -
kNN | 72.0 | -
C4.5 | 68.2 | -
Class | Train | Test
1 | 464 (23.20%) | 303 (25.55%)
2 | 485 (24.25%) | 280 (23.61%)
3 | 1051 (52.55%) | 603 (50.84%)
All | 2000 (100%) | 1186 (100%)
Method | % in training | % on test | Time train | Time test
RBF, 720 nodes | 98.5 | 95.9 | - | -
kNN GM, p(X|C), k=6, Euclidean, raw | 96.8 | 95.5 | 0 | short
Dipol92 | 99.3 | 95.2 | 213 | 10
Alloc80 | 93.7 | 94.3 | 14394 | -
QuaDisc | 100.0 | 94.1 | 1581 | 809
LDA, Discrim | 96.6 | 94.1 | 929 | 31
FSM, 8 Gaussians, 180 binary | 95.4 | 94.0 | - | -
Log DA, Disc | 99.2 | 93.9 | 5057 | 76
SSV Tree, p(X|C), opt. node, 4CV | 94.8 | 93.4 | short | short
Naive Bayes | 94.8 | 93.2 | 52 | 15
Castle, middle 90 binary var. | 93.9 | 92.8 | 397 | 225
IndCart, 180 binary | 96.0 | 92.7 | 523 | 516
C4.5, on 60 features | 96.0 | 92.4 | 9 | 2
CART, middle 90 binary var. | 92.5 | 91.5 | 615 | 9
MLP+BP | 98.6 | 91.2 | 4094 | 9
Bayesian Tree | 99.9 | 90.5 | 82 | 11
CN2 | 99.8 | 90.5 | 869 | 74
New ID | 100.0 | 90.0 | 698 | 1
Ac2 | 100.0 | 90.0 | 12378 | 87
Smart | 96.6 | 88.5 | 79676 | 16
Cal5 | 89.6 | 86.9 | 1616 | 8
Itrule | 86.9 | 86.5 | 2212 | 6
kNN | 91.1 | 85.4 | 2428 | 882
Kohonen | 89.6 | 66.1 | - | -
Default, majority | 52.5 | 50.8 | - | -