


Evaluation Results: results of testing classification algorithms.

The widget tests learning algorithms on data. Different sampling schemes are available, including using separate test data. The widget does two things: first, it shows a table with different classifier performance measures, such as classification accuracy and area under the curve; second, it outputs evaluation results, which can be used by other widgets for analyzing the performance of classifiers, such as ROC Analysis or Confusion Matrix.

The Learner signal has an uncommon property: it can be connected to more than one widget to test multiple learners with the same procedures.

The widget supports various sampling methods:

- Cross-validation splits the data into a given number of folds (usually 5 or 10). The algorithm is tested by holding out the examples from one fold at a time; the model is induced from the other folds, and the examples from the held-out fold are classified.
- Cross-validation by feature performs cross-validation, but the folds are defined by the selected categorical feature from the meta-features.
- Random sampling randomly splits the data into a training and a testing set in the given proportion (e.g. 70:30); the whole procedure is repeated a specified number of times.
- Leave-one-out is similar, but it holds out one instance at a time, inducing the model from all the others and then classifying the held-out instance. This method is obviously very stable, reliable… and very slow.
- Test on train data uses the whole dataset for training and then for testing. This method practically always gives wrong results.
- Test on test data: the methods above use the data from the Data signal only; this option instead tests the learners on separate test data.
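The widget performs all of this internally, but the core cross-validation scheme is easy to sketch. Below is a minimal, self-contained illustration in plain Python; the function names and the majority-class toy learner are invented for the example and are not part of the widget's API. Leave-one-out is the special case where the number of folds equals the number of instances, and test-on-train corresponds to fitting and evaluating on the full dataset.

```python
import random
from collections import Counter

def k_fold_indices(n, k=10, seed=0):
    """Shuffle the indices 0..n-1 and partition them into k folds."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(data, labels, fit, predict, k=10):
    """Hold out one fold at a time: induce the model from the other folds,
    classify the held-out examples, and return overall accuracy."""
    correct, total = 0, 0
    for fold in k_fold_indices(len(data), k):
        held = set(fold)
        train = [i for i in range(len(data)) if i not in held]
        model = fit([data[i] for i in train], [labels[i] for i in train])
        for i in fold:
            correct += predict(model, data[i]) == labels[i]
            total += 1
    return correct / total

# Toy learner: always predicts the majority class seen in training.
fit_majority = lambda X, y: Counter(y).most_common(1)[0][0]
predict_majority = lambda model, x: model

data = list(range(20))
labels = [0] * 15 + [1] * 5
acc = cross_validate(data, labels, fit_majority, predict_majority, k=10)
```

Leave-one-out would be `k=len(data)`; random sampling would instead draw a fresh split (e.g. 70:30) on each repetition rather than partitioning into folds.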

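The two measures named above can also be sketched directly. The hypothetical helpers below (not the widget's implementation) compute classification accuracy and area under the ROC curve, the latter via the rank-based (Mann–Whitney) formulation:

```python
def accuracy(labels, predictions):
    """Fraction of correctly classified instances."""
    return sum(y == p for y, p in zip(labels, predictions)) / len(labels)

def auc(labels, scores):
    """Area under the ROC curve: the probability that a randomly chosen
    positive instance is scored above a randomly chosen negative one
    (ties count one half)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

accuracy([0, 1, 1, 0], [0, 1, 0, 0])      # -> 0.75
auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])  # -> 0.75
```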