Ensembling and Stacking

1. Ensemble result files

The most straightforward approach is to ensemble existing prediction files, which is ideal when teaming up.

Model ensembling reduces the error rate, and it works best when the models' predictions are weakly correlated.
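As a minimal sketch of ensembling result files (the prediction vectors below are made up, standing in for labels loaded from each teammate's submission file), a majority vote can be computed like this:

```python
import numpy as np

# Hypothetical class labels predicted by three models on five samples,
# as they might be read from three submission files.
pred_a = np.array([1, 0, 1, 1, 0])
pred_b = np.array([1, 1, 1, 0, 0])
pred_c = np.array([0, 0, 1, 1, 1])

# Majority vote: each sample's final label is the most common prediction.
# bincount/argmax also works for multiclass labels, not just binary.
stacked = np.vstack([pred_a, pred_b, pred_c])
votes = np.apply_along_axis(lambda col: np.bincount(col).argmax(), 0, stacked)
print(votes)  # -> [1 0 1 1 0]
```

Because the three vectors disagree in different places, the vote corrects individual mistakes; this is exactly why low correlation between models helps.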

Classifiers

# Imports for the classifiers below (all from scikit-learn)
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

knn = KNeighborsClassifier(n_neighbors=1)
gnb = GaussianNB()
rf = RandomForestClassifier(random_state=1)
ada = AdaBoostClassifier(n_estimators=50)  # AdaBoostClassifier has no alpha parameter; alpha belongs to MLPClassifier
nn = MLPClassifier(alpha=1)
svc1 = SVC(kernel="linear", C=0.025)
svc2 = SVC(gamma=2, C=1)
gp = GaussianProcessClassifier(1.0 * RBF(1.0))
tree = DecisionTreeClassifier(max_depth=5)
qda = QuadraticDiscriminantAnalysis()

Key hyperparameters per classifier:

KNeighborsClassifier: n_neighbors, weights, algorithm, leaf_size, metric
GaussianNB: priors
RandomForestClassifier:
AdaBoostClassifier: base_estimator, n_estimators, learning_rate
MLPClassifier: hidden_layer_sizes, activation, solver, alpha, batch_size, learning_rate
SVC (C-Support Vector Classification): C (penalty parameter), kernel
GaussianProcessClassifier:
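The section title also mentions stacking. One way to stack a few of the classifiers above is scikit-learn's StackingClassifier, which trains a meta-model on the base models' cross-validated predictions. This is a sketch on synthetic data, not the author's exact setup:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

# Synthetic binary classification data as a stand-in for a real dataset.
X, y = make_classification(n_samples=300, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Base models produce out-of-fold predictions (cv=5); a logistic
# regression meta-model learns how to combine them.
stack = StackingClassifier(
    estimators=[
        ("knn", KNeighborsClassifier(n_neighbors=1)),
        ("gnb", GaussianNB()),
        ("rf", RandomForestClassifier(random_state=1)),
    ],
    final_estimator=LogisticRegression(),
    cv=5,
)
stack.fit(X_tr, y_tr)
print(stack.score(X_te, y_te))
```

Using out-of-fold predictions for the meta-model is what keeps stacking from simply memorizing the base models' training-set fit.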
