Regression Models in Scikit-learn

Machine Learning | 18 July 2018

#100DaysOfMLCode

In machine learning, regression problems go hand in hand with classification problems; if you look at search trends for the two over the past five years, both draw similar interest worldwide. On this page, I have collected the regression algorithms implemented in scikit-learn so that we can look them up quickly and get to know what is available.
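
Every model below follows the same fit/predict interface, so swapping one regressor for another is usually a one-line change. Here is a minimal sketch of that workflow (the synthetic dataset and the train/test split are my own illustration, not part of any particular algorithm; later snippets reuse X_train, X_test, y_train and y_test from here):

from sklearn.datasets import make_regression
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestRegressor

# Synthetic regression data, just for illustration
X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0)
model.fit(X_train, y_train)           # learn from the training split
y_pred = model.predict(X_test)        # predict on unseen data
print(model.score(X_test, y_test))    # R^2 score on the test split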

AdaBoostRegressor

official page | source

Usage:
from sklearn.ensemble import AdaBoostRegressor
model = AdaBoostRegressor(base_estimator=None, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)
  • Parameters: base_estimator, n_estimators, learning_rate, loss, random_state.
  • Attributes: estimators_, estimator_weights_, estimator_errors_, feature_importances_
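
A hedged sketch of inspecting these fitted attributes (X_train and y_train are assumed from the split in the introduction):

model.fit(X_train, y_train)
# One weight and one error per boosting round
print(model.estimator_weights_[:5])
print(model.estimator_errors_[:5])
# Aggregate importance of each input feature
print(model.feature_importances_)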

BaggingRegressor

official page | source

Usage:
from sklearn.ensemble import BaggingRegressor
model = BaggingRegressor(base_estimator=None, n_estimators=10, max_samples=1.0, max_features=1.0, bootstrap=True, bootstrap_features=False, oob_score=False, warm_start=False, n_jobs=1, random_state=None, verbose=0)
  • Parameters: base_estimator, n_estimators, max_samples, max_features, bootstrap, bootstrap_features, oob_score, warm_start, n_jobs, random_state, verbose.
  • Attributes: estimators_, estimators_samples_, estimators_features_, oob_score_, oob_prediction_.
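
Worth noting: oob_score_ is only populated when you opt in with oob_score=True. A small sketch, with data assumed from the introduction:

model = BaggingRegressor(n_estimators=50, oob_score=True, random_state=0)
model.fit(X_train, y_train)
# Out-of-bag R^2 estimate, computed without a separate validation set
print(model.oob_score_)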

DecisionTreeRegressor

official page | source

Usage:
from sklearn.tree import DecisionTreeRegressor
model = DecisionTreeRegressor(criterion='mse', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, presort=False)
  • Parameters: criterion, splitter, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features, random_state, max_leaf_nodes, min_impurity_decrease, min_impurity_split, presort
  • Attributes: feature_importances_, max_features_, n_features_, n_outputs_, tree_
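
max_depth is the simplest lever for controlling how complex the fitted tree gets, and the tree_ attribute exposes the learned structure. A sketch (data assumed as above):

model = DecisionTreeRegressor(max_depth=4, random_state=0)
model.fit(X_train, y_train)
# The fitted tree_ object describes the learned structure
print(model.tree_.node_count)   # number of nodes in the tree
print(model.tree_.max_depth)    # actual depth reached (<= max_depth)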

ExtraTreeRegressor

official page | source

Usage:
from sklearn.tree import ExtraTreeRegressor
model = ExtraTreeRegressor(criterion='mse', splitter='random', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', random_state=None, min_impurity_decrease=0.0, min_impurity_split=None, max_leaf_nodes=None)
  • Parameters: criterion, splitter, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_features, random_state, min_impurity_decrease, min_impurity_split, max_leaf_nodes

ExtraTreesRegressor

official page | source

Usage:
from sklearn.ensemble import ExtraTreesRegressor
model = ExtraTreesRegressor(n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=False, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False)
  • Parameters: n_estimators, criterion, max_features, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes, min_impurity_split, min_impurity_decrease, bootstrap, oob_score, n_jobs, random_state, verbose, warm_start
  • Attributes: estimators_, feature_importances_, n_features_, n_outputs_, oob_score_, oob_prediction_

GaussianProcessRegressor

official page | source

Usage:
from sklearn.gaussian_process import GaussianProcessRegressor
model = GaussianProcessRegressor(kernel=None, alpha=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, normalize_y=False, copy_X_train=True, random_state=None)
  • Parameters: kernel, alpha, optimizer, n_restarts_optimizer, normalize_y, copy_X_train, random_state
  • Attributes: X_train_, y_train_, kernel_, L_, alpha_, log_marginal_likelihood_value_
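
Unlike most regressors on this page, GaussianProcessRegressor can return a per-point uncertainty estimate alongside the prediction. A minimal sketch, assuming an RBF kernel (any kernel from sklearn.gaussian_process.kernels would do, and data comes from the introduction):

from sklearn.gaussian_process.kernels import RBF

model = GaussianProcessRegressor(kernel=RBF(), random_state=0)
model.fit(X_train, y_train)
# return_std=True gives the predictive standard deviation per sample
y_mean, y_std = model.predict(X_test, return_std=True)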

GradientBoostingRegressor

official page | source

Usage:
from sklearn.ensemble import GradientBoostingRegressor
model = GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, presort='auto')
  • Parameters: loss, learning_rate, n_estimators, max_depth, criterion, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, subsample, max_features, max_leaf_nodes, min_impurity_split, min_impurity_decrease, alpha, init, verbose, warm_start, random_state, presort
  • Attributes: feature_importances_, oob_improvement_, train_score_, loss_, init, estimators_
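
Because boosting builds the ensemble one stage at a time, you can watch the test error evolve as stages are added using staged_predict. A sketch, assuming the split from the introduction:

from sklearn.metrics import mean_squared_error

model = GradientBoostingRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
# Test error after each boosting stage
test_errors = [mean_squared_error(y_test, y_pred)
               for y_pred in model.staged_predict(X_test)]
print(min(test_errors))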

HuberRegressor

official page | source

Usage:
from sklearn.linear_model import HuberRegressor
model = HuberRegressor(epsilon=1.35, max_iter=100, alpha=0.0001, warm_start=False, fit_intercept=True, tol=1e-05)
  • Parameters: epsilon, max_iter, alpha, warm_start, fit_intercept, tol
  • Attributes: coef_, intercept_, scale_, n_iter_, outliers_
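
After fitting, the outliers_ attribute flags which training samples fell outside the epsilon band and were therefore down-weighted. A small sketch (data assumed as above):

model = HuberRegressor(epsilon=1.35)
model.fit(X_train, y_train)
# Boolean mask: True where a training sample was treated as an outlier
print(model.outliers_.sum(), "samples flagged as outliers")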

KNeighborsRegressor

official page | source

Usage:
from sklearn.neighbors import KNeighborsRegressor
model = KNeighborsRegressor(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1)
  • Parameters: n_neighbors, weights, algorithm, leaf_size, p, metric, metric_params, n_jobs
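
The weights parameter controls how neighbours are averaged: 'uniform' gives each neighbour an equal say, while 'distance' weights closer neighbours more heavily. A quick comparison sketch, reusing the data from the introduction:

for w in ('uniform', 'distance'):
    model = KNeighborsRegressor(n_neighbors=5, weights=w)
    model.fit(X_train, y_train)
    print(w, model.score(X_test, y_test))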

MLPRegressor

official page | source

Usage:
from sklearn.neural_network import MLPRegressor
model = MLPRegressor(hidden_layer_sizes=(100,), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
  • Parameters: hidden_layer_sizes, activation, solver, alpha, batch_size, learning_rate, learning_rate_init, power_t, max_iter, shuffle, random_state, tol, verbose, warm_start, momentum, nesterovs_momentum, early_stopping, validation_fraction, beta_1, beta_2, epsilon
  • Attributes: loss_, coefs_, intercepts_, n_iter_, n_layers_, n_outputs_, out_activation_
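
Neural networks are sensitive to feature scale, and early_stopping holds out validation_fraction of the training data to stop once the score stops improving. A hedged sketch combining both (the StandardScaler pipeline is my own addition, not part of MLPRegressor itself):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(100,), early_stopping=True,
                 validation_fraction=0.1, random_state=0))
model.fit(X_train, y_train)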

MultiOutputRegressor

official page | source

Usage:
from sklearn.multioutput import MultiOutputRegressor
model = MultiOutputRegressor(estimator, n_jobs=1)  # estimator: any scikit-learn regressor instance
  • Parameters: estimator, n_jobs
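
This is a meta-estimator: it fits one clone of the wrapped regressor per target column, which is how single-output models such as GradientBoostingRegressor can handle a 2-D y. A minimal sketch with synthetic multi-output data of my own:

from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor

# Two target columns -> two independently fitted regressors
X, y = make_regression(n_samples=200, n_features=5, n_targets=2, random_state=0)
model = MultiOutputRegressor(GradientBoostingRegressor(random_state=0))
model.fit(X, y)
print(model.predict(X[:3]).shape)   # (3, 2)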

PassiveAggressiveRegressor

official page | source

Usage:
from sklearn.linear_model import PassiveAggressiveRegressor
model = PassiveAggressiveRegressor(C=1.0, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, loss='epsilon_insensitive', epsilon=0.1, random_state=None, warm_start=False, average=False, n_iter=None)
  • Parameters: C, fit_intercept, max_iter, tol, shuffle, verbose, loss, epsilon, random_state, warm_start, average, n_iter
  • Attributes: coef_, intercept_, n_iter_
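
Like other online linear models in scikit-learn, PassiveAggressiveRegressor supports partial_fit, so it can learn from data that arrives in batches. A sketch (the mini-batching here is purely illustrative, data as in the introduction):

import numpy as np

model = PassiveAggressiveRegressor(random_state=0)
# Feed the training data in mini-batches instead of all at once
for X_batch, y_batch in zip(np.array_split(X_train, 10),
                            np.array_split(y_train, 10)):
    model.partial_fit(X_batch, y_batch)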

RadiusNeighborsRegressor

official page | source

Usage:
from sklearn.neighbors import RadiusNeighborsRegressor
model = RadiusNeighborsRegressor(radius=1.0, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None)
  • Parameters: radius, weights, algorithm, leaf_size, p, metric, metric_params

RandomForestRegressor

official page | source

Usage:
from sklearn.ensemble import RandomForestRegressor
model = RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False)
  • Parameters: n_estimators, criterion, max_features, max_depth, min_samples_split, min_samples_leaf, min_weight_fraction_leaf, max_leaf_nodes, min_impurity_split, min_impurity_decrease, bootstrap, oob_score, n_jobs, random_state, verbose, warm_start
  • Attributes: estimators_, feature_importances_, n_features_, n_outputs_, oob_score_, oob_prediction_
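
feature_importances_ makes random forests handy for a quick feature ranking; the argsort ordering below is my own illustration, with data assumed from the introduction:

import numpy as np

model = RandomForestRegressor(n_estimators=100, n_jobs=-1, random_state=0)
model.fit(X_train, y_train)
# Indices of features, most important first
ranking = np.argsort(model.feature_importances_)[::-1]
print(ranking)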

RANSACRegressor

official page | source

Usage:
import numpy as np
from sklearn.linear_model import RANSACRegressor
model = RANSACRegressor(base_estimator=None, min_samples=None, residual_threshold=None, is_data_valid=None, is_model_valid=None, max_trials=100, max_skips=np.inf, stop_n_inliers=np.inf, stop_score=np.inf, stop_probability=0.99, residual_metric=None, loss='absolute_loss', random_state=None)
  • Parameters: base_estimator, min_samples, residual_threshold, is_data_valid, is_model_valid, max_trials, max_skips, stop_n_inliers, stop_score, stop_probability, residual_metric, loss, random_state
  • Attributes: estimator_, n_trials_, inlier_mask_, n_skips_no_inliers_, n_skips_invalid_data_, n_skips_invalid_model_
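
After fitting, inlier_mask_ tells you which training samples the final consensus model treated as inliers. A minimal sketch (data assumed as above; the default base estimator is LinearRegression):

model = RANSACRegressor(random_state=0)
model.fit(X_train, y_train)
inliers = model.inlier_mask_            # boolean mask over training samples
print(inliers.sum(), "inliers of", len(y_train))
print(model.estimator_.coef_[:3])       # the underlying fitted linear model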

SGDRegressor

official page | source

Usage:
from sklearn.linear_model import SGDRegressor
model = SGDRegressor(loss='squared_loss', penalty='l2', alpha=0.0001, l1_ratio=0.15, fit_intercept=True, max_iter=None, tol=None, shuffle=True, verbose=0, epsilon=0.1, random_state=None, learning_rate='invscaling', eta0=0.01, power_t=0.25, warm_start=False, average=False, n_iter=None)
  • Parameters: loss, penalty, alpha, l1_ratio, fit_intercept, max_iter, tol, shuffle, verbose, epsilon, random_state, learning_rate, eta0, power_t, warm_start, average, n_iter
  • Attributes: coef_, intercept_, average_coef_, average_intercept_, n_iter_
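
Stochastic gradient descent is very sensitive to feature scaling, so the usual advice is to standardize inputs first. A sketch using a pipeline (the scaler is my own addition, data as in the introduction):

from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

model = make_pipeline(StandardScaler(),
                      SGDRegressor(max_iter=1000, tol=1e-3, random_state=0))
model.fit(X_train, y_train)
print(model.score(X_test, y_test))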

TheilSenRegressor

official page | source

Usage:
from sklearn.linear_model import TheilSenRegressor
model = TheilSenRegressor(fit_intercept=True, copy_X=True, max_subpopulation=10000.0, n_subsamples=None, max_iter=300, tol=0.001, random_state=None, n_jobs=1, verbose=False)
  • Parameters: fit_intercept, copy_X, max_subpopulation, n_subsamples, max_iter, tol, random_state, n_jobs, verbose
  • Attributes: coef_, intercept_, breakdown_, n_iter_, n_subpopulation_
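
Theil-Sen is a robust estimator, and the breakdown_ attribute reports the approximate fraction of outliers the fit can tolerate. A small sketch (data assumed as above):

model = TheilSenRegressor(random_state=0)
model.fit(X_train, y_train)
print(model.breakdown_)   # approximate breakdown point of the fit
print(model.coef_[:3])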

If you have something useful to add to this article, found a bug in the code, or would like to improve any of the points mentioned, feel free to write it down in the comments. I hope you found something useful here.