The reference material below comes from the official scikit-learn website: http://scikit-learn.org/stable/
In summary, the functionality scikit-learn provides falls into the following areas:
Classification algorithms
Regression algorithms
Clustering algorithms
Dimensionality reduction algorithms
Text mining algorithms
Model optimization
Data preprocessing
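Nearly all of the estimators listed below share the same fit/predict/score API. As a minimal sketch of that common workflow (the dataset and classifier here are chosen purely for illustration, not taken from the original text):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Load a toy dataset and hold out a test split
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Every sklearn estimator follows the same pattern: construct, fit, evaluate
clf = DecisionTreeClassifier(random_state=0)
clf.fit(X_train, y_train)
acc = clf.score(X_test, y_test)   # mean accuracy on the held-out data
```

Swapping in any of the classifiers below (SVC, KNeighborsClassifier, GaussianNB, ...) requires changing only the constructor line.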
Classification algorithms
Linear Discriminant Analysis (LDA)
>>> from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
>>> lda = LinearDiscriminantAnalysis(solver='svd', store_covariance=True)
Quadratic Discriminant Analysis (QDA)
>>> from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
>>> qda = QuadraticDiscriminantAnalysis(store_covariance=True)
Support Vector Machines (SVM)
>>> from sklearn import svm
>>> clf = svm.SVC()
K-Nearest Neighbors (KNN)
>>> from sklearn import neighbors
>>> clf = neighbors.KNeighborsClassifier(n_neighbors=15, weights='uniform')
Neural networks (multi-layer perceptron)
>>> from sklearn.neural_network import MLPClassifier
>>> clf = MLPClassifier(solver='lbfgs', alpha=1e-5,
...                     hidden_layer_sizes=(5, 2), random_state=1)
Naive Bayes
>>> from sklearn.naive_bayes import GaussianNB
>>> gnb = GaussianNB()
Decision Trees
>>> from sklearn import tree
>>> clf = tree.DecisionTreeClassifier()
Ensemble methods
1. Bagging
>>> from sklearn.ensemble import BaggingClassifier
>>> from sklearn.neighbors import KNeighborsClassifier
>>> bagging = BaggingClassifier(KNeighborsClassifier(),
...                             max_samples=0.5, max_features=0.5)
2. Random Forest
>>> from sklearn.ensemble import RandomForestClassifier
>>> clf = RandomForestClassifier(n_estimators=10)
3. AdaBoost
>>> from sklearn.ensemble import AdaBoostClassifier
>>> clf = AdaBoostClassifier(n_estimators=100)
4. GBDT (Gradient Tree Boosting)
>>> from sklearn.ensemble import GradientBoostingClassifier
>>> clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
...     max_depth=1, random_state=0).fit(X_train, y_train)
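The GBDT snippet above references `X_train` and `y_train` without defining them; to make it concrete, here is an end-to-end run on synthetic data (the dataset and split are illustrative, not from the original):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data, purely for demonstration
X, y = make_classification(n_samples=500, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Shallow trees (max_depth=1) boosted 100 times, as in the snippet above
clf = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0,
                                 max_depth=1, random_state=0)
clf.fit(X_train, y_train)
score = clf.score(X_test, y_test)   # accuracy on the held-out quarter
```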
Regression algorithms
Linear Regression
>>> from sklearn import linear_model
>>> reg = linear_model.LinearRegression()
Ridge Regression
>>> from sklearn import linear_model
>>> reg = linear_model.Ridge(alpha=.5)
Kernel Ridge Regression
>>> from sklearn.kernel_ridge import KernelRidge
>>> kr = KernelRidge(kernel='rbf', alpha=0.1, gamma=10)
Support Vector Regression (SVR)
>>> from sklearn import svm
>>> clf = svm.SVR()
Lasso
>>> from sklearn import linear_model
>>> reg = linear_model.Lasso(alpha=0.1)
Elastic Net
>>> from sklearn.linear_model import ElasticNet
>>> regr = ElasticNet(random_state=0)
Bayesian Ridge Regression
>>> from sklearn import linear_model
>>> reg = linear_model.BayesianRidge()
Logistic Regression
>>> from sklearn.linear_model import LogisticRegression
>>> clf_l1_LR = LogisticRegression(C=C, penalty='l1', tol=0.01)
>>> clf_l2_LR = LogisticRegression(C=C, penalty='l2', tol=0.01)
Robust regression (RANSAC)
>>> from sklearn import linear_model
>>> ransac = linear_model.RANSACRegressor()
Polynomial Regression
>>> from sklearn.preprocessing import PolynomialFeatures
>>> poly = PolynomialFeatures(degree=2)
>>> poly.fit_transform(X)
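`PolynomialFeatures` only expands the feature matrix; a linear model fitted on the expanded features gives polynomial regression. A minimal sketch using a pipeline (the data here is illustrative):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# An exact quadratic target: y = 2x^2 + 3x + 1
X = np.arange(10).reshape(-1, 1)
y = (2 * X**2 + 3 * X + 1).ravel()

# Degree-2 feature expansion followed by ordinary least squares
model = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
model.fit(X, y)
print(model.predict([[10]]))   # recovers the quadratic: ~231 (2*100 + 30 + 1)
```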
Gaussian Process Regression (GPR)
Partial Least Squares (PLS)
>>> from sklearn.cross_decomposition import PLSCanonical
>>> PLSCanonical(algorithm='nipals', copy=True, max_iter=500, n_components=2,
...              scale=True, tol=1e-06)
Canonical Correlation Analysis (CCA)
>>> from sklearn.cross_decomposition import CCA
>>> cca = CCA(n_components=1)
Clustering algorithms
K-nearest neighbors (KNN) search
>>> from sklearn.neighbors import NearestNeighbors
>>> nbrs = NearestNeighbors(n_neighbors=2, algorithm='ball_tree').fit(X)
K-means
>>> from sklearn.cluster import KMeans
>>> kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10)
Hierarchical clustering
>>> from sklearn.cluster import AgglomerativeClustering
>>> model = AgglomerativeClustering(linkage=linkage,
...     connectivity=connectivity, n_clusters=n_clusters)
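The clustering estimators above share the same interface: fit on a feature matrix, then read the per-sample assignments from `labels_`. A minimal sketch with K-means on synthetic data (the dataset is illustrative):

```python
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Three well-separated Gaussian blobs in 2D, for demonstration only
X, _ = make_blobs(n_samples=300, centers=3, random_state=42)

kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
labels = kmeans.labels_               # cluster index (0..2) per sample
centers = kmeans.cluster_centers_     # one centroid per cluster, shape (3, 2)
```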
Dimensionality reduction algorithms
Principal Component Analysis (PCA)
>>> from sklearn.decomposition import PCA
>>> pca = PCA(n_components=2)
Kernel PCA
>>> from sklearn.decomposition import KernelPCA
>>> kpca = KernelPCA(kernel='rbf', fit_inverse_transform=True, gamma=10)
Factor Analysis
>>> from sklearn.decomposition import FactorAnalysis
>>> fa = FactorAnalysis()
Text mining algorithms
Topic models: NMF and Latent Dirichlet Allocation
>>> from sklearn.decomposition import NMF, LatentDirichletAllocation
The remaining areas are described by capability only, without listing specific functions.
Model optimization
Feature selection
Stochastic gradient methods
Cross-validation
Hyperparameter tuning
Model evaluation: computes accuracy, recall, AUC, etc., and plots ROC curves, loss curves, and more
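The cross-validation and hyperparameter-tuning capabilities live in `sklearn.model_selection`; a minimal sketch (the estimator, dataset, and parameter grid are illustrative choices):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV, cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

# Cross-validation: one accuracy score per fold
scores = cross_val_score(SVC(), X, y, cv=5)

# Hyperparameter tuning: exhaustive search over a small grid of C values
grid = GridSearchCV(SVC(), {'C': [0.1, 1, 10]}, cv=5).fit(X, y)
best_C = grid.best_params_['C']   # the C value with the best CV accuracy
```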
Data preprocessing
Standardization
Outlier handling
Non-linear transformations
Binarization
One-hot encoding
Missing-value imputation: mean, median, mode, constant-value, and multiple imputation
Derived-variable generation
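Three of the preprocessing steps above (imputation, standardization, one-hot encoding) can be sketched with a tiny example; the data and chosen strategies here are illustrative:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, OneHotEncoder
from sklearn.impute import SimpleImputer

# Numeric data with one missing value
X = np.array([[1.0, 2.0], [3.0, np.nan], [5.0, 6.0]])

# Mean imputation: NaN is replaced by its column mean (here, 4.0)
X_imp = SimpleImputer(strategy='mean').fit_transform(X)

# Standardization: each column rescaled to zero mean and unit variance
X_std = StandardScaler().fit_transform(X_imp)

# One-hot encoding of a categorical column (sparse by default)
cats = np.array([['red'], ['blue'], ['red']])
onehot = OneHotEncoder().fit_transform(cats).toarray()   # shape (3, 2)
```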