LightGBM -- Light Gradient Boosting Machine
LightGBM is an open-source machine learning framework from Microsoft based on gradient-boosted decision trees, in the same family as XGBoost. It is built for distributed training and efficient handling of large datasets. Its main advantages:
- Faster training, with accuracy that often matches or exceeds XGBoost's
- Lower memory usage: the histogram algorithm bins continuous features into discrete buckets, which delivers remarkably fast training with a small memory footprint
- Higher accuracy from leaf-wise rather than level-wise splitting, which speeds up convergence of the objective function and lets very complex trees capture the underlying patterns of the training data; overfitting is controlled with the num_leaves and max_depth hyperparameters
- Support for parallel computation, distributed processing, and GPU learning
Characteristics of LightGBM
- XGBoost splits on one variable at a time and explores different cut points for that variable across each whole level of the tree (level-wise tree-growth strategy)
- LightGBM instead splits at the leaf node that promises the best improvement in fit (leaf-wise tree-growth strategy)

This lets LightGBM reach a good fit on the data quickly and produce solutions that can serve as alternatives to XGBoost. Algorithmically speaking, XGBoost treats the split structure of a decision tree as a graph explored breadth-first (BFS), whereas LightGBM explores it depth-first (DFS). A minimal configuration sketch follows.
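As a hedged illustration of the leaf-wise strategy (the values below are assumptions for demonstration, not tuned settings), num_leaves is the main knob for leaf-wise growth, with max_depth available as a guard against overfitting:

```python
import lightgbm as lgb

# Illustrative, untuned values for the scikit-learn interface.
clf = lgb.LGBMClassifier(
    num_leaves=31,      # leaf-wise growth: grow at most 31 leaves per tree
    max_depth=-1,       # -1 = no depth limit; set e.g. 8 to rein in overfitting
    learning_rate=0.05,
    n_estimators=100,
)
```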
Installation
```
# install with conda
conda install -c conda-forge lightgbm

# install with pip
python3.6 -m pip install lightgbm
```
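A quick way to confirm the installation worked:

```python
import lightgbm as lgb
print(lgb.__version__)  # e.g. 3.3.2, the version of the docs linked below
```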
Basic Usage

The training workflow exposes many API entry points; the sections below explain the common ones with usage examples.
https://lightgbm.readthedocs.io/en/v3.3.2/Python-API.html
lightgbm.train
```python
parameters = {
    'learning_rate': 0.05,
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'num_leaves': 32,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'seed': 2022,
    'bagging_seed': 1,
    'feature_fraction_seed': 7,
    'min_data_in_leaf': 20,
    'n_jobs': -1,
    'verbose': -1,
}

lightgbm.train(params, train_set, num_boost_round=100, valid_sets=None, valid_names=None,
               fobj=None, feval=None, init_model=None, feature_name='auto',
               categorical_feature='auto', early_stopping_rounds=None, evals_result=None,
               verbose_eval='warn', learning_rates=None, keep_training_booster=False,
               callbacks=None)
```

| Parameter | Description |
| --- | --- |
| params | Hyperparameters for model training, e.g. the learning rate and the evaluation metric |
| train_set | The training Dataset |
| num_boost_round | Number of boosting iterations |
| valid_sets | Validation sets; typically valid_sets = [valid_set, train_set] |
| verbose_eval | How often evaluation results are printed; an int N prints them every N boosting rounds |
| early_stopping_rounds | Training stops when the validation score has not improved for this many consecutive rounds, e.g. 100 or 200 |
| evals_result | Stores all evaluation results of all the items in valid_sets; commonly used to plot the loss curve over the iterations |
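Below is a minimal sketch of a single call exercising these parameters; the synthetic data and split are assumptions for illustration only (the full cross-validation example follows).

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=2022)
X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=2022)

train_set = lgb.Dataset(X_tr, label=y_tr)
valid_set = lgb.Dataset(X_val, label=y_val, reference=train_set)

evals_result = {}
booster = lgb.train(
    params=parameters,              # the dict defined above
    train_set=train_set,
    num_boost_round=1000,
    valid_sets=[valid_set, train_set],
    early_stopping_rounds=100,      # stop once the validation score stalls for 100 rounds
    verbose_eval=200,               # print the metrics every 200 iterations
    evals_result=evals_result,      # filled in place; handy for plotting the loss curve
)
```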
Usage example: training a binary classification model with lightgbm.train and K-fold cross-validation
```python
import lightgbm as lgb
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import roc_auc_score, f1_score, precision_score, recall_score

# Assumed inputs: `data` is a DataFrame with feature columns plus a 'label'
# column that is NaN for the unlabeled test rows.
features = [c for c in data.columns if c != 'label']

X_train, X_test = data[~data['label'].isna()], data[data['label'].isna()]
Y_train = X_train['label']

KF = StratifiedKFold(n_splits=5, shuffle=True, random_state=2022)
parameters = {
    'learning_rate': 0.05,
    'boosting_type': 'gbdt',
    'objective': 'binary',
    'metric': 'auc',
    'num_leaves': 32,
    'feature_fraction': 0.8,
    'bagging_fraction': 0.8,
    'bagging_freq': 5,
    'seed': 2022,
    'bagging_seed': 1,
    'feature_fraction_seed': 7,
    'min_data_in_leaf': 20,
    'n_jobs': -1,
    'verbose': -1,
}
num_round = 10000  # upper bound on boosting rounds; early stopping ends training sooner

lgb_result = np.zeros(len(X_train))  # out-of-fold predictions
for fold_, (trn_idx, val_idx) in enumerate(KF.split(X_train.values, Y_train.values)):
    print("fold {} of 5".format(fold_ + 1))
    trn_data = lgb.Dataset(X_train.iloc[trn_idx][features], label=Y_train.iloc[trn_idx])
    val_data = lgb.Dataset(X_train.iloc[val_idx][features], label=Y_train.iloc[val_idx])
    evaluation_result = {}
    model = lgb.train(params=parameters,
                      train_set=trn_data,
                      num_boost_round=num_round,
                      valid_sets=[trn_data, val_data],
                      verbose_eval=500,
                      early_stopping_rounds=100,
                      evals_result=evaluation_result)
    lgb_result[val_idx] = model.predict(X_train.iloc[val_idx][features],
                                        num_iteration=model.best_iteration)
    model.save_model(f'model/model_{fold_}.txt')  # assumes the model/ directory exists
    lgb.plot_metric(evaluation_result, metric='auc')

# Evaluate on the out-of-fold predictions; the test rows are unlabeled
# (label is NaN), so only predictions can be produced for them.
oof_binary = [1 if p >= 0.5 else 0 for p in lgb_result]
print('Train Precision score: {}'.format(precision_score(Y_train, oof_binary)))
print('Train Recall score: {}'.format(recall_score(Y_train, oof_binary)))
print('Train AUC score: {}'.format(roc_auc_score(Y_train, lgb_result)))
print('Train F1 score: {}'.format(f1_score(Y_train, oof_binary)))

test_predict = model.predict(X_test[features], num_iteration=model.best_iteration)
```

Hyperparameter Tuning
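As a hedged sketch of one common tuning approach, the grid search below sweeps the overfitting-related parameters mentioned earlier (num_leaves, max_depth, min_data_in_leaf); the grid values and synthetic data are illustrative assumptions, not recommendations.

```python
import lightgbm as lgb
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV

# Synthetic data purely for illustration.
X, y = make_classification(n_samples=1000, n_features=20, random_state=2022)

# Illustrative grid over the parameters that most directly control overfitting.
param_grid = {
    'num_leaves': [15, 31, 63],
    'max_depth': [-1, 6, 10],
    'min_child_samples': [10, 20, 40],  # sklearn-API alias of min_data_in_leaf
}
search = GridSearchCV(
    estimator=lgb.LGBMClassifier(learning_rate=0.05, n_estimators=200),
    param_grid=param_grid,
    scoring='roc_auc',
    cv=5,
)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```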
Visualization
Feature importance plot: lightgbm.plot_importance
```python
lightgbm.plot_importance(booster, ax=None, height=0.2, xlim=None, ylim=None,
                         title='Feature importance', xlabel='Feature importance',
                         ylabel='Features', importance_type='auto',
                         max_num_features=None, ignore_zero=True, figsize=None,
                         dpi=None, grid=True, precision=3, **kwargs)

# plot the 10 most important features of a trained model
lightgbm.plot_importance(model, max_num_features=10)
```

Saving / Loading Models
- Saving: lightgbm.Booster.save_model()
- Loading: instantiate lightgbm.Booster with the model_file argument
- Alternatively, use the joblib library familiar from the scikit-learn ecosystem
- Note: files saved with joblib conventionally use the .pkl suffix; both approaches are sketched below
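A minimal sketch of both approaches, assuming `model` is the booster trained above (file names are illustrative):

```python
import lightgbm as lgb
import joblib

# Native LightGBM format: save the trained booster to a text file...
model.save_model('model.txt')
# ...and load it back by instantiating a Booster with model_file.
booster = lgb.Booster(model_file='model.txt')

# Alternative: pickle the model with joblib; .pkl is the conventional suffix.
joblib.dump(model, 'model.pkl')
model = joblib.load('model.pkl')
```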
Model Conversion
References
- LightGBM's documentation
- LightGBM Chinese documentation
- LightGBM project on GitHub
- Examples of LightGBM in Kaggle machine learning competitions
- Ke et al., "LightGBM: A Highly Efficient Gradient Boosting Decision Tree". Advances in Neural Information Processing Systems 30 (NIPS 2017), pp. 3149-3157.