It doesn't seem that the feature importances change. Using `lightgbm==2.3.0` I get the following:
```python
# Imports added so the reproduction is self-contained
from sklearn.datasets import make_classification
import lightgbm as lgb

xval, yval = make_classification(n_samples=1000, n_features=10)
model = lgb.LGBMClassifier(n_estimators=100, learning_rate=0.05, verbose=-1)

# Refit the same model ten times and print the importances after each fit
for i in range(10):
    model.fit(xval, yval)
    print(model.feature_importances_)
```

Output:

```
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
[244 537 213 214 183 222 282 264 175 648]
```
If this is the expected LightGBM behaviour, then there is obviously no need to average `feature_importances_` over multiple fits.
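For what it's worth, the identical results are probably because training here has no stochastic component: with the default parameters (no row or column subsampling), fitting the same data repeatedly builds the same trees, so the importances cannot change. A minimal sketch of how one might get importances that actually vary, assuming the goal is to average them, is to enable row subsampling and vary `random_state` across fits:

```python
from sklearn.datasets import make_classification
import lightgbm as lgb

xval, yval = make_classification(n_samples=1000, n_features=10)

# Vary the seed and enable row subsampling so each fit sees a
# different bootstrap of the data; the importances should then differ.
for seed in range(3):
    model = lgb.LGBMClassifier(
        n_estimators=100,
        learning_rate=0.05,
        subsample=0.8,      # row subsampling (bagging_fraction)
        subsample_freq=1,   # subsampling must be enabled with a frequency > 0
        random_state=seed,
        verbose=-1,
    )
    model.fit(xval, yval)
    print(model.feature_importances_)
```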