Hi @ninetale, the effective number of samples used for estimation in both 1 and 2 is n = 1000, but in 1 each tree sets aside honesty.fraction * sample.fraction * n = 400 observations for "honest" splitting. 3: yes, but the weights can be zero.
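A quick sanity check on the arithmetic in the answer above. This sketch assumes grf's default `honesty.fraction = 0.5`; the variable names are just for illustration:

```r
# Per-tree sample accounting under honesty, assuming honesty.fraction = 0.5
# (the grf default) with the sample.fraction = 0.8 and n = 1000 from the question.
n <- 1000
sample.fraction <- 0.8
honesty.fraction <- 0.5

subsample  <- n * sample.fraction            # 800 observations drawn per tree
splitting  <- subsample * honesty.fraction   # 400 used to place the splits
estimating <- subsample - splitting          # 400 used to fill the leaf estimates
c(subsample = subsample, splitting = splitting, estimating = estimating)
```

Note that this split is redrawn for every tree, so over 20,000 trees all n = 1000 observations contribute to estimation.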
It uses 400 observations for forest construction and 400 for estimation. And since 200 observations are held out, I guess `test_calibration(cf)` uses those 200.
---
However, the question I still have is: does `best_linear_projection(cf, X, target.sample = "overlap")` use all of the first 1,000 observations?
(That is, of the 1,000 observations, 400 for model construction, 400 for estimation, and 200 for the calibration test, and then all 1,000 again for the heterogeneity-of-effects analysis?)
If not, do I have to divide the original 1,000 observations for `best_linear_projection` using steps like the following?
For example:
1. Divide the observations into two sets of 600 and 400.
2. Use 200 of the 600 for forest construction and 200 for estimation (in `causal_forest`).
3. Use the remaining 200 of the 600 for `test_calibration`.
4. Observe the heterogeneity of the effects using the 400 set aside at the initial split.
Should we follow this procedure?
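The split described in the steps above can be sketched as follows. The 600/400 division, the seed, and the index names are purely illustrative, not something prescribed by grf:

```r
# Sketch of the manual three-way split from the steps above (illustrative only).
set.seed(1)
n <- 1000
idx <- sample(n)  # shuffle the observation indices

forest.idx <- idx[1:400]     # 400 for causal_forest (200 split + 200 estimate via honesty)
calib.idx  <- idx[401:600]   # remaining 200 of the first 600, for test_calibration
blp.idx    <- idx[601:1000]  # 400 set aside for the heterogeneity analysis

c(length(forest.idx), length(calib.idx), length(blp.idx))
```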
I am confused because this procedure differs somewhat from the usual train/test workflow in machine learning or deep learning for prediction.
Hello,
I am currently utilizing the causal forest package and have some questions regarding the observations. Let’s assume there are 1,000 observations.
I am using the following code:
```r
cf <- causal_forest(X, Y, W,
                    Y.hat = Y_hat_re, W.hat = W_hat_re,
                    honesty = TRUE, tune.parameters = "all",
                    sample.fraction = 0.8, num.trees = 20000)
```
From my understanding, this would allocate 400 observations each to the training and estimation sets. Is my understanding correct?
Furthermore, should I assume that the `test_calibration(cf)` function operates on a test set of 200 observations?
Lastly, I would like to know how many observations the `best_linear_projection(cf, X, target.sample = "overlap")` function targets.
*I understand that "overlap" implies weighting, not the exclusion of observations.
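To make the weighting idea concrete: my understanding is that targeting the "overlap" population weights observation i by e(x_i) * (1 - e(x_i)), where e is the estimated propensity. This formula is an assumption stated here for illustration, not quoted from the grf documentation:

```r
# Illustrative overlap weights, assuming the weight e(x) * (1 - e(x)).
# Every observation enters; those with extreme propensities get weight
# near zero rather than being excluded outright.
W.hat <- c(0.05, 0.50, 0.95)           # hypothetical propensity estimates
overlap.weights <- W.hat * (1 - W.hat)
overlap.weights  # 0.0475 0.2500 0.0475
```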
Thank you for developing and maintaining such a useful package.