attrition -- too lenient? #7
Comments
donaldpgreen commented on Aug 25, 2018:

Thank you for your comments.

Regarding issue number one, your points about low-power tests and the burden of proof are well taken. At the same time, without substantive reasons for suspecting asymmetrical attrition, I would be hesitant to make trimming bounds, and certainly extreme value bounds, the default. Bear in mind that our lab's RCTs frequently involve administrative outcomes such as voter turnout, and it would be unusual for attrition to occur in an asymmetrical fashion. Sometimes we use survey outcomes as well (e.g., political attitudes); symmetry in this context hinges on the lack of apparent connection between treatment and outcome assessment, which is a feature of the design, albeit a potentially fallible one.

Regarding issue number two, we would find our jury through EGAP, an organization that offers peer consulting to its members. If we were not EGAP members, I suppose we would put out a call to other political scientists through the APSA Experimental Research Section. As to what we propose to do if the jury result is negative, our SOP says the following: "Consult a disinterested 'jury' of colleagues to decide whether the monotonicity assumption for trimming bounds (Lee 2009; Gerber and Green 2012, 227) is plausible. If so, report estimates of trimming bounds; if not, report estimates of extreme value (Manski-type) bounds (Gerber and Green 2012, 226–27). (If the outcome has unbounded range, report extreme value bounds that assume the largest observed value is the largest possible value.) In either case, also report the analysis that was specified in the PAP."
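For concreteness, here is a rough sketch of the two procedures in Python (the data, variable names, and functions are invented for illustration; this is not our lab's code, and a real analysis would also want standard errors or confidence intervals for the bound estimates):

```python
# Illustrative sketch of trimming (Lee 2009) and extreme value (Manski-type)
# bounds for a two-arm experiment with attrition. Everything here is
# hypothetical example code, not the SOP's actual implementation.
import numpy as np

def trimming_bounds(y_t, y_c):
    """Lee (2009) trimming bounds on the treatment effect among units that
    would report their outcome under either assignment. Assumes monotonicity:
    treatment weakly increases the response rate (so p_t >= p_c here;
    otherwise trim the control group instead). Missing outcomes are np.nan."""
    obs_t, obs_c = y_t[~np.isnan(y_t)], y_c[~np.isnan(y_c)]
    p_t, p_c = len(obs_t) / len(y_t), len(obs_c) / len(y_c)
    q = (p_t - p_c) / p_t                  # share of treated reporters to trim
    k = int(np.floor(q * len(obs_t)))
    srt = np.sort(obs_t)
    lower = srt[: len(srt) - k].mean() - obs_c.mean()   # trim from the top
    upper = srt[k:].mean() - obs_c.mean()               # trim from the bottom
    return lower, upper

def extreme_value_bounds(y_t, y_c, y_min, y_max):
    """Manski-type bounds: fill every missing outcome with the worst and best
    possible values (y_min, y_max are the logical or observed extremes)."""
    lower = (np.where(np.isnan(y_t), y_min, y_t).mean()
             - np.where(np.isnan(y_c), y_max, y_c).mean())
    upper = (np.where(np.isnan(y_t), y_max, y_t).mean()
             - np.where(np.isnan(y_c), y_min, y_c).mean())
    return lower, upper

# Hypothetical turnout-style data: 1 = voted, 0 = did not, np.nan = attrited.
rng = np.random.default_rng(0)
y_t = np.where(rng.random(1000) < 0.10, np.nan, rng.binomial(1, 0.55, 1000))
y_c = np.where(rng.random(1000) < 0.15, np.nan, rng.binomial(1, 0.50, 1000))
print(trimming_bounds(y_t, y_c))
print(extreme_value_bounds(y_t, y_c, y_min=0, y_max=1))
```

When response rates are equal across arms, the trimming fraction is zero and the trimming bounds collapse to the simple difference in means among reporters, whereas the extreme value bounds remain wide whenever any outcomes are missing.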
Thanks for following this up.
On issue 1, I see your point, and I suppose that in the context of your studies it does seem reasonable that attrition is usually 'neutral.' However, I would still suggest including some consideration of the power of the diagnostic test. If the test is truly underpowered, then the fact that it 'fails to reject' should not lead attrition in this study to be treated more leniently than in a study where the diagnostic test is more powerful.
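For example, a back-of-the-envelope calculation (a Python sketch; the sample sizes and attrition rates are invented for illustration, not taken from any particular study) suggests that attrition gaps large enough to matter can easily go undetected:

```python
# Approximate power of the usual diagnostic: a two-sided z-test for a
# difference in attrition rates between treatment and control.
# All numbers below are hypothetical.
from scipy.stats import norm

def power_two_prop_test(p1, p2, n1, n2, alpha=0.05):
    """Normal-approximation power of the two-sided two-proportion z-test."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se_null = (p_pool * (1 - p_pool) * (1 / n1 + 1 / n2)) ** 0.5  # SE under H0
    se_alt = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5     # SE under H1
    z_crit = norm.ppf(1 - alpha / 2)
    d = abs(p1 - p2)
    return (norm.cdf((d - z_crit * se_null) / se_alt)
            + norm.cdf((-d - z_crit * se_null) / se_alt))

# 500 subjects per arm, 10% vs. 15% attrition: power is only about 0.67
print(power_two_prop_test(0.10, 0.15, 500, 500))
# 5% vs. 8% attrition: power drops to roughly 0.49
print(power_two_prop_test(0.05, 0.08, 500, 500))
```

So a 'fail to reject' result from this kind of test is only weak evidence of symmetric attrition unless the study is fairly large.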
On issue 2, it is very informative and useful to know about this EGAP resource. Apologies that I overlooked your statement about using the Manski bounds as the fallback.
The original issue post:

This seems too lenient. The test for bias from attrition may not be powerful. You shouldn't give yourself the benefit of the doubt for something that may cause substantial bias. Why not make the Lee bounds or the Horowitz bounds the default, and only do the first proposed thing if you can somehow very convincingly demonstrate that "it is extremely unlikely that the attrition was asymmetric"?
Also
Where/how do you find this jury in practice? And what do you propose doing if they say it is not plausible?
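For reference, the attrition diagnostic I have in mind above is just a comparison of non-response rates across arms, something like the following (a minimal Python sketch with hypothetical counts):

```python
# Minimal version of the attrition diagnostic: test whether the share of
# missing outcomes differs between treatment and control.
# The counts below are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

attrited = [62, 48]    # units with missing outcomes in treatment, control
assigned = [500, 500]  # units originally assigned to each arm
stat, pval = proportions_ztest(count=attrited, nobs=assigned)
print(f"z = {stat:.2f}, p = {pval:.3f}")
# A large p-value only means no difference was detected; with samples of this
# size it is weak evidence that attrition is actually symmetric.
```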