
attrition -- too lenient? #7

Open
daaronr opened this issue Aug 18, 2018 · 2 comments

daaronr commented Aug 18, 2018

We will routinely perform three types of checks for asymmetrical attrition: ... In checks #2 and #3, p-values below 0.05 will be considered evidence of asymmetrical attrition. If any of those checks raises a red flag, and if the PAP has not specified methods for addressing attrition bias, we will follow these procedures

This seems too lenient. The test for bias from attrition may not be powerful. You shouldn't give yourself the benefit of the doubt for something that may cause substantial bias. Why not make the Lee bounds or the Horowitz–Manski bounds the default, and only do the first proposed thing if you can somehow very convincingly demonstrate that "it is extremely unlikely that the attrition was asymmetric"?

Also

  1. Consult a disinterested “jury” of colleagues to decide whether the monotonicity assumption for trimming bounds (Lee 2009; Gerber and Green 2012, 227) is plausible.

Where/how do you find this jury in practice? And what do you propose doing if they say it is not plausible?

@donaldpgreen
Collaborator

Thank you for your comments.

Regarding issue number one, your points about low-power tests and the burden of proof are well taken. At the same time, without substantive reasons for suspecting asymmetrical attrition, I would be hesitant to make trimming bounds, and certainly extreme value bounds, the default. Bear in mind that our lab's RCTs frequently involve administrative outcomes such as voter turnout, and it would be unusual if attrition were to occur in an asymmetrical fashion. Sometimes we use survey outcomes as well (e.g., political attitudes); symmetry in this context hinges on the lack of apparent connection between treatment and outcome assessment, which is a feature of the design, albeit a potentially fallible one.

Regarding issue number two, we would find our jury through EGAP, an organization that offers peer consulting to its members. If we were not EGAP members, I suppose we would put out a call to other political scientists through the APSA Experimental Research Section. As to what we propose to do if the jury result is negative, our SOP says the following: "Consult a disinterested 'jury' of colleagues to decide whether the monotonicity assumption for trimming bounds (Lee 2009; Gerber and Green 2012, 227) is plausible. If so, report estimates of trimming bounds; if not, report estimates of extreme value (Manski-type) bounds (Gerber and Green 2012, 226–27). (If the outcome has unbounded range, report extreme value bounds that assume the largest observed value is the largest possible value.) In either case, also report the analysis that was specified in the PAP."
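The extreme value (Manski-type) bounds in the quoted SOP have an even simpler sketch: for the lower bound, impute the worst case for missing outcomes (the minimum in treatment, the maximum in control), and vice versa for the upper bound. Again, the interface below is illustrative, not the SOP's actual code:

```python
def manski_bounds(y_treat, y_control, n_treat, n_control, y_min, y_max):
    """Extreme value (Manski-type) bounds for a bounded outcome.
    y_* are observed outcomes; n_* are the numbers assigned;
    [y_min, y_max] is the outcome's logical range."""
    m_t = n_treat - len(y_treat)        # attrited in treatment
    m_c = n_control - len(y_control)    # attrited in control
    lower = (sum(y_treat) + m_t * y_min) / n_treat \
        - (sum(y_control) + m_c * y_max) / n_control
    upper = (sum(y_treat) + m_t * y_max) / n_treat \
        - (sum(y_control) + m_c * y_min) / n_control
    return lower, upper
```

For a binary outcome like turnout (y_min=0, y_max=1), these are the widest bounds consistent with the data; the SOP's unbounded-range rule would substitute the largest observed value for y_max.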

@daaronr
Author

daaronr commented Aug 25, 2018 via email
