Building Solid Data for Your Organization

Self-selection bias, control groups, test groups, causation vs. correlation. These are some of the more challenging concepts to master and control for while you are measuring outcomes. For-profits struggle with this all the time, but it's especially critical for a nonprofit to think through these issues when touting your organization's impact and outcomes. Are those results really fair, or will a donor conclude that you aren't being methodical and are claiming results you can't support?

Here is an example of the mistakes I see all the time (in collateral, on websites, at Philanthropitch, and over coffees):

Ex: A nonprofit whose mission is to get more kids to graduate high school with better grades by enrolling them in a social-emotional learning (SEL) program that utilizes the great outdoors

Claims: High schoolers who go through our SEL program graduate at a 50% higher rate than their counterparts and have GPAs that are a full point higher by the time they graduate.

Sounds pretty great, right? Not so fast!

Questions I would ask:

  • How do students get into the program? Did they volunteer? Did their parents sign them up?

    • Self-Selection Bias

      • A student who is responsible enough to enroll, or parents who care enough to enroll their kids, will probably have higher graduation rates regardless of the program. The organization may be taking credit for the work of the student or parents rather than for results the program itself produced.

  • Were 100% of the students who applied selected? If not, how did you pick the students? Was it random?

    • Control Group / Test Group

      • To address the first concern, a better approach would be to have all interested students apply, randomly admit only 50% of them into the program, and THEN compare results just between those two groups (see the sketch after this list).

  • What else was going on with those students? Was the principal more involved? Were these students going through other programs as well?

    • Causation vs Correlation

      • What other variables could have been responsible for student success? This would be especially worrisome if the organization's metrics compare one school to another, where an overall effort by faculty could be contributing to student success regardless of the specific program.
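To make the control group point concrete, here is a minimal Python sketch. Everything in it is hypothetical: the motivation trait, the graduation probabilities, and the assumption that the program itself adds nothing (`TRUE_PROGRAM_EFFECT = 0`) are made-up illustration values, not data from any real program. It simulates students whose motivation drives both their decision to apply and their odds of graduating, then compares a naive "participants vs. everyone else" measurement against a randomized test/control comparison among applicants.

```python
import random

random.seed(1)

TRUE_PROGRAM_EFFECT = 0.0  # assume, for illustration, the program adds nothing


def simulate_student():
    """Return (motivation, applies) for one hypothetical student.

    Motivation is an invisible trait that drives BOTH the decision to
    apply to the SEL program AND the chance of graduating -- this is
    the self-selection problem.
    """
    motivation = random.random()            # 0 = unmotivated, 1 = highly motivated
    applies = random.random() < motivation  # motivated students apply more often
    return motivation, applies


def graduates(motivation, in_program):
    """Graduation is mostly driven by motivation, plus any true program effect."""
    p = 0.4 + 0.5 * motivation + (TRUE_PROGRAM_EFFECT if in_program else 0.0)
    return random.random() < min(p, 1.0)


students = [simulate_student() for _ in range(100_000)]

# Naive comparison: every applicant gets in, so "participants" are a
# self-selected group being compared to everyone who didn't apply.
naive_program = [graduates(m, True) for m, applies in students if applies]
naive_rest = [graduates(m, False) for m, applies in students if not applies]

# Randomized comparison: admit only half of the applicants at random and
# compare them to the applicants who were not admitted.
applicants = [m for m, applies in students if applies]
random.shuffle(applicants)
half = len(applicants) // 2
test_group = [graduates(m, True) for m in applicants[:half]]      # admitted
control_group = [graduates(m, False) for m in applicants[half:]]  # waitlisted


def rate(group):
    return sum(group) / len(group)


print(f"Naive comparison:      {rate(naive_program):.1%} vs {rate(naive_rest):.1%}")
print(f"Randomized comparison: {rate(test_group):.1%} vs {rate(control_group):.1%}")
```

Even with a program that does nothing at all, the naive comparison shows participants graduating at a much higher rate than non-participants, purely because motivated kids self-selected in. The randomized comparison between admitted and waitlisted applicants shows essentially no gap, which is exactly what an honest measurement should report in that scenario.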

This kind of analysis is hard. It requires a lot of thought and up-front work, and sometimes even turning needy kids away from the organization's program in order to create a solid control group. Decisions like that can be painful, but they are likely to pay off with solid data for future programs and funders.