Whether you’ve designed your own survey or have used one created by others, you now have responses and want to turn them into findings.

As was true with designing a survey, books have been written and entire courses developed on survey analysis.  What follows are some questions to ask yourself before and during your analysis, along with resources to learn (much) more.

“Did I get enough responses?”

It seems intuitive that the more responses to your survey, the better.  And in many cases that’s true.  However, you still need to think about who didn’t complete your survey (i.e. non-response bias) and whether you have enough respondents per subgroup (e.g. race/ethnicity, gender, year in school) if you plan to break down your results beyond overall summaries.

One of the first things you should report in a survey analysis is the response rate (the number of responses divided by the number invited to take the survey).  This gives the reader a sense of how generalizable (more on this below) your results may be.

Previous survey response rates, particularly of similar populations, may help you put your study’s response rate in context.  See the WFU Large Survey Results dashboard for past response rates.

Still, researchers have found that even low response rates (i.e. 5% to 10%) in higher education student surveys with sample sizes of at least 500 produced reliable estimates of several measures of student engagement.

“How sure should I be of my results?”

Early in your reporting, share your confidence interval and/or margin of error with your audience so they may more fully understand the limitations of your results.  The following definitions are copied directly from Statistics How To; a worked example follows the list.

  • A confidence level is expressed as a percentage (for example, a 95% confidence level). It means that should you repeat an experiment or survey over and over again, 95 percent of the time your results will match the results you get from a population (in other words, your statistics would be sound!).
  • A confidence interval is how much uncertainty there is with any particular statistic. It tells you how confident you can be that the results from a poll or survey reflect what you would expect to find if it were possible to survey the entire population. Confidence intervals are intrinsically connected to confidence levels.
  • A margin of error tells you how many percentage points your results will differ from the real population value. The margin of error is defined as the range of values below and above the sample statistic in a confidence interval.
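
As a concrete illustration, here is a minimal sketch (in Python) that computes a response rate and the 95% margin of error around a proportion.  The counts are hypothetical, and the formula is the standard normal approximation for a proportion, which assumes simple random sampling.

```python
import math

# Hypothetical counts: 320 completed surveys out of 1,600 invitations
invited = 1600
responses = 320
response_rate = responses / invited  # 0.20, i.e. a 20% response rate

# Suppose 55% of respondents answered "yes" to some yes/no question
p_hat = 0.55
z = 1.96  # z-score corresponding to a 95% confidence level

# Margin of error for a proportion (normal approximation)
margin_of_error = z * math.sqrt(p_hat * (1 - p_hat) / responses)

print(f"Response rate: {response_rate:.0%}")
print(f"95% confidence interval: {p_hat - margin_of_error:.1%} "
      f"to {p_hat + margin_of_error:.1%} (margin of error: ±{margin_of_error:.1%})")
```

With a small total population, a finite population correction would shrink this margin somewhat; the uncorrected formula above is the more conservative default.
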
“Are my responses generalizable to the population?”

You need to report how representative of the larger population your responses are.  This is often done by comparing the breakdown of responses by subgroup (e.g. gender, race, age) to the breakdown in the overall population.  If the respondent proportions are fairly close to the population proportions (consider using a Chi-Square Goodness of Fit Test, sketched below), you might feel your results are fairly generalizable.
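
Here is a minimal sketch of that comparison using SciPy’s chi-square goodness-of-fit test.  The respondent counts and population proportions below are made up for illustration; the test asks whether the respondent breakdown could plausibly have come from a population with those proportions.

```python
from scipy.stats import chisquare

# Hypothetical respondent counts by class year
observed = [120, 95, 80, 105]  # first-year, sophomore, junior, senior

# Hypothetical share of each class year in the full population
population_props = [0.28, 0.25, 0.23, 0.24]

# Expected counts if respondents exactly mirrored the population
total = sum(observed)
expected = [p * total for p in population_props]

chi2, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {chi2:.2f}, p = {p_value:.3f}")
# A large p-value (e.g. > 0.05) means the respondent breakdown is
# consistent with the population, which supports generalizability.
```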

Wake’s interactive, online Fact Book may help you determine the population size(s) at Wake Forest for students, faculty, and staff.

“What if my responses are not representative?”

Suppose males account for 25% of your population.  Even though you made sure males made up 25% of your survey sample, only 10% of the respondents are male.  What should you do?

  • While the survey is still in the field, send reminders to the groups with low response rates.
  • If response rates are not representative of certain groups after closing the survey, make a note early in your reporting explaining how differences between the population and respondents may affect the interpretation of the survey results.  
  • Consider adjusting the “weight” of the responses for under-represented groups through post-stratification (see the sketch after this list).
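
Continuing the hypothetical example above (males are 25% of the population but only 10% of respondents), here is a minimal sketch of post-stratification weighting.  Each respondent’s weight is their group’s population share divided by its respondent share, so under-represented groups count for more in weighted summaries.

```python
# Hypothetical proportions for a two-group example
population_props = {"male": 0.25, "female": 0.75}
respondent_props = {"male": 0.10, "female": 0.90}

# Post-stratification weight = population share / respondent share
weights = {g: population_props[g] / respondent_props[g]
           for g in population_props}
print(weights)  # {'male': 2.5, 'female': 0.833...}

# Applying the weights: a weighted percentage for a yes/no question
# (counts below are made up: 40 male and 360 female respondents)
yes_counts = {"male": 10, "female": 180}  # respondents answering "yes"
n_counts = {"male": 40, "female": 360}

weighted_yes = sum(weights[g] * yes_counts[g] for g in n_counts)
weighted_n = sum(weights[g] * n_counts[g] for g in n_counts)
print(f"Weighted % yes: {weighted_yes / weighted_n:.1%}")  # 43.8%, vs. 47.5% unweighted
```

Keep in mind that weighting assumes the males who did respond resemble those who did not; it corrects group shares, not non-response bias within a group.
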
“What sorts of analyses should I do?”

There are many ways to analyze survey data, depending upon the types of questions asked, the subgroups identified in the survey, and whether this is the first time you’ve administered the survey.  Check out this survey analysis blog and this one on how to analyze survey data in Excel.  Survey software vendors Qualtrics and SurveyMonkey also give some advice (along with sales pitches).  If you find other sites you like, please send them to IR.

  • Your question types determine which analyses you can and should do.
    • Questions where respondents entered numbers (e.g. hours spent per week studying) are often analyzed by calculating an average (i.e. “mean”).  However, if your data are skewed (i.e. have a lot of respondents at the high or low end of the scale) or have outliers (i.e. a few responses at the very high or very low end), then a median (i.e. the value at which half of the responses are above and half below) is probably a better statistic.
    • Questions where respondents selected one of several options (i.e. categorical variables) are often analyzed by reporting what percentage of respondents picked each option.  Be careful not to misuse Likert scale variables by treating them as if they were actual numbers – see a friendly word of advice on using Likert scales.
    • Questions where respondents entered text (e.g. “Please tell us what you liked best about this experience.”) require more time to read through, characterize, and then summarize what was written.  Don’t let the extra time deter you.  Qualitative, or open-ended, questions can provide extremely valuable insights.
  • Are responses different by subgroups?
    • Often it is instructive to see if members of different subgroups responded similarly or differently to items.  These comparisons can be done by cross-tabulating (e.g. pivot tables in Excel) or applying filters to your data; a sketch of a cross-tab appears after this list.  Ensure that your subgroup counts aren’t too small (i.e. no fewer than five per group) to allow for meaningful comparisons.  This tool was created by IR to help examine if differences in percentages (or differences in averages) between subgroups are meaningful.
  • Are responses changing over time?
    • If you have data from prior studies, it is often a good idea to look at trends (e.g., compare this year’s results with last year’s).  You could use this same tool to compare percentages or averages across years.
  • Are responses to different questions correlated?
    • If you want to know whether answers to two questions move together (e.g. hours studied and reported satisfaction), a correlation coefficient can help; for ordinal (e.g. Likert scale) items, a rank-based measure such as Spearman’s correlation is generally more appropriate than Pearson’s.  See the sketch below.
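
Here is a minimal sketch of these analyses using pandas.  The column names and responses are hypothetical, and with this few rows the output is purely illustrative.

```python
import pandas as pd

# Hypothetical survey responses
df = pd.DataFrame({
    "gender":      ["M", "F", "F", "M", "F", "F", "M", "F"],
    "hours_study": [10, 12, 15, 8, 40, 11, 9, 14],   # numeric entry
    "satisfied":   ["yes", "yes", "no", "yes", "yes", "no", "no", "yes"],
    "q1_likert":   [4, 5, 2, 4, 5, 3, 2, 4],         # 1-5 scale
    "q2_likert":   [3, 5, 1, 4, 4, 3, 2, 5],
})

# Numeric question: the outlier (40 hours) pulls the mean above the median
print(df["hours_study"].mean(), df["hours_study"].median())  # 14.875 vs. 11.5

# Categorical question by subgroup: row percentages via a cross-tab
print(pd.crosstab(df["gender"], df["satisfied"], normalize="index"))

# Two Likert items: Spearman rank correlation respects their ordinal nature
print(df["q1_likert"].corr(df["q2_likert"], method="spearman"))
```
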
“Are my differences meaningful?”

Particularly when making comparisons, you (and your readers) will often want to know whether the differences found are “real” (i.e. unlikely to be due to chance) and whether they are meaningful.  Statistical significance, effect size, and practical significance can help answer those questions.

  • Statistical Significance estimates how likely it is that the differences in averages or percentages observed in the sample arose from chance or sampling error alone.
    • Statistical significance does not automatically imply that the finding is important — even if your “p-value” is <0.0000001.
    • Given a large sample or low variance, one might still find statistical significance despite seemingly trivial differences.
    • When a difference is statistically significant, it does not necessarily mean that it is big, important, or helpful. Therefore, we would also look at effect size.
  • Effect Size measures the strength of the relationship between two variables. In a research setting, it is not only helpful to know whether results have a statistically significant effect, but also the magnitude of any observed effects.
    • Cohen’s d is a frequently used statistic when comparing the means of two groups; it can help identify the magnitude of the effect as small, medium, or large (see the sketch after this list).
    • IR’s pages examining differences in percentages and differences in averages report statistical significance and effect sizes.
  • Practical Significance looks at whether the difference is large enough to be of value in a practical sense.
    • Some results can be compared with previous, public, and/or published results, which can help determine where your specific results sit in context.
    • Examining the confidence intervals and margins of error around your results will help convey how much certainty accompanies the findings.
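
As a minimal sketch of the distinction between significance and effect size, the code below runs a two-sample t-test with SciPy and computes Cohen’s d by hand using the pooled standard deviation.  The two groups’ scores are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

# Hypothetical satisfaction scores (1-5) for two subgroups
group_a = np.array([4, 5, 3, 4, 4, 5, 3, 4, 5, 4])
group_b = np.array([3, 4, 3, 3, 4, 2, 3, 4, 3, 3])

# Statistical significance: two-sample t-test
t_stat, p_value = ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# Effect size: Cohen's d using the pooled standard deviation
n_a, n_b = len(group_a), len(group_b)
pooled_sd = np.sqrt(((n_a - 1) * group_a.var(ddof=1) +
                     (n_b - 1) * group_b.var(ddof=1)) / (n_a + n_b - 2))
d = (group_a.mean() - group_b.mean()) / pooled_sd
print(f"Cohen's d = {d:.2f}")
# Common rules of thumb: d around 0.2 is small, 0.5 medium, 0.8 large
```

With very large samples, a tiny d can still come out statistically significant, which is exactly why it helps to report both.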