From Focus to Findings: A Quick Guide to Data Collection Methods

 

Written by Hannah Wohltjen


Last month, we talked about the importance of defining your evaluation focus. Once you know your destination, the next step is figuring out the best road to get there: choosing your data collection methods. When your goals are clear, this decision becomes much easier.


What We Mean by “Methods”

Think of methods as the tools in your evaluation toolbox. They’re “the how” of gathering information: the specific ways you’ll answer the big questions about your program, organization, or collaborative. Most people know about the two broad categories: quantitative (numbers) and qualitative (stories). But the real magic happens when you think carefully about which method fits your purpose — and when you combine approaches for a more complete picture.

Here are four key types of methods, with examples of how they shine on their own and how they can complement each other.

Quantitative Methods: The Numbers

Numbers give you scale, support tracking toward a benchmark or goal, and are especially useful for showing change over time.

Examples: surveys with closed-ended questions, pre- and post-tests, or administrative data trackers like attendance records and sign-in sheets.

When to use them: Ultimately, quantitative data helps you measure progress toward a numeric goal or communicate impact in a way that’s quick to grasp (think charts and stats in a report). If the questions you are trying to answer include things like “Did we reach our target enrollment for the program?”, “How much did participants’ knowledge scores change after our training?”, or “How many people are satisfied with our services?” then quantitative methods are a great choice.

Qualitative Methods: The Stories

If numbers tell you what’s happening, stories tell you why it matters. Qualitative methods capture human experiences, perspectives, and context. These are the pieces that bring your data to life, or, as we like to say, add “flesh” to the bones of your quantitative data.

Examples: in-depth interviews, focus groups, outcome harvesting.

When to use them: When you want to understand how a program works in practice, dig into sensitive or complex issues, or hear directly from participants about their lived experience. Evaluation questions that qualitative data can help answer include things like, “How did our staff feel program implementation went?”, “How do folks describe the value of our organization?”, or “How would our collaborative members define the quality of their relationships?”

Participatory Methods: Co-creating the Story

Sometimes the most powerful insights come when the people closest to the work help shape the evaluation itself. Participatory methods invite stakeholders, staff, community members, and participants to be co-evaluators — shifting the power dynamic and making sure the evaluation reflects what really matters to them.

Examples: community mapping, Photovoice (photos that tell stories), or Participatory Action Research (involving stakeholders in designing and conducting the evaluation).

When to use them: Participatory methods are excellent when you want your evaluation approach to build trust, elevate diverse voices, and strengthen community ownership around data. If your evaluation questions include things like, “How would program participants define the issue our program is designed to address?” or “How would our collaborative members depict our progress so far?” a participatory approach might provide more meaningful insights.

Mixed Methods: The Best of Both Worlds

Why choose between numbers and stories when you can have both? Mixed methods intentionally blend quantitative and qualitative approaches to give you a fuller, richer understanding.

Examples: Running a pre- and post-survey to measure change, then following up with interviews to uncover what drove those changes. Or hosting focus groups first to design a better survey and ensure it asks the right questions.

When to use them: Mixed methods are excellent for complex projects, when you want to double-check findings across methods, or when you need a truly holistic view of outcomes and processes. If your evaluation includes questions like “What is a ‘meaningful’ score increase on our basic needs assessment?” or “To what extent does training enrollment translate to behavior change?” mixed methods can help you effectively compare quantitative and qualitative sources of data to get more nuanced insights.

Things to Consider When Choosing Your Methods

Picking the right method isn’t just about what sounds interesting; it’s about what’s possible in your specific context. Ultimately, the best methods are the ones that map to your information goals AND that you can actually implement.

  • Your Goals: Do you need to explore a new topic (lean qualitative) or measure progress toward a known outcome (lean quantitative)? Often, a mixed approach gives the best insights.

  • Access to Respondents: Do you have easy contact with your audience, or are they harder to reach? A short online survey works great for the former; thoughtful interviews may work better for the latter.

  • Respondent Burden: How much time can you realistically ask of participants? Keep in mind: the less demanding, the more likely people are to engage.

  • Resources (Time & Money): Large surveys can be costly up front but efficient to analyze. Interviews and focus groups may be cheaper to collect but take longer to code and interpret.

  • Pilot First: Where time and resources allow, always test your tools with a small group. A quick pilot can save you from unclear questions, logistical hiccups, or costly mistakes later.


Want a partner to help sift through options and design an evaluation that truly works for you? Let’s connect!
