You Want What I Want: Research, Rapport, and Surveys
It’s January at Year Up New York|New Jersey, and I’m once again coaxing 292-odd program participants into taking part in a research study. That is, a survey-reliant research study.
Ever tried to get hundreds of people to do a survey? It’s harder than you would think.
The first hurdle: having people even consider it. In client-facing contexts, many potential participants view corporate surveys as “lacking in empathy” and as a general inconvenience (Dholakia 2021). Employee engagement surveys—intended to gauge areas of discontent and improvement within organizations—sometimes exacerbate ongoing issues, failing to address the concerns workers actually have (Wahba 2023). According to a 2022 meta-analysis, online surveys have an overall average response rate of 44.1%: even getting a tenuous majority of potential participants to respond is a major victory for researchers (Wu et al. 2022). Ultimately, whereas sloppy studies erode popular trust in surveys and their application, a good survey is always delicate, intentional, and targeted. Like a coffee shop.
The reason, of course, that you would ever invest time (and sometimes money) in such a particular, harder-than-it-looks research method is the data. You probably want lots of data. Lots and lots of quantitative data. (Kind-of) fast and cheap. Even the best surveys are a relatively quick and standardized method of gathering data on a population: interviews take days if not weeks to process, observational studies require precise planning and timing, and usability studies often necessitate proprietary tools and training. In the context of social impact, particularly in the non-profit space, internal surveys can churn out numerical insights without the budget crunch of an outside consultation.
Year Up, for example, loves its quantitative data—it’s part of its pervasive “Feedback Culture”. The national organization, headquartered in Boston, charts out surveys across the participant life cycle, meant to track engagement and opinion metrics at critical touchpoints of a trainee cohort’s journey. These “Direct Service Survey” metrics are then, among a myriad of other functions, analyzed by a dedicated team, shared with corporate partners, and factored into the organization’s overall and agile Service Design. Data is used to make data-informed decisions by data-informed teams. Year Up’s New York|New Jersey market, in turn one of the largest across the organization, has its own pattern of surveys and evaluations. After all, as useful as national metrics are, the needs of individual cities, tracks, and cohorts can be very different. That’s where I come in.
My first survey at Year Up, conducted in only my first three weeks of employment, was focused on our contemporary Learning & Development (L&D) and Internship cohorts. Preparation for the Program Evaluation segment of LC Lookback, a biannual market-wide meeting of staff, required a rapid and targeted collection of student metrics. I did what I could with the resources I had available to me—a detailed guide left by the previous Fellow, Nia. Yet, despite hovering above the 44.1% average recorded by Wu’s 2022 team, I still felt a bit nonplussed at an ultimate L&D response rate of 51.6%. I hadn’t been fully acclimated to Year Up’s organizational culture, wasn’t entirely sure of the research goals of our study, and definitely lacked the time to become familiar with my study participants. As good as the insights we drew were, I had a constant feeling that they could have been better. It seemed like trainees, in the midst of organizational changes, had viewed my research with raised eyebrows.
I was missing a rapport between researcher and participant.
Thus, I decided to roll out my own strategy for rapport building. Starting with the next L&D cohort, set to graduate July 2024, I adopted four simple research practices. If you ever find yourself designing internal research, always try to:
1. Make your research meaningful
Appeal to your study participants’ interests, emotions, and experiences. It might be a Rousseau-flavored cliché, but people like helping other people. People, of course, also like helping themselves. Inform participants of the benefits of participating in your research: let them know that, with the data that they may provide, you can improve not only their experience but those of others. Rather than positioning researchers as exclusive arbiters of knowledge and authority, emphasize the collaboration inherent in your work. Align your interests with theirs. Human-focused research is ultimately the culmination of a researcher’s skills and interlocutors’ contributions. If a survey evaluates the design of a longer-term service (such as the L&D period at Year Up), actively cultivate trust between researchers and participants. Contributing to research becomes helping researchers out!
2. Make your research practical
In the case of surveys, do not make your survey openable only on the second blue moon of a leap year. Do not make your survey consist of 50 consecutive matrices. Do not make your survey a series of 30 required open-ended questions with minimum 100 word counts. Do not compel your participants into responding in a time window that biases their responses. Do not make your survey a combination of the above. Surveys, at a minimum viable state, should be painless to access, complete, and submit. Completion is the second hurdle of a survey, and a difficult survey diminishes data quality and participant trust. Minimize the length of your engagement. Provide gentle-ish reminders. To whatever extent you can, anonymize results and remove personally identifying information. No one enjoys being given a hard time. In particular, no one enjoys being given a hard time in the form of an over-extended Google Forms sheet. Research, at its best, becomes an opportunity to share information.
3. Make your research insightful
Discerning an actual takeaway from research serves as a third hurdle: results should be analyzed, then translated. When elaborating on a previous study, track ongoing trends and patterns: how have organizational changes been reflected in your metrics? Identify your set of key performance indicators, and drill down where and when you can. Document opportunities for expansion in future elaborations of the study. Archive your findings in a way that remains accessible to future researchers and stakeholders. Above all else: make sure you can digestibly communicate your findings to those outside of the research team, especially your participants. To build trust towards your research and that of future researchers, your findings must actually be legible. Research is the production of information. When you perform good research, you begin to assign that information meaning.
4. Make your research actionable
Ideally, indicate immediate actions that can be taken in accordance with your findings. Carry out the easy fixes and continue on the clear points of success. Action—or, at the very least, a plan of action—can make or break the trust of your participants. If you are cultivating trust on the basis that your research will work to the benefit of the participant group, it should work to the benefit of the participant group. Find the bravery to challenge institutional assumptions: at what points can we alter the course of our design or implementation, should it be to the benefit of our interlocutors? Recognize your study’s failings, limitations, and opportunities for improvement. Ultimately, champion a sustainable path forward. Any path of action shouldn’t lead to burnout: it should lead to progress. More than simply considering your research, stakeholders should be able to work with it. Action leads to better outcomes, which lead to better trust, which in turn leads to better research. Some would call it a virtuous cycle.
While these blocks of text may make things seem complicated, these maxims are fairly simple in practice. Chat with your future participants. Block out time for them. Make your intentions, methods, and motivations transparent. Take notes on just about everything.
Not only have these practices increased trainee trust towards organizational research, but they’ve grown both the volume and quality of our data. I’ve been able to gather contextual research on the experiences of participants throughout the Learning & Development Phase. I’ve been able to build upon the work of previous fellows, crafting a qualitative research protocol for Year Up New York|New Jersey. I’ve been able to become a trusted advocate for our participants. And most metric-friendly, I’ve been able to raise the response rate from a passable 51.6% to a new high of 95.7%—and still counting!
But as you refine your own research practice, life goes on at Year Up New York|New Jersey. The reminder emails to the remaining 20% continue to fly out of my outbox. My little t-tests continue to sort out random chance from statistically significant phenomena, my presentation decks prepared for their deployment. Participants pop up at my desk, asking for everything from advice to a laptop charger.
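In spirit, those little t-tests look something like the sketch below: comparing the mean satisfaction ratings of two cohorts to ask whether the gap could plausibly be chance. The cohort names and ratings here are entirely hypothetical, and I’m assuming Welch’s unequal-variance form, which is the safer default when cohort sizes and spreads differ.

```python
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom.

    Does not assume the two groups share a variance, which suits
    survey cohorts of different sizes and spreads.
    """
    n1, n2 = len(a), len(b)
    v1, v2 = variance(a), variance(b)          # sample variances
    se2 = v1 / n1 + v2 / n2                    # squared std. error of the mean difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    # Welch–Satterthwaite approximation for degrees of freedom
    df = se2 ** 2 / ((v1 / n1) ** 2 / (n1 - 1) + (v2 / n2) ** 2 / (n2 - 1))
    return t, df

# Hypothetical 1–5 satisfaction ratings from two L&D cohorts
january_cohort = [4, 5, 3, 4, 4, 5, 3, 4]
july_cohort = [5, 5, 4, 5, 4, 5, 5, 4]

t, df = welch_t(january_cohort, july_cohort)
# Compare |t| against a critical value from a t-table (or a stats
# library) at the resulting df to decide whether the difference is
# statistically significant.
```

With real cohorts you’d hand this off to a statistics package for the p-value; the point is only that the test weighs the observed gap in means against the noise you’d expect from samples this small.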
Yet in-between all of my little notes, projects, and ambitions—and a wave of organizational change—our site still had our own End of Year meeting. There, I was awarded the “Newbie Award,” tagged as a new staff member who had already greatly contributed to the Year Up New York|New Jersey collective. Presenting to the entirety of our market staff, my supervisor cracked a joke about how I was able to bring data into any conversation, presentation, or strategy. Our market laughed. As I accepted my neatly printed certificate, I couldn’t help but flash a guilty smile.
I suppose, then, I’ve been doing something right.
Dholakia, Utpal. “Why Customers Hate Participating in Surveys.” Psychology Today. June 6, 2021. https://www.psychologytoday.com/us/blog/the-science-behind-behavior/202106/why-customers-hate-participating-in-surveys
Wahba, Phil. “Too Many CEOs Don’t Know What Their Workers Need. Employee ‘Engagement’ Surveys Can Make the Problem Even Worse.” Fortune. July 12, 2023. https://fortune.com/2023/07/12/employee-engagement-surveys-dissatisfaction/
Wu, Meng-Jia, Kelly Zhao, and Francisca Fils-Aime. “Response Rates of Online Surveys in Published Research: A Meta-Analysis.” Computers in Human Behavior Reports 7 (2022). August 2022. https://www.sciencedirect.com/science/article/pii/S2451958822000409#sec1