You and Me and Data
“They’re always surprised when I say this,” the panelist says. It’s the Head of User Research at a certain company. He’s poised confidently in his tall, white bar stool, gripping the microphone. “But here’s something I always tell the people I’m supervising: only 30 to 50 percent of a research job is being a researcher.” The other panelists listen intently, pivoting their heads towards him. “The other 50 to 70 percent is your soft skills: working cross-functionally and storytelling with data.” Thoughts and murmurs reverberate across the crowd. He gauges the audience’s reaction. “That sort of thing.”
Meanwhile, my hands fly percussively across the keyboard of my work laptop—I’ve been taking notes the entire presentation. It’s not really common to see a researcher representing their company at a panel, much less a researcher active at the intersection of technology and social impact. Yet here he is: another queer man of color who had connected the dots between his own fancy-schmancy New England education and the impact he could make in Applied Research. Of course, it isn’t statistically insignificant that he has around a decade of experience over my measly full-time months. He’s the only reason I bothered coming to a panel that, for the most part, targeted client-facing B2B professionals. He can probably hear the clacking. I hope he understands—it’s not all that different from qualitative research, right?
For all the research I had executed and wanted to execute, for all the rigor I had taken into Year Up and learned within it, for all the data carved into the shape of 70 colorful, animated slides and graphs: am I truly embodying a researcher?
Keyboard clacking or not, the researcher continues his thought. “It’s a human-centered practice,” he concludes. He’s referring to both research participants and coworkers. A practice that’s 30 to 50 percent research, 50 to 70 percent stakeholder management. There’s a pause.
Another panelist flicks the switch of his own microphone. Conversation then drifts back to the Director of Sales at a Series A startup, and we start talking about product management again.
The idea of those human-centered proportions haunts me for several hours afterward.
I don’t believe that the researcher had been wrong: even as an early career researcher and analyst, the words still ring true. Since the start of my fellowship at Year Up, I have been plunging headlong into data and its repercussions. Most of my first month of the fellowship—besides the inherently hectic nature of onboarding anywhere—had been occupied by meticulous survey design, study rollout, mixed methods analysis, workshop agenda-building, and the construction of a corresponding 80-odd slide deck. In fact, the researcher’s words align with what had been my approach at Year Up beforehand: gather your data into a nice, lean repository and do the work of extracting narrative for the data’s eventual audience. Easier said than done, and a process that stops getting faster after a certain point of skill. Doable.
Yet that researcher ratio strikes me at a very specific time. Less than 12 hours after this in-person, after-work, Wednesday night panel comes a very special Thursday morning: I am going to be facilitating another LC Lookback, the second of four.
I’ve been working towards this data democratization workshop since as early as October, starting with a lengthy research proposal centered on the use of semi-structured interviewing as part of market-level research. Even with semi-structured interviewing pushed to next cycle, I’ve been grinding away at participant surveys—whether attempting to eliminate all chance for respondent bias, or simply just making sure trainees respond at all. Surveys turn into results, results turn into spreadsheets, and spreadsheets turn into waves of PivotTables and flashy visualizations. With any luck, presentations transform into change, whether incremental or sweeping, that delivers a better experience for the program’s end users. It’s a continuous, delicate art of observation, adjustment, and implementation.
Even beyond the data of this study, I’ve dived into the rigor of research: I voraciously consume articles penned by researchers and rapidly absorb the language particular to different industries and facets of the craft. I consult mentors in the field and establish a laundry list of professional development opportunities for myself throughout the remainder of the fellowship. I decide that I want to embody the strengths of both qualitative and quantitative methods—and, in the process, earn a certification in User Research and Design from the University of Michigan. I interrogate my own qualitative skills, and obliterate any vaguely leading questions from my interview scripts. I pin Survey Design down to its academic science, grinding each and every distracting element away from my surveys. I grind away at quant: I master SQL, and build myself up to an intermediate degree of Python for Data Analysis. I convince IT to let me install RStudio, and I almost get them to install a Python IDE. Request denied—security reasons, they say.
Still, for a while, data becomes me. Rigor becomes me. Embodying the researcher becomes me. I learn what the hottest discourse among corporate researchers is this season: proving your value add.
And a few days before that Thursday, at the suggestion of a mentor, I drop a slide from my lengthy, lengthy workshop deck into a channel accessible to my stakeholders. That’s the nature of research, right? You grow stakeholder buy-in through drops of insights. The slide shows a good, basic metric: Net Promoter Score, a measurement cooked up at Bain & Co for the sake of comparing quantified, popular opinion between industries and companies. It’s not very common in non-profits, but we use it at Year Up. I upload my visualization to Slack, and respond accordingly when someone asks for a bit of context. It’s industry standard—and moreover, one of our standards at Year Up: I think there’s no harm done.
The day after—but still before that Thursday, and before that Wednesday night—the Director of Program combs over my slides on a video call, and he finds himself a bit frazzled by the length. He’s especially concerned about the technical language I’ve deployed across my presentation. I’ve put in a slide explaining the history, purpose, and conventions of Likert scales, those 1 to 5 or 1 to 7 or 0 to 10 things. I’ve put in a slide explaining response bias, and the ways in which a respondent’s context may affect their answers. There are three hefty slides discussing the roles of Qualitative Research, Quantitative Research, and Desk Research before any bulk of the data rears its head. “That’s the first time I’ve heard ‘desk research’ used that way,” he tells me. Then, he continues, “I think we need to make some cuts here.”
I pause. I’ve been grinding away at these surveys, these spreadsheets, these hulking affinity diagrams of open-ended qualitative responses—to me, this is already a dilution. I think about the rigor I had so deeply immersed myself in. I think about the ways I had held myself back from including documentation of statistical significance. I think about my research, the ways I want it to grow, and how I want to grow. I wonder how, in that moment, to advocate for that growth. I wonder how to prove my value add.
I wonder how to prove my value add to myself.
“You’ve only got 60 minutes for this, and it’s going to be a long, long day on Thursday,” the Director tells me. “I think—well, you have to consider your audience.” I see his eyes, even over Zoom, scanning across the numbers and texts and graphics. He asks, “Is everyone going to be able to understand this?” I consider his question, and hover my mouse over the delete button. “Wait,” he continues, “I think nerds like you and me can talk about this forever when we have the time.” I hover off the delete button.
“Hide these slides—you or I or someone else can find them very, very helpful later,” he muses. I follow his instructions. That’s three fewer slides to sift through. “Try compressing them into one: what’s the main thought you’re communicating here? That a lot of work went into this, right?” Typing away again, I reply, “Yeah, that’s right.” The Director confirms my line of thinking: “Great. You have to make sure you’re telling a story with what you’re showing us. What’s the story we’re painting with the data?”
We trim a slide or two off the deck, and compress and clarify where necessary. The deck actually doesn’t change very much—he muses about whether an hour will be enough time for all of the content, and I try to respond with confidence. “Hey, by the way: Great work, Avery,” he tells me, “You’ve been working hard on this.” I smile a bit and nod. He’s right, after all.
Come Thursday morning, I’m reflecting while setting up the presentation space. I’m not so much nervous as I am jittery, and I suppose the coffee hasn’t helped. Yet I’m still asking myself questions.
For all the research I had executed and wanted to execute, for all the rigor I had taken into Year Up and learned within it, for all the data carved into the shape of 70 colorful, animated slides and graphs: am I truly embodying a researcher? In my pursuit of not only scientific precision in my analysis, but scientific exactness in my reporting: am I managing my stakeholders’ expectations and understanding, or am I arrogantly positioning myself as a sole source of knowledge? And for the participants, the stakeholders for whom the data would matter most—yet wouldn’t be in the room of this day-long workshop—is this report going to be to their benefit? Am I making sure not to reduce them to a set of data points?
Where am I on that scale of 30 to 50 percent against 50 to 70 percent?
Interactives and warm-up activities come and go. My presentation—in all of its data-laden glory—comes and goes. I try not to pontificate; instead, I carve out a story through a reservoir of statistics and words, all stemming from the participants we had worked with for months upon months upon months. If I keep anything to an exact science, it’s my timekeeping: questions have their dedicated sections, and interruptions are not allowed. It’s appreciated by my colleagues. Someone compliments my sweater; it was going to be a long day, so our Site Director thought it best for us all to wear casual.
At the end of the workshop, we hold a “Plus/Delta,” an exercise to identify the strengths and growth areas of an event. One of my coworkers comes up to a whiteboard in front of the room, pops the cap off of her marker, and gestures toward the dozens of people seated.
“Well,” she tells them, “I have a Plus to start off with.” My head pivots towards her.
“Avery,” she continues, “Fantastic data. I learned a lot today.” She writes Avery’s Data in fat, bold letters at the top of the board, right under a plus-sign. “Check,” someone calls out, signaling agreement. “Check,” goes another voice. “Check.” Another. “Check.” Another. “Check.” Again. “Check.” It goes on like this for a little while, until there’s a stream of little checkmarks following my name. “Clarity of the data,” someone else calls out. “Check to that too,” says another. I try not to laugh.
“Glad to hear that,” I joke. And there I am, an early career researcher and analyst.
Even by the end of our Plus/Delta—even by the end of a day spanning all the way from 9 AM to 6 PM—I don’t know where I lie on that scale of 30 to 50, 50 to 70. I still haven’t put the puzzle pieces of my own research entirely together with that of Year Up’s national research functions. Unfortunately, I’m still 78% away from completing my certification in Data Science in Python, but NumPy hasn’t been too hard to figure out. I’ve been given the possibility of starting some semi-structured interviewing, but even that hasn’t been set in stone.
Yet, through all of that, I know I’m a researcher with a clear value add. And I don’t think I have to perform too many tests to prove that. The evidence against the null hypothesis is self-evident, even to me.
Avery Trinidad
Avery Trinidad (he/him) is the Research & Insights FAO Schwarz Fellow at Year Up in New York City.