Research and the ‘duh’ factor

Posted March 15, 2010 by Melanie Wilson

Recently a colleague approached me with an opinion that completely ruined my day.

I do original research, and I summarize and synthesize other people’s research, for youth-service professionals. I love the process of gathering information, analyzing it, shaping it into a coherent package, and letting it out into the world to illuminate as it will. But this colleague – he runs an agency for homeless and troubled youth – doesn’t feel the same way. In fact, concerning the bulk of research findings I routinely forward to him and others, he has one word: Duh.

To prove his point, he ticked through some research findings I had reported in a recent Into Practice bulletin – all of it carefully vetted (I thought) for relevance to the daily work of youth workers and administrators. The research was pretty typical fare, some of it commonsensical, most just underscoring what anyone paying attention would have already suspected.

One piece of research, for instance, confirmed that teenagers who listen to songs with hypersexual, misogynistic lyrics tend to initiate sex more quickly than they otherwise would. Duh, he said. Another piece reported that family conflict triggers depression more easily in teenagers who are already predisposed to depression. Duh, he said again.

Why on earth were these kinds of studies necessary, he wanted to know. Any half-intelligent person could guess this stuff. And anyway, wouldn’t the money that supports these projects be better spent on direct care for youth in trouble?

I told him I didn’t know – only research could tell us, and that kind of research wasn’t possible. To show him I sympathized, though, I did concede that all studies are incremental, narrowly defined, niche projects. They take the last similar study and go one small step further. Duplicative? Somewhat. Boring? To many folks, undeniably. But that’s how complete bodies of knowledge are built. It’s slow and tedious, but if you stick with it, you’ll end up knowing more than you used to know, and being better able to justify your programs. If we believe in evidence-based practice as much as we claim, we should be willing to embrace research rather than fight it.

But how, exactly? What does it look like to really use research in a practical way?

Let’s look at the sexually degrading song lyrics study as an example. A lot of people are uncomfortable with such lyrics, and rightfully so, in my view. It doesn’t matter how hip the vehicle: the vicious denigration of women simply can’t have a good effect on young people, either male or female. But does it actually have a bad effect? Bad enough to ban such music from programs where youth play it and obviously enjoy it? Who knows? If you had a study in hand that demonstrated that, yes, such lyrics have a clearly negative impact on the sexual behavior of youth, you’d have a reason, an evidence-based one, for rejecting the music. Your program has thus changed, and you used research to change it.

Or maybe you won’t use the study to make a policy change in an entire program. You may use it to inform your clinical work with individual youth. Transitional living and other youth-serving programs routinely see adolescent boys who exhibit inappropriate attitudes or behavior toward girls and women. Could the music they listen to be a predictor, a toxic influence, a risk factor? Very possibly, according to this study. At the very least, it’s one more fact to work with, one more thing to think about and discuss with colleagues and youth themselves.

Which songs young people listen to may seem like a small issue, not terribly consequential in the scheme of things. But the stakes can get much higher. Recently I acquainted myself with a government website, ExpectMore.gov, where the US Office of Management and Budget reveals, in plainspoken, nothing-hidden fashion, which government-funded programs are performing and which aren’t. Many youth programs have been put in the “not performing” category – rated “effectiveness not demonstrated” because they can’t yet prove that they work, and especially that they work over the long term. What happens to programs deemed “ineffective” or “effectiveness not demonstrated”? Some get slated for defunding, while others are pressured to produce the statistics that can show they deserve government support.

Every research study is designed to test the relationship between one thing and another. In the social sciences, it’s about action and consequences: if you do a particular thing, will another thing reliably happen? That’s all there is to it, and that simple question is applied to every aspect of our field, at every level, every day. It’s applied to practice and administration, to individuals and entire agencies, to clients and practitioners.

All that said, is research important to our field? Well, to borrow a word from my colleague, duh. It obviously is, both as a means to improve practice and to demonstrate effectiveness. He’s made me think, though. Do the majority of direct-service youth professionals share his views about the irrelevance of research? It’s a depressing thought for someone like me. I hate to say it, but maybe that in itself is a research project.
