Qualitative Research in Practice: Stories From the Field

9 Epilogue: From research to practice, programs and politics

We began this journey looking at how qualitative research can be generated from the swampy lowland of practice in the human services. We end it by examining the impact of such research, thus completing the loop back to practice.

It is not always easy to identify the impact of research. Some studies have an obvious and immediate effect, while the effects of others may be almost imperceptible, particularly in the short term. It is also hard to differentiate the impact of one study from that of others, as one piece of research can lead to another, creating multiple ripples in a reservoir of research and practice in which it is impossible to trace any ripple back to a particular stone. The researcher may not be aware of the impact a study has, just as human services managers, policy-makers and practitioners may not be fully aware of the research influencing their decisions.

In this chapter we look briefly at how qualitative research in the human services can be used to enhance the response of the service system and the broader community to complex human problems. We conclude with a few examples from our interviews with researchers.

While it will mostly be people other than researchers who put the implications of their findings into practice, researchers have an important role to play in determining the impact of their study. Making recommendations in the most effective ways possible is a key part of this. Surprisingly, very little attention has been given in the research literature to how to do this. Patton, an expert in qualitative program evaluation, commented that:

Recommendations have long troubled me because they have seemed the weakest part of evaluation. We have made enormous progress in ways of studying programs, methodological diversity, and a variety of data-collection techniques and designs. The pay off from these advances comes in the recommendations we make. But we have made very little progress in how to construct useful recommendations (Patton, 1988, p. 90).

Perhaps there needs to be a research project on how best to write research recommendations! Below is a summary of the suggestions which Hendricks and Papagiannis (1990, pp. 122–5) have proposed for making recommendations in relation to program evaluation. They are also applicable to other types of research.

• Consider all issues in your evaluation to be ‘fair game’ for recommendations, not just those the research was designed to investigate.
• Don’t wait until the end of your evaluation to begin thinking about recommendations—record possible recommendations from the commencement of data collection.
• Draw possible recommendations from a wide variety of sources, including earlier studies of similar programs and program staff at different levels of the organisation.
• Work closely with agency personnel throughout the process to minimise the threat which unexpected recommendations can pose, and engage stakeholders who have the power to implement them.
• Consider the contexts into which the recommendations must fit and make realistic recommendations, thinking carefully before recommending fundamental changes.
• Decide how specific you want your recommendations to be, and consider the possibility of providing options for decision-makers.
• Show the future implications of your recommendations in as much detail as possible, consider planning an implementation strategy and, if invited, consider becoming involved in the implementation itself.
• Make your recommendations easy for decision-makers to understand, categorising them in meaningful ways (for example, short-term and long-term) and adapt the way recommendations

<strong>Qualitative</strong> research <strong>in</strong> practice<br />

given <strong>in</strong> <strong>the</strong> research literature on how to do this. Patton, an expert<br />

<strong>in</strong> qualitative program evaluation, commented that:<br />

Recommendations have long troubled me because <strong>the</strong>y have seemed<br />

<strong>the</strong> weakest part of evaluation. We have made enormous progress <strong>in</strong><br />

ways of study<strong>in</strong>g programs, methodological diversity, and a variety<br />

of data-collection techniques and designs. The pay off from <strong>the</strong>se<br />

advances comes <strong>in</strong> <strong>the</strong> recommendations we make. But we have made<br />

very little progress <strong>in</strong> how to construct useful recommendations<br />

(Patton, 1988, p. 90).<br />

Perhaps <strong>the</strong>re needs to be a research project on how best to write<br />

research recommendations! Below is a summary of <strong>the</strong> suggestions<br />

which Hendricks and Papagiannis (1990, pp. 122–5) have proposed<br />

for mak<strong>in</strong>g recommendations <strong>in</strong> relation to program evaluation.<br />

They are applicable also to o<strong>the</strong>r types of research.<br />

• Consider all issues <strong>in</strong> your evaluation to be ‘fair game’ for<br />

recommendations, not just those <strong>the</strong> research was designed to<br />

<strong>in</strong>vestigate.<br />

• Don’t wait until <strong>the</strong> end of your evaluation to beg<strong>in</strong> th<strong>in</strong>k<strong>in</strong>g<br />

about recommendations—record possible recommendations<br />

from <strong>the</strong> commencement of data collection.<br />

• Draw possible recommendations from a wide variety of sources,<br />

<strong>in</strong>clud<strong>in</strong>g earlier studies of similar programs and program staff<br />

of different levels <strong>in</strong> <strong>the</strong> organisation.<br />

• Work closely with agency personnel throughout <strong>the</strong> process to<br />

m<strong>in</strong>imise <strong>the</strong> threat which unexpected recommendations can<br />

pose, and engage stakeholders who have <strong>the</strong> power to implement<br />

<strong>the</strong>m.<br />

• Consider <strong>the</strong> contexts <strong>in</strong>to which <strong>the</strong> recommendations must fit<br />

and make realistic recommendations, th<strong>in</strong>k<strong>in</strong>g carefully before<br />

recommend<strong>in</strong>g fundamental changes.<br />

• Decide how specific you want your recommendations to be and<br />

consider <strong>the</strong> possibility of provid<strong>in</strong>g options for decision-makers.<br />

• Show <strong>the</strong> future implications of your recommendations <strong>in</strong> as<br />

much detail as possible and consider plann<strong>in</strong>g an implementation<br />

strategy and, if <strong>in</strong>vited, consider becom<strong>in</strong>g <strong>in</strong>volved <strong>in</strong> <strong>the</strong><br />

implementation itself.<br />

• Make your recommendations easy for decision-makers to understand,<br />

categoris<strong>in</strong>g <strong>the</strong>m <strong>in</strong> mean<strong>in</strong>gful ways (for example,<br />

short-term and long-term) and adapt <strong>the</strong> way recommendations<br />

178

Hooray! Your file is uploaded and ready to be published.

Saved successfully!

Ooh no, something went wrong!