
Data collection methods

 

Surveys

Surveys are a good way of gathering a large amount of data, providing a broad perspective. Surveys can be administered electronically, by telephone, by mail or face to face. Mail and electronically administered surveys have a wide reach, are relatively cheap to administer, produce standardised information and allow privacy to be maintained.[1] They do, however, have a low response rate, are unable to investigate issues in any great depth, require that the target group is literate and do not allow for any observation.[1]

As surveys are self-reported by participants, there is a possibility that responses may be biased particularly if the issues involved are sensitive or require some measure of disclosure on trust by the participant. It is therefore vital that surveys used are designed and tested for validity and reliability with the target groups who will be completing the surveys.

Careful attention must be given to the design of the survey. Where possible, using an already designed and validated survey instrument will help ensure that the data being collected is accurate. If you design your own survey, it is necessary to pilot test it on a sample of your target group to ensure that the instrument measures what it intends to measure and is appropriate for the target group.[1]

Questions within the survey can be asked in several ways, including closed, open-ended, scaled and multiple choice questions. Closed questions are usually in the format of yes/no or true/false options. Open-ended questions, on the other hand, leave the answer entirely up to the respondent and therefore provide a greater range of responses.[1] Scaled questions are useful when assessing participants’ attitudes. A multiple choice question may ask respondents to indicate their favourite topic covered in the program, or their most preferred activity. Other considerations when developing a survey instrument include question sequence, layout and appearance, length, language, and an introduction and cover letter.[1] Sensitive questions should be placed near the end of a survey rather than at the beginning.

Offering young people an incentive for completing the survey or embedding the survey as a compulsory item within the program schedule or curriculum may be useful to maximise the response rate.

Interviews

Interviews can be conducted face-to-face or by telephone. They can be in-depth, semi-structured or unstructured, depending on the information being sought.[6]

Face-to-face interviews are advantageous because:

  • detailed questions can be asked
  • further probing can be done to provide rich data
  • literacy requirements of participants are not an issue
  • non-verbal data can be collected through observation
  • complex and unknown issues can be explored
  • response rates are usually higher than for self-administered questionnaires.[6]

 

Disadvantages of face-to-face interviews include:

  • they can be expensive and time consuming
  • interviewers need to be trained to reduce interviewer bias and to ensure interviews are administered in a standardised way
  • they are prone to interviewer bias and interpreter bias (if interpreters are used)
  • sensitive issues may be challenging to explore.[6]

 

According to Bowling,[6] telephone interviews yield data that is just as accurate as face-to-face interviews.

Telephone interviews are advantageous as they:

  • are cheaper and faster to conduct than face-to-face interviews
  • use fewer resources than face-to-face interviews
  • allow the interviewer to clarify questions
  • do not require literacy skills.

 

Disadvantages of telephone interviews include:

  • having to make repeated calls as calls may not be answered the first time
  • potential for bias towards those who are at home if call-backs are not made
  • only suitable for short surveys
  • only accessible to the population with a telephone
  • not appropriate for exploring sensitive issues.[6]

 

Focus groups

Focus groups or group discussions are useful to further explore a topic, providing a broader understanding of why the target group may behave or think in a particular way, and assist in determining the reason for attitudes and beliefs.[1] They are conducted with a small sample of the target group and are used to stimulate discussion and gain greater insights.[6]

Focus groups and group discussions are advantageous as they:

  • are useful when exploring cultural values and health beliefs
  • can be used to examine how and why people think in a particular way and how this influences their beliefs and values
  • can be used to explore complex issues
  • can be used to develop hypotheses for further research
  • do not require participants to be literate.[6]

 

Disadvantages of focus groups include:

  • lack of privacy/anonymity
  • having to balance the group carefully to ensure it is culturally and gender appropriate (e.g. gender mix may be an issue)
  • potential for the risk of ‘group think’ (not allowing for other attitudes, beliefs etc.)
  • potential for group to be dominated by one or two people
  • the group leader needs to be skilled at conducting focus groups, dealing with conflict, drawing out passive participants and creating a relaxed, welcoming environment
  • they are time-consuming to conduct and can be difficult and time-consuming to analyse.[6]

 

Documentation

Substantial description and documentation, often referred to as “thick description”, can be used to further explore a subject.[7] This process provides a thorough description of the “study participants, context and procedures, the purpose of the intervention and its transferability”.[7] Thick description also includes the complexities experienced in addition to the commonalities found, which assists in maintaining data integrity.

The use of documentation provides an ongoing record of activities. This can be records of informal feedback and reflections through journals, diaries or progress reports. The challenge of documentation is that it requires an ongoing commitment to regularly document thoughts and activities throughout the evaluation process.

 

Triangulation

Triangulation is used to address the validity of the data.[8] Triangulation methods use multiple forms of data collection, such as focus groups, observation and in-depth interviews, to investigate the evaluation objectives. When the data from these various sources are comparable and consistent, the findings can be accepted as reliable and valid.[9,10] Using more than one person to collect the data can also increase its reliability; this, however, will significantly increase the cost of the evaluation. Additionally, theory triangulation provides new insights by drawing on multiple theoretical perspectives.[11]

 

Resources
  • SHBBV Program Planning Toolkit 
  • My Peer Toolkit: a free online resource for youth workers involved in planning, implementing and evaluating peer-based youth programs. The toolkit contains information about a range of evaluation methods and possible tools. The information is suitable for all programs, not just peer-based youth programs.

 

References
  1. Hawe, P., Degeling, D. and Hall, J. 1990. Evaluating health promotion: A health worker’s guide. Sydney: MacLennan & Petty.
  2. Taket, A. 2010. In Liamputtong, L. (ed.), Research methods in health: Foundations for evidence-based practice. South Melbourne: Oxford University Press.
  3. Hanson, W., Creswell, J., Plano Clark, V., Petska, K. and Creswell, D. 2005. Mixed methods research designs in counseling psychology. Journal of Counseling Psychology 52 (2): 224–235.
  4. Leech, N. and Onwuegbuzie, A. 2009. A typology of mixed methods research designs. Quality & Quantity 43: 265–275.
  5. Greene, J. and Caracelli, V. 2003. Making paradigmatic sense of mixed methods practice. In Tashakkori, A. and Teddlie, C. (eds), Handbook of mixed methods in social and behavioral research (pp. 91–110). Thousand Oaks, CA: Sage.
  6. Bowling, A. 1997. Research methods in health: Investigating health and health services. Open University Press.
  7. Nastasi, B. and Schensul, S. 2005. Contributions of qualitative research to the validity of intervention research. Journal of School Psychology 43 (3): 177–195.
  8. Barbour, R. 2001. Education and debate. British Medical Journal 322 (7294): 1115–1117.
  9. Golafshani, N. 2003. Understanding reliability and validity in qualitative research. The Qualitative Report 8 (4): 597–607.
  10. Ovretveit, J. 1998. Evaluating health interventions. Berkshire: Open University Press.
  11. Nutbeam, D. and Bauman, A. 2006. Evaluation in a nutshell. North Ryde: McGraw-Hill.
