METHODS

CIET has a set of methods that it brings to all its work as well as specific methods that come into play as needed. Specific methods continue to be developed/adapted as the work requires.

CIET conducts community intervention trials to demonstrate the extent to which a given solution to a problem will be effective. At its simplest, this is a comparison of outcomes in one group of communities with those in a similar community or group of communities where the solution has not yet been tried.

We first do a baseline study in all communities likely to be involved in the study. Wherever possible we then randomly allocate communities to “intervention” and “control” status, using the baseline to help ensure balance between the groups. When they are randomized, community intervention trials are known as cluster randomized controlled trials (CRCTs). Whether randomization is used or not, if the solution is shown to be beneficial, our commitment is to extend it to all communities in the sample.
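As an illustration only, one simple way to use baseline data to keep the arms balanced is pair-matched allocation: sort communities on a baseline indicator, then randomize within consecutive pairs. Everything in this sketch – the community names, the baseline values, the matching rule – is hypothetical, not drawn from any actual CIET trial:

```python
import random

def allocate(communities, seed=42):
    """Pair-matched allocation: sort communities by a baseline indicator,
    then within each consecutive pair randomly assign one community to
    each arm, so the arms stay balanced on that indicator."""
    rng = random.Random(seed)
    ordered = sorted(communities, key=lambda c: c["baseline"])
    arms = {"intervention": [], "control": []}
    for i in range(0, len(ordered), 2):
        pair = ordered[i:i + 2]
        rng.shuffle(pair)
        arms["intervention"].append(pair[0]["name"])
        if len(pair) > 1:  # an odd community out goes to intervention
            arms["control"].append(pair[1]["name"])
    return arms

# Invented baseline values (e.g. the proportion with some outcome)
communities = [
    {"name": "A", "baseline": 0.55}, {"name": "B", "baseline": 0.52},
    {"name": "C", "baseline": 0.31}, {"name": "D", "baseline": 0.29},
    {"name": "E", "baseline": 0.71}, {"name": "F", "baseline": 0.68},
]
arms = allocate(communities)
print(arms)  # each arm receives one community from each matched pair
```

Real trials may stratify on several covariates or use restricted randomization instead; the matched-pair rule here is only the simplest case of using a baseline to balance arms.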

In late 2009 CIET began a large CRCT in both Mexico and Nicaragua, testing a “green” alternative (camino verde) to conventional dengue control measures through evidence-based community mobilisation.

In Pakistan we conducted a CRCT to test a communication solution to improve household cost-benefit decisions about childhood immunization. Five articles describing the results and various aspects of this trial have been published in BMC International Health and Human Rights and are available at: http://www.biomedcentral.com/1472-698X/9?issue=S1.

In Pakistan, we piloted an evidence-based education intervention based on locally designed communication aids used by Lady Health Workers.

In Limpopo province in South Africa, CIET tested the impact of “HIV literacy” among female elders.

In the Free State province of South Africa, we looked at the impact of an awareness intervention to increase acceptability of antiretroviral therapy among rural populations.

In late 2008 CIET began a CRCT to test the impact of focusing local AIDS prevention on the choice-disabled, especially the victims of gender-based violence, in Botswana, Namibia and Swaziland.

Knowledge synthesis is the aggregation of existing knowledge about a specific question by applying explicit and reproducible methods to identify, appraise, and then synthesize studies relevant to that question. The best-known products of knowledge synthesis are systematic reviews and meta-analyses.
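As a minimal sketch of the meta-analysis idea mentioned above, a fixed-effect meta-analysis pools several study estimates into one inverse-variance weighted average. The study effects and standard errors below are invented for illustration:

```python
def pooled_estimate(studies):
    """Fixed-effect (inverse-variance) pooling.
    studies: list of (effect, standard_error) tuples."""
    weights = [1 / se ** 2 for _, se in studies]          # weight = 1 / variance
    pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5                  # SE of the pooled effect
    return pooled, pooled_se

# Three invented study estimates of the same effect
studies = [(0.40, 0.10), (0.55, 0.20), (0.35, 0.15)]
effect, se = pooled_estimate(studies)
print(f"pooled effect = {effect:.3f} (SE {se:.3f})")
# pooled effect = 0.409 (SE 0.077)
```

The more precise a study (smaller standard error), the more it pulls the pooled estimate towards its own value; real syntheses would also examine heterogeneity before pooling.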

Good knowledge syntheses seek to include not only studies published formally but also those that can be found in the “gray literature” that circulates via the Internet, scientific conferences, academic courses, etc. But there is a great deal of knowledge that is not written. Most knowledge related to indigenous medicine, for example, is not available in written, much less in accessible published form. Among other contributions to knowledge synthesis, CIET is engaged in developing tools, such as cognitive mapping, for systematic documentation of traditional, local and unwritten knowledge that might otherwise escape scientific review and analysis.

Examples of CIET’s work in the field of knowledge synthesis include its contribution to the development of AMSTAR, a tool for assessing the methodological quality of systematic reviews. See: Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology 2007; 7:10. Available from: http://www.biomedcentral.com/content/pdf/1471-2288-7-10.pdf.

For every behavioural outcome there are intermediate stages on the way to a particular action or practice. Making these stages explicit can help in specifying indicators to be measured in studies of interventions aimed at changing a practice or behaviour. Using concepts from social psychology and decision theory, CIET uses the acronym CASCADA to describe an unpredictable but broadly sequential flow through conscious knowledge, attitudes, subjective norms and positive deviation from subjective norms, the intention to change, the sense of agency (collective and individual) that change is possible, discussing it and, finally, making the behaviour change. Each stage may be illustrated with reference to HIV and AIDS prevention.

Conscious knowledge: In primary prevention, we might be concerned with knowledge about rights (for example, the right to say no), modes of transmission of sexually transmitted infections, prevention mechanisms including condoms, and misconceptions and myths about HIV/AIDS. In secondary prevention, the focus might be the value and nature of HIV testing; in tertiary prevention, knowledge of antiretroviral therapy.

Attitudes: Attitudes of interest in primary prevention include the belief that, for example, women enjoy sexual abuse or that people with AIDS should live apart from others. In secondary or tertiary prevention, an example might be the belief that precautions like condoms are worth the effort.

Subjective norms: In primary prevention, the belief that friends or neighbours see things in a certain way (people around here believe that women enjoy sexual abuse) can be part of a negative cultural environment or, if modified, part of an incentive to change. The positive deviation from a negative social norm (in secondary prevention, my friends do not believe condoms are worth the effort, but I believe they are) can provide an early positive indicator of a behaviour change strategy.

Intention to change: The intention to go for HIV testing and, in tertiary prevention, intention to use a condom during intercourse, are useful markers of progress towards behaviour change.

Agency: Individual agency or sense of self-efficacy can be measured by the declared ability to insist on using a condom (secondary prevention). Also important is collective efficacy, the perception that a particular issue can be dealt with in the community.

Discussion or ability to talk about it: The ability to talk about an issue (refuse unwanted sex, use a condom, go for HIV testing or take up an ART offer) often precedes the actual practice. This also provides pointers for interventions in primary, secondary or tertiary prevention.

Actions/practices/behaviours: Prevention-related practices include sexual violence, going for HIV testing, using condoms, multiple concurrent partners, and uptake or adherence to ART. Like any prevention outcome, behaviour might be conditioned by factors like age or sex. A general concern is that of the social, cultural or economic factors supporting risk behaviour. The key to understanding preventive impact is to relate individual behaviour change to individual exposure to interventions.

Neither the steps in the cascade nor its slope are fixed. In a given situation, x% will have conscious knowledge of AIDS prevention, y% will have helpful attitudes and z% will have the agency to do something about it. The implication of the CASCADA model is that a positive attitude counts for more than conscious knowledge, and agency counts for more than a positive attitude. We do not know exactly how much more – this is probably situation and subgroup specific.
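The declining proportions along the cascade (the x%, y% and z% above) can be tabulated directly from survey data. The sketch below is purely illustrative: the stage coding and the toy respondent records are assumptions, not actual CIET survey variables:

```python
# Stage names follow the CASCADA model described in the text
STAGES = ["knowledge", "attitude", "subjective_norm", "intention",
          "agency", "discussion", "action"]

def cascada_profile(records):
    """Percentage of respondents positive at each CASCADA stage."""
    n = len(records)
    return {stage: round(100 * sum(r.get(stage, False) for r in records) / n)
            for stage in STAGES}

# Four invented respondents, coded True where the stage is reached
records = [
    {"knowledge": True, "attitude": True, "intention": True, "agency": True,
     "discussion": True, "action": True},
    {"knowledge": True, "attitude": True, "intention": False},
    {"knowledge": True, "attitude": False},
    {"knowledge": False},
]
print(cascada_profile(records))
# knowledge 75%, attitude 50%, intention 25%, action 25% in this toy data
```

Comparing such profiles between intervention and control sites, or over time, is one way to see where along the cascade an intervention is (or is not) having an effect.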

Initially developed in the context of HIV/AIDS prevention in South Africa, the CASCADA model has also been applied in Nicaragua to community control of dengue, and is applicable to many other types of research involving human behaviour, such as suicide prevention among Aboriginal youth in Canada.

Communication is at the core of any development effort. However, development agencies, governments, researchers, and academics disagree on what to use communication for and how best to communicate with people. Some think in terms of “dissemination”; others speak of “social mobilization,” “social marketing” or “knowledge transfer”. Many seem to believe in a higher knowledge that comes from science and should go straight into people’s minds and change their lives.

In CIET, we see communication as the way to build the voices of the community into planning, particularly those of the most disadvantaged. As researchers, we partner with communities to help them identify and solve their development challenges in their own terms and in a sustainable manner, based on the participatory search, lay interpretation and public discussion of local evidence. We call this process SEPA: socializing evidence for participatory action.

Countries with special SEPA experiences include Canada, Mexico, Nicaragua, Pakistan and South Africa.

A three-page executive report on SEPA is also available.

Social audits make organizations more accountable for the social objectives they declare. Calling an audit “social” does not mean that costs and finance are not examined – the central concern of a social audit is how resources are used for social objectives, including how resources can be better mobilized to meet those objectives. But even a thoroughly competent and honest financial audit may reveal very little about the results of the programme being audited. Only reliable evidence that links a programme’s impact and coverage to its costs can serve the needs of managers who seek to manage on the basis of results. Nor can social accountability be achieved by looking only at internal records of performance, however well and honestly these are kept. A social audit must include the experience of the people the organization is intended to serve.


The term “social audit” has been applied to many CIET projects. For an explanation of the term as used by CIET see the document “The Social Audit: Fostering Accountability to Local Constituencies” first published online in Capacity.org and available from the Library.


Notable among CIET social audits are the following:

Baltic States – In 2002, with the support of the Organisation for Economic Co-operation and Development (OECD), CIET surveyed attitudes and experiences of unofficial payments in the health care and licensing sectors of Estonia, Latvia and Lithuania.

Bolivia – At the request of the Vice-President, some 33,000 people, 1,600 businesses and hundreds of public servants were consulted on corruption.

Bosnia – The World Bank cash assistance programme was evaluated to estimate system leakage, targeting and programme misses.

Bangladesh – Over 125,000 people, mostly women, from 250 communities gave evidence on their use and perceptions of health and family planning services as part of the evaluation of the country’s Health and Population Sector Programme.


Costa Rica – A 1996 social audit of human rights in the Canton of Upala examined the treatment of immigrants in this Canton on the border with Nicaragua. It was the result of collaboration among United Nations agencies and the office of the Costa Rican Human Rights Ombudsman (Defensoría de los Habitantes).

Mali – An enquiry into how people view the availability and quality of public services identified corruption affecting both women and men.

Nicaragua – With the support of the World Bank, CIET tracked corruption in public services in Nicaragua from 1995 to 1997: public transport, Customs, social security and public administration. In 1999-2001 CIET carried out a social audit of civil society’s response to the devastation of hurricane “Mitch.”


Pakistan – An audit of the gender gap in primary education revealed teachers demanding unofficial charges from students. A social audit on abuse against women sought to identify ways in which local action could improve the situation of women. A social audit on people’s responses to the devolution of public services is tracking devolution’s impact at local levels over a five-year period.

South Africa, Gauteng – The role of corruption in the prosecution and conviction of rape cases set the stage for a much broader-based programme to prevent sexual violence.

South Africa, Wild Coast – Unofficial charges for health care and other public services were a major factor in the failure of small and micro-enterprises to accumulate sufficient wealth for survival.

Tanzania – The Tanzanian Presidential Commission on Corruption requested a social audit as part of its anti-corruption strategy. It documented corruption in the police, revenue and land sectors.

Uganda – Audits of the health and agriculture sectors were done in 1995. The 1998 national integrity survey included the experience of nearly 100,000 people and 1,500 civil servants, producing district-level integrity indicators on the police, judiciary, health, education and local administration.

A central objective of CIET’s approach – usually geared to planners – is to estimate the change in risk associated with a given potential determinant, usually a programme intervention. A household survey might produce a certain status indicator, such as that 55% of women attend prenatal care. Repeated surveys might even be able to detect a change in the associated health status, say low birth weight. The challenge is to decide whether the change in status is in reality an impact of the programme input. This is rarely a simple exercise; it requires documentation of exposure to programme interventions, and it requires that many other explanations for change be excluded. Even with modern multivariate analysis, survey data may be too shallow to provide much insight into what ought to be done about the status.
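The basic comparison described here – the change in risk of an outcome associated with exposure to a programme input – reduces, in its simplest form, to a 2×2 table. A minimal sketch with invented counts (the exposure, outcome and numbers are illustrative, not real survey results):

```python
def risk(cases, total):
    """Simple risk: proportion of a group experiencing the outcome."""
    return cases / total

def risk_ratio(exposed_cases, exposed_total, unexposed_cases, unexposed_total):
    """Risk among the exposed divided by risk among the unexposed."""
    return (risk(exposed_cases, exposed_total)
            / risk(unexposed_cases, unexposed_total))

# Invented 2x2 table: exposure = attended prenatal care,
# outcome = low birth weight
rr = risk_ratio(exposed_cases=30, exposed_total=600,
                unexposed_cases=40, unexposed_total=400)
rd = risk(30, 600) - risk(40, 400)   # risk difference
print(f"risk ratio = {rr:.2f}, risk difference = {rd:.2f}")
# risk ratio = 0.50, risk difference = -0.05
```

A risk ratio below 1 (here, halved risk among the exposed) is only the starting point: as the text stresses, attributing that difference to the programme requires excluding other explanations, typically with stratified or multivariate analysis.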

Valuable additional insights may be obtained from more qualitative approaches such as direct or participant observation, community meetings, Delphi techniques, focus or nominal group discussions and case studies. Rapid assessment procedures can deepen the knowledge base on the way things work, based on external observer opinions. On their own, these are rarely useful for measuring impact. Since qualitative techniques are based on small samples, repeating the observation or focus group discussion at another time can produce a different result by chance alone.

If the depth of knowledge generated by the qualitative studies is to be generalised, then its representativeness must withstand scrutiny in the manner of epidemiological data. Usually, qualitative techniques provide intense detail on a purposively chosen segment of society. The insights represent what goes on in those chosen segments. And if the segments chosen for qualitative study are not the same as those for quantitative study, linking qualitative appreciation to macro-level data is difficult. Typically the sample survey covers one domain (the district or country); the focus groups, participant observer reports or case studies reflect another domain.

The “sentinel sites” that characterise CIET methods are essentially survey clusters whose size has been increased to offer a representative panel of “mini-universes”. There is no sampling within a site. Each site can be followed over time by returning to it and repeating the enquiry. Like age, sex, or education, “site” can be considered an individual factor or grouped by characteristics (geography, religion, prevailing opinions, or price of basic grains) to provide a link between qualitative and quantitative data, between data from the household and data from the community or local environment.

In the evaluation of the 1998-2003 Health and Population Sector Programme in Bangladesh a detailed institutional review of facilities serving each sentinel site enabled CIET to identify two characteristics of health centres – separate toilets for women and use of screens when examining patients – which were key determinants of client satisfaction. In South Africa’s Free State province, variations in attitudes and performance of health workers in the different facilities providing antiretroviral therapy (ART) proved to be key influences on community engagement with the ART programme. In a baseline CIET survey on social vulnerability in Venezuela in 2003, a composite indicator of environmental conditions in each site was established on the basis of interviews with community leaders. Where these conditions were weakest, the risk of chronic child malnutrition as measured by height for age was more than double that in sites with better environmental conditions. This kind of analysis is not easy with typical random cluster surveys that use much smaller clusters.

Attributes of place, a specific aspect of local service, or the result of a focus group discussion have a quite different analytical relevance when observations or focus groups are repeated in a comparable manner across a panel of communities chosen to be representative of the geographic area in question. These differences can be related to programmatic input and other factors that might be heterogeneous across different sites. The impact assessment is based on the time sequence and the heterogeneity among sites.

Some of the most disadvantaged people in the world live in geographically or socially isolated small groups with little access to services or opportunities of any kind. Such is the case of indigenous rural populations relegated by society to the most remote and least accessible locations in their countries. In urban centres as well, ethnic minorities often find themselves scattered, marginalized and effectively isolated. Gathering actionable evidence that these people can use to assert their human rights presents special challenges.

CIET has been working to develop epidemiological methods particularly suited to this challenge.

In 1987 CIETmexico began studying the special needs of working children scattered throughout the city of Acapulco. In some of the most remote rural areas of Guerrero State the project on Microregional Planning assisted communities to identify and carry out their own solutions to pressing community problems.

CIET’s work with Aboriginal Canadians, both in cities and on reserve, began in 1995 and continues to the present. See Indigenous Canada.

A discussion and an example of methods for research among isolated and marginalized groups can be found in a doctoral thesis by Professor Lorenzo Monasta of the Institute for Maternal and Child Health in Trieste, Italy: Macedonian and Kosovan ROMÁ living in “Nomad Camps” in Italy: Health and living conditions of children from birth to five years of age. The full thesis, an English summary, and an Italian summary, are available from the library. The main findings of the thesis are published in: Monasta L, Andersson N, Ledogar RJ, Cockcroft A. Minority Health and Small Numbers Epidemiology: A Case Study of Living Conditions and the Health of Children in 5 Romá Camps in Italy. American Journal of Public Health 2008; 98(11):2035-2041. Available to institutional and personal subscribers at http://www.ajph.org/cgi/content/abstract/98/11/2035.

Knowledge Profiling in Community-based Participatory Research (CBPR)

Many sources of equally valid knowledge and experience may be relevant to a research question within communities, industry and government, as well as academia. Much of this information is not usually included in an inventory of knowledge related to a research project. The concept of a knowledge profile (KP) is designed to systematise this initial stage in the participatory research process by identifying and integrating all relevant knowledge that, once assembled, can best address a research issue. The KP process is a systematic set of phases or steps that carry a collection of individuals concerned about an issue forward to the point where they become an organized, inclusive team with a clear, valid research question.

1. Creating the Research Space

CBPR teams are initially self-selected because of their interest or skill in a particular issue and their willingness to collaborate in seeking a solution. A facilitator emerges whose role is to help bring out the skills, experience and complementary knowledge of the partners and keep the process on course. Positive research relationships are established with recognition of mutual autonomy for all partners, and the commitment to collaboration among people from fields that are often incongruent. Indicators of the success of this phase include: active buy-in by all participants as evidenced by tangible commitment of time and resources; expansion of the team to fill knowledge and experiential gaps; shared enthusiasm about the potential for action / change with the evolving research project; acknowledgement of differing agendas; and open and honest discussions of initial research issues.

2. Articulating and Negotiating

The KP process provides all participants with an opportunity to learn new perspectives from one another. Thus the research process itself becomes an important outcome. Each partner’s knowledge and experience is identified through facilitated discussions and roundtables where everyone has the opportunity to speak to a topic. Indicators of success in this phase include: articulation of guiding principles; identification of a process for group facilitation; and evolution of a learning partnership.

3. Identifying the Research Question

CBPR teams come together around a research issue that is important to the community: a recognised need for more information that has not yet been focused into a research question or questions. Indicators of success in Phase 3 include: articulation of the range of associated knowledge and perspectives around the table; identification of the common research issue; emergence of the research question(s); and identification of strengths and any further gaps in the available knowledge/expertise around the table.

4. Creating the Resource Inventory

The KP resource inventory expands the sources of assets to include the knowledge and experience of partners with diverse backgrounds (e.g. university, industry, or government) but a common interest in the evolving research issue. As a resource inventory develops it provides the team with a context in which to design the project. The end result of Phase 4 is a profile of existing resources and knowledge, a list of additional resources and knowledge to be added, and an initial team that is working in a safe and energetic space, committed to learning together.


The outcomes of a successful KP include: an inventory of existing and required resources, a well established research team operating in an ethical and safe research space, and articulation of an appropriate research question.


For further information see: Edwards, K. and Gibson, N. (2008) Knowledge profiling as emergent theory in CBPR. Progress in Community Health Partnerships: Research, Education, and Action. 2(1): 73-79.

With the explosion of biomedical publishing in the latter half of the 20th century, keeping up with primary research has become an impossible feat for policy makers and practitioners. There has been a proliferation of systematic reviews as one of the key tools for evidence-based health care. This presents both opportunities and risks. The opportunities are to base decisions on accurate, succinct, credible, comprehensive and comprehensible summaries of the best available evidence on a topic.

The goal was to create a valid, reliable and usable instrument that would help users differentiate between systematic reviews based on their quality, and facilitate the development of high-quality reviews. This exercise led to AMSTAR and was reported in a series of papers. Candidate domain items were identified through a review of the evidence and existing tools. Nominal group consensus methods were used to identify the best items and to gauge face validity. A comparison between the draft instrument and competing instruments was undertaken to establish its measurement properties. AMSTAR, the resulting instrument, has good inter-rater agreement, test-retest reliability, and face and construct validity, and is easy to use.
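Inter-rater agreement of the kind reported for AMSTAR is commonly summarised with a kappa statistic, which corrects raw agreement for agreement expected by chance. The following is a generic sketch of Cohen’s kappa for two raters scoring the same set of reviews; the ratings are invented and are not AMSTAR data:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed - expected agreement) / (1 - expected)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    c1, c2 = Counter(rater1), Counter(rater2)
    # Chance agreement: product of each category's marginal proportions
    expected = sum(c1[k] * c2[k] for k in set(c1) | set(c2)) / (n * n)
    return (observed - expected) / (1 - expected)

# Two invented raters scoring eight reviews on one yes/no item
r1 = ["yes", "yes", "no", "yes", "no", "no", "yes", "yes"]
r2 = ["yes", "no",  "no", "yes", "no", "yes", "yes", "yes"]
print(round(cohens_kappa(r1, r2), 2))  # → 0.47
```

Raw agreement here is 75%, but after discounting chance the kappa is a more modest 0.47; this is why kappa, rather than percent agreement, is the usual report for instrument reliability.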

We believe that AMSTAR has significantly advanced the practice of assessing the methodological quality of systematic reviews. It has received endorsements from the Canadian Agency for Drugs and Technologies in Health (CADTH) and several authors, and had been cited nearly 400 times as of early 2013. It has been translated into Japanese, French and Spanish. It is used to evaluate reviews, as a guide to the conduct of reviews and as an aid to teaching about systematic reviews.

See:

Shea BJ, Grimshaw JM, Wells GA, Boers M, Andersson N, Hamel C, Porter AC, Tugwell P, Moher D, Bouter LM. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Medical Research Methodology 2007; 7:10. Available from:  http://www.biomedcentral.com/content/pdf/1471-2288-7-10.pdf.

Shea BJ, Hamel C, Wells GA, Bouter LM, Kristjansson E, Grimshaw J, et al. AMSTAR is a reliable and valid measurement tool to assess the methodological quality of systematic reviews. Journal of Clinical Epidemiology 2009; 62(10): 1013.

Shea BJ, Bouter LM, Peterson J, Boers M, Andersson N, Ortiz Z, et al. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS ONE 2007; 2(12): e1350.
