DOI: 10.5553/IJCER/221199652014002001007

International Journal of Conflict Engagement and Resolution

Article

Success in Conflict Intervention Is What We Make of It but Significance Is the Goal

Keywords: conflict intervention research, measure of success, measure of significance, third party impact, mediation
Suggested citation
Brian Polkinghorn and Abraham (Avi) Mozes-Carmel, "Success in Conflict Intervention Is What We Make of It but Significance Is the Goal", International Journal of Conflict Engagement and Resolution, 1 (2014): 53-58.

    This article examines two issues relating to why and how we measure and derive meaning from ‘success’ in the effective intervention into conflict episodes. The first issue focuses on who we say we are in relation to what we do as interveners and researchers occupying an eclectic and clustered field of study and practice. We argue that the field itself shapes the framing of success, and as such we should resist the urge to fuse the field into tightly bound conceptual frameworks or unifying theories and remain – at least for now – a wide-open and diverse conglomerate, so as to focus our attention on the fission of unique ideas. The second issue is that there is no single universal or ‘normal’ framework or method for measuring success in conflict intervention. We therefore argue that the measure of success is not the true aim of conflict intervention research; the true aim is gaining an understanding of the significance and impact that the process and the intervener have on the parties.


    • 1. Who Are We as a Field and How Does That Impact Our Notion of Success and Failure?

      Our field of conflict resolution, in comparison with others, in both academic and practical terms, is relatively young and yet quite complex. Outsiders might see the field as a chaotic beehive of activity or perhaps random programmatic development, yet they are actually witnessing our prolific growing pains. As such, we don't think folks in the field should be too concerned about trying to reel in every new idea, concept, methodology or other development into a pre-existing or evolving mold. However, to answer the question “what is success”, it helps to first get a grasp of the field. In this paper, we aim to explore the connection between the nature of the field and the definition of success.

      1.1. Fission vs. Fusion

      Who we are as a field is not an easy question to answer, and we should be cautious in this endeavour because someone might try to use the exercise as a pretext or rationale for getting us organized. We think that getting the field organized, say legally, professionally/practically and academically, poses many serious challenges and obstacles that, if not managed over a long period of development, could stunt our growth and our ability to respond flexibly to social needs, stymie outlandish creativity and ultimately diminish our impact and significance in the world. At best, our field is composed of clusters of groups, akin to cousins with some shared origins, and there is no need to create boxes and arrows in some master flow chart to push or pull us into a neat alignment. Our clusters have been developing, in their own ways, for decades, and our once small clubs are now, thankfully, more crowded and diverse and, on the surface, appear to be a bit more disorganized. And we say: “That's evolution for you.”
      Order isn't always the end state that a field of study wants to achieve. Indeed, chaos can be good; it can be managed in such a way as to breed innovative improvements in our thinking and practice, which can lead to the generation of novel ideas. Just think of Lockheed's Skunk Works or the work done at the RAND Corporation as prime examples of what a creative space can produce.
      What we are espousing is reminiscent of Thomas Kuhn's work, but different in one sense. The Structure of Scientific Revolutions was Kuhn's eye-opening account of how fields of study, and especially disciplines, indoctrinate adherents into the culture, concepts, theories, tools and norms of the field or discipline, and of how this can lead to sub-optimal problem solving and stymied development. What we are talking about here is the optimal creative space in which people can operate. We are arguing that the best way for our field to address Kuhn's warning about ‘normal’ problem-solving dilemmas in various disciplines is not to build academic structural blinders in the first place, but rather to maintain something of an ambiguous state of affairs so as to preserve the creative space to experiment and explore without reinforcing an orthodoxy. In other words, we don't think leaders should rush to develop an organized field, as many other clinical or applied service-based fields of research and practice have. We most likely have many years of exploration and experimentation, both in practice (art) and in research (science), to go through before reaching that stage in the evolutionary path. If Kuhn is correct, and we tend to think he is, then it is where the field remains clustered (maybe cluttered is the better word) that divergent creative thinking can be found and discoveries can be more readily made. Maybe divergence is, in a meta-sense, our idea of normal.
      That is precisely why we believe our field should be protective of its wide-open creative spaces and resist those elements that demand coherence through a more organized field structure that develops over time through fusion. Let's be careful not to push it prematurely, as we have examples of forced organization that did not work as intended. Look at what happened when some of the leaders in our field took the Hewlett money. One of the attached strings was to form ACR, which was supposed to be an elegant fusion of SPIDR, AFM and CREnet. A lot of people saw serious challenges coming out of this process, but no one seemed to have the entrepreneurial ideas in mind to counteract the temptation to take the money. The result is that part of the protection of our wide-open creative space was arguably dismantled in the name of organization. Instead of fusion, we should be thinking more about fission and the dispersal of our ideas, practices and scholarship, starting by limiting the desire to build organizational walls that create the norms that eventually narrow our cognitive bandwidth or field of vision. These are the precise elements that explorers and creators need to thrive.
      We should also be cautious about developing purported field-wide theories or other means of sewing pieces of the clusters together with canonical concepts. The field can be disorganized and yet coherent, allowing a variety of values, theories, practical approaches and methodologies to flourish. For that matter, what does a unified field achieve anyway? If and when parts of the field begin to fuse, we say ‘let it happen’ when the time is ripe, not when outside entities or inside leaders demand or force it. Let's prevent what Kuhn predicts. We know that sounds deterministic, but let's not get carried away. All we are saying is: what is the rush?
      The state of a field has much to do with how we measure success. Various clusters of researchers in, say, psychology, social psychology, anthropology, law and social work operate under a variety of ‘normal problem solving’ agendas using ‘normal problem solving’ tools. Even purists in the ‘social sciences’ and ‘conflict resolution’ operate, perhaps to a lesser degree, under the same principles. Research on, say, programmatic or process effectiveness will be framed in some disciplinary context, no matter who is doing the work, and that is what shapes our notion of success.
      This leads directly to the second question we promised to address: how do we measure success? The first question provides the context that allows us to appreciate the significance of the second.

    • 2. Success Is What We Make of It – But More Importantly, Forms of Success Point toward Significance

      By arguing for a view of fission instead of fusion, we leave open the possibility of framing ‘success’ in many constructive ways. As the conflict resolution field is rather eclectic and chaotic, it would seem arbitrary to assign definitive characteristics to what we consider ‘success’ or ‘failure’ when it comes to programmatic or process/practice outcomes. In order to attend to the multitude of characteristics of success as it relates to outcomes, it is necessary here to make two broad claims that might limit, and hopefully clarify, what is otherwise a confusing debate. The first is that each definition of success, whether derived from quantitative or qualitative approaches, should embrace several notions instead of demanding one or two clear-cut means of measurement. The second is that we need to rethink why we are focusing on success (or failure) in the first place rather than examining an arguably more important outcome relating to our impact and the meaning of significance. By this we mean not just the quantitative understanding of significance arrived at through statistical computation and analysis, where we can make some tentative statements about how a process impacts parties and outcomes. That type of significance will usually keep the policy wonks and funders happy. We also mean that significance can be conceived as the ability to have some constructive impact on the people who take part in conflict intervention processes. This measure of significance can take into account what parties experience, think and feel about themselves and other parties, as well as their changes in perceptions of important ideas such as fairness, equity and (access to) justice. These two notions of significance combine measurement approaches that have otherwise been unnecessarily pitted against one another by notions of ‘normal science’ found in various fields or disciplines. Both forms, in essence, try to demonstrate that a relationship exists between what we do as process practitioners and what impact we and the process have on parties in conflict. Instead of using the loaded concept of ‘success’, we focus on what can readily be understood by various methodological schools of thought.
      The arguments for and against these two methodological orientations are well worn. The statistically oriented researcher makes a good point that prescriptive and descriptive testimonials are not the strongest platform for arguing policy changes, and that quantitative results can lead to accurate predictions of future behaviour and conduct. Qualitative and experiential researchers, for their part, do not limit their exploration and understanding of the impact of their work to a few isolatable variables. Rather, they try to think holistically and systemically about an engagement, situation or interaction, and they would likely argue that narrow examinations and interpretations are insufficient to capture the meaning of conflict intervention and its impact on parties. Notice how these frameworks shape the arbitrary notions of ‘success’ and ‘failure’? One is, by definition, quite bounded and limited in scope; the other is quite wide and has little structure. Though the arguments on each side have merit, we would rather sidestep this debate and claim that all measures of ‘success’ are really sub-varieties of the exploratory or experimental means of measuring significance. Below are two examples from different ends of the intervention spectrum.
      In international conflict intervention, we would be setting impossible standards if we were to declare that success is achieved by something quantifiably tangible such as the writing of an agreement. This line of thinking is flawed in many ways. First, we have to examine the conditions that led up to the mediated intervention and the subsequent agreement. Were some parties forced into a conflict intervention and a subsequent agreement, and will this make the agreement fail? Are some parties procedurally favoured, leading to an unsustainable agreement? Second, how does the type of intervention (e.g. a united versus a divided mediation engagement, or a neutral versus a partisan approach) affect the outcome? Is an aggressive party that ignores calls for a ceasefire more likely to be subjected to a partisan, unified approach or to a neutral, disorganized mediation approach? Do the differences we see in mediators – say, a power broker versus a facilitative mediator – affect the interaction leading to an agreement (success) or not (failure)? What type of mediator is more likely to get the parties to reach an agreement regardless of its quality? (The answer is the power broker.) Who is less likely to get an agreement but, if one is reached, is more likely to continue working with the parties to see that it is fully implemented and enforced? (The answer is the more facilitative mediator.)
      Would it not make more sense to require this level of complexity in order to reach a more nuanced understanding of the relationship between conditions on the ground, intervention processes and outcomes?

      2.1. Learning from Community Mediation

      On the other end of the intervention spectrum, what can we learn from research on community mediation? One might argue that conflicts at the community level could or should be less complex, involve fewer stakeholders and/or issues, and involve fewer secondary and tertiary parties and outside forces, in particular fewer layers of law. One could also argue that the outcome will have a much smaller impact on society. Even if we accept these assumptions, there are several worthwhile lessons regarding the notions of success and significance to be gained by practitioners and researchers working in other conflict venues.
      Let's examine several forms of success first. In some community settings, getting the parties to agree to come to the table is counted as a threshold indicator of success. Getting them to show up and talk is considered by some programs to be another major milestone, as it usually requires parties to undergo a (voluntary) shift in their thinking and attitude toward the other party as well as the process. Realizing that another means of addressing issues can potentially lead to a better outcome is significant. Coming to see the other party as someone worth working with can likewise mark a significant change in thinking, from adversary to potentially cooperative partner.
      Reaching an agreement in community mediation is one of the universal measures of success for policy makers, funders, program administrators and mediators/interveners. However, there are two fundamental problems that need to be addressed. The first is what goes on between getting parties to the table and signing an agreement, which requires much more attention if we are to gain a better understanding of the significance of any form of intervention. Presently, we do a fairly lousy job of capturing what goes on inside mediation sessions in situ, although progress is being made. Current research undertaken at our center employs several pre-intervention data sets, in situ behaviour coding within mediation sessions and multiple pre- and post-mediation measures to address these shortcomings. Second, many research tools are inadequate or poorly constructed, and their measures of ‘success’ are, at best, questionable. (It is noteworthy that there are no studies that measure significance.) For instance, we often see surveys that confound us. What does it mean when a participant is asked to respond to ‘I was satisfied with ______’, with the blank filled in by items such as the process, the mediator or the outcome? Such questions lack specificity and any connection to conflict events and intervention strategies; they are likewise not temporal in nature, so we cannot attach satisfaction to any given instance, behaviour or action. They are so vague as to have no meaning whatsoever, provide a poor surrogate for any attempt at examining or explaining significance, and teach us little about what goes on in the mediation process. The shift to examining the significant impact the process has on the parties – quantitatively and qualitatively – is more encompassing and meaningful.
      Placing an emphasis on significance means differentiating between, say, a written agreement (i.e. a successful outcome), a written agreement where the parties think differently about each other (i.e. a measure of relationship change) and greater personal efficacy in solving problems (i.e. a measure of empowerment). Taken together, these might allow us to examine significant changes (e.g. the transformation of relationships and parties) that standard-fare success measures relating to process (procedural justice) and outcome (distributive justice) would not be able to capture.
      In the end, we have an understandable obsession with the measures we use to define success. The core problem is that we have been sloppy, answering other people's questions (funders', not participants'). We need to understand what is meant by success as it relates to relationship, context, issues, process, outcome and all the ways these variables interact to form a more coherent understanding of significance. Therefore, we argue that if we begin our inquiry into how we measure the impact of process on parties and outcomes with what we mean by significance, in all its forms, we will be in a more favourable position to understand what we as researchers and practitioners are doing.
      It is one thing to be “successful” yet have no impact at all; it is another thing altogether to have a significant impact on those whom we assist.

