Toward Collaborative Research in Dispute Resolution
Christopher Honeyman, Barbara McAdoo and Nancy Welsh, with 21 colleagues

This article was first published in Conflict Resolution Quarterly, Fall 2001. 
____________________________

Introduction

Unknown to many negotiators and mediators, scholars have by now produced a healthy body of literature on the social psychology, economics and sociology of negotiation and conflict resolution. What’s more, parts of this body of material begin to answer some very troubling practical questions — questions that arise every day in real live cases. At the same time, negotiators regularly encounter problems that are not answered by existing studies or theories — yet the studies that could produce better answers often don’t even get off the ground. Why?

As is true for many other fields, conflict resolution suffers from an impoverished relationship between researchers and practitioners. There is a gap between what’s known, or scrutinized, about conflict resolution in the halls of academia and what’s known (or at least believed) and used in meeting rooms, judicial chambers and church basements. Often, practitioners and scholars act like parties who suspect that they might be better off if they worked together constructively, but don’t understand or trust each other enough to capitalize on opportunities for collaboration.

We believe that this self-imposed separation has serious consequences for practice and for research. On the practice side, even well-established and highly practical concepts such as “reactive devaluation,” let alone relevant newer theories and research results, are only very slowly making their way into day-to-day use. In addition, individual practitioners’ most perceptive questions are often answered primarily by the “school of hard knocks” even though they and their programs might be spared some of the harder knocks through more rigorous analysis as well as more effective dissemination of research findings. On the research side, academics are often discouraged from conducting the field experiments and evaluation studies that are in turn critical to the development, testing and correction of conflict resolution-related theories and findings.

Sometimes, both sets of consequences are visible at once. One of the largest and most carefully done studies in the history of conflict resolution provides an example. In 1996, RAND’s Institute for Civil Justice reported its findings in a study of dispute resolution in the federal courts.1 The study compared cases that went through ADR processes with those that went through litigation. To the outrage of dispute resolution advocates, it found no significant savings in costs or time to disposition overall for cases that went through ADR. RAND endured much ill-informed criticism from people who had been busy selling ADR based on widely held assumptions of cost and time savings to litigants. It seemed possible to us, however, that the disappointing overall results might be explained by the fact that the quality of mediation, even within a given program, can vary widely from mediator to mediator — and by our field’s widespread failure to ensure consistent quality control.2 So we asked if the study’s data could be re-analyzed to answer the question of whether the “best” mediators in the federal programs saved their parties more money and time than did the other mediators.

The answer was no, the data couldn’t be re-analyzed – because nobody in the Congressional office that commissioned the study had thought to ask that question, because the researchers hadn’t thought of it on their own, and because during the design phase nobody asked for feedback from practitioners who were most concerned with quality control in mediation. Therefore, a design adjustment that could have significantly enhanced our knowledge at minimal additional expense wasn’t made – at a cost to the researchers’ reputations, and to the practitioners’ ability to improve their field. One consequence is that most program managers have in effect preferred to maintain their intuitive beliefs rather than bow to the implications of data they and their peers had little part in amassing. Thus few programs seem to have taken the RAND report’s implications seriously enough to have questioned their own practices or results.
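For readers who want to see what the missing design adjustment would have amounted to, here is a minimal sketch in Python. It assumes a hypothetical per-case table that includes a mediator quality rating, which is precisely the field the study never collected; the column names and numbers are illustrative only, not RAND's data.

```python
# A hypothetical sketch of the re-analysis the RAND data could not support:
# compare cost and time to disposition for the "best" mediators vs. the rest.
# Assumes a per-case table with a mediator quality rating -- the very field
# that was never collected. Column names and values are illustrative.
import pandas as pd

cases = pd.DataFrame({
    "mediator_id":         [1, 1, 2, 2, 3, 3, 4, 4],
    "quality_rating":      [4.8, 4.8, 3.1, 3.1, 4.6, 4.6, 2.9, 2.9],
    "cost":                [4200, 3900, 6100, 5800, 4000, 4400, 6500, 6200],
    "days_to_disposition": [120, 95, 210, 240, 110, 130, 260, 225],
})

# Split mediators into a "best" group (top quartile by rating) and the rest.
threshold = cases["quality_rating"].quantile(0.75)
cases["group"] = cases["quality_rating"].apply(
    lambda r: "best" if r >= threshold else "other")

# Compare mean cost and time to disposition across the two groups.
print(cases.groupby("group")[["cost", "days_to_disposition"]].mean())
```

The analytic step is trivial once a quality rating exists; the lesson is that it had to be designed in before data collection began.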

This instance is far from an anomaly — in dispute resolution or more generally. But for now, we will move on to two illustrations of the difficulties that can arise even when practitioners and scholars do decide to try to collaborate.

One example put forward at a session of the Theory to Practice project (which is described below) concerned a highly productive researcher-practitioner partnership that has been running for several years between researcher Lisa Bingham of Indiana University and a mediation program at the U.S. Postal Service. The collaboration, it seems, persisted in the face of fears on both sides largely because of an initial stroke of luck: the first questions asked by the research quite quickly produced answers that strongly supported continuation and expansion of the program. The attorney in overall charge of the program observed at one point that

It is very threatening to be evaluated…. Fortunately, the studies that (Bingham) did were very positive for the program. Had they not been positive, that would have been the end of the program, and it probably would have been the end of me….

Had the first report contained “negative” implications for the program, in other words, there might not have been a second report. As luck would have it, however, the research effort became a material part of the pilot program’s eventual credibility to senior management, a make-or-break issue in expansion throughout a conservative institution of 850,000 employees.3

Another example concerned the experiences of the “researched.” Howard Bellman, a national figure in environmental mediation, described at two project events his experience as the subject of one lengthy study in which he found the researcher’s integrity “impeccable” — and his less fortunate experiences in other studies. His conclusion:

I wouldn’t do it again unless I had confidence in the individual, that’s what turns out to be the point. Not confidence in the methodology, confidence in the individual.

While many practitioners have spoken of distrust of research in general terms,4 this became one of a series of clues that the need to develop personal relationships of trust had been underrated. Methods of trust-building that will encourage practitioners to accept the risks and discomforts of being closely observed, or even the annoyances and time investment of participating in less invasive research methods, are in their infancy in our field. But we hope this report will help.

Unless this pattern — of missed or squandered opportunities for collaboration — changes, our field is likely to slide gradually into unthinking repetition of unproven and sometimes downright dubious modes of practice. Meanwhile, academics’ concerns will become increasingly distant from the reality of conflict resolution. In a field built on fervent belief in the value of collaboration, we must ourselves find a better way to practice what we preach.

Theory to Practice and the “moveable feasts”

These stories hint that we cannot simply assume that the relationship between researchers and practitioners will go well. This is consistent with other findings of the Theory to Practice project. And now for a bit of background: The Theory to Practice project is a Hewlett Foundation-funded effort to improve communication and collaboration between conflict resolution practitioners, researchers and theorists.5 One of the project’s explorations of scholar-practitioner relationships was through a series of “conversations” at three national conferences — role plays, really — in which we enlisted a total of almost twenty colleagues to portray mediators, scholars, law school deans, and foundation officers, placed in a variety of situations. To create the scenarios, we tapped our colleagues’ real-life experiences, scripted the beginnings of dialogues, and let the role-players improvise the rest. Our actors were unusually experienced people from both practice and research, but we had them pretend that they were more typical examples of their assigned roles. Our actors quickly took their roles to heart, and the resulting dramas were honest, intimate and, we think, real. The revelations were often enlightening, and sometimes, a bit shocking.6

One of the most striking results of these scenarios was that even though we wrote only the very beginning of each one, letting the actors explain their motives for their subsequent decisions and actions, it generally didn’t take long for many of them to begin to feel manipulated by others. What rapidly became evident was that there were very strong career as well as attitudinal influences at work. We identified, among others, these five factors which inhibit mutual understanding:

1. Scholars and practitioners define “wisdom” differently. Practitioners believe wisdom is derived from direct experience, while scholars tend to see it as the product of tested hypotheses that are independent of any one individual’s “anecdotal” knowledge.

2. They don’t speak the same language. Academics develop specialized terminology (jargon) to communicate with their colleagues accurately and efficiently, while practitioners do the same thing — but often practitioners don’t notice their own forms of specialized language, and thus accuse the scholars of deliberate obscurity.

3. Neither group truly trusts or respects the other.

4. Scholars and practitioners face different professional pressures — and give credence only to their own.

5. Practitioners only want “news they can use” — and they define this in increasingly narrow terms.

How we arrived at these conclusions is described in detail in a monograph we have recently published elsewhere (see fn. 6). For now, we’ll simply note that few of our colleagues, scholars or practitioners, have found these conditions hard to believe. Certainly, they have all been with us in older fields for a long time.7

Over the course of our role-plays and follow-up audience discussions, as well as from many other discussions (see fn. 6), it became clear that if collaborative relationships between researchers and the program managers and practitioners who control real-world data were to start on solid ground, we had to make progress on some basic understandings. The Theory to Practice working group concluded, therefore, that industry-wide, we need “protocols” or, less formally, something close to them, to inform the relationship between a researcher and a practitioner group, particularly when research must involve collaboration. “Protocol” is not used here as a term of art, or in the quite technical sense often given to that term in the hard sciences. Our use is more vernacular: a set of considerations that, experience has shown, must be built into the formative stages of a relationship in order for all parties later to feel they were treated fairly.

Theory to Practice has sought various ways of developing discussions which include both experienced scholars and experienced practitioners. The discussion of “not quite protocols” was one of a series the project has called “moveable feasts.” This particular model is for an informal working meeting which doesn’t require a major commitment of funds and time to convene, to which only experts are invited, at which many points of view are represented, and which produces a tightly focused product.8 This article is the product of the particular moveable feast described below; two others will be published in subsequent issues of this journal.

The “methodology” was appropriate to a non-official project which seeks to encourage dialogues rather than engage in official pronouncements. We tackled this issue by inviting two dozen leading practitioners and scholars to dinner — an inherently informal occasion, designed to foster the most open and constructive discussion we could arrange. In an effort to ensure that the discussion was personal, the group assembled as a whole only briefly, for a grounding in the problems at hand. The group then divided into four tables of six for a couple of hours’ intense discussion, and later got back together for a quick summary. Rough notes were produced on the spot by four “reporters,” one at each table. Subsequent drafts of this document have circulated primarily by email, as have a good number of careful and constructive commentaries from those who were present.

While many distinguished institutions are represented among those who have been part of this discussion, the result is in no sense an official product of any of them. It is simply an attempt to describe what we think happens and what you, as either a researcher or a practitioner, might want to watch out for. This document is a relatively informal one, and is in no sense intended to forestall the development of more formal protocols when and if national scholarly and practitioner organizations wish to do so — quite the contrary. For the time being, the present attempt to lay out “not quite protocols” is primarily an effort to provide a common basis for discussion between a researcher and a practitioner group who are contemplating working together, and to give advance warning of some predictable problems. Only by addressing these problems forthrightly can we help both practitioners and researchers get past perceptions of the risks of working with each other, perceptions that are, in the main, much worse than the reality.

“Not Quite Protocols” for researchers
We’ll cut to the chase here. Because one of our hopes is to provoke more lengthy and considered discussions of these issues by larger and more representative groups, we will simply summarize what our group of 24 agreed on (and didn’t), and leave the deservedly elaborate explication-with-footnotes job to more leisured discussions. In particular, a thorough discussion of how to implement such protections has yet to take place; this would be a fit subject for a larger and more permanent organization to tackle. Here, however, are the results of this first foray into the subject.

On the most general level, the group felt that dispute resolution researchers should neither assume that they are ungoverned because of a lack of field-wide ethics/protocol enforcement mechanisms, nor assume that it is necessary to reinvent the wheel. Many of the protocols developed for older fields are based on a core of logic and fairness that should apply in dispute resolution also. A starting point is to follow existing standard social science procedures for research. Also, of course, legally mandated protocols and requirements of institutional review boards must be adhered to. And existing protocols and ethical codes in other disciplines should be consulted where they govern similar types of research.

But the development of protocols specific to the particular project represents a very important occasion for the practitioners to judge the trustworthiness of the researcher — and for the researcher to demonstrate that this trust is warranted. For this reason, “boilerplate” protocols should not be allowed to substitute for the process of explanation and negotiation, lest one side feel blindsided later. The group came up with a number of things a researcher ought to bear in mind in initial discussions with practitioners:

  • A sense of reciprocity. Without this, the project is unlikely to endure beyond the first few glitches. The researcher needs to build something into the research that most, if not all, the practitioners involved view as beneficial (e.g., the opportunity for intellectual discussion or introspection, feedback on their own practice, opportunity to improve practice, publicity). The researcher needs to understand that practitioners are busy and that they need to allocate time efficiently. Making conspicuous a willingness to think through the potential uses of the research from the practitioners’ point of view, in the first conversation, can be a great help in getting the project off the ground.
  • Reality testing with practitioners. Research has a way of producing results which surprise practitioners, or even alarm them if disclosed prematurely or without opportunity for feedback. Where the subject matter is sensitive, an explicit offer to share conclusions only, or at least initially, in person or by telephone rather than in writing may increase practitioners’ confidence in the researcher.
  • Discussion of data collection instruments. Early and frank discussion of how these will work can do much to build practitioners’ trust in the research and the researcher. So can discussion of the options for incorporating information that is of secondary interest to the researcher but important enough to the program to offset the opportunity costs of becoming involved in research.
  • “Informed consent” should be the key principle. This includes:
    • Risks of disclosure of confidential information. (When researchers report results in quantified/aggregate terms, however, this need not compromise confidentiality at all, because the identity of the parties is not relevant; see the sketch following this list.)
    • Other risks to subjects. Researchers have an affirmative duty to inform practitioners about protections available, e.g. human subject protocols, what items can be discussed/negotiated, and what is possible through negotiation without violating the integrity of the research.
    • Researchers must recognize that to service providers, research is secondary. Day-to-day work must go on, deadlines must be met, and these and related concerns may “trump” research needs. The onus is therefore on the researcher to help the practitioners think through the uses and risks of engaging in research. Openness on this issue is the beginning of a process of trust, and practitioners cannot be expected to devote the time or interest until shown a reason to. (Also see below: what the parties are told about what is recorded, etc., is a requirement of their informed consent.)
  • During the data-gathering phase, the key element is integrity of observation. Preparing the observers may not be cheap, especially in terms of the principal investigator’s typically scarce time, but it is paramount: the observers must understand the process they are observing. There is a real risk that academics and even graduate students will be perceived as arrogant if they do not take conspicuous steps to familiarize themselves with the conditions under which the practitioners must perform.
  • The usual concept of a reporting “phase” seems to create its own problems, many of which can be anticipated:
    • The timeliness of reports back to the program manager(s) is a subject of great interest to practitioners, who are often unfamiliar with the reasons behind the typical research/reporting timetable and are likely to see it as unreasonably slow for their needs.
    • There should be established principles concerning communication throughout the project, including, in larger organizations, who is the designated contact person for what kind of issue.
  • There is always a possibility that the research will show that the program has unwise or ineffectual policies or methods and that policy changes are needed. This possibility should be discussed at the outset so that the researcher and program managers can reach a straightforward understanding of when and in what manner such news should be communicated to the program if it develops. (This principle does not necessarily apply to research that is done by order of a funder or regulator rather than by voluntary consent and negotiation.)
  • There should be an explicit agreement near the outset of the project as to whether the subjects may review the research in advance of publication and, if they disagree with it, append a commentary.
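To make the aggregate-reporting point above concrete, here is a minimal sketch in Python. The case categories, the counts and the suppression threshold are illustrative assumptions rather than data from any actual program, and the threshold of five is a common disclosure-control rule of thumb, not an established standard of our field.

```python
# A hypothetical sketch of reporting in aggregate terms so that no individual
# party can be identified. Categories, counts and the threshold are
# illustrative assumptions, not data from any actual program.
counts = {
    ("workplace", "settled"): 23,
    ("workplace", "impasse"): 9,
    ("family", "settled"): 41,
    ("family", "impasse"): 3,         # small cell: could indirectly identify parties
    ("environmental", "settled"): 2,  # small cell
}

MIN_CELL = 5  # suppress any cell smaller than this (a common rule of thumb)

for (case_type, outcome), n in sorted(counts.items()):
    # Publish only aggregate counts, and suppress thin cells outright.
    shown = str(n) if n >= MIN_CELL else "<5 (suppressed)"
    print(f"{case_type:>13} / {outcome}: {shown}")
```

The design point worth noting is that suppression happens at reporting time, so identifiable records never need to leave the program’s custody.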

“Not Quite Protocols” for practitioners

Again, this is not intended as a treatise, but to get a discussion going on a broader level, so we’ll summarize the group’s agreements and disagreements briefly.

  • Practitioners should expect to invest some effort in helping to formulate the research questions. Casual agreement to the first proposed formulation can waste scarce research resources: if important questions are omitted at the outset, adding them later may require duplicating much of the effort and expense.
  • Where practitioner groups fund the research, they can demand a higher degree of input in advance. The specifics can vary greatly depending, for example, on whether the researcher is working as a hired consultant (to the program, or to a third party), as an independent academic, or in some other capacity. But practitioners must understand that once the project has been designed, the integrity of the research must be maintained.
  • Practitioners’ role in formulating good questions and methods carries with it a corresponding obligation to the parties. Some practitioners believe the parties should have the right to say no to having observers present. Others see the product of better knowledge about what is really going on in cases as too important to allow this, and prefer to rely on parties’ freedom to opt out of using that program or service. But in either case, parties should be informed concerning confidentiality issues. They should also be informed of measures to be taken to protect individuals’ data, where aggregate data is all that is really needed.
  • Sophisticated persons will only collaborate with someone they trust highly. But unsophisticated people are more vulnerable. There is a heightened responsibility when the parties are least able to understand. They must be told they are being observed, there must be measures to protect their identity and confidentiality, and the principle of informed consent must be given specific meaning, all in ways that are appropriate to the capacity of those observed.
  • In some kinds of studies, subjects might be able to check off acceptable uses for data that is personal to them: for in-house training purposes vs. for broader educational purposes, for instance. (A sketch of such a check-off record follows this list.)
  • Practitioners should assist in designing observation approaches that will minimize observer effects (i.e. the risk that the researcher’s presence alters the practitioner’s handling of a given situation).
  • The likelihood of mid-course program changes needs to be assessed, and reassessed at intervals. More than one promising project has had to be truncated because the researchers were not warned in good time that the cooperating program might have to “switch gears” in ways that made further (or even existing) data useless.
  • Collaboration must be distinguished from improper influence. Practitioners must understand that research conducted without its own integrity is ultimately useless. The researcher has no obligation to soft-pedal criticism and every right to be vigorous, and reports cannot be limited to what the practitioner wants to hear. Finally, a conspicuous willingness to accept and respond constructively to “bad news” should be seen as a hallmark of a program’s underlying integrity.
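The “check off acceptable uses” idea above lends itself to a very simple record structure. Here is a minimal sketch in Python; the use categories and field names are purely illustrative assumptions, not a proposed standard.

```python
# A hypothetical per-subject consent record for the "check off acceptable
# uses" idea. Use categories and field names are illustrative only.
from dataclasses import dataclass, field

ALLOWED_USES = {"in_house_training", "broader_education", "publication"}

@dataclass
class ConsentRecord:
    subject_id: str
    told_of_observation: bool = False  # subject knows they are being observed
    permitted_uses: set = field(default_factory=set)

    def permit(self, use: str) -> None:
        """Record a use the subject has explicitly checked off."""
        if use not in ALLOWED_USES:
            raise ValueError(f"unknown use category: {use}")
        self.permitted_uses.add(use)

    def may_use(self, use: str) -> bool:
        """Data may be used only for purposes the subject checked off."""
        return use in self.permitted_uses

record = ConsentRecord(subject_id="S-017", told_of_observation=True)
record.permit("in_house_training")
assert record.may_use("in_house_training")
assert not record.may_use("publication")  # never checked off, so not permitted
```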

Joint responsibilities

Some of the group’s conclusions lend themselves particularly well to joint design by the participating practitioner group and the researcher:

  • Researchers often present information in an unappealing way compared to anecdotal “warm and fuzzy” stories. The result is that the anecdotal stories carry disproportionate weight in practitioner as well as policy-making circles. Consideration should be given in the design stage to the value of promulgating the research results in a short, attractively formatted version specifically for use by practitioners and policy-makers.
  • There should be advance discussion of possible press attention to the ongoing process of research, and how to handle it if it occurs. In general, one basic safeguard is that if the press shows interest, the party contacted should let the other know this promptly.
  • Objective and semi-permanent records of parties’ actions and responses, such as audio and video tapes of a mediation session, can be extremely valuable, but require special precautions. Tapes should not be used over the specific objection of a party. If they are to be made, parties have a right to know the procedures for security, retention and disposition of them.
    • The realities of data storage and security are often overlooked. An innocuous but telling example: In 1987, Chris Honeyman ran a series of performance-based mediator qualification tests for a state agency, using a highly innovative methodology. The oral character of the selection process made the state’s civil service examiners nervous, and they insisted on protocols for retention of the videotapes that were made of each candidate’s test, including a stipulation that the civil service agency, not the hiring agency, would retain these records and would destroy them one year after the hires were completed. The candidates were duly informed of this. But during another exercise six years later, Honeyman needed to check something and blandly requested that the tapes be forwarded over to his office. No problem, of course: they were still in existence. In any busy environment, it takes more than good intentions to ensure that somebody will actually open a locked cabinet on the appointed date and erase the contents. (A sketch of the kind of automated reminder that can help follows this list.)
  • In larger studies, the design phase should take into account the likelihood of ineffective transmission of accurate information about the study, as the circle of those involved grows wider. It may be appropriate to prepare a brochure for use by researchers and practitioners, to include a general description as well as any appropriate warnings.
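The videotape story above suggests one mechanical safeguard: an automated check against agreed destruction dates, so that retention promises do not depend on anyone’s memory. A minimal sketch in Python follows; the record structure and dates are illustrative assumptions.

```python
# A hypothetical automated reminder of the kind that might have caught the
# forgotten videotapes: flag any record past its agreed destruction date.
from datetime import date

records = [
    {"id": "tape-001",   "destroy_by": date(1988, 6, 30)},
    {"id": "tape-002",   "destroy_by": date(1988, 6, 30)},
    {"id": "survey-raw", "destroy_by": date(1995, 1, 15)},
]

def overdue_for_destruction(records, today=None):
    """Return the records whose agreed destruction date has passed."""
    today = today or date.today()
    return [r for r in records if r["destroy_by"] < today]

for r in overdue_for_destruction(records):
    print(f"OVERDUE: {r['id']} should have been destroyed by {r['destroy_by']}")
```

Run periodically, a check like this turns a retention promise into a routine task rather than a matter of good intentions.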

Additional notes

Among the working group of 24, there has been vigorous disagreement over whether, where the project protocols are viewed as a contract, including consequences for any breach of the protocols (e.g., liquidated damages) might improve the building of trust in the first place and the consistency of performance in the second. Some believe this could help to overcome past impressions that only the practitioners suffer when and if a breach occurs; others are concerned that no researcher would tie himself or herself into such a contract, because the research environment is inherently one in which things are delayed, or fail to go as originally expected, for a long list of reasons.

In addition, a couple of techniques specific to dispute resolution research came to light in the course of this discussion, and they seem likely to be useful if more broadly known. They are recorded here for convenience:

  • Where party agreement is considered particularly problematic, the researcher might be allowed access to a number of successive cases, but under terms that provide the parties with the right to decide afterwards whether anything from that case can be used. A guarantee of “no reporting without post-case agreement of the parties” may be sufficient to obtain permission to observe. Since many pre-case fears turn out to be overblown, at least some of these parties will probably agree afterwards to reporting. (We believe that researchers would honor such agreements — not least, for the same reputational reasons why journalists routinely honor embargo and “deep background” conditions that are often imposed on them.)
  • There is evidence that response rates can be improved by some relatively straightforward precautions. One involves getting the collaboration of a different sort of practitioner — i.e. a senior official or institution. In one highly successful instance, a cover letter by the state supreme court’s Chief Justice helped raise response rates by lawyers to an almost unheard-of 75%. In another case, a small gift to the subjects of a particularly demanding survey (a few tea bags), sent along with a note acknowledging that filling out the survey was going to be time-consuming, helped assure respondents that the researcher had not considered their time as valueless, and carried the implication that other aspects of the study were similarly carefully thought out. This also led to a higher-than-usual response rate.

The value of an uncompleted discussion

We are fully aware that the brief conclusions of our group are rough, as descriptions of “what should be.” We believe they will only be improved by the attention and redrafting of other groups, and we encourage readers to see this document as the start, not the finish, of a process. It is important to note with all humility that these thoughts reflect the weight of opinion of a single and ephemeral (though expert and diverse) group. And while membership groups such as ACR may, over time, derive more codified protocols from reconsideration of such early attempts as this one, it should not be assumed that there was universal agreement on all points even among the two dozen people who assembled to consider the problem on this one occasion. Our “findings” are offered as points that deserve consideration, not as hard-and-fast rules. In fact, our field would be wise to be wary of optimism that even the most carefully drafted protocols will actually be adopted broadly and followed reliably. Achieving that goal will require a long-term, vigorous campaign of professional education.

We look forward to that campaign’s beginning. Readers are invited to offer thoughts, amendments and concerns; updates will be posted at Theory to Practice’s web site at www.convenor.com.

1. Kakalik, J. S., Dunworth, T., Hill, L. A., McCaffrey, D., Oshiro, M., Pace, N. M., and Vaiana, M. E. (1996). Just, Speedy, and Inexpensive? An Evaluation of Judicial Case Management Under the Civil Justice Reform Act. Santa Monica, CA: Institute for Civil Justice, RAND Corp.

2. For background, see Test Design Project (Honeyman, C., et al., 1995). Performance-Based Assessment: A Methodology, for use in selecting, training and evaluating mediators. Washington, DC: National Institute for Dispute Resolution. This monograph, along with some of the key papers that led up to it, is reproduced at www.convenor.com/madison/quality.htm

3. The researcher did, however, make a number of discoveries which, when reported back to the program managers, caused a certain amount of resistance; in one instance, she had to deliver the “bad news” that the program’s happy but superficial comparison between (cheap) “inside” mediators and (relatively expensive) independent contractors, which the program thought showed that both groups were doing well, wasn’t quite what it seemed: The inside mediators, it turned out, had been getting pre-screened cases. (In other words, an agency official had already evaluated the case as being likely to be resolved in mediation. This, of course, made for an apples-to-oranges comparison to the unscreened case pool given to the outside mediators.) But the program managers were not especially interested in the reliability of the research conclusions. Especially at junior levels, they were focused on getting wider and deeper acceptability for a program in which they believed strongly, within an agency in which acceptability of so novel an approach was seen as a touch-and-go proposition. Under these real-world circumstances, even with the luck to have a lopsided initial flow of “positive” research results, keeping the research effort afloat took continuing diplomacy.

4. See Honeyman, C. (1998). “Not Good for Your Career.” Negotiation Journal 14: 13-18. Republished on the Web at http://www.convenor.com/madison/career.htm

5. For more information on the Theory to Practice project, please see our web page at www.convenor.com/madison/t-t-p.htm

6. See Honeyman, C., McAdoo, B., and Welsh, N. (2001). “Here there be monsters: At the edge of the map of conflict resolution.” The Conflict Resolution Practitioner (Office of Dispute Resolution, Georgia Supreme Court).

7. Consider this bit of invective:

….academic persons, when they carry on study, not only in youth as a part of education, but as the pursuit of their maturer years, most of them become decidedly queer, not to say rotten; and (those) who may be considered the best of them are made useless to the world by the very study which you extol.

We found it in F. M. Cornford’s Microcosmographia Academica, a satire on academia published at Cambridge in 1908. But Cornford, in turn, got it from Plato. It is truly time-tested.

8. See Honeyman, C. (1999). “ADR Practitioners and Researchers in a ‘Moveable Feast’.” Alternatives to the High Cost of Litigation (CPR Institute, New York), June 1999.