Sunday, November 15, 2015

Blog series on measuring implementation of UN Guiding Principles on business & human rights: "The Measure of . . . . Things--Measurement First Principles and the Business and Human Rights Assessment Project"

(Pix © Larry Catá Backer)

Measuring Business & Human Rights launched a blog series on how to assess and track implementation of the UNGPs.

Contributors were asked to address the following questions:
How can we measure progress in the implementation of the UNGPs? What are the most daunting challenges and/or the most promising solutions?
Do you see progress in the implementation of the UNGPs? If so (or if not), what is the evidence in support of your argument?
See contributions below on a daily basis, leading up to the UN Forum on Business and Human Rights, HERE.  I will also take a stab at the questions.



The Measure of . . . . Things--Measurement First Principles and the Business and Human Rights Assessment Project
Larry Catá Backer

The 2015 United Nations Forum on Business and Human Rights will focus on "Tracking Progress and Ensuring Coherence." Business and human rights stakeholders are increasingly focused on taking the measure of that project.  An influential stakeholder, Measuring Business & Human Rights, launched a blog series on how to assess and track implementation of the UNGP, asking contributors to address the following questions:
How can we measure progress in the implementation of the UNGPs? What are the most daunting challenges and/or the most promising solutions?
Do you see progress in the implementation of the UNGPs? If so (or if not), what is the evidence in support of your argument?
This is an essential task; one that cannot wait for the languid processes of national action plans and corporate policy statement implementation.  And yet, perhaps, patience is required.  For in the rush to measure, those who measure might overlook or misapply those critical first principles on which sound and effective measurement is founded.  In the absence of consensus on a well theorized set of first principles of measurement, the measurement project will produce much data, and even more interpretation, but these will serve no legitimate ends.  Thus, when one considers the measurement landscape today, one sees much "progress" in the form of measurement toolkits, measurement metrics and techniques.  But it is not clear that there is consensus on those critical first principles--why measure (its ends), what is measured (the objects of measurement), and how that measure is taken (the techniques of measurement). Without consensus on these first principles there can be little hope of consensus about the most important object of this well-structured data harvesting--to interpret the data in light of clear objectives against which the data provides a sound measure.

To measure something requires a sure knowledge of what is measured--with some precision.  It also requires a sense of the reason for, or objectives of, that measure.  Both are limiting principles.  To decide that we measure something is to decide, and quite definitively, that we are not measuring anything else.  But the measure excluded might be critical to a sense of the purpose of our knowledge.  Still, one cannot measure everything; the choice, however, is a normative exercise of the first magnitude.  To choose to measure one manifestation of the business and human rights project and not another is to make a choice about what exists--for purposes of measurement--and what has ceased to exist. That choice is usually made against the objectives of the measure--"progress in the implementation of the UNGP" requires a set of measurement data quite distinct, for example, from measuring the efficacy of human rights due diligence in the extractive industries.  If the decision is made to assess "everything," the resulting complexity and the burdens of the process of harvesting data increase the transaction costs of compliance, and also of producing goods, either shifting or reducing the productive capacity of society (in the aggregate; sometimes with disastrous effects for specific firms). Those choices are, in a sense, normative.  The temptation to use the data harvested for the former to buttress interpretations of progress in the latter constitutes the usual sort of misuse of metrics that tends to plague many assessment projects. And this one would be no different.

As important, the techniques chosen and the standards against which the measure is taken provide the last two of the great first principles.  Assessment requires both objective and standard.  The objectives, or purposes, provide the direction vector for choosing among the data that might be harvested.  The standard provides the point of comparison, a point of reference.  But that point of reference can be many things.  At one end, it can represent the aspirational ideal to be attained. . . at last.  At another (not the other) end it can represent the aggregate performance of the set of actors from which data is extracted. One suggests the need for movement toward a goal; the other suggests the discipline of the moving average--without a set referent.  The choice of standards, then, also includes a normative element, disguised as a ministerial determination. Lastly, techniques are the essential means of translating the objectives and framing of the assessment project into methods for most usefully harvesting data.  But that also includes normative elements.  Harvesting that is burdensome can produce incentives to avoid producing harvestable data.  Harvesting that is complex creates larger risks of fabricated information. But all data harvesting provides a point of corruption.  The temptation to provide monitoring and data harvesting organizations with "good" data has plagued all private enterprise-centered monitoring of downstream supply chain partners.

The measurement project of the business and human rights enterprise in general, and of the UNGP in particular, finds itself on the verge of multiple sets of increasingly sophisticated metrics, of systems of measure, in a context in which one ought to ask whether or to what extent first principles have been rigorously built into the architectures of measurement now in the process of implementation.  I briefly suggest some points where such consideration might be useful.

What we measure.  The focus of measurement is on qualitative measures.  The focus is on the measurement of formal compliance--are policies and rules in place, does their content satisfy formal requirements, have they been endorsed by management, have they been embedded in operations, etc.  These formalist measures tend to focus on intent and the construction of an institutional infrastructure.  But they do not measure much that is measurable in a way that is useful--other than as a means of determining the extent of formal compliance.  Moreover, such qualitative measures can be compared--which companies have undertaken formal compliance and to what extent.  But they can measure little else that is comparable--either to other companies or to a standard. And the choice to measure only corporate compliance makes invisible the driving power of the UNGP--the state duty to protect human rights.  The invisibility of that measure substantially hobbles the analytical power of techniques that are supposed to reach the question of the efficacy of the UNGP as a whole.  Instead they provide evidence of what many suspected--that these are merely a means of avoiding the state and its duty in favor of privatizing legal compliance in private actors.

How we measure.  Qualitative measures, of course, have other consequences.  They tend, for example, to exacerbate tendencies to segment analysis.  Data becomes constrained by sectoral or even geographic barriers.  One can move only slightly from data harvesting to analysis--precisely because, beyond formal compliance, there is little to analyze.  That may be sufficient as an initial matter--but it is hardly worth the cost of an assessment architecture.  And, of course, qualitative measures do not get to the issue of risk--they do not measure, in a meaningful way, those things that data generators--and remedial institutions--care about. And beyond that, comparability is sacrificed.  In this way one constrains the potential of the human rights due diligence project to technique rather than measure.

Why we measure. Ostensibly one measures for compliance.  But that is not clear in many of the measurement architectures put forward for adoption.  One measures to assess, but assessment on qualitative measures is a weak reed and unsuitable for measurement in context.  One has no way of measuring compliance in the absence of a sense of either an idealized or aggregated consensus of compliance.  The sense that formal compliance necessarily provides "results" is a dangerous assumption.  And of course, one must ask, and ask again, where is the state in these metrics, in these techniques, in these compliance frameworks that are meant to measure a three-pillar structure that is itself understood as highly integrated.  If the state is absent, then measurement has little to do with the efficacy of the UNGP.  Perhaps it has only to do with the efficacy of the 2nd pillar.  But if that is the case, the absence of first-principle consensus makes data identification difficult and interpretation impossible. This confusion is augmented by ambiguity in the structuring of measurement frameworks.

Unintended consequences? Assessment measures skew behaviors.  This is an old insight--and perhaps the intended though unstated purpose of the measurement structures put forward.  But if this normative standard is at the base of the measurement project, then one would expect a more rigorously pointed set of measures--from data identification, to standard, to objective, to analysis--so that the entities measured might more clearly understand the ways in which their behaviors are meant to be modified, and can better assess the costs of and means to compliance. And of course one wonders about the legitimacy of a project in which normative projects are undertaken through ministerial or technical measures from which democratic consultation tends to be absent. And data harvesting not coordinated with the state can quickly run into issues of incoherent state information regimes (just consider the growing differences between the U.S. and the E.U. on this score). Lastly, of course, the unintended and peripheral consequences of data harvesting and analysis must be considered.  This is particularly so with respect to issues of risk and liability--a place where the 1st and 2nd pillars meet (in the 3rd). The normative consequences of ministerial and technocratic projects (assessment) thus implicate basic legal norms that vary across jurisdictions in ways that remain underexplored.

Taken together, the project of measurement is a crucial advancement in the implementation of the UNGP.  But it is, in its own way, as delicate a project as the initial development of the UNGP themselves.  Measurement product differentiation--a marketplace of data harvesting approaches that are incompatible, partial, and grounded in substantially distinct objectives, tested against different and shifting standards--may, in the short run, cause more confusion than help.  And that may set the project of the UNGP back rather than forward, drowned in a growing heap of collected data that, in large measure, remains incomprehensible or irrelevant.  This is not to suggest an abandonment of the measurement project, or that the project to date has produced no good.  Quite the contrary.  But the UNGP measurement/assessment project is at that stage where a bigger-picture, coherence-enhancing, pillar-coordinating meta-measurement framework, built out of the bits and pieces produced to date, is long overdue.  It is possible. . . if there is sufficient will.
