CFP: "Research Evaluation : Reopening the Controversy" - Special issue for Quaderni - Deadline: Feb. 1, 2011

Dear Colleagues,

Please find below a call for papers for a special issue on "Research
evaluation: reopening the controversy", to be published in Quaderni. The
special issue is coordinated by Séverine Louvel (PACTE, CNRS and Grenoble
University).

http://www.pacte.cnrs.fr/IMG/pdf_Quaderni_Call_for_papers_ResearchEvalua...

Apologies for any cross-posting.
Kind regards,
Virginie Tournay

Scientific committee:

Nicolas Dodier (Inserm/GSPM, CNRS-EHESS),
Lars Engwall (Uppsala University),
Frédéric Forest (Paris 7/Rouen University),
Olivier Godechot (CMH – CNRS/Ecole Normale Supérieure),
Michèle Lamont (Harvard University),
Liudvika Leišytė (Twente University),
Séverine Louvel (PACTE, CNRS and Grenoble University),
Christine Musselin (CSO –Sciences Po Paris and CNRS).

- Deadline for submission of abstracts (max. 8,000 characters): February 1st,
2011

Research evaluation: reopening the controversy
Research evaluation has become an intrinsic part of public research policy, and
is built into the technologies that nation states now use to pilot, guide and
regulate social groups, and to control their activities at a distance[1].

Research evaluation is based on three notions:
- Sociotechnical arrangements (standards, procedures, criteria, indicators,
measurements, etc.) that aim to ensure a “mechanical objectivity”[2] free
from the subjectivity inherent in personal judgment;

- A definition of power based on contractual agreement (self-assessment,
reversal of roles between assessor and assessed) and on the production and use
of expert knowledge[3];

- A Weltanschauung - both a world view and a value system - designed to guide
action and structured around the values of competition, profitability and
performance.

Research evaluation is the product of some well-documented historical movements:
the long-term emergence of measurement in public policies[4]; the appearance in
the 20th century - in both the industrial order and the public sector - of such
touchstone dogmas as “Total Quality Management”, “Performance Management” and
“benchmarking”, which enshrine competition as an organizing principle[5];
finally, the consolidation of this trend in western states (and the EC) since
the 1980s with the adoption of the New Public Management doctrine (in higher
education and research, but also in health, justice and so forth), whose
principles of profitability and accountability[6] are directly inspired by
management thinking and practice in the for-profit sector.
The impulse behind this special issue of Quaderni grew from the observations
that neither the uses nor the effects of research evaluation have received
sufficient analysis, and that this gap does not encourage debate around recent
events[7]. While it might appear that a consensus has emerged as to how research
evaluation should be characterized - and that this consensus has gained
general legitimacy among scholars - other voices suggest there is
still a debate to be had. Thus:

- On sociotechnical arrangements: the notion of “mechanical objectivity” has
been described as ontologically oriented - assessment procedures are seen as
giving rise to a clutter of new entities (“publication rates”, “scope of
influence of research”, “proportion of contract-based funding”, etc.) which
decide the survival or disappearance of existing entities.
But this interpretation projects too monolithic a vision of contemporary
sociotechnical assessment arrangements - in reality, these are highly
diversified, both at the international level[8] and depending on the entity
under consideration (journals, scientists, laboratories and institutions).
Moreover, it suggests the superiority - even the idealization - of previous
forms of assessment (e.g. the collective judgment of peers), perhaps at the cost
of simplifying the process[9].

- Power is seen as tyrannical[10], as if claiming the right to ultimate
knowledge and the ability to enforce subservience to some ‘project for
domination’.
But this view tends to imply a fundamental schism between knowledge and power,
and to characterize research evaluation as purely an exercise in keeping
scientific communities at bay. Power is seen as impersonal (European,
bureaucratic), with experts[11] having only a ‘behind the scenes’ role at
specific stages, despite the fact that, in reality, they play a key role in how
evaluation actually unfolds.

- The Weltanschauung enshrined in assessment is held to include a set of
anthropological prescriptions that appear to call for the “creation of a new
type of researcher”[12], one who has no critical sense, merely relays
conformist thought, and largely abides by management injunctions.
But this interpretation does not consider disciplinary mechanisms, or give any
sense of either their enforcement or of resistance against them. Nor does it
take on board the view that researchers may not be merely victims, or that there
has been both agreement with, and strategic buy-in to, the new arrangements.

Contributions to this special issue of Quaderni will therefore go beyond such
relatively mechanical or Manichaean interpretations and re-open the whole –
potentially controversial – debate around research evaluation. They will analyze
the variety of its uses and effects, both by and on different public research
stakeholders: scientists (in their role as researchers, members of journal
editorial boards, doctoral supervisors, project coordinators, etc.); expert
assessors; research institute managers; ministerial bodies; the media, etc.

- Contributions may come from a variety of disciplines: sociology,
political science, information science, anthropology, etc.

- Empirical analyses will focus on one (or more) assessment procedures:
criteria, grades, indicators, bibliometrics, rankings, recommendations,
appraisals, jury decisions, visiting committees, etc., and may relate to
researchers, teams or departments, journals, establishments, program committees,
etc.

- Articles will go back over the history of the procedure under study,
in order to specify modes of dissemination and standardization of research
evaluation.

- They will question which arguments and actions might achieve closure
in the controversy around assessment, and how they relate in particular to the
continuation of classical political decision-making models, both linear and
rational[13].

- They will focus on the uses and effects of research evaluation
procedures by choosing one or other (or several at once) of the following
angles:

- Sociotechnical arrangements. The emergence, stabilization and circulation of
socio-material or discursive entities: an imposed process? “Rank-A journals”,
“four-star departments”, “high socio-economic impact research”: authors are
invited to compare the development of these entities across procedures,
disciplines, establishments, etc., so as to decouple discourses from practices
and reintroduce processes, uncertainties and path-dependence in place of
deterministic interpretations. Comparisons over time (in particular with
older-style peer-review mechanisms) will also foster discussion around the
efficacy of the “mechanical objectivity” which seems to be the aim of current
research assessment procedures.

- Power. Recomposing the relationships between research stakeholders: the
unquestioned diktat of policy? Where many previous contributions on this subject
have seen power as impersonal, we hope articles in this special issue will
examine the complexity of the power issues raised by research evaluation,
identifying the actors involved in the process and considering how they exercise
their prerogatives. Rather than simply concluding (in a general fashion) that
there has been a unilateral increase in the control of managers over scientists,
authors are invited to look at the shifts in spheres of influence in the public
arena (ministerial bodies, hierarchies of institutions, scientific bodies and
their representatives, program committees, assessment agencies, etc.) and beyond
(the socioeconomic world, rating agencies, think-tanks and the media).[14]

- New “Weltanschauung” and new values. Standardization of behavior - or
strategic conversions to the new evaluation dogmas? Historically, professionals
have used quantified measurements of their activities to establish their
legitimacy[15] and to defend their professions before politicians. Scientific
communities are far from speaking with one voice on the subject of
assessment[16] - and certain researchers have managed to keep one step ahead of
assessors by calculating their own individual scores (the “h-index”, Google
Scholar citations, etc.). Special issue articles might look at how certain
scientists have taken active roles in the emergence of the “new researcher”,
supporting the new forms of assessment, particularly their quantitative methods.
Other options for investigation could include the increasing influence of
experts (such as membership of assessment agencies’ decision-making bodies) and
how “ecologies of practices” have been transformed by the need to satisfy new
assessment procedures[17], which can be sensed in research program launches,
answers to calls for tenders, the setting up of partnerships, etc.

Finally, articles may examine whether the new forms of assessment will modify
the professional rhetoric of researchers: reforms can push certain actors to
adopt a denunciatory approach which characterizes research evaluation as an
attack on researchers’ professional legitimacy and the whole scientific ethos.
On the other hand, some scientists are finding new sources of legitimacy in
these reforms. But do the shifts in practices and messages occasioned by new
forms of evaluation risk reviving old dividing lines in academia? In particular,
it will be interesting to look back at how the power relationships and
legitimacies between specialties, disciplines, establishments, generations, etc.
have developed since the institution of the new assessment procedures.

How to submit a contribution?
Interested authors are invited to submit an abstract (not exceeding 8,000
characters) which should indicate: the main argument to be developed in the
paper; its empirical basis and theoretical frames; and the paper’s five main
bibliographical references. Proposals may be made in English or in French, and
the abstracts should be sent as an attachment to severine.louvel@iep-grenoble.fr
or to quaderni_researchevaluation@yahoo.fr (any additional queries may be sent
to the same addresses).
Please provide the contributor's name(s), department and professional
affiliations, address, phone number and e-mail address in the body of the e-mail
message.
Proposals for papers will be evaluated by an international scientific committee
selected by Quaderni’s editorial board specifically for this issue.


- Deadline for submission of abstracts (max. 8,000 characters): February 1st,
2011
- Selection of abstracts: March 15th, 2011
- For selected abstracts, full papers (in English or in French, max. 35,000
characters) to be sent by October 1st, 2011.

________________________________

[1]Porter, T. M. Trust in Numbers: The Pursuit of Objectivity in Science and
Public Life. Princeton, NJ, Princeton University Press, 1995.
[2]Ibid.
[3]Miller, J.-A. and Milner, J.-C. Voulez-vous être évalué ? Paris, Grasset
2004.
[4]Desrosières, A. La politique des grands nombres. Histoire de la raison
statistique. Paris, La découverte, 1993.
[5]Bruno, I. A vos marques, prêts… cherchez ! La stratégie européenne de
Lisbonne, vers un marché de la recherche. Collection Savoir/Agir. Paris,
Editions du croquant, 2008.
[6]Garcia, S. "L'évaluation des enseignements : une révolution invisible." Revue
d'Histoire Moderne et Contemporaine, volume 55-4bis, number 5, 2008, p. 46-60.
Ferlie, E., Musselin, C., et al. "The steering of higher education systems: a
public management perspective." Higher Education, volume 56, number 3, 2008, p.
325-348.
[7]Here we refer to the creation of national assessment procedures (the
Research Assessment Exercise in the UK, the Valutazione triennale della ricerca
in Italy, the Standard Evaluation Protocol for public research organizations in
the Netherlands, the Agence d'évaluation de la recherche et de l'enseignement
supérieur in France, etc.), the proliferation of rankings for institutions
(Shanghai, Times Higher Education, etc.) and journals (the European Science
Foundation’s ranking of social science journals, the ‘European Reference Index
for the Humanities’ (ERIH), etc.).
[8]Whitley, R. and Gläser, J. (ed). The Changing Governance of the Sciences.
Sociology of the Sciences Yearbook. Dordrecht, Springer, 2008.
[9]Lamont, M. How Professors Think: Inside the Curious World of Academic
Judgment. Cambridge, Harvard University Press, 2009.
[10]Zarka, Y.-C. "L'évaluation : un pouvoir supposé savoir." Cités, volume 37,
number 1, 2009, p. 113-123.
[11]Vilkas, C. "Des pairs aux experts : l'émergence d'un « nouveau management »
de la recherche scientifique ?" Cahiers internationaux de sociologie, volume
126, number 1, 2009, p. 61-79. Garcia, S. "L'expert et le profane : qui est juge
de la qualité universitaire ?" Genèses, volume 70, number 1, 2008, p. 66-87.
[12]Gori, R. "Les scribes de nos nouvelles servitudes." Cités, volume 37,
number 1, 2009, p. 65-76.
[13]Sfez, L. Critique de la décision. Paris, Presses de Sciences Po. First
edition 1973, fourth edition 1992. Sfez, L. "Evaluer : de la théorie de la
décision à la théorie de l'institution", forthcoming in Cahiers Internationaux
de Sociologie.
[14]Whitley, R., Gläser, J., and Engwall, L. (ed). Reconfiguring Knowledge
Production: Changing Authority Relations in the Sciences and Their Consequences
for Intellectual Innovation. Oxford, Oxford University Press, 2010.
[15]Porter (1995) op. cit.
[16]Mérindol, J.-Y. "Comment l'évaluation est arrivée dans les universités
françaises." Revue d'Histoire Moderne et Contemporaine, volume 55-4bis, number
5, 2008, p. 7-27.
[17]Stengers, I. La vierge et le neutrino. Les scientifiques dans la tourmente.
Paris, Les empêcheurs de penser en rond, 2006.

Séverine Louvel
Associate professor of sociology at Sciences Po Grenoble
Researcher at PACTE (UMR CNRS, Grenoble University)
severine.louvel@iep-grenoble.fr
http://www.pacte.cnrs.fr/spip.php?article331