Is it necessary to evaluate? It is necessary to measure

Image: Plato Terentev

By ARLEY RAMOS MORENO*

The intellectual and scientific production carried out in our public university

Here is a question and a statement, placed as a title, which can be applied to the intellectual and scientific production carried out in our public University. The theme is topical and, for this very reason, raises controversies that only discussion and deeper reflection can clarify and, perhaps, calm.

Like every question, the one we pose here springs from a doubt: is it, in fact, necessary to evaluate academic production? It seems to me that evaluation is, in this case, very important, for several reasons. I would like to underline just two of them, among the many possible.

The first concerns the social importance of the results of academic research: the quality of the products created at the University has a direct impact on the quality of life of society – when, of course, these results are disseminated and properly implemented by the public authorities and the various mechanisms responsible for their transmission. Carrying out periodic and systematic evaluations of research results is thus a matter of fulfilling an ethical duty to the society that maintains the public University. An ethical duty not only to evaluate, but above all to seek to improve the quality of research products: evaluation should be an instrument for improving research, not an end in itself.

From this point of view, the importance of evaluation is linked to the public nature of the university institution, which cannot fail to render accounts, in this way, to those who place their trust in it in the form of public taxes.

Now, this assessment is quite different from another, also very important, which is aimed exclusively at the internal development of research itself in the various areas of knowledge. In this second case, evaluation takes different forms, related to the areas and the various activities that make up the University. It is a matter of keeping the members of the academic community aware of the state of research in their specific areas. The forms of internal evaluation in each area are consensual and normally do not lead to major controversies beyond those that oppose different explanatory models from a theoretical point of view; in that case, the conflicts are part of the very evolution of the areas of knowledge.

Controversies and disputes proliferate, however, when university institutions, in evaluating academic production in order to be accountable to society, enter into competition for funding from research funding agencies. The point is that, by demonstrating their academic merit and excellence, institutions become, in the eyes of the agencies, more or less worthy of new funds to develop research. The conflict arises at this moment because the evaluation method must be standardized – applied indistinctly, with supposed objectivity, to all areas of research – despite the great diversity that exists among them. And here we come to the second point, the one we put in the title in the form of a statement: it is necessary to measure.

There is no doubt that the activity of measuring is indispensable to the organization we impose on our experience of the world at large. Through it we become capable of comparing the most diverse objects and events with one another, establishing standards and norms through theories and techniques that are at once conventional and consensual. With this, we create temporal, spatial, gravitational and energetic units, and also units of other types, such as samples of colors, shapes and sounds, or even samples of objects, actions, situations and even of psychological states and sensations – for example, through behaviors instituted as samples.

There are various techniques for creating measurement standards, some allowing precise comparisons through numerical quantification and others allowing analogical comparisons. There is, however, no difference in nature between the various types of comparison technique, only differences in the degree of precision with which their units measure. They are all techniques for measuring and, thereby, comparing objects and events – none of them authorizing a judgment on the value of the objects measured. Here we reach the controversial point of the process of evaluating academic production.

Numerical measurement techniques are quite suitable for organizing objects and events that can be segmented into discrete units and thus be assigned numbers. Despite its tautological character, this statement does not always seem to be well understood – for example, when numerical measurement is indiscriminately generalized to other objects and events that do not share this characteristic. It is as if numbering techniques could capture qualitative properties of the objects themselves, beyond merely allowing measurements and comparisons through arbitrary associations between conventional units and numbers – a bit like what the members of the Pythagorean sect thought centuries ago, though they did so with more finesse and depth.

The controversy surrounding academic evaluation lies in asserting that, contrary to the generalization indicated above, numerical quantities do not express "qualities" of the objects and events measured – in our case, that the quantification of academic production cannot express its quality. Numerical quantities can only express the construction techniques of the standard units themselves, never qualities of the objects and events produced in the academy – the cultural products of research, teaching and extension. Yet what measurement intends, in this case, is to evaluate academic production through the numerical quantification of standard units established with very little consensus – which, in addition to being controversial, is a serious theoretical mistake.

In fact, not only could the standard units be quite different from those being proposed, but the reduction of quality to quantitative units is a philosophical illusion that, in the 21st century, echoes its pre-Socratic origins.

Everyone who lives in academia is well aware of the various attempts to reduce the quality of academic production to quantitative units. Let us take just one example, among the most spectacular, from the new discipline devoted especially to counting the quality of scientific production: scientometrics. This is the concept of impact. How can the cultural repercussion of an academic work, its insertion and theoretical influence in the scientific community, be evaluated? To this end, measurement units were created based on the number of citations of published works. The assumption guiding this measurement standard is that the higher the number of citations, the greater the influence of the work in the community, the greater its impact and, therefore, the better its quality.
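To make the counting logic behind such metrics concrete, here is a minimal sketch in Python with purely hypothetical citation figures. It computes a raw citation count and, as a further illustration not mentioned in the text, the widely used h-index; note that both calculations register only countable citation events and never touch the content of the works themselves.

# A minimal sketch, with purely hypothetical figures, of how citation-based
# "impact" metrics are computed: they count citation events and nothing else,
# so nothing in the calculation reflects the content or quality of the works.

def citation_count(citations):
    """Total number of citations received by an author's papers."""
    return sum(citations)

def h_index(citations):
    """Largest h such that the author has h papers cited at least h times each."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for position, count in enumerate(ranked, start=1):
        if count >= position:
            h = position
        else:
            break
    return h

# Hypothetical citation counts for five papers by one author.
papers = [25, 8, 5, 3, 0]
print(citation_count(papers))  # 41 citation events, whatever their motive
print(h_index(papers))         # 3: three papers cited at least three times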

As can be seen, the mistake lies in the assumption that countable units have the capacity to express qualities by the simple fact of being countable – with the corollary that bigger numbers express greater qualities, bigger being synonymous with better. Even Heraclitus would have his tomb shaken by such a contradiction...

In fact, the number of citations of a published work – in an international journal, in English, Qualis A – expresses only the number of times the article was cited by other authors in other articles published in international journals of equal academic standing; it does not, nor could it, express its quality. The concept of impact does not even guarantee that the work has actually been read by those who cite it, much less indicate that its content has been adequately assimilated by those who read it. The number of citations of a work expresses the sociological circumstances of the academic areas involved much more than the quality of the work. Important information for the sociologist of science, without a doubt, but of no use to the legislator who intends to issue rules of conduct regarding quality. Unless, of course, the legislator has in mind the political influence that can be exerted to gain academic power.

Note, in fact, the political use that can be made of any measuring instrument when one intends to deliberate, in a supposedly objective way, on the quality of the social and academic work of the individuals concerned – deliberating and imposing norms as criteria for their survival. Here, however, we enter another domain that we will not explore now.

It seems natural, therefore, and even healthy, that there is much controversy around the issue of evaluating academic production. For if measuring is necessary to better understand the world around us, we may still ask whether measurement techniques based on quantification are really appropriate for expressing the quality of academic work. And, equally, when we take into account the great diversity of the areas of knowledge that make up the University, we may also ask whether the normative criteria being presented are adequate to that diversity.

If, on the one hand, it is necessary to standardize the criteria of comparison in order to be able to account to society for the value of academic production, on the other hand, judging this value through the application of current techniques seems to be an obstacle that has not yet been overcome: these techniques adapt well to objects and events of a physical nature, but very poorly to cultural or symbolic objects and events, such as the products of academic activity. If we do not recognize this difficulty, the new science of metrics might as well be renamed Scientology...

Finally, let us return to the two initial points, placed as the title of this text, now in the form of two questions: to evaluate and to measure.

Yes, it is necessary to evaluate the academic production of the public university, both for ethical and social reasons and for theoretical reasons internal to each area of knowledge. But here the first difficulty arises: if, in the second case, the criteria can be reasonably consensual, in the first, on the contrary, the standardization of evaluation criteria presents a difficulty that is, so far, far from being overcome. And it will remain so if the standards adopted for standardization are only and exclusively those used to quantify natural processes – as is done by the natural sciences that study physical matter, chemical and biological processes, and so on.

On the other hand, if the activity of measuring is essential for us to know the natural world that surrounds us – as natural scientists do, supported by mathematicians and logicians – we still need to develop techniques for judging the quality of the cultural and symbolic products that constitute academic activity. We are far from this.

Lastly, the third aspect we merely point to is the political, social and academic use that can be made of standard measurement techniques. In our case, these are techniques that present norms for determining the quality of academic production – norms that are, in turn, presented as a condition for the survival and development of the individuals and institutions that make up the University. This aspect will be a source of difficulty for some academic areas or groups, just as it will be a weapon of power for others – as in every political dispute in which what is at stake is not collaboration but only competition between peers. Hence the importance of expanding and deepening the discussion around this controversial issue.[1]

*Arley Ramos Moreno (1943-2018) was a professor of philosophy at Unicamp. Author, among other books, of Introduction to a philosophical pragmatics (Unicamp Publisher).

 

Note


[1] Allow me to refer the reader to an article in which I develop some of the ideas presented here in summary form: "The area of humanities in the era of the technological university", published in the book Human formation and education management: the art of thinking threatened (Cortez, 2008).
