Indicators, Rankings and the Political Economy of Academic Production in International Law

Published online by Cambridge University Press:  07 March 2017

Type: EDITORIAL

Copyright © Foundation of the Leiden Journal of International Law 2017

Rankings and indicators have been with us for a while now, and have increasingly been objects of attention in international law. Likewise, they have been with the LJIL as a journal for a while now. Our so-called ‘impact factor’ is, to my untrained eye, the most prominent feature on the LJIL webpage hosted by our publisher.Footnote 1 So this editorial is something of a rearguard action. The indicators and rankings, however, keep piling up. Proliferation may afford a perverse sort of optimism, about which more below, but, as will be clear, I do not share it. The increasing number and command of indicators and rankings reflect a consistent trend and a bleak mode of knowledge production. Knowledge production has been a topic in these pages recently, for instance in Sara Kendall's excellent editorial on academic production and the politics of inclusion.Footnote 2 I mean to continue in that vein, with respect to other aspects of the political economy of the academic production of international law, especially at a nexus of publishing, scholarship and market practices. There is an undeniable element of nostalgia in what will follow, but I do not really mean to celebrate the publishing industry, status quo ante, that has put me in this privileged position to wax nostalgic. The academic publishing business is flawed. What we are preparing the way for is worse. When I say we, I mean to flag my complicity, both as an individual researcher and as an editorial board member. I use the word complicity to convey a personal anxiety, also in my role as editor, so let me be clear: the LJIL board has no policy concerning rankings, and rankings have never influenced review at the journal. Moreover, while I cannot claim to speak for the LJIL as a whole on rankings or any other matter, mine is not exactly a dissenting voice on the board. The tone of this polemic is mine alone; the concern is not.

The indicator scheme that occasions my screed is Google's h-index, and I will refer to it to illustrate the operation of indicators in the context of academic production. I will also refer to one other scheme, namely Thomson Reuters' Journal Citation Report ranking and Impact Factor indicator (the Impact Factor is one part of the overall Journal Citation Report), because these are the more established in the field. The latter, as noted, is prominent on our webpage. Google's h-index is not yet a coequal, but it is gaining attention. And that is enough: as will be clear, indicators and rankings work on the basis of dispersed ‘attention’ that produces self-disciplining behaviour, rather than any more classically formal exercise of control. I will offer more details in just a moment. First, let me lay out my concerns in brief (to return to them again after more detail).

Much of my agitation will be familiar from the burgeoning literature in international law and governance addressing indicators, for instance as investigated by Sally Engle Merry, Tor Krever and others (while their literature is not typically directed at sites of academic production per se, much of their analysis remains relevant).Footnote 3 Perhaps more than anything else, the reduction of the perceived quality of a journal's contents to that journal's reputation, and the reduction in turn of the journal's reputation to a quantifiable equivalence, evince a mode of knowledge production that serves one master, and it is not the critical interlocutor of international law. Normative ordering on the basis of numerical abstractions, such as the Impact Factor and h-index, reproduces economic logics; academic production in conformance with the same reinforces the market practices with which the h-index coincides, and entrenches its discontents.

Google's h-index privileges Google's position in one such market in particular, albeit an enormous one affecting countless other markets: namely, the global digital information market. The Journal Citation Report has long done the same for Thomson Reuters. Thomson Reuters' dominance has fed a corporate information operation of global scope, conditioning academics to compete on the same basis as stockbrokers and security analysts in a single global information market driven by profit. The cost of Google's privilege can likewise be anticipated from top to bottom, so to speak, including the potential for consolidated infrastructural dominance, diminished institutional and individual autonomy, and fewer dissenting voices. Google's bid for dominance at the top, like Thomson Reuters', will be reinforced by the capacity of its indicators, by design, to induce self-disciplining behaviour favourable to its own consolidated market position. Effected pursuant to already-existing structural inequalities, that self-disciplining behaviour induces individuals, elites and non-elites alike, to participate in an unequal distribution of values and in the perverse logics necessary to sustain it. For reasons of space and simplicity, I will use the example of the Journal Citation Report and Thomson Reuters to describe aspects of the market dynamic in and by which they operate. Thereafter, I will describe the technicalities of Google's h-index, and make clear how the particular technology of the indicator can serve a particular market position. Google competes with Thomson Reuters, among others, for that position. After observing those details, I will return to what I see as broader ramifications of their competition and my and our acquiescence to it.

First of all, who is Thomson Reuters? It is a multinational media and information firm headquartered in New York City with over US$31 billion in assets. In Thomson Reuters’ own publicity literature, the corporation is ‘the world's leading source of intelligent information for businesses and professionals’.Footnote 4 As of December 2016, Thomson Reuters sells products and services in the following eight corporate divisions: Financial; Risk Management Solutions; Intellectual Property; Legal; Reuters News Agency; Pharma & Life Sciences; Scholarly & Scientific Research; and Tax & Accounting.Footnote 5 The Financial products are intended to ‘generate the largest returns’;Footnote 6 the Intellectual Property products to ‘[m]anage, protect, and drive the value of your IP assets’;Footnote 7 Pharma & Life Sciences to ‘help your pipeline flourish and your business grow’;Footnote 8 the Tax & Accounting unit to ‘make your work easier, faster, and more profitable’; the Legal unit helps ‘[Thomson Reuters] customers meet client demands for increased efficiency and greater value’;Footnote 9 while the Reuters News Agency is more about entertainment, intended to ‘Build and engage your audience with real-time breaking news and high-impact global multimedia content.’Footnote 10 The Risk Management Solutions unit is more of the same, intended ‘to help you manage the challenges you face – and seize the opportunities’, but with a twist. Its products represent intelligence assessments on individuals globally, assessing and listing persons under categories for terrorism and terrorist financing, organized crime, money-laundering, and what they call politically-exposed persons.Footnote 11 They operate by monitoring ‘over 300 sanction and watch lists and hundreds of thousands of information sources 24 hours a day, often identifying high-risk entities months or years before they are listed’.Footnote 12 These are neither random nor unrelated enterprises. Together they represent the efficient and calculated use of US$31 billion in assets for maximum profit in a market predicated on information consumption. This is the market to which the indicators under discussion apply; academic production yields one commodity in this market. It is against this backdrop of profit, surveillance and security that Thomson Reuters' Scholarly & Scientific Research unit promises ‘to help you [the academic] reach your goals and broaden your impact’.Footnote 13

Let me now try to explain the mechanics of Google's h-index, which employs a simpler calculus than the Thomson Reuters products, and which will illustrate Google's competing market position. Google offers the following definition:

The h-index of a publication is the largest number h such that at least h articles in that publication were cited at least h times each. For example, a publication with five articles cited by, respectively, 17, 9, 6, 3, and 2, has the h-index of 3.Footnote 14

Thus the index marks the point at which the number of articles and the number of citations per article coincide: the largest number h of articles that have each been cited at least h times. In short, the h-index primarily incentivizes a maximal number of articles published, because each next article is another chance at more citations, and there is no penalization for additional articles, even when they are not cited. As long as there are two articles cited two times, the h-index remains 2, no matter how many additional, uncited articles a journal also publishes. And the moment three articles are cited three times, the h-index score goes up to 3. Likewise unto infinity, which Google's servers (among very few others, such as Thomson Reuters, Amazon.com and the US National Security Agency) are prepared to handle for an infinite number of publications.
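
To make that calculus concrete, here is a minimal sketch in Python – my own illustration of the definition quoted above, not Google's code – which also demonstrates the point that uncited articles carry no penalty:

```python
def h_index(citations):
    """Largest h such that at least h articles are cited at least h times each."""
    ranked = sorted(citations, reverse=True)  # citation counts, highest first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count >= rank:  # the article at this rank still 'covers' its rank
            h = rank
        else:
            break  # counts only fall from here, so h cannot grow
    return h

# Google's own example: five articles cited 17, 9, 6, 3 and 2 times.
print(h_index([17, 9, 6, 3, 2]))              # 3
# A hundred additional uncited articles change nothing:
print(h_index([17, 9, 6, 3, 2] + [0] * 100))  # still 3
```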

There is one other key indicator, the h-median, which, in turn, is based on what Google calls the h-core. The h-core:

is a set of top cited h articles from the publication. These are the articles that the h-index is based on. For example, the publication above has the h-core with three articles, those cited by 17, 9, and 6.Footnote 15

The h-median, then, is ‘the median of the citation counts in its h-core. For example, the h-median of the publication above is 9. The h-median is a measure of the distribution of citations to the articles in the h-core’.Footnote 16 Google ranks journals according to their h-index score. The h-median is indicated for each ranked entry on the list, though the list is ordered only according to the h-index. Thus Google prioritizes two things: maximal citations and maximal articles. The allowance for constantly more articles without penalty indicates that the privilege goes to maximal publication first, and maximal citation second. The number of articles is counted cumulatively in two sets, one over the lifetime of the journal, and another over the last five years; thus not in either case by volume or issue.
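
Continuing the sketch – again only as an illustration of the published definitions, under the same assumptions as above – the h-core and h-median follow mechanically from the same ranked list:

```python
import statistics

def h_core(citations):
    """The h most-cited articles, where h is the h-index."""
    ranked = sorted(citations, reverse=True)
    # Count the ranks at which the citation count still meets the rank.
    h = sum(1 for rank, count in enumerate(ranked, start=1) if count >= rank)
    return ranked[:h]

def h_median(citations):
    """Median citation count within the h-core."""
    return statistics.median(h_core(citations))

# Google's example again: the h-core is {17, 9, 6}, so the h-median is 9.
print(h_core([17, 9, 6, 3, 2]))    # [17, 9, 6]
print(h_median([17, 9, 6, 3, 2]))  # 9
```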

This sort of ranking encourages at least two sorts of proliferation. It encourages a proliferation of articles generally; and one seeming consequence, already borne out in the rankings, is that it encourages what you might call citation communities, or a proliferation of cottage-industry publications. The first makes clear Google's structural interest in its h-index. As noted, there are few entities, let alone publishers, capable of supporting – much less actively interested in encouraging – an effectively infinite number of publications. Bounded (and binded) production must fall away. Print publications, already much-changed in recent years, must find still another so-called business model: not even specialized market strategies will suffice where nothing less than a combination of algorithm, access and super-massive information resources is required.

Individual actors are the first target, as Google makes clear, and as Thomson Reuters did before Google. Thomson Reuters promises to help you, the academic, ‘identify the most influential journals in which to publish’.Footnote 17 Likewise, Google promises that its:

Scholar Metrics provide an easy way for authors to quickly gauge the visibility and influence of recent articles in scholarly publications. Scholar Metrics summarize recent citations to many publications, to help authors as they consider where to publish their new research.Footnote 18

It bears emphasizing, however, that the authors Thomson Reuters and Google refer to, such as myself, will not be relying on the h-index out of pure enthusiasm. The rankings will also be used by our administrators and department heads, to decide whom among us to hire, whom to retain, whom to promote, and whom not. Thomson Reuters, which has been in the game longer, is clear about this, with another promise to help ‘[e]valuate potential employees, collaborators, reviewers, and peers’,Footnote 19 as well as a related promise to guide ‘strategic and funding decisions’. We, the academic producers, are already familiar with the consequences in practice. We are subject, with our publications, to A lists and B lists and don't-bother lists, so that department heads and administrators can demonstrate our so-called productivity in quantified terms to accreditation committees who have little idea what we are working on, but are charged with making sure that the universities are capable of offering degrees that attest to our students’ readiness to contribute to society as they, our accreditors, know it.Footnote 20

In this combination of people and positions operating at different scales and towards different ends, what does any one person (such as myself) know? Precious little. And that is a source of anxiety that also feeds into the seemingly irresistible power of numbers, indicators and rankings. There is a common sentiment that I recognize and even share, a sentiment holding that there must be some clear way to measure truth, however imperfect. Often, this sentiment is not matched by any great willingness to articulate grounds for truth, or examples. I am usually in the position of arguing against truth claims, but let me say here that truth exists. People are starving today, and they cannot eat fictions. Grander aspirations to truth, however, have long since been revealed as ideological, relative, etc. This leaves many of us ashamed to defend a truth we desire, and anxious about the foundations of our own intellectual labours, especially insofar as they are divorced from practical so-called realities. We are left to seek the comfort of an affirmation of rightness by other means. Patronage used to be one way. Or we can put our faith in numbers, and, following the Impact Factor and h-index, resort to the old argument put forward by the Elvis album: 50 million fans can't be wrong.

The ill-conceived urge to uphold a singular measure of rightness, or whatever, destroys one of the few values the traditional preserve of editorial boards represents: each board at each journal represents, at least notionally, the possibility of an independent evaluation of aspects of international law on different normative grounds. The turn to indicators threatens to end, once and for all, that modestly pluralist possibility. The numerical vocabulary of the h-index pins the grounds for evaluation to a flattened normative possibility that reflects a field stripped of substance. This leaves work-place pressures in the place of meaningful engagement, by which I mean that we are obliged to compete on the basis of numerical equivalences for our academic positions, rather than establish who we are and what we do by engaging meaningfully with the research the indicators are made to represent in abstraction.

In the Netherlands, the work-place pressures that I mean include faculties that are funded by the number of students they pass. This of course means the faculties have to attract students in the first place. There are two main channels of attraction: for students who are not disposed to worry about a salary after graduation, selling them on the basis of whatever they are inclined to want; for the rest, selling them on the basis of a well-paying job after graduation. Law faculties usually steer towards the latter. That means catering to the practicing community, to put one's graduates in the best position to find a job there. Large firms and corporate demands are best catered to, usually in connection with some well-established recruiting network, with work for multinational enterprises representing the top prize, sizeable private concerns on a national scale coming in a respectable second, lucrative criminal work an alternative, etc. In short, law faculties reproduce by design attributes that are maintained and rewarded in the market. In addition to this, remaining with my example of the Netherlands, research money has been taken away from the faculties and concentrated in a single, nation-wide funding agency. That agency stages funding competitions at several levels of seniority, and the criterion emphasized above all others at each of them is valorization. Valorization is basically defined according to the demonstrable possibilities for consumption of the research product as commodity. Thus it is against the backdrop of a mode of academic production already saturated with market conditions – conditions that dovetail with those observed in the Thomson Reuters literature – that the h-index facilitates competition (which faculty and which individual has the greatest number of highest-scoring publications in the best-rated journals) on the basis of numerical equivalences shorn of more meaningful substance.

In other words, the normativity of Google's h-index ranking is conditioned by a hegemonic market, undergirding (and giving the lie to) the h-index's pretension to neutral objectivity. It is worth pointing out, though, that Google's interest, evident in the construction of the h-index, is not strictly an interest in The Market for its own sake, any more than Thomson Reuters and Google have gone to the trouble of establishing the Impact Factor and h-index out of altruistic devotion to Academic Scholarship generally. Google's interest, like Thomson Reuters', is in its own position at the top of a particular, and particularly pervasive, market, namely the market for digital information. The Journal Citation Report complements Thomson Reuters’ other corporate intelligence units globally; the h-index, by contrast, privileges Google's competing bid to carry and monitor, process and distribute a maximal amount of information, beyond the storage and search capacities of any individuals or less-well-resourced corporate groups. The h-index in this way competes on two fronts, at once weakening the position of both Thomson Reuters and smaller publishers and entrenching Google's unique capacity to dominate the information market.

Most importantly, in entrenching Google's bid for dominance of the market for digital information, the h-index reflects biases in the infrastructural underpinnings of that market. By infrastructural biases I mean the pipes, cables and switches that connect the market for digital information, and in a radically unequal way. That infrastructure favours the global North on an astronomic scale, an enormously inequitable distribution exacerbated by superstructural restraints of copyright and intellectual property. The distribution of undersea cables linking global information technologies, like the distribution of internet-carrying devices generally, shows a world utterly divided in the enjoyment of digital information resources. A feedback loop emerges: the Google h-index favours users, such as myself, who are favoured by structural inequities. It reflects our stake in the markets that determine our practices. In my case, that will mean as many cited articles as I can manage in the most-cited journals, all of them English-language and typically operating out of Northern Europe or the US.

But a note is in order about the condition of being elite in this context. My specially-favoured situation does not change the self-disciplining nature of the exercise. For all of my membership in a privileged elite, I compete as an individual cypher with so many other cyphers. And by complying with this disaggregated technology of governance, I and we lock ourselves ever-more-firmly into our situation, in this case as elites, with an ever-more-limited scope for addressing or redressing the field in which we operate as elites, and our role in it. Rather, we are tied still more thoroughly to the market that defines us – not merely because we acquiesce, but because once enough of us do, there is less and less within the field of academic production that we can do about it. Once there was a possibility for capture of these elite platforms, with the chance of turning them against the field, so to speak. Something like that happened here, once upon a time at LJIL, under the direction of Thomas Skouteris, among others. As editor-in-chief, he helped establish the identity and substance of this journal precisely by pushing it against the grain, into the field of critical legal theory. Wouter Werner and Fleur Johns broadened the theoretical ambit of the journal, a direction carried on by Larissa van den Herik and Jean d'Aspremont, among others. Between them all, both of the principal attributes celebrated by the h-index are sacrificed: mainstream affirmation, and a narrow citation community. Thomas and his successors could achieve what they did because the editorial board at least afforded the opportunity of prerogative once attained. The h-index would disperse that editorial prerogative across the market defined by its metrics, and disaggregate it into the self-disciplining authority of academic readers as consumers and competitors. Under those conditions, the possibility of achieving what was once done here becomes increasingly unlikely.

René Urueña, among others, has argued in favour of renewed political possibilities opened up by indicators.Footnote 21 I hope he is correct, but I struggle to see it. He, like many others addressing indicators in the international legal context, writes from the vantages of development and global administrative law. Others, such as Francis Collins and Gil-Sung Park, have suggested similarly redemptive possibilities from outside of the law, applied in their case to questions of higher education.Footnote 22 These optimistic scenarios track three related strategies; I will call them capitalize, chaos and co-opt. To capitalize roughly means taking advantage of the new technology of quantification to make visible what was not visible before. It is an injunction to use indicators as productively as possible. Assume, for instance, that before indicators came on the scene, the legal-academic publishing industry failed to provide a platform for many of the most important investigations that might have been done. And assume that the reason had to do with the biases embedded in a system organized around elite editorial prerogative and the networks of peer review to which that prerogative is joined. Those assumptions require no stretch of the imagination. And in that context, the capitalizing strategy holds that the move away from the old system, and towards a distinct system organized around the sort of quantification represented by the h-index (and impact factors before it, etc.) opens up the possibility for new publications which failed to see the light of day under the old system.

There is merit to the capitalizing strategy, at least, perhaps, in other fields in which indicators operate. In the case at hand, however, the merit remains with the critique, but not the solution. The system of editorial prerogative was inadequate, and moreover was hardly free of the market pressures that corrupt the h-index: editors need publishers, and, on balance, both need to prove themselves to markets and in markets (and I don't mean just any markets, but I don't believe I need to belabour the point here). The question remains, however: what sort of alternative does the new technology of quantification and ranking enable? In the case of academic publication, the h-index merely disperses market-determined editorial prerogative into the markets in question. Whereas before at least there was the possibility for capture, as indicated above in the history of the LJIL itself, now that possibility is diminished. Prerogative still exists, but it sits still more squarely with the entities and instruments that control the currency that sets the markets, rather than with the academics who subject themselves, individually or as editors, to competition conditioned by those markets.

The second possibility, which I call chaos, would find opportunity in the apparent success of indicators, by exploiting their hyperactive growth. The idea is that there has been such a multiplication of indicators and indices and rankings, with each of them predicated on such variable expert appraisals, that conflict among them must inevitably arise – and can be gamed. Joined to the first optimistic possibility, to capitalize, chaos suggests that participants in the field can productively rally behind the competing indicator that opens up the most transformative possibilities. Frankly, this seems to me a delusion. My point here is to defend the possibility of opening up, or at least maintaining, discourses outside the mainstream. In that context, with respect to a competition among technologies designed according to market techniques – such as rankings and indicators represent – it seems inauspicious to bet on the competitive viability of marginal contestants obliged to use the master's tools, so to speak.

There remains the third possibility, to co-opt. It comes from the more typical scenario in other fields, where indicators require some local information collection, conducted by local agents, to fill out the formulas established on high. The possibility holds that local operators can exploit reporting conditions to privilege local issues, making the technology of the indicator work to reflect local concerns. Whatever the feasibility of this possibility in other circumstances, it is not available here: the h-index is drawn directly by Google from its own resources; nor does Thomson Reuters require local agents to inform its indicators – only access. There is no longer any ‘local’ involved. The market and its governance are radically disembedded from traditionally local points of control.

Each of the three possibilities, moreover, suffers a crippling flaw under any circumstances. To take advantage of these quantified vocabularies, one must speak their language. Agency of this sort, and the subjectivity necessary to enjoy agency, in this context means identification with abstract equivalence, and this is not an innocent or easily contained act. It means to reproduce as a matter of one's active identity the grammar by which hegemonic domination deepens and becomes further entrenched globally, every day. Here, the familiar observation applies, that indicators help create the reality they propose to describe. They enter the field of discourse, and change it. I do not mean that the change to which they contribute is either predictable or solely due to the indicator. But neither do I mean with that caveat to allow that they are arbitrary. To what sort of reality does the Journal Citation Report contribute? I believe the Thomson Reuters publicity material speaks for itself, and it is not pretty language. To what sort of reality does Google's h-index contribute? A reality in which a maximal number of publications are vying for a maximal number of citations. We do not need to guess at future specifics to observe the general constraints this reality entails. The two concrete incentives of the h-index – maximum publications and maximum citations – exhibit some tension, insofar as the diffusion of the first would seem to work against the sort of concentration represented by the second. That tension, in turn, favours a division of labour, such that a proliferation of literature is increasingly subdivided into niche fields, allowing citations to concentrate around increasingly narrow (and narrowly read) subfields while the overall field of literature continues to expand, in keeping with the basic pressure inscribed in the h-index.

What we are talking about, in short, is a marketplace of ideas defined by a division of labour. That division of labour appears to exacerbate a prior division of labour, in which the intellectual works independently of other forms of labour: the relatively autonomous intellectual exercise becomes increasingly esoteric. The esoteric exercise is only sustainable, however, so long as it contributes to some project with sufficient capital (of one sort or another) to subsidize the otherwise-autonomous subfield. Each subfield attends to a more and more limited part of a market predicated on constant overall growth. In terms of scale, the subfields become smaller and smaller while the market by which they exist and operate grows bigger and bigger. The relative power of the overall market vis-à-vis the participants (individual and collective) in any one of these subfields grows accordingly. And that market, in the present context, is not just any market. It is a market for information structured generally according to liberal market tenets, and dominated in its particulars by Google. Guided by the h-index, this is our bleak route forward.

References

1 It occupies a text box of roughly 90 square centimetres, which is larger than any other allotted area for any one piece of text with the exception of the abstract for the journal as a whole. It is placed on a par with our featured content, just below the abstract. And while the journal abstract receives first billing, the font announcing the impact factor is far larger, larger than any other font on the page with the exception of the journal title at the very top of the page, which appears to be of equal size.

2 Kendall, S., ‘On Academic Production and the Politics of Inclusion’, (2016) 29 LJIL 617.

3 Merry, S.E., ‘Measuring the World: Indicators, Human Rights, and Global Governance’, (2011) 52 Current Anthropology S3; Krever, T., ‘Quantifying Law: Legal Indicator Projects and the Reproduction of Neoliberal Common Sense’, (2013) 34 Third World Quarterly 131. Please see the selected bibliography at the conclusion for others.

4 Thomson Reuters, Journal and Highly Cited Data: Journal Citation Reports and Essential Science Indicators Brochure (2016), at 6.

5 thomsonreuters.com/en.html. A check of this web address in February 2017 indicates that the eight divisions have been consolidated into the following six: Financial; Legal; Risk Management Solutions; Tax & Accounting; Reuters News Agency; and Thomson Reuters Corporate.

7 Supra note 5.

8 Ibid.

10 Supra note 5.

12 Ibid. Indeed, the literature boasts of the many persons on Thomson Reuters lists without or before their appearing on government lists. It bears noting, however, that many governments and other parties simply purchase their security lists from Thomson Reuters, so that presence on the Thomson Reuters list means all of the restrictions that come with public listing, such as no-fly restrictions, blocked bank accounts, etc. See de Goede, M. and Sullivan, G., ‘The Politics of Security Lists’, (2015) 34 Environment and Planning D: Society and Space, at 76–81.

13 Supra note 5.

15 Ibid.

16 Ibid.

18 Supra note 14.

19 Thomson Reuters, Journal and Highly Cited Data: Journal Citation Reports and Essential Science Indicators Brochure (2016), at 3.

20 Supra note 17.

21 Urueña, R., ‘Indicadores, Derecho Internacional y el Surgimiento de Nuevos Espacios de Participación Política en Gobernanza Global’, (2014) 25 Revista Colombiana de Derecho Internacional 543.

22 Collins, F. and Park, G.-S., ‘Ranking and the Multiplication of Reputation: Reflections from the Frontier of Globalizing Higher Education’, (2016) Higher Education 1.