University rankings? "From a social science point of view, they are garbage." That was Simon Marginson's verdict in 2013, in an interview with The Australian about the QS ranking. The very same ranking that the Corriere does not hesitate to call "the most important at the international level", perhaps to please the Rector of the Politecnico di Milano, which leads the Italian universities in it. A primacy that stems not from any particular merit but from a change of rules, favourable to technical universities, introduced by QS in 2015. The result? Nanyang Technological University in Singapore, 39th in 2014, shot up to 13th place, overtaking Yale, Johns Hopkins and Cornell. Riding that wave, the Politecnico di Milano, 229th in the 2014 ranking, magically climbed to 189th place, while Pisa, Tor Vergata, the Federico II of Naples, the Cattolica of Milan, Genoa, Perugia and Bicocca all lost more than 100 positions.

The most striking case was Siena, which between 2014 and 2015 fell by no fewer than 220 (two hundred and twenty) positions in a single year. Just one year earlier its Rector, Angelo Riccaboni, had assured everyone that "the QS ranking, compiled by Quacquarelli Symonds, is among the most authoritative in the world". The Rector of Roma Tor Vergata was wiser: "In any ranking, even a sporting one, it is impossible to lose hundreds of positions in a few months unless the indicators change." A great truth that is all too often forgotten when an institution gains a handful of positions and finds it more convenient to claim the credit.
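The arithmetic behind swings of this kind is elementary. The toy sketch below (in Python, with entirely invented indicator scores, weights and university labels, not the real QS data or methodology) shows how merely re-weighting the same underlying indicators can reverse the order of two institutions whose performance has not changed at all.

```python
# Toy example: two hypothetical universities, identical indicator scores,
# ranked under two different (invented) weighting schemes.

indicators = {
    "University A (generalist)": {"reputation": 80, "citations": 60, "international": 50},
    "University B (technical)":  {"reputation": 60, "citations": 85, "international": 70},
}

def aggregate(scores, weights):
    """Weighted sum of indicator scores (weights assumed to sum to 1)."""
    return sum(scores[name] * w for name, w in weights.items())

weights_old = {"reputation": 0.6, "citations": 0.3, "international": 0.1}
weights_new = {"reputation": 0.4, "citations": 0.4, "international": 0.2}

for label, weights in (("old weights", weights_old), ("new weights", weights_new)):
    ranking = sorted(indicators, key=lambda u: aggregate(indicators[u], weights), reverse=True)
    print(label, "->", [f"{u}: {aggregate(indicators[u], weights):.1f}" for u in ranking])

# Under the "old" weights University A comes first (71.0 vs 68.5);
# under the "new" weights University B overtakes it (72.0 vs 66.0),
# although neither institution has changed in any way.
```

Scale this up to half a dozen indicators and several hundred universities, and shifts of a hundred places or more become unsurprising.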

Beyond the volatility of its criteria, the QS ranking has been challenged for the disproportionate weight (50% of the total score) it assigns to reputational surveys, whose arbitrariness and susceptibility to manipulation have long been a matter of debate. A glance at Wikipedia is enough to discover that it was precisely these methodological weaknesses that led Times Higher Education to part ways with QS (until 2009 there was a joint THE-QS ranking):

The rankings of the world’s top universities that my magazine has been publishing for the past six years, and which have attracted enormous global attention, are not good enough. In fact, the surveys of reputation, which made up 40 percent of scores and which Times Higher Education until recently defended, had serious weaknesses. And it’s clear that our research measures favored the sciences over the humanities.

Phil Baty (THE World University Rankings Editor): Ranking confession, Inside Higher Ed

For the record, it must be said that, despite its good intentions, not even the THE ranking has ever been a model of scientific rigour. Just think of the exploit of Alexandria University in Egypt, which THE placed ahead of Stanford and Harvard in its 2010 citation-impact ranking.

QS is also known for its unscrupulous commercial practices: selling consultancy services to the very universities it evaluates, and its "infamous star system", which lets institutions pay to have "quality stars" displayed next to their name. "Ratings at a Price for Smaller Universities" ran the headline in the New York Times. Needless to say, quite a few Italian universities pay for QS's services. If they hoped this would help them climb the rankings, the 2015 collapse shows they got their sums wrong.

In short, when it comes to scientific rigour and impartiality, university rankings enjoy an undeserved reputation. No great harm, some may think: among the many "fake news" stories in circulation, university rankings are probably not the most damaging. In reality, thanks to their pervasiveness in the media they help shape government agendas, because they push into the background all the goals that the rankings do not count. These are the considerations that Stephen Curry, Professor of Structural Biology at Imperial College London, set out in the article "University rankings are fake news. How do we fix them?", which we republish below for our readers.

____________

Further reading:

 

[Photo: HESPA University Rankings Panel, May 2017]

University rankings are fake news. How do we fix them?

This post is based on a short presentation I gave as part of a panel at a meeting today on Understanding Global University Rankings: Their Data and Influence, organised by HESPA (Higher Education Strategic Planners Association).

Yes, it’s a ‘manel’ (from the left: me, Johnny Rich, Rob Carthy). In our defence,  Sally Turnbull, who was chairing, sat off to one side and two participants (one male and one female) had to withdraw at short notice. Photo by @UKHESPA (with permission).

The big news on the release of the Times Higher Education World University rankings for 2017 was that Oxford, not Caltech, is now the No. 1 university in the world.

According to the BBC news website, “Oxford University has come top of the Times Higher Education world university rankings – a first for a UK university. Oxford knocks California Institute of Technology, the top performer for the past five years, into second place.”

Ladies and gentlemen, this is what is widely known as ‘fake news’. There is no story here because it depends on a presumption of precision that is simply not in the data. Oxford earned 1st place by scoring 95.0 points, versus Caltech’s 94.3. (Languishing in a rather sorry sixth place is Harvard University, on 92.7).

The central problem here is that no-one knows exactly what these numbers mean, or how much confidence we can have in their precision. The aggregate scores are arbitrarily weighted estimates of proxies for the quality of research, education, industrial contacts and international outlook. And they include numbers based on opinions about institutional reputation.

In all likelihood these aggregate scores are accurate to a precision of about plus or minus 10% (as I have argued elsewhere). But the Times Higher (and most other rankers – I don’t really mean to single them out) don’t publish error estimates or confidence intervals with their data. People wouldn’t understand them, I have been told. But I doubt it. That strikes me rather as an excuse to preserve a false precision that drives the stories of year on year shifts in rank even though they are, for the most part, not significant.
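A minimal Monte Carlo sketch of this precision argument (not from the original post; it simply assumes that the quoted plus-or-minus 10% behaves like one standard deviation of normal noise on each aggregate score) suggests that, with that much uncertainty, the 95.0 versus 94.3 gap reverses in roughly half of the simulated draws.

```python
# Rough sanity check: if each aggregate score carries roughly +/-10%
# uncertainty (treated here as one standard deviation of normal noise),
# how often would the 95.0 vs 94.3 ordering flip by chance alone?
import random

random.seed(1)

def noisy(score, rel_error=0.10):
    # One random draw of the score under the assumed error model.
    return random.gauss(score, rel_error * score)

trials = 100_000
flips = sum(noisy(94.3) > noisy(95.0) for _ in range(trials))
print(f"Caltech 'beats' Oxford in {flips / trials:.0%} of simulated draws")

# With this (admittedly crude) error model the order reverses in roughly
# half of the draws: the 0.7-point gap is far smaller than the noise.
```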

Now Phil Baty, the editor of the Times Higher Rankings (and someone who, to give him his due, is always happy to debate these issues) is stout in his defence of what the Times Higher is about. A couple of months ago he wrote in an editorial criticising the critics of university rankings:

“beneath the often tedious, torturous ad infinitum hand wringing about the methodological limitations and the challenges of any attempt to reduce complex universities to a series of numbers, the single most important aspect of THE’s global rankings is often lost: the fact that we are building the world’s largest, richest database of the world’s very best universities.”

But who can define ‘best’? What is the quantitative measure of the quality of a university? Phil implicitly acknowledges this by conceding that “there is no such thing as a perfect university ranking.” I would ask, is there one that is good? Further, if the point is to assemble a database, why do the numbers in the different categories have to be weighted and aggregated, and then ranked? Just show us the data.

The problem, as is well known, is that these rankings have tremendous power. They are absorbed by university managers as institutional aims. Manchester University’s goal, for example, stated right at the very top of their strategic plan is “to be one of the top 25 research universities in the world”.* How else is that target to be judged except by someone’s league table? In setting such a goal, one presumes they have broken down the way that the marks are totted up to see how best they might maximise their score. But how much is missed as a result? Why not be guided by your own lights as to what is the best way to create a productive and healthy community of scholars? Surely that is the mark of true leadership?

Such an approach would enable institutions to adopt a more holistic approach to what they see as their missions as universities. And to include things that are not yet counted in league tables, like commitment to equality and diversity, or to good mental health, or – in these troubled times when we are beset on all sides by fake news – to scholarship that upholds the value of truth.

A couple of years ago, a friend of mine, Jenny Martin, who is a Professor at Griffith University in Australia, suggested some additional metrics to help universities complete the picture. For example:

How fair is your institution – what’s your gender balance?
How representative are your staff and student bodies of the population that you serve?
How much of their allocated leave do your staff actually take?
How well do you support staff with children?
And… How many of your professors are assholes?

Now, Jenny may have had her tongue in her cheek for some of these but there is a serious point here for us to discuss today. How often do rankers think about the impact on the people who work in the universities that they are grading?

I would argue that those who create university league tables cannot stand idly by (as bibliometricians used to do), claiming that they are just purveyors of data. It is not enough for them to wring their hands as universities ‘mis-use’ the information they provide.

It is time for rankers to take some responsibility. So, I call for providers to get together and create a set of principles that governs what they do. A manifesto, if you will, very much in the same vein as the Leiden manifesto introduced in 2015 by the bibliometrics community.

To give you a flavour, the preface to the Leiden manifesto reads:

“Data are increasingly used to govern science. Research evaluations that were once bespoke and performed by peers are now routine and reliant on metrics. The problem is that evaluation is now led by the data rather than by judgement. Metrics have proliferated: usually well intentioned, not always well informed, often ill applied. We risk damaging the system with the very tools designed to improve it, as evaluation is increasingly implemented by organizations without knowledge of, or advice on, good practice and interpretation.”

What is true of bibliometrics is true of university ranking. Therefore I call on this community here today to take action and come up with its own manifesto. Since we are in London, we could even call it the London manifesto. (After Brexit, we’re about to become the centre of nowhere and nothing, it would be nice to have something for people to remember us by!)

I stand ready to help with its formulation. I urge you to consider this seriously and quickly. Because if providers won’t do it, maybe some of us will do it for you.

Thank you.

A couple of afterthoughts on the meeting:

It was noticeable that the rankings provider who spoke after the panel addressed more of the technical shortcomings and cultural issues of university league tables than those who presented earlier in the day. It is important to keep the debate on rankings and university evaluation alive.

I was surprised that there were relatively few questions after each talk from the audience, which consisted mostly of people involved in strategic planning at various universities. I hope that doesn’t indicate a certain degree of resignation to the agenda-setting power of rankers and, as a result, a reluctance to consider the broader impacts. But I remain concerned. In answer to my question about why one of the providers had bemoaned the fact that some university leaders rely too heavily on rankings, I was told – candidly –  that in some cases he felt it was a matter of poor leadership.

I was struck by an example mentioned by my co-panellist, Rob Carthy, from Northumbria University, which pointed out one of the perverse effects of rankings. His university works hard to select and recruit Cypriot students even though they often only do one A level (a feature of the school system). In doing so, however, the average A level tariff of their intake drops which, on some league table measures, will reduce their score. The rankings therefore disincentivise searches for student talent that look beyond mere grades. I suspect they may also be reducing the motivation of some universities to widen participation.

 

*To be fair to Manchester, on this web-page the phrase appears to have been edited to read: “Our vision is for The University of Manchester to be one of the leading universities in the world by 2020.”

 

Republished from: http://occamstypewriter.org/scurry/2017/05/16/university-rankings-are-fake-news/



14 Comments

  1. The side effects of superstitious faith in rankings are truly unpredictable. One would expect Singapore to be nothing but pleased to have its Nanyang Technological University in 11th place in the world ranking, ahead of Princeton (https://www.theonlinecitizen.com/2017/06/09/qs-ranking-downright-shady-and-unethical/).
    In reality, flattered by the result, the university appears to have made admission harder for its own nationals in order to keep the share of foreign students high. With the paradox that bright Singaporean students cannot study at their own university and have to enrol in "Ivy League" institutions instead. No great harm for those with the means and the opportunities, but someone (perhaps even more talented than the admitted foreign student) ends up being sacrificed on the altar of the Ranking:

    ________________
    “Some NTU “rejects” even went on to Ivy League Universities overseas. Many understandably could not afford the costly overseas education. A mere tweaking of the arbitrary cut-off points for NTU Admissions would easily have absorbed 6,500 more Singapore students. The cutoff point appeared deliberate in order to have less local students, in favour of foreign studnets in order for NTU to excel in the foreign students criteria of the QS Ranking criteria.”
    _______________
    Were Singaporean Students and Professors Sacrificed for NTU Top Rankings?

    http://miko-wisdom.blogspot.it/2014/10/were-singaporean-students-and.html

  2. The Politecnico di Milano's decision to make English compulsory for all of its Master's degree programmes (with a trail of appeals culminating in the recent ruling of the Constitutional Court: https://www.roars.it/corsi-solo-in-inglese-la-consulta-ribadisce-la-centralita-della-lingua-italiana-e-definisce-i-limiti-dellinsegnamento-in-lingua-straniera/) can also be read, at least in part, in this light: increasing the share of international students in order to gain positions in the QS ranking, even at the cost of worsening the quality of its classes, its courses and the education of its graduates.
    It is probably no coincidence that the statements of former Rector Azzone (http://tinyurl.com/o8dpjt2) showed, among all the rankings, a clear preference for the QS ranking (the one most easily manipulated and climbed).
    A preference reinforced by the stroke of luck of 2015, when the upheaval of QS's metrics, perhaps designed to favour Asian technical universities, ended up handing the Politecnico a leap forward at the very moment the other Italian universities were (blamelessly) knocked back, some by more than a hundred positions.
    To be honest, you partly deserve the beating if, until the year before, you were declaring that "the QS ranking, compiled by Quacquarelli Symonds, is among the most authoritative in the world".

  3. I would subject foreign universities to the VQR as well (ANVUR will find the money somehow: it seems the IIT nest egg was set aside for exactly that). We could then obtain an "Italic and autarchic league table of the world's universities", a detailed and utterly faithful snapshot of the teaching and research institutions scattered across the globe. I wonder whether such a ranking, produced in somebody's living room by old relics of real socialism and serial copy-pasters of other people's term papers, this sort of made-to-measure selfie, might not finally reveal what Italian academics have always suspected to be the hidden truth: that the colleagues to be zombified are the ones at Princeton, and that MIT should be downgraded to a teaching university …

  4. Are university rankings fake news? No. They are a tool for guiding prospective students in their choices, the QS ranking in particular. If you ask whether they are perfect, the answer is always no, but that does not mean rejecting every form of evaluation out of hand. Rejecting it a priori feeds a self-referential ideology that benefits no one and does not lead the university system to improve.

    • Perfect irony. It is no small feat to gather every cliché and stock phrase in so few lines. Thank you!

    • When I read comments by Giuseppe De Nicolao like this one, I resign myself to the thought that he is much "smarter" than I am. And out of reach. ;-)

    • I can confirm: he is out of reach, even though some commenters serve him a penalty kick into an open goal :-D :-D … a sign of the times and of the intellectual decline of academia.

    • Yes, Giuseppe De Nicolao is unbeatable.
      He is first in the ranking.
      Oops, that's fake news!



  5. In Canada too there are those asking about the distorting effects on public opinion, which gets fed non-existent or irrelevant phenomena (with the far-from-remote risk of losing sight of the things that really matter). Let me point to an article that is interesting from its very title onwards. Here too, the starting point is the publication of the QS ranking. Some excerpts follow.
    ===============
    Universities are not sports. So why do we pay so much attention to rankings?
    by Alex Usher
    _______________
    The 2018 QS World University Rankings, released last night, are another occasion for this kind of analysis. The master narrative for Canada – if you want to call it that – is that “Canada is slipping.” The evidence for this is that the University of British Columbia fell out of the top 50 institutions in the world (down six places to 51) and that we also now have two fewer institutions in the top 200 – Calgary fell from 196 to 217 and Western from 198 to 210 – than we used to.

    People pushing various agendas will find solace in this.
    […]
    Nationally, people will try to link the results to problems of federal funding and argue how implementing the recommendations of the Naylor report would be a game-changer for rankings.

    This is wrong for a couple of reasons. The first is that it is by no means clear that Canadian institutions are in fact slipping. Sure, we have two fewer in the 200, but the number in the top 500 grew by one. Of those who made the top 500, nine rose in the rankings, nine slipped and one stayed constant. Even the one high-profile “failure” – UBC – only saw its overall score fall by one-tenth of a point; the fall in the rankings was more a result of an improvement in a clutch of Asian and Australian universities.

    The second is that in the short term, rankings are remarkably impervious to policy changes.
    […]

    And that’s exactly right. Universities are among the oldest institutions in society and they don’t suddenly become noticeably better or worse over the course of 12 months. Observations over the span of a decade or so are more useful, but changes in ranking methodology make this difficult (McGill and Toronto are both down quite a few places since 2011, but a lot of that has to do with changes that reduced the impact of medical research relative to other fields of study).
    […]
    What’s not as useful is to cover rankings like sports, and invest too much meaning in year-to-year movements. Most of the yearly changes are margin-of-error kind of stuff, changes that result from a couple of dozen papers being published in one year rather than another, or the difference between admitting 120 extra international students instead of 140. There is not much Moneyball-style analysis to be done when so many institutional outputs are – in the final analysis – pretty much the same.
    ____________________
    https://beta.theglobeandmail.com/opinion/universities-are-not-sports-so-why-do-we-pay-so-much-attention-to-rankings/article35248888/?ref=https://www.theglobeandmail.com&service=mobile

  6. Rankings are the fig leaf for every misdeed. The only ranking that matters is doing good teaching, doing honest and intelligent research, and earning a good reputation among the students who actually attend your courses. Everything else is hot air. And besides: who would want to study at the 200th university in the world? What difference can there be between the 200th and the 300th? Let's stop kidding ourselves. One extra course in English makes you jump 100 positions, but who evaluates the quality of what you teach?
    The real problem is that public opinion falls into the trap in good faith, believes these things, and nobody warns it about the con. As always, we academics get the blame.
