
The Tab’s Mental Health Rankings 2017: How we did it

An explainer on our methodology


Our Mental Health Rankings 2017 aim to paint the truest picture possible of UK universities’ ability to care for students with mental health problems.

They’re in their second year, and now contain 47 of the UK’s top universities, up from 30 last year.

This post explains how we did it.

How the rankings work

The rankings are broken down into four major parts: finance, satisfaction, outreach and waiting times. The two most significant are finance and satisfaction: essentially, what universities are putting in, and what students are getting out.

Finance:

The finance score is a compound metric derived from FOI data: pounds spent per student, pounds spent per applicant, and measures concerning investment in the counselling/wellbeing service. These measures are all calculated separately before being combined into the total finance score, with pounds per student and pounds per applicant being the most significant individual metrics.
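
For illustration, here’s a rough sketch of how a compound metric like this can be put together. The component names, normalisation and weights below are assumptions made for the example, not the actual figures used in the rankings.

```python
# Illustrative sketch only: the weights and component names are assumptions,
# not the values actually used in the rankings.

def finance_score(pounds_per_student, pounds_per_applicant, investment_measures,
                  weights=(0.4, 0.4, 0.2)):
    """Combine separately calculated finance metrics into one score.

    Each input is assumed to have already been normalised to a 0-1 scale
    across all universities, so the weighted sum is also on a 0-1 scale.
    """
    investment = sum(investment_measures) / len(investment_measures)
    components = (pounds_per_student, pounds_per_applicant, investment)
    return sum(w * c for w, c in zip(weights, components))

# A university with strong per-student spending, weaker per-applicant spending
# and middling investment measures.
print(finance_score(0.8, 0.5, [0.6, 0.4]))  # 0.62
```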

Satisfaction:

The satisfaction score is also a compound metric, derived from the mental health survey data: overall satisfaction with how the university is perceived to handle mental health issues, satisfaction with the counselling and other services provided, and satisfaction with how the university handles student suicides. Again, two individual metrics carry the most weight: the overall satisfaction score and satisfaction with the services provided.

Universities that did not receive enough survey responses from students who had experienced counselling were not judged on that score. Barring a few exceptions, overall satisfaction tracked counselling satisfaction closely, with an average difference of less than six per cent in the raw data. Major deviations from the norm were rare, and when translated into the rankings any potential point gain or loss is minimal, even several standard deviations from the norm. While our aim is always to have more data, missing this particular data is unlikely to misrepresent an individual university’s quality of care, as the general student population typically gives a good approximation of the quality of care offered. The few universities that were exceptions to this rule had that fact reflected in their overall score.
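
The sketch below shows one plausible way of handling that: calculate satisfaction as a weighted average, drop the counselling component where too few respondents had experienced counselling, and renormalise the remaining weights. The weights, response threshold and field names are assumptions for illustration, not the rankings’ actual parameters.

```python
# Illustrative sketch only: the weights, response threshold and field names
# are assumptions, not the rankings' actual parameters.
MIN_COUNSELLING_RESPONSES = 10

def satisfaction_score(overall, services, suicide_handling,
                       counselling=None, counselling_responses=0):
    """Weighted average of survey satisfaction measures, each on a 0-1 scale.

    If too few respondents had experienced counselling, that component is
    dropped and the remaining weights are renormalised, on the assumption
    that overall satisfaction tracks counselling satisfaction closely.
    """
    components = {"overall": (overall, 0.4),
                  "services": (services, 0.3),
                  "suicide_handling": (suicide_handling, 0.1)}
    if counselling is not None and counselling_responses >= MIN_COUNSELLING_RESPONSES:
        components["counselling"] = (counselling, 0.2)
    total_weight = sum(weight for _, weight in components.values())
    return sum(value * weight for value, weight in components.values()) / total_weight
```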

Outreach:

Outreach is a holistic measure of how well the university is reaching those students who declare that they have a mental health issue, whether via counselling, a personal tutor, or other means.

Waiting Times:

Waiting times measure how long students have to wait for an appointment after their initial assessment with the university’s counselling/wellbeing service. Because universities calculate waiting times differently, depending on what their services offer and how they are run, the data here was extremely patchy, so this category applies to only a few universities.
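
As a rough illustration, a comparable waiting-time figure could be derived from appointment records along these lines. The record format and the use of the median are assumptions for the sketch, since universities report this data in very different forms.

```python
# Illustrative sketch only: the record format and the use of the median are
# assumptions; universities report waiting times in very different forms.
from datetime import date
from statistics import median

def median_wait_days(appointments):
    """Median wait, in days, between the initial assessment and the first
    follow-up appointment. `appointments` is a list of
    (assessment_date, first_appointment_date) pairs."""
    return median((first - assessment).days for assessment, first in appointments)

print(median_wait_days([(date(2016, 10, 3), date(2016, 10, 17)),
                        (date(2016, 10, 5), date(2016, 11, 2)),
                        (date(2016, 10, 10), date(2016, 10, 24))]))  # 14
```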

Notes:

Universities missing data for a particular category were not judged on that category.
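
One common way of doing this without penalising a university for a missing category is to renormalise over the categories it does have, as in the sketch below. The category weights shown are assumptions for the example, not the rankings’ actual weighting.

```python
# Illustrative sketch only: the category weights are assumptions, not the
# rankings' actual weighting.
CATEGORY_WEIGHTS = {"finance": 0.35, "satisfaction": 0.35,
                    "outreach": 0.20, "waiting_times": 0.10}

def overall_score(category_scores):
    """Weighted average over whichever categories a university has data for.

    `category_scores` maps category name to a 0-1 score; categories with no
    data are simply not judged, and the remaining weights are renormalised.
    """
    available = {c: s for c, s in category_scores.items() if s is not None}
    total_weight = sum(CATEGORY_WEIGHTS[c] for c in available)
    return sum(CATEGORY_WEIGHTS[c] * s for c, s in available.items()) / total_weight

# A university with no usable waiting-times data is judged on the other three.
print(overall_score({"finance": 0.7, "satisfaction": 0.6, "outreach": 0.5}))  # ~0.62
```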

Universities unable or unwilling to provide counselling/wellbeing service budgets were asked to provide their student support budgets instead, with some caveats. Those universities are clearly marked in the ranking. To ensure an appropriate dataset against which to judge those few universities, data was collected on as many universities’ student support budgets as possible.

The FOI requests asked for data from 2015/16, the most recent year for which comprehensive data is available. There are no projections or estimates involved. Any university not included was left out due to a lack of survey respondents, with the exception of Southampton, which refused to provide the data requested.

Where the number of student survey responses for a university fell below the standard required for statistical significance, they were not included. The rankings were built in collaboration with a professional statistician.
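
As a rough illustration of that kind of threshold, the check below estimates the margin of error on a satisfaction proportion at 95 per cent confidence and discards samples that are too imprecise. The ten-point cut-off is an assumption for the example, not the criterion actually used.

```python
# Illustrative sketch only: the margin-of-error cut-off is an assumption,
# not the criterion actually used for the rankings.
from math import sqrt

def usable_sample(satisfied, responses, max_margin=0.10):
    """Return True if the 95% margin of error on the satisfaction proportion
    is acceptably small, using the normal approximation z * sqrt(p(1-p)/n)."""
    if responses == 0:
        return False
    p = satisfied / responses
    return 1.96 * sqrt(p * (1 - p) / responses) <= max_margin

print(usable_sample(30, 40))    # False: margin of error is roughly 13 points
print(usable_sample(75, 100))   # True: margin of error is roughly 8 points
```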

Improvements

This year’s rankings are an improved version of the rankings created last year. The data feeding the rankings comes from a combination of a student survey and Freedom of Information (FOI) requests to universities. In order to improve the rankings’ accuracy and the number of universities involved, several changes were made.

Firstly, the survey used was improved, with questions added and existing questions made clearer. This meant it was possible to obtain far more valuable and accurate insights into student satisfaction.

Secondly, the FOI requests put to the universities were refined, ensuring that more responses and better data were received.

Due to the improved data available, last year’s scoring system is no longer applicable or appropriate. While this year’s rankings represent a vast improvement, and a firm foundation for the rankings going forward, the consequence is that absolute numerical comparisons between the 2016 and 2017 rankings don’t apply. Relative comparisons retain some value, but universities that have gone up or down in their overall place have not necessarily done better or worse. Rather, the inclusion of significantly more universities, combined with the improved data, may be a more accurate explanation for a shift in position.

If you require further details, please contact Robin Brinkworth (data) or Greg Barradale (story).