
by Jenni Ajderian @jajderian
When was the last time you read the Terms and Conditions? Even as a calm, educated adult, you probably didn’t bother to read through the legalese of the Ts&Cs before posting your little jokes on Twitter. Children are even less likely to, and legally can’t consent to certain things even if they do read the small print.
The mental health crisis text line Shout recently shared their conversation data with researchers from Imperial College London. The scientists used Machine Learning algorithms to read conversations, label them, and extract information about the conversations, the volunteers, and the texters themselves. As a text-only service, Shout is advertised to children as much as adults, and receives generous funding from the royal family, among others.
Crisis lines are essential front-line mental health services. Texters are looking for immediate help with their mental health, and having a confidential conversation can be the difference between life and death. How Shout and its parent organisation, Mental Health Innovations (MHI), choose to treat their texters is essentially up to them, but this latest piece of research comes with a few important questions: is sharing conversations useful? Is it legal? Is it moral?
The researchers used text from the conversations, with names and locations removed, to figure out how to improve the service for future texters. This is a noble aim, but it’s not clear that the research paper actually achieves this. After having a human read and label over 8,000 individual text messages, and training a machine to do the same, the researchers found that texters who say “I am 12” are probably 12. Likewise, those who mention “autism” and “trans” are probably autistic and trans, respectively. These findings are good enough for a research paper, but are they ground-breaking and insightful enough to justify sharing thousands of conversations that were supposed to be confidential?
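The paper’s actual pipeline isn’t public code, but the general technique it describes is standard supervised text classification: humans label a set of messages, and a model is trained to apply the same labels to new ones. The sketch below is purely illustrative and hypothetical; the messages and label names are invented, and this is not the researchers’ method, just the shape of the approach using scikit-learn.

```python
# Hypothetical sketch of supervised text labelling: humans label messages,
# a model learns to apply those labels to unseen text.
# All messages and labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny stand-in for the thousands of human-labelled messages.
messages = [
    "I am 12 and I don't know who to talk to",
    "My exams are next week and I can't cope",
    "I feel completely alone tonight",
    "I'm okay, just wanted someone to listen",
]
labels = ["under_13", "exam_stress", "loneliness", "check_in"]  # assumed label scheme

# TF-IDF features feeding a simple linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(messages, labels)

# Once trained on enough labelled examples, the model predicts labels
# for new, unseen messages.
print(model.predict(["I am 12 and really scared"]))
```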
On the legal question, we turned to Dr Sam Wrigley, LLD, post-doctoral researcher at the University of Helsinki, who focuses on information law and privacy issues. When an organisation processes your personal data, they have to comply with the UK General Data Protection Regulation (the GDPR) and the Data Protection Act. Under the GDPR (for both the UK’s version and the original EU law), consent needs to be a “freely given, specific, informed and unambiguous indication of the data subject’s wishes”.
So does MHI legally have consent to share your data? Kind of: they send you a link to their Terms and Conditions when you first contact Shout. But these Ts&Cs, and the related Privacy Policy, are nearly 6,000 words long. A related Frequently Asked Questions page, far easier to read, assured texters that “individual conversations cannot and will not ever be shared”. But this was changed in late 2021, shortly before the research paper from Imperial was published. Rather than conversations only ever being shared in aggregated form, it now says that Shout is free to share individual messages with researchers, as long as they can justify doing so.
“Data transfers can also be justified in other ways,” explains Dr Wrigley: organisations can have a ‘legitimate interest’ in processing your data. “If a controller (the person processing the personal data) can show that their legitimate interest (e.g. developing a better language recognition system) is not overridden by the data subject’s interests and fundamental rights or freedoms (e.g. their need or desire to keep their messages confidential, particularly given the sensitive nature of the material), then the processing could be justified under that ground”.
So if there’s a good enough reason for processing the data, you don’t necessarily need consent (though this gets messier when ‘special category personal data’ is involved). But even with a legitimate interest, the law requires would-be data controllers to tell their data subjects what’s happening, and encourages them to consider questions like “if you had asked them at the time, would the data subject have agreed?” and “does the data subject have a reasonable expectation for the data to be used like this?”
In this case, the ‘data subject’ could well be a calm, educated adult, well-versed in the complexities of GDPR and the different ways their data could be used. Or the data subject could be a scared twelve-year-old who just needs some help. They are younger than the internet itself, and aren’t legally old enough to use Instagram. The researchers used survey data to find anyone under the age of thirteen in their dataset, and then looked specifically at those conversations.
“Guidance on the use of consent also states that controllers should take extra care when asking for consent from children or other vulnerable people, which would certainly apply here,” Dr Wrigley adds. “At the very least, the age of the service users should raise real questions about how consent should be used in a particular case.”
Considering this complexity, the researchers could have simply removed those conversations from their dataset, but they decided that reading, labelling and analysing them was justifiable for their aims.
The scared twelve-year-old didn’t read through the 6,000 words of legal text. They aren’t considering how their conversation might be used in three years’ time; they just need help now. Schools and GP services regularly recommend Shout to children. When you’re in the midst of a mental health crisis, a conversation like this could save your life.
“The GDPR is also very strong on the idea that we should only use personal data if it is actually necessary,” Dr Wrigley explains. “Rather than an ‘all or nothing’ situation, we should be able to say yes to some things and no to others (at least if it’s actually possible to split things up in this way). In particular, we can look at whether a controller made their provision of a service conditional on a data subject consenting to something that is unnecessary for the provision of that service. Consent given under such circumstances may be found to have not been freely given, and may therefore be invalid”.
MHI could help the scared twelve-year-old without storing their data, analysing it, and sending it on to researchers at Imperial. But they chose not to. Instead, the scared twelve-year-old is forced to either agree to be a part of some undefined research that they don’t understand, or go without help. The Privacy Policy clearly states that “if you don’t agree to the Terms you may not use the Service”.
This leads us to the moral question. The research paper itself notes that “Language-based deficits are common symptoms of mental health crises”. This implies that anyone, child or adult, is less able to understand and therefore consent to a 6,000-word set of conditions when they’re in the grips of a mental health crisis. It can be hard to put a sentence together, let alone read a legally-binding document. But that’s exactly what we’re expected to do.
When was the last time you read the Ts&Cs? Can’t we expect organisations to respect our privacy when we ask for help? Any barrier between help and the scared twelve-year-old could prove deadly. Is building a dataset for researchers at Imperial more important than providing people with help on the worst night of their lives?
Usually, when we skip the Ts&Cs, it’s because we can guess what they’ll say. In the case of mental health crisis lines, we might expect things like “don’t spam; don’t abuse our volunteers; we won’t sell your data”. When discussing social media, you might hear the phrase “if the product is free, you are the product”. Facebook and Instagram are free to you because advertisers pay for your data. The technology industry is always hungry for data, and we often pay with information for access to silly photos and online quizzes. Should we really have to pay with our data in order to get vital mental health support?
The title of that Imperial research paper is ‘Listening to Mental Health Crisis Needs at Scale’. Our needs include confidentiality, respect, clarity and trust in services. But researchers can’t listen to our needs if they don’t ask the question.
You can read about the DPA 2018 here, and the GDPR 2016 here. The Information Commissioner’s Office (the UK’s data protection authority) explains GDPR here and consent rules here.