Teens, screens and a hill of beans?

Back in the 16th century, the scholar Conrad Gessner spoke out against Gutenberg’s printing press and the availability of books to the ordinary person; his concern was that too much knowledge was dangerous for ordinary people. In 1963, Albert Bandura developed Social Learning Theory, which was then widely used in the debate about television violence and violent behaviour.

The pattern repeats – as we develop digital technology, we worry that (particularly, but not exclusively) young people will be damaged by these developments. In the last couple of decades there has been a rapid change in the way that we use technology and there also appears to have been an increase in mental health issues amongst young people. Cause and effect surely? I have sat through numerous talks that reinforce the idea that the reason more young people are struggling with mental health comes down to the use of smartphones and social media.

As a parent of teenagers and someone who talks to parents about mental health and well-being, screens are high up on the agenda as a topic of conversation: How much? Where? What age? How do you monitor? This new paper by Amy Orben and Andrew Przybylski seemed like an opportunity to shed some light on the issue once and for all (Orben & Przybylski, 2019)!  Three large data sets and a novel and robust research method.

The current paper focuses on creating robust methods to study the use of screens. Two large data sets were used in the initial exploratory study: one from the Growing Up in Ireland (GUI) study (n=4,573) and the second from the United States Panel Study of Income Dynamics (PSID) (n=790 after data exclusions). A variety of data was analysed to generate hypotheses and then, using a third UK study (the Millennium Cohort Study, MCS, n=11,884), a data-analysis plan was pre-registered and the hypotheses were tested to confirm the findings of the earlier analysis.

In the last couple of decades there has been a rapid change in the way that we use technology and there also appears to have been an increase in mental health issues amongst young people. Cause and effect surely?

Methods: exploratory studies

The GUI data were collected between August 2011 and March 2012; the time-use diaries were completed on a day designated by the head office. Other measures were the Strengths and Difficulties Questionnaire (SDQ), completed by the caregiver, and the Short Mood and Feelings Questionnaire. The US data were collected between 2014 and 2015, with time-use diaries completed on a randomly assigned weekday and weekend day. The Children’s Depression Inventory and the Rosenberg Self-Esteem Scale were used to measure well-being.

The digital engagement measure was collected via retrospective self-reports and estimates from the time-use diaries. The researchers measured whether the participant reported any digital screen engagement, how much time they reported engaging with screens, and whether they did so 30 minutes, 1 hour and 2 hours before bed. These were then separated into weekdays and weekends, giving 10 different variables.

The researchers used Specification Curve Analysis (SCA) to analyse the data. This is a relatively new approach proposed by Simonsohn, Simmons and Nelson (2015) and summarised here by Julia M. Rohrer (2018).  

If you can come up with a large number of defensible ways to analyze the data, run all of them and evaluate the results across all analyses. This allows researchers to probe whether robust effects emerge across different analyses and whether the null hypothesis of no effect can be rejected.
– Julia M. Rohrer, 2018
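To make the idea concrete, here is a minimal, hypothetical sketch of the SCA logic in Python. This is not the authors’ code: the data below are synthetic and every variable name is invented for illustration. The point is simply that each defensible combination of predictor, outcome measure and control choice becomes one “specification”; the effect is estimated under every specification, and the whole distribution of effects is inspected rather than one cherry-picked model.

```python
import itertools
import random
import statistics

random.seed(42)

# Hypothetical synthetic data: 200 adolescents with two screen-time
# measures, two well-being measures, and one control variable.
n = 200
data = {
    "self_report": [random.gauss(3, 1) for _ in range(n)],
    "diary_hours": [random.gauss(2, 1) for _ in range(n)],
    "control": [random.gauss(0, 1) for _ in range(n)],
}
# Well-being here is driven almost entirely by noise, with only a tiny
# negative contribution from screen time (mirroring the paper's finding
# of effects too small to be meaningful).
data["wellbeing_a"] = [
    -0.05 * s - 0.05 * d + random.gauss(0, 1)
    for s, d in zip(data["self_report"], data["diary_hours"])
]
data["wellbeing_b"] = [w + random.gauss(0, 0.5) for w in data["wellbeing_a"]]

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on a single predictor xs."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

def residualise(ys, control):
    """Partial a control variable out of ys (used by 'with controls' specs)."""
    b = slope(control, ys)
    my, mc = statistics.mean(ys), statistics.mean(control)
    return [y - my - b * (c - mc) for y, c in zip(ys, control)]

# Run EVERY defensible specification: predictor x outcome x control choice.
effects = []
for pred, outcome, with_controls in itertools.product(
    ["self_report", "diary_hours"],
    ["wellbeing_a", "wellbeing_b"],
    [False, True],
):
    xs, ys = data[pred], data[outcome]
    if with_controls:
        xs = residualise(xs, data["control"])
        ys = residualise(ys, data["control"])
    effects.append(slope(xs, ys))

# Rather than reporting one model, inspect the whole curve of estimates.
effects.sort()
print(f"{len(effects)} specifications, median effect {statistics.median(effects):+.3f}")
```

The real paper runs thousands of specifications (and adds significance testing across the curve); the sketch above only shows the core idea of mapping out the analytic “garden of forking paths” instead of walking one path.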

Results from exploratory studies

There were small correlations between time-use diaries and retrospective reports in both data sets; these were smaller for the US study than for the GUI study. In terms of the statistically significant specifications: there was a negative correlation between retrospective self-reported digital engagement and adolescent well-being in the GUI study, and for some of the time-use diary measures. In the US data set, significant associations were only found for digital engagement 1 hour before bedtime on a weekday. Looking at the patterns as a whole, the researchers developed five data- and theory-driven hypotheses, which could then be tested in the confirmatory UK data set.

Methods: confirmatory study

The data analysis tested the idea that higher retrospective reports of digital engagement, as well as more time reported engaging with digital screens via the time-use diaries, would correlate with lower observed adolescent well-being. It also tested whether those reporting use of digital screens 30 minutes and 1 hour before bed, via the time-use diaries, would have lower observed levels of adolescent well-being. Finally, it was hypothesised that models without controls would show a more pronounced negative correlation than those with controls.

The measures for this study were the SDQ completed by the caregiver, the Rosenberg Self-Esteem Scale and the short-form Mood and Feelings Questionnaire. Adolescent technology use was again collected via retrospective self-report and via time-use diaries.

A wide range of variables were controlled for, including the mother’s education, employment, ethnicity and psychological distress, as well as household income and the number of siblings present.

All this was pre-registered using the Open Science Framework before the data were made available by the MCS researchers.

Results: confirmatory study

The results show that there was a correlation between retrospective reports and time-use diaries; this was higher and of better quality in the GUI and MCS than in the PSID. Again, the data show a significant negative association between self-reported digital engagement and adolescent well-being, and also a negative association between digital engagement as reported via time-use diaries and adolescent well-being. However, in both cases the association was smaller than the pre-specified SESOI (smallest effect size of interest). There were no significant associations between digital engagement either 30 minutes or 1 hour before bed and psychological well-being. The final analysis showed that correlations were not more negative when controls were excluded, and therefore the final hypothesis was rejected.

This study found “little evidence for substantial negative associations between digital-screen engagement (measured throughout the day or particularly before bedtime) and adolescent well-being”.

Conclusions

This study is an example of how research literature on such an important area can be of the highest possible standard, and highlights the importance of using a pre-registered confirmatory framework and rigorous statistical testing.

In terms of associations between screen use and adolescent well-being, whilst there is a small negative association, it is too small to be meaningful. This supports other research demonstrating small negative associations between well-being and technology use. In addition, there appears to be no link between the use of technology just before bed and well-being.

“There is little clear-cut evidence that screen time decreases adolescent well-being.”

Strengths and limitations

The methodology is quite detailed in the paper and the summary here doesn’t really do it justice. However, I would agree with the authors’ conclusion that they have demonstrated how to conduct robust and transparent research in a field that is clearly of public interest, and where reports showing any link between well-being and technology are pounced upon by the wider media. The data sets are large and generalisable, and they use a variety of well-established and valid measures (though one may argue that the SDQ doesn’t measure well-being per se). The deliberate separation of hypothesis generation and confirmation, and the use of the SCA, which means that the researchers cannot simply play with the data until they find an interesting and positive result to report, sets a good example.

However, I find the time scale really frustrating. This article was published this year – we have Instagram, Snapchat, musical.ly, YouTube and WhatsApp, to name but a few social media apps. Most 11-year-olds in the UK will have a smartphone, most television will be watched on demand and much of it on mobile devices. One could be forgiven for thinking that research reported in 2019 about the use of screens would refer to this ubiquitous mobile technology. I included the dates when the data were collected because I felt it was important. The data from the GUI study were collected before Instagram had been launched for Android phones (April 2012), and Snapchat had been around for less than a year (launched July 2011).

If we suppose that all the data could have been collected at the time point of the last set (2015), it is still 4 years between collection and publication (it is worth noting that this paper was first submitted 11 months before it was published). In reality, technology is moving at such a rapid pace that the turnaround for research simply needs to be faster. Apps evolve and change as users interact with them; Snapchat, Instagram and even Facebook do not look much like they did in their first days. In fact, could we not now collect data on device use through apps, and harness the technology to keep research up to date with the rapidly changing world in which we live?

The other issue for me is that the study looks at the use of technology on a single day, or via retrospective self-reports at a point in time, and correlates that with reported well-being. It does not consider the long-term effects of digital screen use. At 14 or 15 years old, after one or two years of technology use, children may perhaps be resilient enough to overcome the negative impact, but what about 3 or 4 years later? Unfortunately we then fall back into the time-trap of research not reflecting the changing world. A Catch-22.

For me, the type of screen use also matters: if my children have been playing on the PS4 for an hour before bed I am pulling them off the ceiling, whereas after watching an hour of ‘interesting TV’ they will potter off to bed without a backward glance. The issue is that watching TV is not the same as engaging in an argument on WhatsApp; the interaction, the emotions and the blue light have different effects on us, and how young people used screens even in 2015 is not the same as in 2019. Thus, to lump different types of screen use together may cloud the issue – perhaps not ‘how much’ screen time but ‘what’ screen time is the more pertinent question. The 2011 data asks about videos and not about social media use; they are simply not comparable.

Lumping different types of screen use together may cloud the issue. Not ‘how much’ screen time, but ‘what’ screen time is a more pertinent question to ask.

Implications

So what should we recommend to parents who are concerned about screens and their teens? A parents’ guide published by the Royal College of Paediatrics and Child Health (Viner et al., 2019) reflects much of what this paper demonstrates: the links between screen use and poor mental health are weak.

What parents need to think about is:

  • whether screen use is replacing other activities such as exercise or socialising (things that promote good mental health),
  • whether it is replacing sleep (sleep deprivation is strongly linked with poor mental health),
  • that they are aware of what their children are watching/doing online and are safe, and
  • that they role-model sensible use of technology.

It is difficult to disentangle the myriad of variables that play into the effects of screen use on teenagers. This paper offers best practice for research of this complexity, in a field where eager researchers may be looking to publish headline-grabbing results rather than use robust and transparent methods. But even then, this paper only gives us correlations, so we cannot establish cause and effect.

It is possible that we will never be able to answer this question, and if we do, it will probably have taken too long and be too late. In the meantime, I will continue to monitor my teenagers’ use of screens, ban phones from the bedroom and block the wifi during dinner…just in case.

“I will continue to monitor my teenagers’ use of screens, ban phones from the bedroom and block the wifi during dinner…just in case.”

Conflicts of interest

None.

Links

Primary paper

Orben, A., & Przybylski, A. K. (2019) Screens, Teens, and Psychological Well-Being: Evidence From Three Time-Use-Diary Studies. Psychological Science, 30(5), 682-696. https://doi.org/10.1177/0956797619830329

Other references

Bandura, Albert (1963). Social learning and personality development. New York: Holt, Rinehart, and Winston.

Rohrer, J.M. (2018) Run All the Models! Dealing With Data Analytic Flexibility. Available at https://www.psychologicalscience.org/observer/run-all-the-models-dealing-with-data-analytic-flexibility (accessed 10/9/19)

Simonsohn, U., Simmons, J. P., and Nelson, L. D. (2015). Specification curve: Descriptive and inferential statistics on all reasonable specifications. Retrieved from https://ssrn.com/abstract=2694998 or http://dx.doi.org/10.2139/ssrn.2694998

Viner, R., Davie, M., Firth, A. (2019) The health impacts of screen time: a guide for clinicians and parents. Available at https://www.rcpch.ac.uk/sites/default/files/2018-12/rcpch_screen_time_guide_-_final.pdf (accessed 10/9/19)


Lucinda Powell

Lucinda Powell (BSc, PGCE, MA) Teacher | Podcaster | Speaker. Lucinda Powell is Assistant Director of Teaching and Learning and a psychology teacher at Abingdon School. She has taught psychology since 2002 in a variety of schools in London and Oxfordshire. She also works to support teachers to use evidence-based psychology in all aspects of their classroom practice; her podcast ‘Psychology in the Classroom’ brings psychological research directly to the classroom teacher. In addition, Lucinda works as a coach on the School Mental Health Award at the Carnegie School of Excellence for Mental Health in Schools, runs teacher training and is the lead tutor for the Psychology PGCE for Initial Teacher Training at the National Institute of Teaching and Education (NITE). Links: Website: https://www.changingstatesofmind.com/ Twitter: @LucindaP0well Facebook: https://www.facebook.com/changingstatesofmind Instagram: @lucindap0well Podcast (iTunes, Spotify, Amazon): Psychology in the Classroom
