There is reliable evidence social media harms young people - debates about it are a misdirection

CathNews New Zealand, 21 November 2024
https://cathnews.co.nz/2024/11/21/there-is-reliable-evidence-social-media-harms-young-people-debates-about-it-are-a-misdirection/

The Australian government is developing legislation that will ban children under 16 from social media.

There has been a huge public debate about whether there is sufficient direct evidence of harm to introduce this regulation.

The players in this debate include academics, mental health organisations, advocacy groups and digital education providers. Few step back to look at the entire research landscape.

Social media has become integral to everyday life.

Few teens want to be extensively researched, so studies must be pragmatic, rely on consent, and accept limited findings. As a result, we tend to hear that the effects are small or even inconclusive.

For the public, it's crucial to understand that all research studies have limitations and must be interpreted within the context in which the data was collected. To understand any report, we must scrutinise the details.

Several mechanisms are at play

In recent years, anxiety has been on the rise among children and young people.

Understanding why young people are anxious, depressed or overly focused on themselves is no easy task.

When it comes to the potential negative impact of social media, several mechanisms are at play.

To unpack them, data is needed from many angles: examining mood while online, examining mental health over several years, school relationships, even brain scans, to name just a few.

Despite all this complexity, the public tends to mostly hear about it through splashy headlines.

One example is the "small and inconsistent" result from an umbrella study of several meta-analyses totalling 1.9 million children and teenagers.

However, it's important to recognise this umbrella study included many research papers from an earlier time when researchers couldn't measure social media use as accurately as they can now.

One influential data set asked people to leave out time spent "interacting with friends and family" when they estimated their time on social media.

Yet in 2014-15, sharing photos, following accounts, and interacting with people you knew were the main uses of social media.

The findings appeared within a larger study a few years later, resulting in one headline that stated: "screen time may be no worse for kids than eating potatoes".

With so many sources of error, it's no wonder there is vigorous debate among researchers over the extent of social media harm. Limitations are par for the course.

Worse, researchers are often not given full access to data from social media companies. That's why we need to pay more attention to big tech whistleblowers who have inside access.

Meanwhile, these companies do have access to the data. They use it to exploit human nature.

Focusing on debates between researchers is a misdirection and makes us complacent. There is enough evidence to demonstrate excessive social media use can be harmful to young people.

Here's what the evidence shows

One argument you may hear a lot is that it's not clear whether depression and anxiety cause higher screen time, or higher screen time causes more depression and anxiety.

This is known as a bidirectional effect - something that goes both ways.

But that's no reason to ignore potential harms. If anything, bidirectional effects matter more, not less, because factors feed into one another. Unchecked, they cause the problem to grow.

Harms of social media use are shown in studies that examine the effects of sharing selfies, the impact of algorithms, influencers, extreme content, and the growth in cyberbullying.

Social media activates envy, comparisons and fear of missing out, or FOMO. Many teens use social media while procrastinating.

It is through these mechanisms that the links to depression, anxiety, low self-esteem and self-harm become clear.

Finally, until the age of 16, increased time on social media is associated with feeling less satisfied with appearance and school work.

There is also reliable evidence that limiting social media use reduces levels of anxiety, depression and FOMO in 17- to 25-year-olds. We ignore this evidence at our peril.

The evidence is sufficient

Understanding the intricacies of how every aspect of modern life affects mental health will take a long time.

The work is difficult, particularly when there is a lack of reliable data from tech companies on screen time.

Yet there is already enough reliable evidence to limit children's exposure to social media for their benefit.

Instead of debating the nuances of research and levels of harm, we should accept that for young people, social media use is negatively affecting their development and their school communities.

In fact, the government's proposed ban of children's social media use has parallels with banning phones in schools.

In 2018, some critics argued that "banning smartphones would stop children gaining the knowledge they needed to cope online".

Yet evidence now shows that smartphone bans in schools have resulted in less need for care around mental health issues, less bullying, and academic improvements - the latter especially for socio-economically disadvantaged girls.

It's time to agree that the harms are there, that they are damaging our community, and that we need strong, thoughtful regulation of social media use in young people.

  • First published in The Conversation

  • Danielle Einstein is Adjunct Fellow, School of Psychological Sciences, Macquarie University
Australia will impose a 'digital duty of care' on tech companies to reduce online harm. It's a good idea - if it can be enforced

CathNews New Zealand, 18 November 2024
https://cathnews.co.nz/2024/11/18/australia-will-impose-a-digital-duty-of-care-on-tech-companies-to-reduce-online-harm-its-a-good-idea-if-it-can-be-enforced/

In an escalation of its battle with big tech, the federal government has announced it plans to impose a 'digital duty of care' on tech companies to reduce online harms.

The announcement follows the government's controversial plans to legislate a social media ban for young people under 16.

The plans also include imposing tighter rules on digital platforms such as Google, Facebook, Instagram, X and TikTok to address misinformation and disinformation.

In a speech last night, Minister for Communications Michelle Rowland explained why the government was planning to introduce a digital duty of care:

What's required is a shift away from reacting to harms by relying on content regulation alone, and moving towards systems-based prevention, accompanied by a broadening of our perspective of what online harms are.

This is a positive step forward and one aligned with other jurisdictions around the world.

What is a 'digital duty of care'?

Duty of care is a legal obligation to ensure the safety of others. It isn't limited to just not doing harm; it also means taking reasonable steps to prevent harm.

The proposed digital duty of care will put the onus on tech companies such as Meta, Google and X to protect consumers from harm on their online platforms.

It will bring social media platforms into line with makers of physical products, which already have a duty of care to take reasonable steps to ensure their products don't harm users.

The digital duty of care will require tech companies to regularly conduct risk assessments to proactively identify harmful content.

This assessment must consider what Rowland called "enduring categories of harm", which will also be legislated. Rowland said these categories could include:

  • harms to young people
  • harms to mental wellbeing
  • the instruction and promotion of harmful practices
  • other illegal content, conduct and activity.

This approach was recommended by the recent review of the Online Safety Act.

It is something that is already in effect elsewhere around the world, including in the United Kingdom as part of the Online Safety Act and under the European Union's Digital Services Act.

As well as placing the onus on tech companies to protect users of their platforms, these acts also put the power to combat harmful content into the hands of consumers.

For example, in the EU consumers can submit online complaints about harmful material directly to the tech companies, who are legally obliged to act on these complaints.

Where a tech company refuses to remove content, users can complain to a Digital Services Coordinator to investigate further.

They can even pursue a court resolution if a satisfactory outcome cannot be reached.

The EU act sets out that if tech companies breach their duty of care to consumers, they can face fines of up to 6% of their worldwide annual turnover.

The Human Rights Law Centre in Australia supports the idea of a digital duty of care. It says "digital platforms should owe a legislated duty of care to all users".

Why is it more appropriate than a social media ban?

Several experts - including myself - have pointed out problems with the government's plan to ban people under 16 from social media.

For example, the "one size fits all" age requirement doesn't consider the different levels of maturity of young people.

What's more, simply banning young people from social media just delays their exposure to harmful content online.

It also removes the ability of parents and teachers to engage with children on the platforms and to help them manage potential harms safely.

The government's proposed "digital duty of care" would address these concerns.

It promises to force tech companies to make the online world safer by removing harmful content, such as images or videos which promote self-harm.

It promises to do this without banning young people's access to potentially beneficial material or online social communities.

A digital duty of care also has the potential to address the problem of misinformation and disinformation.

The fact Australia would be following the lead of international jurisdictions is also significant.

This shows big tech there is a unified global push to combat harmful content appearing on platforms by placing the onus of care on the companies instead of on users.

How will it be enforced?

The Australian government says it will strongly enforce the digital duty of care. As Minister Rowland said last night:

Where platforms seriously breach their duty of care - where there are systemic failures - we will ensure the regulator can draw on strong penalty arrangements.

Exactly what these penalty arrangements will be is yet to be announced.

So too is the method by which people could submit complaints to the regulator about harmful content they have seen online and want to be taken down.

A number of concerns about implementation have been raised in the UK. This demonstrates that getting the details right will be crucial to success in Australia and elsewhere.

For example, defining what constitutes harm will be an ongoing challenge and may require test cases to emerge through complaints and/or court proceedings.

And as both the EU and UK introduced this legislation only within the past year, the full impact of these laws - including tech companies' levels of compliance - is not yet known.

In the end, the government's turn towards placing the onus on the tech companies to remove harmful content, at the source, is welcome. It will make social media platforms a safer place for everyone - young and old alike.

  • First published in The Conversation
  • Lisa M. Given is Professor of Information Sciences & Director, Social Change Enabling Impact Platform, RMIT University
