Tech Giants Face Downing Street Grilling Over Child Safety Online

April 13, 2026 · Malin Penland

Social media executives from Meta, Snap, YouTube, TikTok and X have been summoned to Downing Street on Thursday for a crucial meeting with Prime Minister Sir Keir Starmer and Technology Secretary Liz Kendall over children’s safety online. The tech bosses will face questioning about what measures they are taking to safeguard young people and address parental concerns, as the government continues its consultation on whether to ban social media for under-16s, in line with Australia’s approach. Sir Keir has stressed that the meeting will centre on ensuring “social media companies accept and demonstrate responsibility”, warning that “the consequences of failing to act are stark” and that the government has a duty to parents and the next generation to put children’s safety first.

The Downing Street Face-off

Thursday’s meeting represents a critical moment in the government’s push to hold tech giants accountable for their part in safeguarding vulnerable young users. The gathering comes at a pivotal juncture, with Parliament having rejected calls for a complete ban on social media for under-16s just hours earlier, despite backing from the House of Lords. Instead of implementing a blanket prohibition, MPs voted to give ministers the authority to introduce their own restrictions, signalling the government’s preference for a tailored regulatory approach over a sweeping legislative ban.

The timing of the summit reflects the government’s determination to appear firm on online safety whilst navigating complex commercial and political pressures. Professor Gina Neff of the University of Cambridge’s Minderoo Centre for Technology and Democracy suggested the summit allows the government to show it is taking action on online harms. Downing Street has previously acknowledged that some platforms have made progress, introducing steps such as turning off autoplay for children by default and giving parents greater control over screen time, though critics maintain far more must be done.

  • Tech executives questioned about safeguarding measures and responses to parental concerns
  • Government consulting on a ban on social media for under-16s, following Australia’s example
  • MPs rejected an outright ban but granted ministers powers to impose their own restrictions
  • Some platforms have already introduced measures such as disabling autoplay for younger users

Parliamentary Rejection and the Broader Debate

Wednesday evening’s Commons vote dealt a significant blow to campaigners advocating a complete ban on social media for under-16s, marking the second occasion MPs have rejected such measures despite strong support from the upper chamber. The government’s decision to prioritise ministerial discretion over legislative action reflects a more cautious approach, with ministers arguing that an outright ban would be premature given ongoing policy considerations. This strategy gives ministers room for manoeuvre to design tailored restrictions rather than a sweeping ban that some worry would be hard to enforce and oversee consistently across platforms.

The rejection has intensified debate over whether the UK is doing enough to safeguard its children from digital dangers. Whilst the government maintains that giving ministers authority to introduce tailored rules represents a more practical solution, critics assert this approach lacks the decisive action the situation demands. Recent research from Australia, where a ban on social media for under-16s was implemented in December 2025, shows that over 60 per cent of underage users continue to access platforms regardless, raising serious doubts about the effectiveness of legislative restrictions and suggesting the challenge goes well beyond straightforward bans.

Cross-Party Criticism

The parliamentary decision has drawn sharp criticism from the opposition benches. Conservative shadow education secretary Laura Trott accused Labour MPs of failing parents and children by rejecting the ban, arguing that other nations are acknowledging social media’s dangers whilst the UK falls behind under the current government. Liberal Democrat education spokeswoman Munira Wilson echoed these concerns, asserting that “the time for incremental steps is over” and demanding immediate measures to restrict the most harmful platforms for young users rather than piecemeal regulatory changes.

Australia’s Cautionary Example

Australia’s experience with social media restrictions provides a cautionary case study for policymakers considering similar measures in the UK. When the country implemented its prohibition on social media for those under 16 in December 2025, it was hailed as a significant milestone in safeguarding young users from digital risks. However, new findings from the Molly Rose Foundation have uncovered a troubling picture: more than 60 per cent of young Australians continue to use social media despite the legal ban. This substantial non-compliance rate indicates that legal prohibitions alone may prove insufficient to prevent determined young users from reaching the platforms they want to use.

The Australian results hold considerable implications for the UK’s ongoing policy deliberations. If a similar ban were introduced in Britain, the evidence indicates enforcement would pose formidable challenges, with young people likely to find ways to circumvent age-verification systems and restrictions. The data undermines arguments that a straightforward legal ban is a silver-bullet solution to online safety concerns, instead pointing towards a more comprehensive approach that combines regulatory frameworks, platform accountability, parental oversight tools, and digital literacy education to tackle the risks young people face online.

Key findings and their implications:

  • Over 60 per cent of underage Australians still access social media despite the ban, indicating that legislative prohibitions alone cannot effectively prevent determined young users from accessing platforms
  • The ban introduced in December 2025 has failed to achieve widespread compliance, suggesting enforcement mechanisms remain weak and young people find workarounds to restrictions
  • Blanket bans do not address the underlying appeal of social media to young people, meaning a multi-faceted approach combining regulation, platform accountability, and education is necessary

Leading Specialists Urge Concrete Steps

Child safety advocates and digital rights experts have intensified calls for tech companies to take meaningful action beyond voluntary measures. The Molly Rose Foundation, set up in memory of 14-year-old Molly Russell, who took her own life after viewing harmful content online, has been particularly vocal in calling for structural reform. Rather than pursuing blanket bans that prove difficult to enforce, campaigners argue the focus must shift towards making companies responsible for the algorithms that promote dangerous material to at-risk individuals.

Andy Burrows, head of the Molly Rose Foundation, has emphasised that Thursday’s Downing Street meeting is a pivotal moment for government intervention. The charity has consistently argued that platforms possess the technological means to introduce robust safeguards, yet often prioritise engagement metrics over user wellbeing. Experts stress that real safeguarding requires platforms to redesign their recommendation systems, strengthen moderation practices, and give parents meaningful tools to monitor their children’s online activity effectively.

The Algorithm Issue

At the centre of concerns are the algorithmic systems that determine what content young users see. These systems are engineered to maximise engagement, often promoting sensational, harmful, or addictive content to vulnerable audiences. Reforming them is one of the most pressing challenges in online safety, demanding transparency from platforms about how their recommendation systems operate and what protective measures are in place.

  • Algorithms prioritise engagement over user safety and wellbeing
  • Platforms must be more transparent about how content is recommended
  • Independent audits of algorithmic harm are essential for accountability
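
To make the engagement-versus-safety trade-off concrete, here is a minimal, purely illustrative sketch. It assumes hypothetical Post records with model-predicted engagement and harm-risk scores; real platform rankers are vastly more complex and proprietary, so this shows only the principle that reweighting for safety changes what rises to the top.

```python
# Illustrative sketch only: a toy feed ranker showing why pure
# engagement optimisation tends to surface sensational content.
# All names (Post, predicted_engagement, harm_risk) are hypothetical,
# not any real platform's system.
from dataclasses import dataclass


@dataclass
class Post:
    title: str
    predicted_engagement: float  # e.g. model-predicted watch time or clicks
    harm_risk: float             # e.g. harm-classifier score in [0, 1]


def rank_engagement_only(posts: list[Post]) -> list[Post]:
    # Pure engagement ranking: harmful-but-gripping content can win.
    return sorted(posts, key=lambda p: p.predicted_engagement, reverse=True)


def rank_with_safety_penalty(posts: list[Post], penalty: float = 5.0) -> list[Post]:
    # Safety-weighted ranking: downweights content flagged as risky.
    return sorted(
        posts,
        key=lambda p: p.predicted_engagement - penalty * p.harm_risk,
        reverse=True,
    )


feed = [
    Post("Sensational stunt clip", predicted_engagement=9.0, harm_risk=0.8),
    Post("Revision tips for GCSEs", predicted_engagement=6.0, harm_risk=0.0),
]
print(rank_engagement_only(feed)[0].title)       # stunt clip ranks first
print(rank_with_safety_penalty(feed)[0].title)   # revision tips rank first
```

Even this toy penalty flips the ordering: the sensational clip wins on engagement alone, but the safety-weighted ranker surfaces the benign post first, which is the kind of redesign campaigners are asking platforms to demonstrate.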

What Follows

Thursday’s summit at Downing Street will set the tone for the government’s stance on online child safety in the coming months. Following the meeting, Sir Keir Starmer and Liz Kendall are expected to set out their conclusions and decide whether existing voluntary measures from tech companies are sufficient or whether more robust legal requirements become necessary. The government is still consulting on whether to introduce an Australia-style ban on social media for under-16s, with the outcome of this week’s talks likely to influence the final policy direction.

Ministers have signalled their preference for giving themselves powers to impose restrictions rather than implementing an outright ban, citing concerns about enforceability and impact. However, growing pressure from opposition MPs, child safety groups, and parents suggests ministers will face sustained demands for firmer measures. The coming weeks will be pivotal in determining whether digital platforms can demonstrate a genuine commitment to safeguarding young people or whether Parliament will legislate to compel compliance with tougher safety requirements.