Meta’s journey to Digital Services Act compliance: progress, challenges, and potential fees

Meta’s platforms, Facebook and Instagram, are among the largest online communities in the world. It’s no surprise that the European Commission, tasked with enforcing the Digital Services Act (DSA), is closely monitoring Meta’s actions.

In November 2024, the European Commission fined Meta nearly €800 million under EU competition rules for abusive practices tied to Facebook Marketplace. With more potential penalties on the horizon, this time under the DSA, now is the perfect time to explore Meta’s progress, or lack thereof, on its DSA compliance journey.

In this article, we take a closer look at both sides of the story: the EU Commission’s official requests and Meta’s latest compliance reports.

What is the DSA

The DSA is a landmark regulation by the European Union that entered into force on November 16, 2022, with the aim of creating a safer online space for everyone.

This regulation protects users’ rights while ensuring fair competition for businesses of all sizes to thrive among their audiences.

The DSA’s scope is extensive. It applies not only to companies based in the EU or with branches there but also to any business offering digital services to EU users, regardless of location.

The DSA covers all types of internet intermediaries, such as UGC platforms, search engines, social networks, e-commerce platforms, hosting providers, and other online services.

The DSA focuses on moderating illegal content (including copyright violations), ensuring transparency, and promoting algorithms that are safe for minors. You can find a summary of all the requirements here.

The Digital Services Act (DSA) introduces a tailored set of obligations, placing greater responsibility on larger and more complex online services.

Platforms with at least 45 million monthly active users in the EU are classified as Very Large Online Platforms (VLOPs) or Very Large Search Engines (VLOSEs), requiring them to meet stricter compliance standards.

According to the European Commission, VLOPs include popular platforms such as Alibaba AliExpress, Amazon Store, Apple App Store, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Pornhub, Snapchat, Stripchat, TikTok, Twitter (X), XVideos, Wikipedia, YouTube, and Zalando. Very Large Search Engines include Bing and Google Search.

The DSA’s rules officially came into full force on February 17, 2024.

The official requests to Meta

Every time the European Commission issues an official request to a platform under the DSA, it also publishes a press release. This transparency allows us to track Meta’s compliance journey and identify the key challenges faced by the tech giant.

19 October 2023

The European Commission formally requested Meta to provide information on its compliance with the DSA. The request focused on measures to address risks related to illegal content, disinformation, election integrity, and crisis response following the October 2023 terrorist attacks in Israel.

Meta was required to respond by October 25, 2023, for crisis-related queries and by November 8, 2023, for election-related issues. Failure to provide accurate and timely responses could have led to penalties or formal proceedings under the DSA.

As a designated Very Large Online Platform, Meta was obligated to meet additional DSA requirements, including proactively addressing risks linked to illegal content and protecting fundamental rights.

10 November 2023

Less than a month later, the European Commission formally requested Meta to provide information on its measures to protect minors, including risk assessments and mitigation strategies addressing mental and physical health risks linked to minors’ use of its services.

Meta was required to respond by December 1, 2023. The Commission planned to evaluate the replies to determine the next steps, potentially leading to formal proceedings under Article 66 of the DSA.

Under Article 74(2) of the DSA, the Commission could impose fines for incomplete or misleading information. Failure to reply by the deadline could have resulted in further penalties or periodic payments.

1 December 2023

The third official request didn’t take long. The European Commission formally requested Meta to provide additional information on Instagram’s measures to protect minors, including handling self-generated child sexual abuse material (SG-CSAM), its recommender system, and the amplification of potentially harmful content.

18 January 2024

The European Commission formally requested Meta to provide information under the Digital Services Act (DSA) regarding its compliance with data access obligations for researchers. This requirement ensures researchers have timely access to publicly available data on platforms like Facebook and Instagram, fostering transparency and accountability, especially ahead of critical events like elections.

Meta, along with 16 other Very Large Online Platforms and Search Engines, was required to respond by February 8, 2024. The Commission planned to evaluate the replies to determine further steps.

1 March 2024

This time the request from the European Commission focused on the «Subscription for No Ads» options on Facebook and Instagram, including Meta’s compliance with obligations related to advertising practices, recommender systems, and risk assessments.

This notice also revisited topics from earlier requests sent since October 2023, such as terrorist content, election-related risks, the protection of minors, shadow-banning practices, and the launch of Threads. Meta was asked to elaborate on its risk assessment methods and mitigation measures.

Meta had deadlines of March 15 and March 22, 2024, to respond. 

30 April 2024

The European Commission opened formal proceedings against Meta for potential violations of the DSA.

The investigation targeted Meta’s policies on deceptive advertising, political content, and its removal of CrowdTangle, a key tool for real-time election monitoring. This raised concerns about transparency and the platform’s impact on democratic processes ahead of the European elections in June 2024. The Commission also questioned Meta’s «Notice-and-Action» system for flagging illegal content and its internal complaint-handling process, suspecting that these mechanisms were not user-friendly or compliant with DSA standards.

The case followed previous information requests and Meta’s September 2023 risk assessment report. If proven, these failures could lead to significant penalties of up to 6% of Meta’s global annual turnover.

16 May 2024

The European Commission opened formal proceedings to investigate if Meta, the company behind Facebook and Instagram, breached the DSA in protecting minors.

The Commission was concerned that Facebook and Instagram’s systems and algorithms might contribute to behavioral addiction and ‘rabbit-hole’ effects for children. There were also worries about the effectiveness of Meta’s age-assurance and verification methods. The third main area of concern was the default privacy settings for minors on Facebook and Instagram.

The investigation was triggered by a preliminary review of Meta’s September 2023 risk assessment report, responses to previous information requests, and other publicly available data.

16 August 2024

The Commission asked Meta for details about its compliance with DSA rules, focusing on researcher access to public data on Facebook and Instagram. It also requested information about plans to update its election and civic monitoring tools. Meta was specifically asked to explain its content library and API, including how access is granted and how they work.

This request came after formal proceedings began on April 30, 2024, due to the lack of effective election monitoring tools and limited access to public data for researchers. In response, Meta launched real-time dashboards in CrowdTangle in May to help with civic monitoring before the European Parliament elections, but these features were later removed.

Responses from Meta

As a VLOP, Meta faced the added responsibility of financially supporting the enforcement of the DSA. In 2024, the EU planned to collect approximately €45 million from major online platforms to fund initiatives such as removing illegal content and improving child protection online. Platforms with over 45 million EU users were required to contribute, with the levy capped at 0.05% of their annual profit.

Shortly after the DSA took full effect in February 2024, Meta launched legal action against the European Union, challenging the financial levy. Meta argued that the system was unfair, with certain companies bearing a disproportionate share of the burden. For instance, Meta’s contribution for 2024 was €11 million—nearly a quarter of the total levy.

Meta’s legal case highlighted its belief that the calculation method placed an inequitable strain on some platforms, fueling a broader debate about the fairness of the DSA’s funding model.

Meta’s compliance

On November 28, 2024, Meta shared its progress in implementing the Digital Services Act (DSA), summarizing key reports, including Transparency Reports, Systemic Risk Assessment Results, Independent Audit Reports, and Audit Implementation Reports for both Facebook and Instagram.

According to these reports, Meta made significant strides in 2023 to meet the EU’s DSA requirements. A dedicated team of over 1,000 people worked to improve transparency and enhance user experiences on Facebook and Instagram, introducing measures to address emerging risks, such as those posed by Generative AI.

Meta tracked its progress through detailed reports, supported by a team of 40 specialists who devoted over 20,000 hours to the audit process, alongside contributions from thousands of additional team members. An independent audit found Meta fully compliant with over 90% of the 54 sub-articles assessed; the rest required only minor adjustments, with no instances of full non-compliance identified.

Some improvements were already rolled out, such as enhanced context in the Ad Library by April 2024 and new features in Facebook Dating by February 2024. Meta is actively addressing other audit recommendations, including breaking down content moderation efforts in future Transparency Reports.

Meta’s commitment to DSA objectives underscores its dedication to fostering safe, transparent, and innovative online spaces. The audit results and ongoing improvements reflect Meta’s focus on user safety and accountability, with future audits planned to further enhance its systems.

As WebKyte specializes in content moderation, let’s take a closer look at the content moderation practices of Facebook and Instagram.

Content moderation measures on Facebook

Meta reported on its content moderation practices for Facebook, highlighting its blend of human oversight and automated tools. The company removed millions of violating posts and accounts daily using technology designed to detect, restrict, and review harmful content, including content related to terrorism and self-harm. AI and matching tools helped identify policy violations, and automated ad reviews ensured compliance with advertising standards.

Human moderators received specialized training and used tools to aid decision-making, with metrics showing the volume of content removed or demoted between April and September 2024. Over 49 million pieces of content were removed, with automation accounting for most actions. The report also detailed account restrictions and service terminations, underscoring Meta’s proactive moderation strategy.

Content moderation measures on Instagram

Meta has implemented similar content moderation tools and principles on Instagram. Automated tools include rate limits to curb bot activity, matching technology to identify repeated violations, and AI to enhance human review by detecting new potential violations. In ads, automated checks ensure compliance with advertising policies before approval.

Human reviewers receive training and specialized resources for content moderation, including tools that highlight slurs and references to dangerous organizations and tooltips with word definitions.

Between April and September 2024, Instagram removed over 12 million pieces of violating content in the EU, with automation handling most actions. The platform also demoted over 1.5 million pieces of content to limit visibility, including adult and graphic content, hate speech, and misinformation. Additionally, over 16 million advertising and commerce-related content removals were carried out, and nearly 18 million accounts were restricted.

DSA compliance with ContentCore

For platforms other than giants like Facebook, Instagram, and Threads, meeting the content moderation requirements of the DSA can be especially challenging.

Building and maintaining human moderation teams, training, and developing in-house software all require continuous support and resources.

For platforms dealing with user-generated videos, ContentCore offers a seamless solution. This ready-to-use tool identifies copyrighted and known harmful video content, automatically scanning every upload to detect violations without disrupting the experience for users and creators.

Summary

The European Commission has made it clear that compliance with the DSA is non-negotiable, and Meta has become a prime example of its enforcement efforts. Over the past year, the Commission has escalated its pressure on the tech giant, issuing repeated requests and even initiating formal proceedings. This exchange highlights not only the complexity of adhering to DSA regulations but also the amount of resources required to ensure compliance.

For platforms without the vast resources of Meta, achieving compliance can seem daunting. However, ready-to-use content moderation solutions, such as automated tools for identifying known harmful or copyrighted video content, can offer an accessible path forward. These tools enable platforms to meet regulatory obligations while minimizing the strain on their internal teams, bridging the gap between compliance and operational efficiency.

This dynamic underscores the need for scalable, efficient solutions as both the EU and platforms navigate the evolving digital regulatory landscape.

Curious about how other platforms are handling similar challenges? Check out our blog post for more insights.

The European Commission vs. social media and video platforms: the Digital Services Act in action

With the explosive growth of online platforms over the past two decades, it was only a matter of time before authorities stepped in to regulate these services. As a result, we have the EU Digital Services Act (DSA), introduced by the European Commission to set new standards for content moderation and transparency. Now, more than two years later, we can observe how platforms are adapting to this new regulatory landscape.

In this article, we dive into the basics of DSA compliance, platforms’ obligations, and the notices issued by the European Commission to uncover the key challenges faced by online services.

As a provider of automatic content recognition for social media and video platforms, we at WebKyte primarily focus on the challenges of these types of platforms.

The Digital Services Act explained

The DSA is a European Union regulation, enacted on November 16, 2022, that is aimed at establishing a safer online environment and setting balances between the interests of users, consumers, and internet intermediaries.

In this safer digital environment, users’ rights are protected while businesses large and small have an equal chance to succeed with their audiences.

The DSA applies directly in all countries of the European Union. What’s more, it applies not only to companies established in Europe or with branches there, but to any company offering digital services to users in the European Union.

The DSA covers all types of internet intermediaries, namely providers of ‘mere conduit’, caching, and hosting services. This includes social networks, video platforms, search engines, e-commerce services, and other online services.

According to the DSA, an online platform is a hosting service that, at the request of a recipient of the service, stores and disseminates information to the public, unless that activity is a minor functionality.

The rules of the DSA apply from 17 February 2024.

The DSA requirements

Broadly, the DSA sets out provisions on handling complaints about illegal content, mandatory clauses in user agreements, and transparency and accountability obligations.

Requirements include:

▪️ Provide user-friendly mechanisms to allow users or entities to report illegal content on a platform;

▪️ Prioritise the processing of reports submitted by so-called «trusted flaggers»;

▪️ Share the detailed information about the reasons with users when their content is restricted or removed;

▪️ Provide features for users to appeal content moderation decisions within a platform;

▪️ Quickly inform law enforcement authorities if platforms become aware of any information giving rise to a suspicion that a criminal offence involving a threat to the life or safety of a person has taken place, is taking place or is likely to take place;

▪️ Redesign their UX/UI elements to ensure a high level of privacy, security, and safety of minors;

▪️ Ensure that the interfaces are not designed in a way that deceives or manipulates the users, no dark patterns are allowed;

▪️ Clearly flag ads on the interface;

▪️ Stop showing targeted ads based on sensitive data (such as ethnic origin, political opinions or sexual orientation), or targeted at minors;

▪️ Have clearly written and easy-to-understand terms and conditions and act in a diligent, objective and proportionate manner when applying them;

▪️ Publish yearly transparency reports on their content moderation processes and results.

One more requirement set out by the DSA is that online platforms have to submit information about their users upon the authorities’ requests. Platforms shall notify such users about the received request.

VLOPs as the main targets

The DSA also provides a comprehensive set of obligations, where more complex and larger services have more responsibilities.

Thus, platforms with at least 45 million monthly active users in the European Union are deemed very large online platforms (VLOPs) or very large search engines (VLOSEs), and they must comply with additional obligations.

According to the European Commission Press Corner, Very Large Online Platforms are: Alibaba AliExpress, Amazon Store, Apple AppStore, Booking.com, Facebook, Google Play, Google Maps, Google Shopping, Instagram, LinkedIn, Pinterest, Pornhub, Snapchat, Stripchat, TikTok, Twitter (X), Xvideos, Wikipedia, YouTube, Zalando, and Very Large Online Search Engines are Bing and Google Search. 

Additional obligations for the VLOPs

Mitigate Risks
Implement measures to prevent illegal content (e.g., copyright infringements) and rights violations. This includes updating terms of service, user interfaces, content moderation practices, and algorithms as needed.

Assess Risks
Identify and analyze systemic risks related to illegal content and threats to fundamental rights. Submit risk assessments to the European Commission within four months of designation and make them public within one year.

Strengthen Processes
Enhance internal systems, resources, testing, and oversight to effectively detect and address systemic risks.

Undergo Audits
Ensure that risk assessments and compliance with the DSA are externally and independently audited on an annual basis.

Share Ad Data
Publish public repositories of all advertisements served on their platforms.

Provide Data Access
Grant researchers, including vetted ones, access to publicly available data to ensure transparency and accountability.

Increase Transparency
Publish biannual transparency reports covering content moderation and risk management, along with annual reports on systemic risks and audit results.

Appoint Compliance Teams
Establish dedicated compliance functions to oversee adherence to DSA obligations.

Prioritize Child Safety
Design interfaces, recommender systems, and terms to prioritize children’s well-being, including implementing age verification tools to block minors from accessing pornographic content.

Assess Risks to Minors
Incorporate the impact on children’s mental and physical health into risk assessments.

One key additional responsibility for VLOPs is to financially support the enforcement of the DSA. The EU plans to collect approximately €45 million in 2024 from major online platforms to oversee compliance with the regulation. This funding supports initiatives to remove illegal and harmful content and enhance child protection online. Platforms and search engines with over 45 million EU users are required to share these costs, capped at 0.05% of their annual profit.
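As a rough illustration of how this cap works, here is a minimal sketch with invented figures; the Commission's actual allocation methodology also weighs each provider's share of EU users and is more detailed than this.

```python
# Toy illustration of the DSA supervisory fee cap (0.05% of annual profit).
# The figures below are invented; the real allocation formula set by the
# European Commission is more involved.

def supervisory_fee(user_based_share_eur: float, annual_profit_eur: float) -> float:
    """Return the fee owed: the user-based share, capped at 0.05% of annual profit."""
    cap = 0.0005 * annual_profit_eur
    return min(user_based_share_eur, cap)

# A platform whose user numbers would assign it 11 million euros of the pool,
# but whose annual profit is 10 billion euros, would pay at most 5 million.
print(supervisory_fee(11_000_000, 10_000_000_000))  # 5000000.0
```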

VLOPs against the DSA

Soon after the DSA came into full force in February 2024, Meta and TikTok initiated legal action against the European Union over the financial levy designed to support the enforcement of the DSA.

Meta argues that the levy is inequitable, with some companies bearing a disproportionate share. Meta’s expected contribution for 2024 is €11 million (almost a quarter of the total levy), while TikTok criticized the EU Commission’s calculation method as flawed, though it did not disclose its levy amount.

During summer 2023, Zalando and Amazon filed lawsuits challenging their designation as Very Large Online Platforms under the DSA. Zalando claimed errors in applying the DSA, vague rules, unequal treatment, and disproportionate interference with its rights. Amazon alleged discrimination and violations of fundamental rights tied to requirements like ad repositories and non-profiling recommender options. Amazon also requested interim measures to suspend obligations until the court’s decision. Both cases highlight platform resistance to DSA compliance demands.

On March 1, 2024, Aylo Freesites, Pornhub’s parent company, sued the European Commission over its designation as a «very large platform» under the DSA. Aylo argues this violates principles of fairness and infringes on business freedoms by requiring an ad repository revealing user identities. The company seeks to annul the designation, exclude itself from these obligations, and have the Commission cover legal costs. This case highlights ongoing tensions between platforms and the DSA’s stringent regulations.

UGC platforms under pressure

On 18 January 2024, the European Commission sent formal information requests to 17 VLOPs and VLOSEs under the DSA, including Pinterest, TikTok, Instagram, and Snap. The requests focused on their compliance with providing researchers access to publicly available data, a key requirement for accountability and transparency. This access was crucial for monitoring illegal content, particularly ahead of national and EU elections. The platforms had until 8 February 2024 to respond, after which the Commission would assess further steps.

On 14 March 2024, the European Commission requested information from Bing, Google Search, and six VLOPs, including Facebook, TikTok, Snap, and YouTube, about their measures to address risks from generative AI. The inquiry focused on issues like AI «hallucinations», deepfakes, and automated voter manipulation, as well as impacts on elections, illegal content, fundamental rights, and child protection.

On 2 October 2024, the European Commission requested information from YouTube, Snapchat, and TikTok under the DSA about their recommender systems. The inquiry focused on how these systems influence users’ mental health, spread harmful content, and impact elections, civic discourse, and minors’ safety. Platforms were asked to detail their algorithms, including risks like addictive behavior, content «rabbit holes», and illegal content promotion. Responses were due by 15 November 2024, with potential fines for incomplete or misleading replies and formal proceedings if non-compliance persisted.

The Commission also regularly sends notices to specific platforms. Let’s take a look at why popular social media platforms and video hosting services were questioned by the EU over the past year.

Requests to LinkedIn

On 14 March 2024, the European Commission requested information from LinkedIn under the DSA regarding its compliance with the ban on ads based on profiling using sensitive personal data. LinkedIn was required to respond by 5 April 2024, with potential fines for incomplete or misleading replies.

The European Commission acknowledged LinkedIn’s decision to disable the feature allowing advertisers to target EU users based on their LinkedIn Group membership on 7 June 2024. 

LinkedIn’s move marked a voluntary step toward compliance, and the Commission committed to monitoring its implementation. Commissioner Thierry Breton praised the DSA’s impact, emphasizing its role in driving meaningful change in digital advertising.

Requests to Snapchat

On 10 November 2023, the European Commission requested information from Snap under the DSA about their measures to protect minors online. The inquiry focused on risk assessments and mitigation steps addressing mental and physical health risks, as well as minors’ use of their platforms. Snap was required to respond by 1 December 2023, with potential fines for incomplete or misleading replies. 

It appears the issue was resolved, as there have been no further legal actions or public updates regarding this request.

Requests to Pornhub, Stripchat, and XVideos

On 13 June 2024, the European Commission requested information from Pornhub, XVideos, and Stripchat under the DSA. The inquiry focused on measures to protect minors, prevent the spread of illegal content and gender-based violence, and implement effective age assurance mechanisms. The platforms were also asked to detail their internal compliance structures, including independent teams and compliance officers, to address systemic risks. Responses were due by 4 July 2024, with potential fines or further action for incomplete or misleading replies. These platforms submitted their first risk assessment reports in April 2024, following their designation as Very Large Online Platforms.

On 18 October 2024, the European Commission issued a second request for information under the DSA to Pornhub, Stripchat, and XVideos, focusing on transparency reporting and advertisement repositories. The platforms were asked to clarify their content moderation practices, including court orders, notices, complaint systems, and automated tools. They were also required to detail their content moderation teams’ qualifications and linguistic expertise, as well as the accuracy of their automated systems.

Additionally, the Commission requested improvements to their public ad repositories, citing concerns that they lack the search functionality, multicriteria queries, and API tools required by the DSA. The platforms were required to respond by 7 November 2024 or face potential fines or proceedings for non-compliance. This followed an earlier inquiry into their measures for protecting minors and addressing illegal content.

Requests to X/Twitter

In October 2023, the European Commission formally requested information from X (formerly Twitter) under the DSA, investigating allegations of spreading illegal content and disinformation, including terrorist content, hate speech, and violent material. The inquiry also examined X’s compliance with DSA provisions on handling illegal content notices, complaint processes, risk assessment, and mitigation measures.

As a designated Very Large Online Platform, X has been required to adhere to the full DSA framework since August 2023, addressing risks like disinformation, gender-based violence, threats to public security, and impacts on mental health and fundamental rights.

X was tasked with providing information on its crisis response protocol by 18 October 2023 and addressing broader compliance measures by 31 October 2023. 

On 18 December 2023, the European Commission launched formal proceedings against X/Twitter for suspected breaches of the DSA. The investigation focused on risk management, content moderation, dark patterns, advertising transparency, and researcher data access.

The Commission examined X’s measures to counter illegal content, transparency in ads and data access, and concerns about deceptive design linked to subscription features like Blue checks.

This marked the first formal enforcement under the DSA, three years after its proposal. The proceedings aimed to gather further evidence and determine the next steps but did not prejudge the final outcome.

On 8 May 2024, the European Commission requested detailed information from X under the DSA regarding its content moderation resources and risk assessments related to generative AI.

The inquiry followed X’s latest Transparency report, which revealed a 20% reduction in its content moderation team and a drop in linguistic coverage within the EU from 11 languages to 7. The Commission sought further details on these changes and their impact on X’s ability to address illegal content and protect fundamental rights. It also requested insights into risk assessments and mitigation measures for generative AI’s effects on elections and harmful content.

On 12 July 2024, the European Commission shared its preliminary findings with X, stating that the platform likely breached the DSA in areas related to dark patterns, advertising transparency, and data access for researchers.

The Commission’s investigation involved analyzing internal documents, consulting experts, and working with national Digital Services Coordinators. It identified potential non-compliance with Articles 25, 39, and 40(12) of the DSA, which focus on transparency and accountability in content moderation and advertising.

If confirmed, the Commission could issue a non-compliance decision, imposing fines of up to 6% of X’s global annual revenue and requiring corrective actions.

On the same day 12 July 2024, Elon Musk, X’s CEO, reacted strongly to EU accusations against X for blocking researcher data and flaws in its ad database. He claimed the European Commission proposed an «illegal secret deal» for X to censor speech in exchange for avoiding fines. Musk didn’t elaborate on whether other platforms were involved but soon announced plans to challenge the EU in court, stating, «We look forward to a very public battle in court, so that the people of Europe can know the truth».

The Commission denied all the accusations.

Requests to Telegram

The Commission has been looking to designate Telegram as a very large online platform. Back in May 2024, Telegram was compliant with the DSA’s basic obligations as an intermediary service and even had a dedicated webpage for it.

The EU aims to classify Telegram as a very large online platform, joining the ranks of TikTok, LinkedIn, Pinterest, and others with over 45 million monthly active users in the EU. With Telegram reporting 41 million users in the region back in February 2024, it’s likely just a matter of months before this happens.

Once designated as a very large online platform, Telegram will face additional obligations, such as conducting annual risk assessments and paying an annual fee to the EU, capped at 0.05% of their annual profits for DSA compliance supervision.

Telegram remaining somewhat of a dark horse for the EU seems to add extra pressure, as the Commission seeks more oversight and the ability to intervene more effectively with the platform.

Requests to Meta and TikTok

The European Commission has issued several official requests to Meta and TikTok concerning their DSA compliance. We’ll delve deeper into the notices and their implications for these two major platforms in an upcoming article.

A platform that couldn't comply

The Czech content-sharing platform Ulož decided to change its business model because of the enactment of the DSA.

Ulož was a website that allowed users to upload various files, including music and videos, which other users could then easily download. The problem was that users could upload copyrighted materials without the rightsholders’ permission. Under the DSA, ‘actual knowledge of illegal activity’ is one of the criteria for establishing a platform’s liability.

Thus, Ulož announced that, as of December 1, 2023, it was turning from a file-sharing service into a cloud storage service where users can only keep and download files they have uploaded themselves. As of October 2024, it has almost 40 times less traffic than in October 2022.

How to comply

One of the main focuses of the DSA is the moderation of illegal content, including copyright infringements. For VLOPs, the obligations are not only to remove such content upon notice but also to prevent its upload.

ContentCore by WebKyte is an automated content recognition tool that helps platforms with user-generated content to detect copyright violations and duplicates of known harmful videos. Using advanced video fingerprinting and matching algorithms, ContentCore efficiently scans uploaded videos for copyright issues and duplicates.

Summary

Tensions between the European Commission and online platforms over DSA compliance are rising. The Commission is serious about enforcing the DSA, especially for very large platforms and search engines, with the goal of making the internet safer for all. Platforms need to be proactive and transparent in their cooperation, as the Commission isn’t afraid to take action. Fortunately, practical solutions exist to simplify DSA compliance, particularly for video content moderation on UGC platforms, social media, and hosting services.

Understanding and addressing content piracy on Telegram

Telegram, a platform with nearly 1 billion monthly users and growing, has been making headlines this year. From the EU’s push to classify it as a very large online platform under the DSA to Pavel Durov facing investigation over alleged lack of cooperation with authorities, the spotlight is on.

Originally created as a secure messaging app, Telegram has since expanded into a versatile platform that supports large groups, public and private channels, livestreams, file sharing, and more.

However, alongside its rise has come the challenge of content protection, as some users exploit Telegram’s features to distribute copyrighted material without permission. Discover how people use a complex system of channels, groups, and bots to distribute infringing content, and how to use Telegram copyright detection to scan the platform for illegal copies of your content.

What is Telegram

Launched in 2013, Telegram is one of the world’s largest and fastest-growing online platforms, with nearly 1 billion monthly active users.

Originally a messenger app focused on high-level encryption, Telegram has evolved into much more. It now allows users to:

  • Exchange messages, share media and files
  • Hold private and public livestreams
  • Create public groups with up to 200,000 members
  • Share content with unlimited audiences via channels
  • Make audio and video calls, publish stories, and more


For a platform of this scale, the real challenge is monitoring content effectively. Telegram’s moderation combines AI-powered automated detection systems with a heavy reliance on volunteers. This approach leads to a lack of proactive measures in preventing the distribution of illegal copyrighted material.

On December 12th, 2024, Pavel Durov revealed new details about moderation activities on Telegram. He announced that each month, the moderation team removes 1 million channels and groups, along with 10 million users, for violating the platform’s Terms of Service. To make their efforts in combating criminal content more transparent, the Telegram team launched a dedicated webpage on content moderation. According to the page, a total of 15,432,776 groups and channels have been blocked in 2024.

Video content on Telegram

Telegram supports a variety of video formats such as MP4, MKV, AVI, and MOV. Users can upload videos up to 2 GB, while users with a premium subscription can upload files up to 4 GB. This feature is complemented by optional video compression, which speeds up uploads and downloads while allowing users to opt out and maintain original video quality.

The ability to upload videos is often misused for unauthorized distribution of copyrighted video content. These features require careful management to mitigate potential misuse within the platform.

Copyright infringements on Telegram

To distribute illegal content on Telegram, people use channels, groups, bots, and combinations of these tools.

Public channels

One of the simplest ways to distribute illegal content on Telegram is through public channels. Designed for one-way communication, channels allow users to broadcast messages and media to large audiences.

Public channels are open to anyone and frequently share pirated content, ranging from movies to TV shows, drawing in large audiences. Even without subscribing, users can view and consume content from these channels.

These channels are often well-organized, featuring pinned content libraries, hashtags, and built-in search functions for easy navigation.

Here’s an example of public channels distributing pirated content:

Piracy on Telegram

For those distributing illegal content, public channels pose a significant risk. Since they are visible to all Telegram users, these channels are easier to detect, report, and shut down. To mitigate this risk, many opt to create private channels instead.

Private channels

Private channels are limited to invited members and operate under the radar. With these channels, users can share pirated content more discreetly and evade detection by authorities or automated systems. 

Here’s an example of a private channel distributing pirated content:

Piracy on Telegram | Private Chat

Private channels on Telegram typically require an admin to invite new members, as they aren’t open to the public. Admins can manually add members by sharing an invitation link, or they can streamline the process using a specialized bot. By using a bot, admins can maintain their anonymity and avoid potential legal consequences.

Bots

Bots are applications that run entirely within the Telegram app. Users interact with bots through flexible interfaces that can support different tasks or services.

Telegram bots give access to video content, including pirated materials. Some bots serve merely as tools, helping users join private channels or navigate free content. Others function as intricate catalogs, organizing and providing access to a wide array of pirated media.

In order to use bots, users may need to pay or subscribe to a list of affiliate channels. 

Bots connected to private channels

Function: These bots help users gain access to private channels or groups that distribute pirated content.

Mechanism: They often require users to join partner channels or perform specific actions (like sharing links or inviting other users) as a form of payment or qualification for access.

Purpose: The bot automates the process of verifying that users have complied with the requirements before granting access to the desired content.

Bots with catalogs

Function: These bots act as searchable databases for pirated content.

Features: Users can input search queries, and the bot responds with videos to stream pirated media such as movies and TV shows.

User Interface: Typically, these bots offer a menu-based system or accept commands that let users browse categories or directly search for specific titles.

Here is an example of a bot with a catalog. Using this bot, people can select quality, language, subtitles, and other settings before watching the video right in Telegram.

Piracy on Telegram | Bots with catalogs

Groups

Telegram also features Groups, group chats that can accommodate up to 200,000 members, making them suitable for communities of any size to communicate. As you might expect, groups are also used for the distribution of illegal content.

However, this method is less popular because it is significantly more challenging to create clear navigation within Groups, where all messages and media are mixed together in the same space.

Yet sometimes groups are created solely for the illegal distribution of content. You can see an example below: the group has subthreads for ongoing anime titles with full-length episodes available.

Combination

To remain under the radar while still attracting followers, admins of illegal channels and bots often combine various Telegram tools.

For example, the distribution chain can look like this:

1. Public Channel: This channel features movie announcements, news, memes, and other relevant content, but does not share any full-length copies. Some posts in the public channel promote a bot.

2. Bot: Connected to the public channel, the bot can serve as a comprehensive catalog of videos or act as a bridge to a private channel that contains all the available videos.

By using this approach, administrators take advantage of the exposure and searchability of public channels while ensuring the security offered by bots and private channels.

Now that we understand the potential methods of content distribution, let’s address another important question: How do people find it?

How users access pirated content

The methods users employ to find and consume illegal videos on Telegram can be categorized into two groups: internal and external.

Internal ways

The internal tools for users to find illegal content on Telegram include:

Telegram Search

The built-in search feature allows users to search for public groups and channels using keywords. This helps locate content related to specific interests. For example, users might enter the latest movie or TV show titles, or general terms like “watch movies.”


Telegram Ads

Users may encounter ads promoting channels, groups, or bots that offer illegal content. These ads often appear in public channels related to movies, entertainment, or news.

External ways

Telegram’s built-in search functionality doesn’t always effectively surface all available content, prompting users to seek alternative methods. As a result, many turn to third-party tools and websites that offer more comprehensive search options for Telegram channels and groups.

Telegram directories

Tools like TGstat provide searchable databases of Telegram groups and channels. Users can browse by categories such as “Video & Films” or use specific keywords to find groups that align with their interests.

Search engines

Users can turn to search engines like Google to find specific Telegram groups by entering relevant keywords associated with their interests or specific group names. Search engines may index public Telegram group links if they are shared on accessible web pages, making them discoverable directly through a search query.

Social media platforms

Telegram group links are shared across various social media platforms, including X (formerly Twitter), Facebook, Reddit, Quora, and YouTube. These platforms host communities and discussions about ways to consume copyrighted content for free, so links to Telegram groups can be found there.


Telegram copyright policy

Telegram’s FAQ explains that the platform reviews reports about illegal content shared through sticker sets, bots, public channels, and groups. However, private channels and groups are off-limits, as they’re considered private among the participants. 

After receiving a notice about infringement, Telegram’s team performs the necessary legal checks and takes the content down if the infringement is confirmed.

Takedown notices can be submitted only by copyright owners or their authorized agents. 

Check Telegram for illegal copies of your content with Telegram Scan

Telegram Scan is a video search engine created by WebKyte. It is based on proprietary video fingerprinting technology that enables rapid identification of all copies of a video file on selected platforms, regardless of audio, metadata, quality, or distortions.

Telegram Scan detects copies of a video based on a digital video fingerprint. The principle of automatic video identification is similar to that of YouTube Content ID. The entire technology behind the tool is proprietary and is developed by WebKyte engineers.
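Since the technology itself is proprietary, the sketch below is only a simplified illustration of the general principle behind fingerprint-based matching: sample frames, reduce each one to a compact perceptual hash, and compare hash sequences instead of files. The libraries (opencv-python, Pillow, imagehash), the hash choice, and the thresholds are assumptions made for the example, not part of Telegram Scan.

```python
# Simplified illustration of fingerprint-based video matching, not WebKyte's
# proprietary algorithm. Requires: pip install opencv-python pillow imagehash
import cv2
import imagehash
from PIL import Image

def video_fingerprint(path: str, every_n_seconds: float = 1.0) -> list:
    """Sample roughly one frame per interval and reduce it to a 64-bit perceptual hash."""
    cap = cv2.VideoCapture(path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    step = max(1, int(round(fps * every_n_seconds)))
    hashes, index = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % step == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        index += 1
    cap.release()
    return hashes

def looks_like_a_copy(fp_a: list, fp_b: list, max_bit_distance: int = 10) -> bool:
    """Treat two videos as a match when most aligned frame hashes are close.

    Small Hamming distances mean visually similar frames, regardless of the
    audio track, file name, or metadata of the uploaded copy.
    """
    length = min(len(fp_a), len(fp_b))
    if length == 0:
        return False
    close = sum(1 for a, b in zip(fp_a[:length], fp_b[:length]) if a - b <= max_bit_distance)
    return close / length > 0.8  # threshold picked arbitrarily for the sketch

# original = video_fingerprint("original.mp4")        # placeholder file names
# suspect = video_fingerprint("telegram_upload.mp4")
# print(looks_like_a_copy(original, suspect))
```

A production system also has to handle temporal offsets, trimming, and fast lookups across millions of reference fingerprints, which is where most of the engineering effort goes.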

How to check Telegram for illegal copies of your content with the Telegram Scan:

1. Upload video fingerprints of your titles without sharing any files with us

2. Run the search

3. Get the list of detected copies with clickable links, number of impressions, date of upload, and other data 

Telegram Scan Example

With WebScan, you can check any number of titles for illegal copies on Telegram and other online platforms.

When infringing content is detected, the next step is to get it removed through takedown notices. While Telegram’s volunteer teams handle notices, processing can take weeks. The WebKyte team excels at quickly getting unauthorized channels and content suspended on Telegram, protecting your copyrighted material from illegal distribution.

Summary

Piracy of video content on Telegram, including movies and TV shows, poses an increasing challenge for rights holders. An intricate network of channels, bots, and groups makes the distribution of illegal content harder to detect at scale, undermining official streaming services and causing significant financial losses to the entertainment industry.

With Telegram Scan, rightsholders can keep Telegram free of their content by automatically detecting unauthorized copies.

Video fingerprints vs. audio fingerprints: what to select for video recognition

Fighting piracy would be a breeze if every unauthorized copy came with official metadata and original audio. But pirated media is often distorted, renamed, or redubbed, making it hard to track. That’s where digital fingerprints come in. In this article, we’ll explore whether audio or video fingerprinting is better for identifying illegal video content.

Why compare fingerprints and not video files

When looking for copies of an original video, the first instinct is to compare the original file with potential copies. This involves analyzing the video content, such as frames, pixels, and audio, of two or more videos to determine if they are similar or identical.

While this method may work for comparing two files once, it can require a lot of time and resources when dealing with an entire library. Additionally, using video files for copy detection can compromise security since sharing video files with anti-piracy vendors is necessary.

Furthermore, comparing frame by frame can result in fewer matches, especially when dealing with distorted copies. For example, it’s possible to overlook illicit copies that were mirrored, zoomed, or cropped before the upload.

The good news is that there’s a lightweight, secure, and highly effective solution to overcome the problems of matching files. It’s digital fingerprints.

There are two main types of fingerprints: audio and video. Let’s see what the difference is between the two and which technology to select for copy detection.

Advantages of video fingerprints

Video fingerprinting is an innovative technology that uses digital fingerprints to quickly and accurately identify and compare content on a large scale. A video fingerprint is a compact digital signature that represents a specific video file in a lightweight, secure, and efficient way.

There are several properties of video fingerprints that make them an excellent tool for fast and precise content matching:

  • Lightweight
    Video fingerprints are small in size by design and can be processed more quickly than the entire video file. They require much less storage than storing original files.

  • Use of visual features
    Video fingerprints rely on perceptual features, so fingerprints of visually similar frames end up close to each other. This solves the problem of matching videos that are not exactly identical.

  • Security
    It is not possible to go back from a fingerprint to a video. Fingerprint generation is also a fully secure process that can be set up without sharing original video files.

  • Cost-effective
    With fast and accurate matching, it’s possible to reshape a monitoring team to optimize costs and get more results at the same time.

  • Detection of distorted and renamed copies
    Even if an illegal copy has been renamed, redubbed, or otherwise distorted, it will still be identified by its fingerprint. Such copies are nearly impossible to detect manually, so video fingerprints cover these common cases.

The major downside of video fingerprints is that they are difficult to generate. Few companies have the technology to create and match them at scale. At WebKyte, we have mastered this technology so our customers can automatically fingerprint and scan their entire video library.
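To make the ‘visual features’ and ‘distorted copies’ points above concrete, here is a small sketch assuming the Pillow and imagehash libraries and a placeholder frame.jpg file. A toy perceptual hash for a single frame is easy to compute; the difficulty lies in making fingerprints robust and matchable at library scale, and heavier edits such as mirroring or aggressive cropping need more robust features than this toy hash.

```python
# Byte comparison vs. perceptual comparison of a single frame.
# Requires: pip install pillow imagehash  ("frame.jpg" is a placeholder path)
import io
import imagehash
from PIL import Image

original = Image.open("frame.jpg").convert("RGB")

# Simulate a degraded copy: downscale, upscale back, recompress as low-quality JPEG.
degraded = original.resize((original.width // 2, original.height // 2)).resize(original.size)
buffer = io.BytesIO()
degraded.save(buffer, format="JPEG", quality=35)
degraded = Image.open(io.BytesIO(buffer.getvalue()))

# Byte-for-byte comparison: the files no longer match at all.
print(original.tobytes() == degraded.tobytes())  # False

# Perceptual hashes: only a few of the 64 bits flip, so the frames still match.
print(imagehash.phash(original) - imagehash.phash(degraded))  # typically a small number
```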

Audio fingerprints as a half-measure

Digital audio fingerprints are unique representations of audio and video files that capture certain features, such as rhythm, melody, and tempo. These fingerprints are generated using complex algorithms that extract specific information from an original file and turn it into a condensed, digital signature that can be used to identify and compare audio and video content. 

Digital audio fingerprints have a wide range of potential use cases, including content identification and copyright protection. 

For example, music identification services such as Shazam use them to identify songs playing in the background of a video or on the radio. Similarly, content recognition services can use digital audio fingerprints to identify instances of copyright infringement by comparing audio tracks to a database of legal content.

While audio fingerprints are highly effective for audio content identification, when it comes to video matching there are some limitations to this technology. 

Because audio fingerprints capture only audio information, they miss the visual information in a video file. This limits their effectiveness in identifying certain types of video content. For example, if a copy is redubbed, a very common case in international piracy, it will not be detected using audio fingerprints.

To detect video copies with changed audio, you need either one video fingerprint or a separate audio fingerprint for every dub version. Using audio technology multiplies the number of fingerprints and makes matching slower.
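A quick back-of-the-envelope comparison shows how the numbers scale; the catalogue size and dub count below are invented for illustration.

```python
# Hypothetical catalogue: how many reference fingerprints are needed.
titles = 1_000
dub_versions_per_title = 8   # e.g. different language dubs in circulation

video_fingerprints = titles                           # the visual track is shared
audio_fingerprints = titles * dub_versions_per_title  # one per dub version

print(video_fingerprints, audio_fingerprints)  # 1000 vs 8000 references to match against
```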

Another area where audio fingerprints can produce false positives is content with minimal or repetitive audio, such as fashion shows, sports, and adult videos. In these cases, there may not even be enough distinct features to generate a reliable audio fingerprint.

On top of these limitations, audio fingerprints are not easy to create either. Although more companies provide audio fingerprinting, it remains a half-measure for video identification, as many copies go undetected by this technology.

 

Summary: what to select for video recognition

When it comes to video recognition, both types of fingerprints are commonly used. However, video fingerprints are often considered the better choice for several reasons. 

Video fingerprints are more resilient to changes in audio, video quality, or resolution than audio fingerprints. This makes them more reliable for identifying copies of an original video, even if the copies have been somehow modified.

Using video fingerprinting technology will allow you to detect a higher number of copies, reducing the risks associated with using audio fingerprints. So, if you want to ensure the best results for your video matching needs, video fingerprints are the way to go.

Video Recognition Solution: Make or Buy?

Content recognition software offers video platforms greater control over uploaded content, which can help attract more creators and advertisers. However, the question remains: should you build or buy the software? The answer can have a significant impact on how your video business operates and, ultimately, on your ability to grow.

What is automatic content recognition

Automatic content recognition (ACR) is a technology that enables scalable content identification. Using video recognition software, you cross-check each upload to a platform against a reference database. 

This database includes different content types, such as copyrighted, adult, and criminal content. If an uploaded video matches the reference content, a notification is triggered, allowing the platform to take appropriate action.
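Conceptually, this cross-check is a similarity lookup of each new upload's fingerprint against the reference index, with a notification fired when a match is close enough. The sketch below is a minimal illustration of that flow, assuming fingerprints are sequences of 64-bit frame hashes; a real ACR system would use a purpose-built similarity index and an asynchronous pipeline rather than a linear scan.

```python
# Minimal sketch of an ACR check at upload time (illustrative only).
# A fingerprint is modelled here as a sequence of 64-bit frame hashes.
from typing import Callable, Dict, Sequence

Fingerprint = Sequence[int]

def frame_distance(a: int, b: int) -> int:
    """Hamming distance between two 64-bit frame hashes."""
    return bin(a ^ b).count("1")

def similarity(query: Fingerprint, reference: Fingerprint, max_bits: int = 10) -> float:
    """Share of aligned frames whose hashes are within max_bits of each other."""
    length = min(len(query), len(reference))
    if length == 0:
        return 0.0
    close = sum(1 for q, r in zip(query, reference) if frame_distance(q, r) <= max_bits)
    return close / length

def check_upload(upload_fp: Fingerprint,
                 reference_index: Dict[str, Fingerprint],
                 notify: Callable[[str, float], None],
                 threshold: float = 0.8) -> None:
    """Cross-check one upload against every reference and notify on close matches."""
    for reference_id, reference_fp in reference_index.items():
        score = similarity(upload_fp, reference_fp)
        if score >= threshold:
            notify(reference_id, score)  # e.g. flag for review, block, or monetize

# check_upload(upload_fp=fingerprint_of_new_upload,
#              reference_index={"movie-0042": known_fingerprint},
#              notify=lambda ref, score: print(f"match {ref} ({score:.0%})"))
```

Scanning every reference linearly like this does not scale; production systems index fingerprints so that lookups stay fast as the reference database grows.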

ACR can be implemented in every business driven by user-generated videos:

  • Social media companies and video-sharing platforms;
  • Search engines;
  • Content delivery networks (CDNs);
  • Streaming services;
  • Forums.

How to use ACR to stimulate growth

Video recognition technology provides video platforms with numerous benefits, including improved user experience, greater engagement, and increased revenue. By using a single solution, you can attract more creators, rightsholders, and advertisers to your platform.

 

For Creators

By implementing video recognition technology, you can upgrade your platform’s monetization and analytics systems, providing a more creator-friendly environment. This can attract more artists to your platform, making you stand out from the competition.

For Rightsholders

By respecting copyright laws and protecting intellectual property, you can attract rightsholders to your platform and increase the share of official content. This can turn rightsholders into active users and promote your platform as a safe and trustworthy place for content owners.

For Advertisers

Using video recognition technology, advertisers can control ad placements, improving the performance of their ads and increasing your platform’s ad load.

Another advantage of ACR technology is its cost-effectiveness. Accurate matching results eliminate the need for large moderation teams and reduce the potential for human error. You get more insights while cutting costs.

Building vs. buying

Understanding the potential benefits of ACR for video platforms, the key question remains: should you build or buy the solution?

Building video recognition software in-house may seem appealing, since it offers full control over the result and its compatibility with your business. However, the potential costs, risks, and time involved make in-house development less attractive than purchasing a solution from a company with expertise in the field.

If you’re considering creating your own solution, it’s crucial to take all the possible downsides into account.

It is rocket science

Developing an ACR solution is close to rocket science. It’s a complex and time-consuming process that requires a team of experienced developers and years of R&D. The developers need a deep understanding of algorithms, video content matching, video fingerprinting, and load management.

The work doesn’t end with the development and launch of the software. Testing, maintenance, documentation, and implementation are all critical aspects of the process. ACR is a separate product that requires its own attention and resources. Imagine the process as a marathon, not a quick sprint. 

YouTube is an example of a company with its own content recognition solution, Content ID. The software has been in development since 2007 and has cost more than $100 million to build. With that impressive price tag, it’s no surprise that in-house ACR tools are rare.

The battle for resources

The in-house development of a new product can often lead to a diversion of attention away from your core business, resulting in a slower delivery of new features and less value for your target audience. 

It’s easy to get caught up in the excitement of developing a new product, but you should remember that the core product is what drives your business. 

ACR software may seem like a core product, but it’s important to recognize that it’s only a tool. The true value of ACR software lies in its ability to drive business impact with more creators, rightsholders, and advertisers. 

Purchasing a ready-to-use ACR solution allows you to keep the focus on your main product while increasing your growth as a business. 

Reference database as ACR foundation

If you want to build an Automatic Content Recognition solution, you’ll need a reference database. This database consists of different content types:

  • Copyrighted (movies, TV shows, sports, stand-up shows, concerts, etc.)
  • Criminal (CSAM, TVEC, and NCII: child sexual abuse material, terrorist and violent extremist content, and non-consensual intimate imagery)
  • Adult

The more content you have in your database, the more potential match results you can get for users’ uploads.

Creating a reference database is no small feat. This is another challenging task that adds to the workload of your in-house team.

The good news is that a ready-to-use ACR solution comes with reference databases that have been collected over the years. The vendor has already done the heavy lifting of creating and expanding the database, which means you can start verifying your user-generated content without having to worry about collecting references.

An alternative to building in-house

Developing a content recognition solution from scratch can be a daunting and expensive undertaking. If you don’t have a spare decade and hundreds of millions of dollars lying around, you’re probably better off considering a ready-to-use solution for your video business. 

When selecting an ACR solution, there are several factors to consider:

  • Cost
  • Scalability 
  • Accuracy
  • Compatibility with your platform
  • Reference database


At WebKyte, we understand the importance of these factors. That’s why we’ve developed a ready-to-use ACR solution that ticks every box.

Our product has one of the largest reference databases in the industry and a proven track record of fast, scalable, and flawless video matching. On top of that, it can be customized for virtually any platform. With WebKyte, our current customers gain a better understanding of what’s available on their platforms.

Summary

While the decision to build or buy ACR software ultimately depends on the specific needs and resources of each business, there are many real upsides to using an experienced ACR provider for your video-sharing platform.


  • An ACR solution helps video platforms know their content and bring in more creators, rightsholders, and advertisers;

  • Developing ACR in-house is time- and resource-intensive and takes focus away from your main product;

  • When selecting an ACR solution, pay attention to the reference database, scalability, and accuracy, among other things;

  • WebKyte offers an advanced video recognition solution that can be customized for a specific platform.

Social media content moderation: what is it and how does it work?

In the wide world of social media, every post, tweet, and upload becomes part of a global conversation. Yet not all contributions are helpful or suitable. This calls for a vital process called content moderation. This practice ensures that what you see on social media meets legal rules, community norms, and ethical standards.

What is content moderation in social media?

Content moderation involves checking and managing user-generated posts on social media platforms. The aim is to block harmful posts like hate speech, false info, and explicit content from going public. This work is key to protecting users from bad interactions while still allowing free speech in a controlled way.

Moderation is more than just deleting content. It includes a detailed decision-making process where posts get checked against specific rules. These rules help maintain the platform’s integrity and keep its community safe. For example, one platform might strictly ban any aggressive language, while another might focus on stopping false info.

Moderation also includes proactive steps to create a positive online culture. This might mean boosting content that encourages good interactions and lowering the visibility of content that could cause trouble or upset.

Content moderation is vital for two reasons. First, it keeps the online environment safe, lowering the chance of users coming across or taking part in harmful activities. Second, good moderation protects the platform’s reputation, which is important in a competitive market where users have plenty of choices. Platforms that balance free speech with safety attract and keep more users, helping them grow and succeed.

As the online world changes, content moderation becomes more complex. That’s why companies like WebKyte keep developing better software to meet the shifting needs of social media moderation. Their tools use the latest tech, including AI and machine learning. They help quickly and precisely check and manage huge amounts of videos.

Types of content that require moderation

Social media combines different types of content, such as text, images, videos, and audio. Each type brings unique moderation challenges. Text includes comments, posts, and articles and can carry harmful language or false information. It can also hide more subtle issues like hate speech or harassment within what seems like normal conversation or jokes.

Images and videos might show inappropriate or graphic scenes not clear from text alone. This isn’t just about obvious explicit content but also altered media that can spread lies or cause worry. For example, edited images or deepfake videos might wrongly present facts or pretend to be someone else, posing serious challenges for moderators.

Audio content, growing with podcasts and voice notes, faces similar issues. It can be hard to catch the tone or subtle hints in audio that might be offensive or risky. For instance, sarcasm or hidden meanings are tough to spot. Also, background noise in audio must be checked to make sure nothing inappropriate slips through.

Live streaming requires extra careful moderation. Real-time monitoring is essential since live content goes directly to the audience without any edits. Live streams can quickly go from harmless to inappropriate, so platforms need to act fast to keep up with community standards.

How does content moderation work?

Content moderation on social media combines human oversight and automated technology. At first, automated tools scan and check data using algorithms that pick up patterns of harmful language, images, and other media types. These algorithms can spot not just obvious but also subtle inappropriate content like biased words or altered images and videos.

However, automated systems aren’t perfect, and that’s where human moderators come in. They take over when understanding the context matters, such as cultural subtleties or the intent behind a post—areas where AI might struggle. Human moderators also check content that users have flagged or that automation has marked as borderline for a more detailed review.

Together, automated tools and human moderators form a stronger shield against inappropriate content. This mix allows for quick and accurate moderation that keeps up with new trends and challenges in user-generated content, helping platforms manage their communities effectively.
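To make this division of labor concrete, here is a minimal, illustrative sketch of such a hybrid pipeline in Python. The keyword-based classifier, the thresholds, and the queue names are hypothetical placeholders standing in for a real ML model and review tooling, not any specific vendor’s API.

```python
"""Hypothetical sketch of a hybrid moderation pipeline: automated scoring first,
human review for borderline or user-flagged items."""
from dataclasses import dataclass, field
from typing import List

AUTO_REMOVE_THRESHOLD = 0.90   # confident violations are removed automatically
HUMAN_REVIEW_THRESHOLD = 0.50  # borderline scores go to human moderators

@dataclass
class Post:
    post_id: str
    text: str
    user_flagged: bool = False

def risk_score(post: Post) -> float:
    """Stand-in for an ML classifier; here, a trivial keyword heuristic."""
    banned_terms = {"hate_term", "scam_link"}
    hits = sum(term in post.text for term in banned_terms)
    return min(1.0, 0.6 * hits)

@dataclass
class ModerationQueues:
    removed: List[Post] = field(default_factory=list)
    human_review: List[Post] = field(default_factory=list)
    published: List[Post] = field(default_factory=list)

def moderate(post: Post, queues: ModerationQueues) -> None:
    score = risk_score(post)
    if score >= AUTO_REMOVE_THRESHOLD:
        queues.removed.append(post)          # clear violation: remove automatically
    elif score >= HUMAN_REVIEW_THRESHOLD or post.user_flagged:
        queues.human_review.append(post)     # borderline or flagged: a human decides
    else:
        queues.published.append(post)        # low risk: publish

# Usage
queues = ModerationQueues()
moderate(Post("1", "A normal holiday photo caption"), queues)
moderate(Post("2", "Buy now via this scam_link"), queues)
moderate(Post("3", "Ambiguous joke", user_flagged=True), queues)
print(len(queues.published), len(queues.human_review), len(queues.removed))  # 1 2 0
```

The key design point is the middle band: anything the automated score cannot decide confidently, plus everything users flag, is routed to people rather than being auto-published or auto-removed.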

Automated versus human moderation

The world of content moderation on social media is shaped by two main forces: automated tools and human judgment. Both are crucial for keeping social media platforms clean and respectful. Automated tools use algorithms and machine learning to quickly go through huge amounts of content, spotting clear rule breaks like explicit images or banned words. These tools are great for their speed and ability to handle big data loads, which is essential given the constant stream of new content.

Yet, these automated systems aren’t perfect. They often miss the context and subtleties of language, like irony, satire, or cultural references. This is where human moderators come in. They add critical thinking and cultural awareness to the mix. Human moderators are key for sorting out complex situations where automated systems might not get it right. They pick up on subtle hints and make important calls on content that machines might misunderstand.

The cooperation between these two approaches leads to a more balanced and detailed moderation system. Automation takes care of the straightforward tasks, freeing up resources, while humans handle the more intricate issues. This ensures that moderation is not only efficient but also culturally sensitive and fair.

Content moderation tools and technologies

In the field of content moderation, various tools and technologies are essential for addressing the challenges of different types of data. Key among these technologies are Artificial Intelligence (AI) and machine learning algorithms, which have transformed how platforms handle user-generated content.

AI systems learn from vast datasets to spot patterns and oddities in text, images, and videos. For example, image recognition algorithms identify inappropriate content by comparing it to previously flagged images, while natural language processing (NLP) tools scan text for harmful language. These systems are always learning and getting better, which boosts their accuracy and efficiency.

Machine learning is vital in improving these processes. It learns from previous moderation actions, which helps predict and spot content that might break guidelines. Further, developments in deep learning have enhanced the way multimedia content is understood and processed, allowing for immediate analysis and decisions.

Other technologies include digital fingerprinting, which tracks and stops the spread of known illegal content, and automation workflows. These workflows help streamline the moderation process by automatically sorting and directing content based on its risk level.

Best practices in content moderation

Effective content moderation strikes a delicate balance between safeguarding user freedom and ensuring a safe online environment. Here are some best practices that can guide platforms in achieving this balance:

1. Transparency: Platforms should communicate their content policies to users, explaining what is allowed and why certain content may be removed. This transparency helps build trust and understanding between users and the platform.

2. Consistency: Consistency in applying moderation rules is key to fairness. All users should be subject to the same rules, applied in the same way, to prevent any perceptions of bias or unfair treatment.

3. Accuracy: Improving the accuracy of both automated tools and human judgments minimizes errors such as wrongful content removal or overlooked violations, which can significantly impact user experience.

4. Timeliness: Quick response times in moderation are crucial, especially when dealing with harmful content that can spread rapidly online. Efficient processes and effective use of technology can help achieve this.

5. Appeals Process: Users should have the opportunity to appeal moderation decisions, providing a feedback mechanism that can help refine and improve moderation practices.

6. Support for Moderators: Human moderators perform stressful and sometimes traumatic work. Providing them with proper support, including training and mental health resources, is vital.

7. Adaptability: Social media is constantly evolving, so moderation practices must be flexible to adapt to new challenges, such as emerging forms of misinformation or changes in user behavior.

Conclusion

The importance of managing user-submitted content on social media platforms is immense. As we’ve explored, effective management is essential for maintaining the integrity and safety of online communities. It also helps create spaces where free expression thrives alongside respect and understanding. Each type of media, from text and images to videos and live streams, presents unique challenges that need a careful approach.

Implementing best practices such as transparency, consistency, and strong support for moderators is crucial for building user trust and engagement. These practices do more than protect; they also boost the liveliness and health of social media environments, promoting diverse and rich interactions while minimizing risks.

As social media continues to change, so too will the methods and technologies for managing user content. Platforms face the challenge of continually improving these tools to meet new demands and to innovate in ways that respect user rights while ensuring a safe community. In today’s digital age, finding the right balance between freedom and safety is essential. These management efforts are key in shaping the future of digital communication.

What is Automatic Content Recognition (ACR)

Automatic Content Recognition (ACR) technology lets you identify media content in a file or on a device. ACR works by sampling a piece of content and comparing that sample against a content database to find matches, using digital fingerprints or other technologies. Applications include video hosting platforms such as YouTube, which employs ACR to identify and remove copyrighted material, and mobile apps such as Shazam, which use ACR to identify songs playing in public places from a short audio sample. YouTube’s Content ID uses ACR to track the use of copyrighted audio in videos.

Defining Automatic Content Recognition

What is automatic content recognition? On a smart TV, ACR works by recording and transmitting data about the content being shown on the display. It operates whenever you watch TV channels, streaming services, or media players, use a browser, or play on a console.

Everything is transferred to the manufacturers’ servers, where it is decoded, and data about preferred content is sold to advertisers. Based on the information received, suitable advertisements are shown to users.

Advertising data is also combined with information obtained from smartphones, search engines, and other sources, which lets advertisers build a very detailed – and often accurate – picture of a person.

The technology’s operating principle is that an image is captured roughly every second. It is not the entire frame that is captured, but only 15-25 pixels located in different places. Since each pixel contains a specific color, ACR records specific colors in different parts of the screen.

This data is converted into a sequence of numbers and compared with a database covering a vast catalog of content. A match between the sampled pixels and a specific frame reveals the name of the content in a video, music file, or game. The whole process is automated and works much like the well-known Shazam service, which recognizes music.
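To make the fingerprinting idea concrete, here is a minimal, illustrative sketch in Python. It assumes a frame is simply a grid of RGB pixels; the sample positions, the SHA-1 hash, and the FingerprintDB class are hypothetical simplifications, not how any real ACR vendor implements matching (real systems use perceptual hashes that tolerate compression and scaling).

```python
"""Illustrative sketch of pixel-sampling ACR, assuming a frame is a 2D grid of RGB tuples.
Names like SAMPLE_POINTS and FingerprintDB are hypothetical, not a real ACR API."""
import hashlib
from typing import Dict, List, Optional, Tuple

Frame = List[List[Tuple[int, int, int]]]  # frame[y][x] = (R, G, B)

# ~20 fixed positions spread across a 640x360 frame; real systems sample differently.
SAMPLE_POINTS: List[Tuple[int, int]] = [
    (x, y) for x in (32, 160, 320, 480, 608) for y in (36, 120, 240, 324)
]

def fingerprint(frame: Frame) -> str:
    """Convert the sampled pixel colors into a compact signature (here, a hash)."""
    colors = bytearray()
    for x, y in SAMPLE_POINTS:
        r, g, b = frame[y][x]
        colors.extend((r, g, b))
    return hashlib.sha1(bytes(colors)).hexdigest()

class FingerprintDB:
    """Reference database mapping per-second fingerprints to known titles."""
    def __init__(self) -> None:
        self._index: Dict[str, str] = {}

    def register(self, title: str, frames: List[Frame]) -> None:
        for frame in frames:
            self._index[fingerprint(frame)] = title

    def identify(self, frame: Frame) -> Optional[str]:
        return self._index.get(fingerprint(frame))

# Usage: register one reference frame, then identify a captured frame.
if __name__ == "__main__":
    red_frame: Frame = [[(255, 0, 0)] * 640 for _ in range(360)]
    db = FingerprintDB()
    db.register("Example Movie", [red_frame])
    print(db.identify(red_frame))  # -> "Example Movie"
```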

The technical mechanism behind ACR

There are two key methods: audio-based ACR and visual-based ACR. Both rely on sophisticated pattern-matching technology. The smart TV sends an audio or visual signal that is matched against a library of audio and visual signals from shows, pictures, movies, and advertisements to find a match.

ACR may also collect other data:

  • Platform Type – whether an ad was viewed on a linear TV, MVPD (Multichannel Video Programming Distributor), CTV (connected TV), or VOD (video on demand) device.

  • Location data for both desktop and mobile screens

  • IP addresses

  • Browsing Behavior – User content preferences, average viewing time, surfing patterns, completion rate, ad views, etc.

ACR and copyright protection on video and social media platforms

The digital age has exacerbated the challenges of protecting intellectual property. For video platforms, these challenges are twofold: ensuring that content is used legally and ethically and protecting the rights of content creators. Although digital rights management (DRM) systems have traditionally been used to solve these problems, they often fail to cope with the complex nature of digital media. Issues such as piracy and unauthorized use of content continue to be a major concern for content creators and distributors.

Automatic content recognition service technology significantly advances security and content management. Apart from what we have mentioned, here are additional aspects that highlight its importance and application on video and streaming platforms:

  • Copyright compliance support: ACR helps content owners and distributors comply with copyright laws by accurately detecting and eliminating unauthorized use of content across platforms.

  • Future-proof content security: As the digital landscape evolves, ACR technology continually adapts to provide solutions to emerging security and content management challenges.

  • Advanced viewer analytics: ACR technology provides broadcasters and content creators on YouTube and TikTok with detailed information about viewer behavior. This data is critical to understanding audience preferences, which can guide content creation and marketing strategies.

  • Targeted advertising: By recognizing the consumed content, ACR allows for more accurate and relevant advertising placement. This results in higher levels of engagement and potential increased revenue for platforms and advertisers.

  • Live broadcast monitoring: For live broadcasts, ACR technology can monitor content in real-time, ensuring that all streamed content complies with broadcast standards and regulations.

  • Multi-platform integration: ACR technology adapts to various platforms, including YouTube, mobile devices, and online streaming services. This flexibility makes it an invaluable tool in today’s multi-screen viewing environment.

The role of ACR in targeted advertising

ACR technology is transforming advertising in a way that has never been seen before. ACR offers a personalized and engaging advertising experience by displaying relevant, interactive ads tailored to the content the audience is watching. This approach benefits platforms seeking to increase advertising reach without sacrificing user satisfaction, while also empowering marketers to target their ads precisely.

By tracking what users watch, advertisers can serve ads that are more relevant to the content being viewed. If a user is watching a cooking show, they may see advertisements for kitchen gadgets or food products. This type of targeted advertising can be more effective than traditional advertising methods because it is more likely to interest the viewer.

The future of ACR technology

As ACR technology continues to evolve, content creators and providers need to consider several factors:

  • Improved data security. Strengthening cybersecurity measures to protect user data from hacks is critical.

  • Improved algorithmic transparency. Providing transparency into how algorithms work and how data influences content recommendations can build trust among users.

  • Promoting data ethics. Developing and adhering to ethical data collection and use principles will be key to maintaining user trust and compliance with regulatory requirements.

  • Investments in technology modernization. Continued investment in improving the accuracy and efficiency of ACR technology will help overcome its current limitations.

Conclusion

Automatic Content Recognition (ACR) technology is at the forefront of significant changes in media consumption, balancing technological innovation with consumer trust. As platforms continue to embrace ACR, the future of ACR media consumption looks increasingly tailored to individual preferences, offering highly personalized and interactive experiences.

FAQ about ACR

How does automatic content recognition work?

ACR works by analyzing the unique “fingerprint” or “signature” of a piece of content, such as an audio signal or visual frames, and comparing it to an extensive database of fingerprints. Once the technology detects a match, the associated metadata is extracted and displayed or used for various purposes, such as content identification, copyright protection, recommendation, ad tracking, or audience insights.

What data does ACR collect?

ACR can collect data such as the platform type (linear TV, MVPD, CTV, or VOD), location data for desktop and mobile screens, IP addresses, and browsing behavior, including content preferences, average viewing time, completion rates, and ad views.

Why is ACR data important?

ACR data is important because it allows software and devices to identify and understand the nature of multimedia content such as audio, video, and image files. This helps prevent illegal copying and distribution and enables better-targeted advertising.

What is ACR in technology?

ACR technology works by sampling a piece of content and comparing that sample to a content repository to identify any matches using digital fingerprints or watermarks. Applications of this technology include video hosting platforms such as YouTube using ACR to identify and remove copyrighted material, and mobile applications such as Shazam using ACR to identify a song by processing a short piece of music.

YouTube Content ID system: what is it, and how does it work?

Social networks and video hosting sites distribute a huge variety of content, and their popularity has led to a rise in intellectual property theft. It is practically impossible to track your own work manually among the entire mass of audio and video. But YouTube’s developers have found a way to address the problem. This article explains what Content ID is on YouTube and how the system works.

The genesis of YouTube Content ID

The impetus for creating the Content ID system came from complaints by major music labels about the illegal use of copyrighted music on YouTube. These complaints threatened to escalate into lawsuits by Universal Music, Sony Music, and other music giants against the largest video hosting sites for providing a platform to unscrupulous users and, in effect, pandering to pirates. Thus, in 2007, the Content ID system was born.

Later, media networks joined the program; for them, it is no less essential to protect well-known video bloggers and their content from being copied and monetized by third parties. YouTube currently works with many partners whose music and video content is shielded by Content ID.

What is the YouTube Content ID system?

What is YouTube Content ID? Content ID is YouTube’s digital fingerprinting system for recognizing and managing copyrighted content. When a distributor such as TuneCore delivers music to YouTube, the Content ID system automatically generates an asset. Each asset is stored in YouTube’s Content ID database, which scans all new and existing videos for matching content upon upload.

Each asset can only exist in the database once. If two different users attempt to claim the same content in the same territory, this is considered an ownership conflict and must be resolved before the content can be successfully managed on YouTube.

An asset may have:

  • Content file: The actual copyrighted content, such as a music video.

  • Metadata: Data about the content, such as its title, authors, etc.

  • License information: Details of where you own the rights to the content and how much of the content you own (i.e., whether you own the content in certain territories rather than all territories and/or whether other artists and contributors share ownership of the content).

  • Policies: Instructions that tell YouTube what to do when it finds matches to your content.
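To illustrate how such an asset catalog and its ownership conflicts might be represented, here is a hypothetical sketch in Python. The Asset fields mirror the list above; the AssetDatabase class, its method names, and the one-claim-per-territory rule as coded here are illustrative assumptions, not YouTube’s actual implementation or API.

```python
"""Hypothetical data model for Content ID-style assets and ownership conflicts."""
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Asset:
    content_fingerprint: str   # stands in for the reference content file
    metadata: Dict[str, str]   # title, authors, etc.
    territories: List[str]     # where the owner holds the rights
    policy: str                # e.g. "monitor", "block", or "monetize"
    owner: str

class AssetDatabase:
    """Stores assets and flags ownership conflicts: the same content may only
    be claimed once per territory."""
    def __init__(self) -> None:
        # (fingerprint, territory) -> owner
        self._claims: Dict[Tuple[str, str], str] = {}

    def register(self, asset: Asset) -> List[str]:
        """Register an asset's claims; return any territories that conflict."""
        conflicts: List[str] = []
        for territory in asset.territories:
            key = (asset.content_fingerprint, territory)
            existing = self._claims.get(key)
            if existing is not None and existing != asset.owner:
                conflicts.append(f"{territory}: already claimed by {existing}")
            else:
                self._claims[key] = asset.owner
        return conflicts

# Usage: the second claim on the same content in "DE" is reported as a conflict.
db = AssetDatabase()
db.register(Asset("fp123", {"title": "Song A"}, ["DE", "FR"], "monetize", "Label One"))
print(db.register(Asset("fp123", {"title": "Song A"}, ["DE"], "block", "Label Two")))
# -> ['DE: already claimed by Label One']
```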

How the Content ID system works

Even now, it’s difficult for most of us to imagine how YouTube’s developers created a system that tracks all the content posted on the video hosting site. From the outside, this may seem impossible, because the number of videos on YouTube is so large that it would take more than one human lifetime to watch even a small fraction of them. Therefore, there is no way to do this without an automated system.

Copyright-protected content in the system is examined by software bots, which capture the unique “fingerprints” of the track and store them in a database.

All videos uploaded by users are automatically scanned; the system’s bots read their fingerprints and compare them with those already in the database. In this way, the system can detect not only a composition or video that completely matches one registered in the YouTube Content ID system but also one that differs in speed or playback time. This means that even covers and distorted-sounding tracks will be found, so there is no point in trying to hide a stolen melody by changing the playback speed.
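The following minimal sketch, in Python, shows the general idea of matching an upload’s fingerprint sequence against registered references with a sliding window and a similarity threshold. The integer fingerprints, the threshold value, and the function names are illustrative assumptions; Content ID’s real matching (including its tolerance for speed and pitch changes) is far more sophisticated and is not public.

```python
"""Minimal sketch of matching an upload's fingerprint sequence against references,
tolerating partial and imperfect matches. Hypothetical names; not YouTube's Content ID."""
from typing import Dict, List

def similarity(a: List[int], b: List[int]) -> float:
    """Fraction of positions where two fingerprint sequences agree."""
    length = min(len(a), len(b))
    if length == 0:
        return 0.0
    return sum(1 for i in range(length) if a[i] == b[i]) / length

def find_matches(upload: List[int],
                 references: Dict[str, List[int]],
                 threshold: float = 0.8) -> List[str]:
    """Slide the upload over each reference and report titles that match closely enough.
    A real system would also normalize for playback speed and pitch shifts."""
    matches = []
    for title, ref in references.items():
        window = len(upload)
        best = 0.0
        for start in range(0, max(1, len(ref) - window + 1)):
            best = max(best, similarity(upload, ref[start:start + window]))
        if best >= threshold:
            matches.append(title)
    return matches

# Usage: per-second fingerprints represented as small integers for illustration.
references = {"Registered Track": [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]}
upload = [4, 1, 5, 9, 2]                # a clip taken from the middle
print(find_matches(upload, references))  # -> ["Registered Track"]
```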

Roles and responsibilities in Content ID

Content ID detects copyrighted content and offers several options to copyright holders and creators. When copyrighted material is detected in a video, copyright holders can choose to track the video, allowing it to remain publicly available while gaining valuable insight into its performance.

Alternatively, they can block the video, preventing it from spreading and ensuring their content is not used without permission. In addition, rights holders can take advantage of the monetization option, allowing them to share in the revenue generated by ads displayed alongside their content.

Advantages and importance of Content ID

Content creators can also gain significant advantages from Content ID. By using copyrighted material with proper permission or in line with the policies set by copyright holders, creators can improve the quality and appeal of their videos. Including relevant copyrighted content can help creators connect with their audiences on a deeper level by providing additional context, entertainment value, and creative opportunities. However, creators must balance the use of copyrighted material with creating original content.

Challenges and criticisms of Content ID

While Content ID has undoubtedly revolutionized copyright management in the music industry, it has not been without its challenges and controversies.

One recurring problem involves false claims and disputes. In the automated world of Content ID, there have been cases where copyrighted material has been misidentified or legitimate uses of copyrighted content have been flagged as infringing. These false positives can lead to disputes between content creators and copyright holders, resulting in content removal or monetization disagreements that may take time to resolve.

Platforms implementing content identification have had to strike a delicate balance between copyright protection and fair use. Determining what constitutes fair use when copyrighted material is used for criticism, commentary, or teaching can be demanding. As a result, some content creators have become embroiled in disputes over the legitimate use of copyrighted music in their videos.

Some critics contend that the system lacks transparency, making it challenging for content creators to understand why their content was flagged or how to resolve disagreements.


The future of content management on YouTube

Content ID has had a significant effect, not only in the present but also in shaping the future of music rights management in the digital era. Its impact reaches beyond the music and video business, sparking important conversations about topics such as copyright protection, intellectual property rights, and content development.

The rise of Content ID sparked a shake-up in the domain of copyright protection. By demonstrating how technology can protect creators’ rights and generate revenue, it paved the way for a paradigm shift towards technology-driven copyright management. This development has ignited thought-provoking conversations about the relevance of traditional approaches to copyright protection in the digital landscape.

Content ID started a transformation in the music and video industry. This transformation has also had a significant impact on the ongoing discussion about digital copyright and intellectual property protection. Content ID continues to be a striking example of how technology can help platforms and creators protect their works and thrive in a constantly expanding digital landscape.

Conclusion

YouTube is not just a platform for posting and viewing videos from users all over the planet. It is a full-fledged, multifunctional video service that helps popularize video content and makes working with it convenient, profitable, and safe. YouTube Content ID shows how technology can protect the rights of authors and of bona fide users who legally use music and video materials.

EU Digital Services Act: definition and changes in the world of UGC platforms

The DSA is a unified legal framework for digital service providers in the European Union (EU), designed to ensure an open and safe online environment. The goal of the European DSA is to create a standard set of rules for EU member states to govern the transparency and accountability of online platforms.

Background and Development of the Digital Services Act

The legislative journey of the DSA

Even though the law only applies in the EU, its consequences will reverberate globally, as firms adjust their policies worldwide. The main goal of the EU DSA is to create a safer online environment. Platforms are required to find ways to remove posts related to illicit goods or services, or that contain unlawful content, and to provide users with the ability to report such content. The law prohibits targeted advertising based on a person’s sexual orientation, religion, ethnicity, or political beliefs and also limits advertising targeted at children. Online platforms must also be transparent about how their recommendation algorithms work.

Additional rules apply to so-called very large online platforms. They are required to give users the option to opt out of recommendation and profiling systems, share data with researchers and regulators, cooperate with crisis response requirements, and undergo external and independent audits.

Historical context

The European Parliament adopted the DSA in July 2022. While the EU does not require immediate full compliance from smaller companies, the list of very large online platforms was approved in April 2023, and those services were given four months to change their policies. Very large online platforms are those with more than 45 million European users. Currently, 19 services are included in this category, including:

  • Facebook
  • Instagram
  • LinkedIn
  • Pinterest
  • Snap Inc.
  • TikTok
  • Twitter / X
  • YouTube

What is the EU Digital Services Act?

In this digital age, governments and regulators are actively working to bring order to our online lives and move the Internet into a more regulated environment.

Both the European Union Digital Services Act (DSA) and the UK Online Safety Act (OSA) aim to strike a balance between promoting innovation and protecting the Internet for future generations.

The UK’s Online Safety Act has just completed its passage through Parliament and is in the final stages of receiving royal assent. The deadline for compliance is mid-2024.

While both the OSA and DSA aim to create a safer digital space, the two bills are not carbon copies of each other. They vary in scope, specificity, and obligations imposed on digital platforms.

“The Digital Services Act regulates the obligations of digital services that act as intermediaries in their role of connecting consumers with goods, services, and content. This includes, but is not limited to, online marketplaces.”

EU Digital Services Act

Key objectives and components of the Act

In particular, the European Digital Services Act must:

  • Provide better protection for online users’ rights. This includes provisions allowing users to challenge decisions made by platforms about their content, data portability, and notice-and-takedown mechanisms for illegal content.

  • Harmonize regulations across the EU. The DSA establishes harmonized rules on content moderation, advertising transparency, algorithm transparency, online marketplaces, and online advertising.

  • Increase online platform accountability and transparency. The DSA imposes tougher rules by making social media, e-commerce, and other internet intermediaries accountable for the services and material they offer. This includes taking appropriate action to stop harmful activities, unlawful content, and disinformation from spreading online.

  • Promote collaboration among EU member states to combat disinformation, illegal content, and other online threats. To strengthen this effort, stricter enforcement measures, such as fines and penalties for non-compliance, are being introduced.

  • Strengthen market oversight. The EU DSA provides for the creation of new Digital Services Coordinators and introduces new oversight measures for platforms with substantial market power.

How the Digital Services Act Works

Accountability for unlawful content: Online platforms must ensure the removal of illegal content. This includes content that incites violence, hostility, or discrimination, infringes intellectual property rights, or violates privacy or consumer protection regulations. The law of the affected Member State determines what counts as illegal.

Increased transparency: Online platforms will be required to provide clear and transparent information about the advertisements they display on their platforms. This includes information about who paid for the ad, the targeting criteria, and performance metrics. There are also broader information requirements for service providers at all levels.

New rules for large online platforms: Large online platforms (whose users comprise more than 10% of the EU population) will be subject to additional regulations, including transparency obligations, data sharing requirements, and audit requirements.

New powers for national authorities: National authorities will have new powers to enforce the rules set out in the DSA, including the power to impose fines and sanctions on non-compliant platforms.

Impact on tech companies and users

Now that the law has come into force, users in the EU will be able to see that content on 19 listed digital platforms is moderated and understand how this happens.

“For the first time, users will be given complete information about why the content was moderated, removed, or banned, ensuring transparency,” an EU official told reporters.

The official added that consumers and consumer rights groups would also be able to use various mechanisms to appeal moderation decisions by February of next year.

But Renda explained that most changes would be invisible to users: “Those changes that are visible and rely too heavily on end-user notification are likely to either be a bit of a hassle or irrelevant.”

Challenges and criticisms

Lawmakers worldwide are eager to adopt their own regulations for platforms. We advise them to wait a few years before passing rules similar to the DSA. There is much other regulatory work to be done. The US, for example, is in dire need of an actual national privacy law. We could also use important legal reforms to enable “competitive interoperability,” permitting new technologies to interact with, build on, and attract users away from today’s incumbents. There is also room for useful legal discussion and reform concerning more ambitious “middleware” or “protocols, not platforms” approaches to content moderation. Any “DSA 2.0” in other nations will be better served if it builds on the demonstrated successes and unavoidable failures of individual DSA provisions once the law is up and running.


DSA and similar legislation in other regions

Several lessons can be learned from the DSA that are worth considering in other countries.

To the credit of the DSA’s drafters, many of its content moderation and transparency rules reflect long-standing demands of the international civil society community. The DSA also avoided rigid “processing time” requirements like those adopted in Germany, required under the EU Terrorist Content Regulation, and proposed in other countries, including Nigeria, which mandate removal within 24 hours of notice.

Lawmakers in other countries should consider the DSA’s approach but also be aware of the potential harm from unnecessary global fragmentation in the details of such laws. Platforms of any size, especially smaller ones, would have to deal with similar but not identical requirements across countries, wasting operational resources, harming competition, and risking further Internet balkanization. One solution to this problem could be the modular standard proposed by former FCC commissioners Susan Ness and Chris Riley. Following this approach, legislators could opt for standardized legal language or requirements to ensure international uniformity while adapting their regulations where there is room for national variation.

Future of the Digital Services Act

Online platforms operating in the EU were required to publish the number of their active users by February 17, 2023. This information must be published in a public section of their online interface and updated at least once every six months.

If a platform or search engine has over 45 million users (10% of the European population), the European Commission designates the service as a very large online platform or very large online search engine. These services are given four months from their designation to comply with DSA obligations, including conducting and submitting their first annual risk assessment to the European Commission. Among other things, when such platforms recommend content, users must be able to change the criteria used and opt out of personalized recommendations, and the platforms must publish their terms and conditions in the official languages of all Member States where they offer their services.

Long-term impact of the DSA

EU Member States will have to appoint Digital Service Coordinators (DSCs) by February 17, 2024. The DSC will be the national body responsible for ensuring national coordination and promoting the practical and consistent application and enforcement of the DSA. February 17, 2024, is also the date all regulated entities must comply with all DSA rules.

As we have seen with GDPR and other laws, companies that violate these rules will likely face significant fines and penalties. Over time, affected companies will adapt their practices to achieve compliance. Data protection, user privacy, and consent-based marketing can be expected to become increasingly essential for companies that want to grow and maintain good relationships with their customers.

The role of the DSA in shaping future digital policies

It may take time, but changes in digital markets must be accompanied by increased transparency and by the encouragement of competition and innovation. This will benefit consumers and small companies and force platforms to work harder to provide the products and services that people want, rather than simply relying on their size, revenue, lobbying power, and market dominance to stay on top. These changes will likely have meaningful global implications as the scope of privacy law expands.

Anyone can upload videos to a variety of video services, and these uploads can occur at a rate of thousands per second. It is impossible to track manually what is uploaded there. Platforms, however, are responsible for the material they host. WebKyte’s ContentCore for video platforms facilitates the identification of copyrighted and criminal content among user-generated uploads.

Conclusion

The DSA is an essential piece of regulation for the EU’s digital market. It ensures that online platforms are held accountable for the content they display, regardless of where they are based. Given the EU’s growing influence, companies need a compliance strategy. With the DSA’s potential to greatly shape the digital economy not just within the EU but globally, US companies operating in the EU must be prepared to implement new, comprehensive legal requirements soon.

A guide to the UK Online Safety Act: what it is and how video platforms can comply

The Online Safety Act is a new set of laws protecting children and adults online. It forces social media services and video-sharing platforms to be more accountable for their users’ safety on their platforms.

Background of the UK Online Safety Act

A rather sprawling and convoluted piece of legislation, the bill was dropped from the legislative agenda following Boris Johnson’s removal in July and has now reached its final report stage, giving the House of Commons one last opportunity to debate its content and vote to approve it.

Nevertheless, the bill must still pass through the House of Lords before receiving royal assent and becoming law. Although the bill’s final timetable has yet to be published, if royal assent is not given by April 2023, the bill will lapse entirely under parliamentary rules, and the process will have to begin again in a new parliamentary session.

What is the UK Online Safety Act (Bill)?

The UK Online Safety Bill is designed to ensure that different types of online services are free from harmful content while also safeguarding freedom of expression. The bill seeks to protect internet users from potentially harmful material and to prevent children from accessing dangerous content. It does this by imposing conditions on how social media and other online platforms assess and remove unlawful material and content they deem dangerous. According to the government, the legislation reflects a commitment to making the UK the safest place in the world to be online.

Detailed explanation of the Act

Internet search engines and online platforms that let people generate and share content are covered by the legislation. This includes discussion forums, certain online games, and websites that distribute or showcase content.

Parts of the legislation mirror rules in the EU’s newly passed Digital Services Act (DSA), which prohibits targeting users online based on their religion, gender, or sexual orientation and requires very large online platforms to disclose the steps they take to combat disinformation and propaganda.

The UK communications regulator, Ofcom, will be appointed as the regulator of the online safety regime and given powers to collect data to support its oversight and enforcement activities.

Differences from previous online safety laws

The EU Digital Services Act and the UK Online Safety Act share the same goal of regulating the digital world, but each has different characteristics.

The DSA takes a comprehensive approach, addressing a wide range of online user concerns, while the OSA focuses more narrowly on combating illegal content that causes serious harm. In addition, the OSA emphasizes proactive monitoring, whereas the DSA relies on notice-and-takedown procedures.


How the act protects online users

The bill would make social media companies legally accountable for keeping children and young people safe online.

It will protect children by requiring social media platforms to:

  • Quickly remove illegal content or prevent it from appearing at all, including content that promotes self-harm.
  • Prevent children from accessing dangerous and age-inappropriate content.
  • Enforce age restrictions and age verification measures.
  • Publish risk assessments, providing greater transparency about the threats and dangers children face on major social media platforms.
  • Provide clear and accessible ways for parents and children to report problems online when they occur.


The UK Online Safety Act would protect adults in three ways through the “triple shield.”

All services in question will need to take steps to prevent their services from being used for illegal activities and to remove illegal content when it does appear.

Category 1 services (the largest services with the highest level of risk) must remove content prohibited by their terms of service.

Category 1 services must also provide adult users with tools that give them greater control over the content they see and whom they interact with.

The bill now includes adult user empowerment duties, with a list of categories of content that will be identified as harmful and for which users must have access to tools to control their exposure. This definition includes content that encourages, promotes, or provides instructions for suicide, self-harm, or eating disorders, as well as content that is abusive or incites hatred towards people with protected characteristics. Given recent events such as the removal, subsequently rescinded, of suicide prevention prompts on Twitter (now X) in December 2022, the LGA welcomes the specific inclusion of suicide and self-harm in the Bill.


Responsibilities of digital platforms

Over 200 sections of the UK Online Safety Bill outline the duties of digital platforms regarding the content that is published on their channels. It is a thorough piece of law. These platforms have a “duty of care” under the law, which makes the internet a safer place for users—especially younger ones.


By establishing age restrictions and age verification processes, this law would shield children from age-inappropriate content. It would also hold internet service providers more accountable by requiring the prompt removal of illegal content.


The UK initially sought to be a pioneer in addressing digital safety issues, particularly children’s exposure to inappropriate content online. However, after various delays, the European Union took the lead by implementing the Digital Services Act in August.

Proposed more than four years ago, the bill shifts the focus from cracking down on “legal but harmful” content to prioritizing the protection of children and the eradication of illegal content online. Technology Minister Michelle Donelan touted the Online Safety Bill as “game-changing” legislation in line with the government’s ambition to make the UK the safest place online.

Penalties for non-compliance

Three years and four prime ministers after the UK first published the Online Harms white paper, the origin of the current Online Safety Bill, the Conservative Party’s ambitious attempt to regulate the Internet has returned to Parliament after numerous revisions.

If the bill becomes law, it will apply to any service or site that has users in the UK or targets the UK as a market, even if it is not based in the country. Failure to comply with the proposed rules would expose companies to penalties of up to 10% of global annual turnover or £18 million ($22 million), whichever is greater.

Critiques and controversies

Since the bill was first presented, people from across the political spectrum have frequently argued that the current wording would undermine the usefulness of encryption in private communications, reduce internet security for UK residents and businesses, and threaten freedom of expression. That is because the government added a new clause over the summer requiring tech companies to scan end-to-end encrypted messages for child sexual abuse material (CSAM) so it can be reported to the authorities. However, the only way to guarantee that a message does not contain illegal material is to use client-side scanning and review the contents of messages before they are encrypted.


In an open letter signed by 70 organizations, cybersecurity experts, and elected officials after Prime Minister Rishi Sunak announced he was bringing the bill back to Parliament, the signatories argued that “encryption is critical to keeping internet users protected online, to build financial security through a business-friendly UK economy that can weather the cost of living crisis and ensure national security.”

The letter also notes that UK businesses would have less protection for their data than their peers in the United States or the European Union, making them more vulnerable to cyber-attacks and intellectual property theft.

Balancing online safety with freedom of expression

Matthew Hodgson, the co-founder of Element, a decentralized UK messaging app, said that while there is no doubt that platforms need to provide tools to protect users from content they do not want to see, whether offensive or simply unwanted, the idea of effectively requiring backdoors to access private content, such as encrypted messages, in case it turns out to be harmful is controversial.

“The second you put in any kind of backdoor that can be used to break the encryption, it will be exploited by attackers,” he said. “And by opening it up as a means for corrupt actors or villains of any stripe to be able to subvert encryption, you might as well have no encryption at all, and the whole thing would collapse.”

“The two statements are completely contradictory, and unfortunately, those in power do not always understand the contradiction,” he said, adding that the UK could end up in a situation similar to Australia, where the government passed legislation allowing law enforcement agencies to require businesses to hand over user information and data, even if it is protected by encryption.

Hodgson argues that the UK government should not promote privacy-destroying infrastructure but rather prevent it from becoming a reality that more authoritarian regimes might adopt, using the UK as a moral example.

Response from tech companies and civil liberties groups

There are also concerns about how some UK Online Bill provisions will be enforced. Francesca Reason, a lawyer in the regulatory and corporate defense group at law firm Birketts LLP, said many tech companies are concerned about the more demanding requirements that could be imposed on them.

Reason said there were also issues of practicality and empathy that would need to be addressed. For example, is the government going to prosecute a vulnerable teenager for posting self-harm images online?

Comparative perspective

It is worth comparing the UK Online Safety Bill with its international equivalents, as legislators in several jurisdictions have sought to regulate content moderation on social media platforms. These proposed legislative measures provide a helpful set of criteria by which to evaluate the Bill.

These comparators help identify the different degrees to which governments have chosen to intervene in monitoring and moderating content on these services. The US and EU models focus on design choices that make moderation procedures transparent and accessible to users. The Indian and Brazilian models, by contrast, are much more explicitly focused on channeling authorized content through intermediary services. The UK Government has stated its preference for the first approach, but it has yet to be fully developed in the Bill.

Implementation and enforcement

Platforms will need to show that they have processes in place to meet the requirements set out in the bill. Ofcom will examine how these processes protect internet users from harm.

Ofcom will be able to take action against companies that fail to comply with their new responsibilities. Offenders can be fined up to £18 million or 10 percent of their annual global turnover, whichever is greater. Criminal proceedings can be brought against senior managers who fail to respond to Ofcom’s information requests. Ofcom will also be able to hold companies and senior managers (where at fault) criminally liable if a provider fails to comply with enforcement notices relating to specific child protection duties or to child sexual abuse and exploitation on its services.

In the most severe cases, with the court’s consent, Ofcom will be able to require payment providers, advertisers, and internet service providers to stop working with a site, preventing it from generating revenue or being accessed from the UK.

Tips platforms can give users to stay safe online under the new regulations

  • Do not post personal information online, such as your address, email address, or mobile phone number.
  • Think carefully before posting photos or videos. Once you post a photo online, other people can see and download it; it will no longer be just yours.
  • Keep your privacy settings as high as possible.
  • Never give out your passwords.
  • Don’t accept friend requests from people you don’t know.

Conclusion

The new rules introduced by the Online Safety Act are significant, and businesses will have to spend a lot of extra time, money, and resources to ensure compliance, especially given the severe consequences of violating these laws.

Due to the stringent enforcement powers and consequences of violating these laws, it is critical that Internet service providers quickly take steps to understand their responsibilities under the Online Safety Act and modify their processes to comply with it.

There are many video platforms where anyone can upload videos, sometimes at a rate of thousands of uploads per second. It is impossible to track manually what is uploaded there. However, platforms are responsible for the content they host. ContentCore by WebKyte helps video platforms identify copyrighted and criminal content among user-generated uploads.

It’s best to speak to IT and data protection professionals if you need advice on this topic and how to prepare for the consequences when the Online Safety Act comes into effect.