Social media content moderation: what is it and how does it work?

In the wide world of social media, every post, tweet, and upload becomes part of a global conversation. Yet, not all contributions are helpful or suitable. This calls for a vital process called content moderation. This practice ensures that what you see on social media meets legal rules, community norms, and ethical standards.

What is content moderation in social media?

Content moderation involves checking and managing user-generated posts on social media platforms. The aim is to block harmful posts like hate speech, false info, and explicit content from going public. This work is key to protecting users from bad interactions while still allowing free speech in a controlled way.

Moderation is more than just deleting content. It includes a detailed decision-making process where posts get checked against specific rules. These rules help maintain the platform’s integrity and keep its community safe. For example, one platform might strictly ban any aggressive language, while another might focus on stopping false info.

Moderation also includes proactive steps to create a positive online culture. This might mean boosting content that encourages good interactions and lowering the visibility of content that could cause trouble or upset.

Content moderation is vital for two reasons. First, it keeps the online environment safe, lowering the chance of users coming across or taking part in harmful activities. Second, good moderation protects the platform’s reputation, which is important in a competitive market where users have plenty of choices. Platforms that balance free speech with safety attract and keep more users, helping them grow and succeed.

As the online world changes, content moderation becomes more complex. That’s why companies like WebKyte keep developing better software to meet the shifting needs of social media moderation. Their tools use the latest tech, including AI and machine learning. They help quickly and precisely check and manage huge amounts of videos.

Types of content that require moderation

Social media combines different types of content, such as text, images, videos, and audio. Each type brings unique moderation challenges. Text includes comments, posts, and articles and can carry harmful language or false information. It can also hide more subtle issues like hate speech or harassment within what seems like normal conversation or jokes.

Images and videos might show inappropriate or graphic scenes not clear from text alone. This isn’t just about obvious explicit content but also altered media that can spread lies or cause worry. For example, edited images or deepfake videos might wrongly present facts or pretend to be someone else, posing serious challenges for moderators.

Audio content, growing with podcasts and voice notes, faces similar issues. It can be hard to catch the tone or subtle hints in audio that might be offensive or risky. For instance, sarcasm or hidden meanings are tough to spot. Also, background noise in audio must be checked to make sure nothing inappropriate slips through.

Live streaming requires extra careful moderation. Real-time monitoring is essential since live content goes directly to the audience without any edits. Live streams can quickly go from harmless to inappropriate, so platforms need to act fast to keep up with community standards.

How does content moderation work?

Content moderation on social media combines human oversight and automated technology. At first, automated tools scan and check data using algorithms that pick up patterns of harmful language, images, and other media types. These algorithms can spot not just obvious but also subtle inappropriate content like biased words or altered images and videos.

However, automated systems aren’t perfect, and that’s where human moderators come in. They take over when understanding the context matters, such as cultural subtleties or the intent behind a post—areas where AI might struggle. Human moderators also check content that users have flagged or that automation has marked as borderline for a more detailed review.

Together, automated tools and human moderators form a stronger shield against inappropriate content. This mix allows for quick and accurate moderation that keeps up with new trends and challenges in user-generated content, helping platforms manage their communities effectively.
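To make this hybrid workflow concrete, here is a minimal sketch (in Python) of how content might be routed between automation and a human review queue. The thresholds and the scoring function are hypothetical placeholders, not any platform’s actual system.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    post_id: str
    text: str

@dataclass
class Decision:
    post_id: str
    action: str        # "approve", "remove", or "human_review"
    risk_score: float

def route(post: Post,
          score_fn: Callable[[str], float],
          remove_at: float = 0.9,
          review_at: float = 0.5) -> Decision:
    """Automated first pass: high-confidence violations are removed,
    borderline cases are escalated to human moderators, the rest pass."""
    score = score_fn(post.text)
    if score >= remove_at:
        return Decision(post.post_id, "remove", score)
    if score >= review_at:
        return Decision(post.post_id, "human_review", score)
    return Decision(post.post_id, "approve", score)

def fake_score(text: str) -> float:
    """Stand-in for a real classifier; returns a made-up risk score."""
    return 0.95 if "scam" in text.lower() else 0.1

human_queue: List[Decision] = []
for post in [Post("1", "Lovely sunset today"), Post("2", "Click this scam link")]:
    decision = route(post, fake_score)
    if decision.action == "human_review":
        human_queue.append(decision)  # flagged or borderline items go to people
    print(decision)
```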

Automated versus human moderation

The world of content moderation on social media is shaped by two main forces: automated tools and human judgment. Both are crucial for keeping social media platforms clean and respectful. Automated tools use algorithms and machine learning to quickly go through huge amounts of content, spotting clear rule breaks like explicit images or banned words. These tools are great for their speed and ability to handle big data loads, which is essential given the constant stream of new content.

Yet, these automated systems aren’t perfect. They often miss the context and subtleties of language, like irony, satire, or cultural references. This is where human moderators come in. They add critical thinking and cultural awareness to the mix. Human moderators are key for sorting out complex situations where automated systems might not get it right. They pick up on subtle hints and make important calls on content that machines might misunderstand.

The cooperation between these two approaches leads to a more balanced and detailed moderation system. Automation takes care of the straightforward tasks, freeing up resources, while humans handle the more intricate issues. This ensures that moderation is not only efficient but also culturally sensitive and fair.

Content moderation tools and technologies

In the field of content moderation, various tools and technologies are essential for addressing the challenges of different types of data. Key among these technologies are Artificial Intelligence (AI) and machine learning algorithms, which have transformed how platforms handle user-generated content.

AI systems learn from vast datasets to spot patterns and oddities in text, images, and videos. For example, image recognition algorithms identify inappropriate content by comparing it to previously flagged images, while natural language processing (NLP) tools scan text for harmful language. These systems are always learning and getting better, which boosts their accuracy and efficiency.
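As a toy illustration of the text-scanning idea, the sketch below uses hand-written regular expressions as a stand-in for a real NLP model; the patterns and category names are invented for the example.

```python
import re
from typing import Dict, List

# Hypothetical pattern lists; a production system would use trained
# language models rather than hand-written regular expressions.
PATTERNS: Dict[str, List[re.Pattern]] = {
    "harassment": [re.compile(r"\byou are (worthless|stupid)\b", re.IGNORECASE)],
    "spam":       [re.compile(r"\b(free money|click here now)\b", re.IGNORECASE)],
}

def scan_text(text: str) -> List[str]:
    """Return the policy categories the text appears to violate."""
    return [category
            for category, patterns in PATTERNS.items()
            if any(p.search(text) for p in patterns)]

print(scan_text("Click here now for FREE MONEY"))  # ['spam']
print(scan_text("What a nice photo"))              # []
```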

Machine learning is vital in improving these processes. It learns from previous moderation actions, which helps predict and spot content that might break guidelines. Further, developments in deep learning have enhanced the way multimedia content is understood and processed, allowing for immediate analysis and decisions.

Other technologies include digital fingerprinting, which tracks and stops the spread of known illegal content, and automation workflows. These workflows help streamline the moderation process by automatically sorting and directing content based on its risk level.
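A heavily simplified sketch of the digital-fingerprinting idea: compute a fingerprint for each upload and look it up in a store of fingerprints of known illegal or infringing files. Production systems use perceptual, content-aware fingerprints that survive re-encoding; the exact file hash used here is only for illustration.

```python
import hashlib
from pathlib import Path

# Hypothetical store of fingerprints of previously identified illegal or
# infringing files; in practice this would be a large, shared hash database.
KNOWN_BAD_FINGERPRINTS = set()

def fingerprint(path: Path) -> str:
    """Exact SHA-256 of the file. Real fingerprinting is perceptual and
    survives re-encoding, cropping, and speed changes; this is a stand-in."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def check_upload(path: Path) -> str:
    """Route an upload: block known-bad files, otherwise continue review."""
    if fingerprint(path) in KNOWN_BAD_FINGERPRINTS:
        return "block"
    return "continue_moderation"

# Usage (assuming a local file named upload.mp4 exists):
# print(check_upload(Path("upload.mp4")))
```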

Best practices in content moderation

Effective content moderation strikes a delicate balance between safeguarding user freedom and ensuring a safe online environment. Here are some best practices that can guide platforms in achieving this balance:

1. Transparency: Platforms should communicate their content policies to users, explaining what is allowed and why certain content may be removed. This transparency helps build trust and understanding between users and the platform.

2. Consistency: Consistency in applying moderation rules is key to fairness. All users should be subject to the same rules, applied in the same way, to prevent any perceptions of bias or unfair treatment.

3. Accuracy: Improving the accuracy of both automated tools and human judgments minimizes errors such as wrongful content removal or overlooked violations, which can significantly impact user experience.

4. Timeliness: Quick response times in moderation are crucial, especially when dealing with harmful content that can spread rapidly online. Efficient processes and effective use of technology can help achieve this.

5. Appeals Process: Users should have the opportunity to appeal moderation decisions, providing a feedback mechanism that can help refine and improve moderation practices.

6. Support for Moderators: Human moderators perform stressful and sometimes traumatic work. Providing them with proper support, including training and mental health resources, is vital.

7. Adaptability: Social media is constantly evolving, so moderation practices must be flexible to adapt to new challenges, such as emerging forms of misinformation or changes in user behavior.

Conclusion

The importance of managing user-submitted content on social media platforms is immense. As we’ve explored, effective management is essential for maintaining the integrity and safety of online communities. It also helps create spaces where free expression thrives alongside respect and understanding. Each type of media, from text and images to videos and live streams, presents unique challenges that need a careful approach.

Implementing best practices such as transparency, consistency, and strong support for moderators is crucial for building user trust and engagement. These practices do more than protect; they also boost the liveliness and health of social media environments, promoting diverse and rich interactions while minimizing risks.

As social media continues to change, so too will the methods and technologies for managing user content. Platforms face the challenge of continually improving these tools to meet new demands and to innovate in ways that respect user rights while ensuring a safe community. In today’s digital age, finding the right balance between freedom and safety is essential. These management efforts are key in shaping the future of digital communication.

What is Automatic Content Recognition (ACR)?

Automatic Content Recognition (ACR) technology lets you identify media content playing on a device or stored in a file. ACR works by sampling a piece of content and comparing that sample against a reference database, using digital fingerprints or similar technologies, to find matches. Applications include video hosting platforms such as YouTube, which employ ACR to identify and remove copyrighted material, and mobile apps such as Shazam, which use ACR to identify a song from a short music sample played in a public place. YouTube’s Content ID likewise uses ACR to track the use of copyrighted audio in videos.

Defining Automatic Content Recognition

What is automatic content recognition in practice? On a smart TV, ACR works by recording and transmitting data about whatever content is shown on the display. It runs continuously, whether you are watching TV channels or streaming services, using a media player or a browser, or playing on a console.

Everything is transferred to the manufacturers’ servers, then decrypted, and data about preferred content is sold to advertisers. Based on the information received, suitable advertisements are provided to users.

Advertising data is also analyzed with information obtained from smartphones, search engines, and other sources, thanks to which advertisers build a very detailed – and often accurate – picture of a person.

The technology’s operating principle is that the on-screen image is captured roughly every second. The entire frame is not stored; instead, only 15-25 pixels located in different places are sampled. Since each pixel contains a specific color, ACR effectively records a pattern of colors across different parts of the screen.

This data is converted into a sequence of numbers and compared with a database covering a vast catalog of content. When the sampled pixels coincide with a specific frame, the system can name the content: a video, a music file, or a game. The whole process is automated and works much like the well-known Shazam service, which recognizes music.
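As a toy illustration of this pixel-sampling principle, the sketch below samples a fixed set of screen positions, quantizes their colors, and looks the resulting signature up in a database. The sample positions, quantization, and database are invented for the example; real ACR fingerprints are far more robust.

```python
from typing import Dict, List, Tuple

Color = Tuple[int, int, int]   # (R, G, B)
Frame = List[List[Color]]      # frame[y][x]

# About 20 fixed sample positions spread over a 1920x1080 frame (hypothetical).
SAMPLE_POINTS = [(x, y) for x in (100, 500, 960, 1400, 1820)
                        for y in (80, 360, 720, 1000)]

def frame_signature(frame: Frame) -> Tuple[int, ...]:
    """Quantize each sampled pixel's color so small noise does not change it."""
    values = []
    for x, y in SAMPLE_POINTS:
        r, g, b = frame[y][x]
        values.extend((r // 32, g // 32, b // 32))  # 8 levels per channel
    return tuple(values)

def identify(frame: Frame, database: Dict[Tuple[int, ...], str]) -> str:
    """Return the title whose stored signature matches the frame, if any."""
    return database.get(frame_signature(frame), "unknown")

# Tiny usage with a synthetic solid-color "frame":
frame = [[(200, 30, 30)] * 1920 for _ in range(1080)]
database = {frame_signature(frame): "Hypothetical Show S01E01"}
print(identify(frame, database))  # Hypothetical Show S01E01
```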

The technical mechanism behind ACR

There are two key methods: audio-based ACR and visual-based ACR. Both rely on high-tech pattern matching. The smart TV sends an audio or visual signal that is matched against a library of audio and visual signatures from shows, pictures, movies, and advertisements to find the closest match.

Other data that may be collected through ACR (a rough sketch of such a record follows this list):

  • Platform type – whether the ad was served on a linear TV, an MVPD (Multichannel Video Programming Distributor), a CTV, or a VOD (video on demand) device
  • Location data – for both desktop and mobile screens
  • IP addresses
  • Browsing behavior – user content preferences, average viewing time, surfing patterns, completion rate, ad views, etc.
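A rough sketch of how such a measurement could be represented as a single record; the field names are illustrative and do not reflect any vendor’s actual schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AcrViewingEvent:
    """Illustrative record of the kinds of data an ACR pipeline may collect.
    The field names are hypothetical, not any vendor's actual schema."""
    content_id: str          # what was recognized on screen
    platform_type: str       # "linear", "MVPD", "CTV", or "VOD"
    ip_address: str
    location: Optional[str]  # coarse location, if available
    viewing_seconds: int     # how long the content was watched
    completion_rate: float   # share of the content that was played
    ads_viewed: int          # number of ad exposures observed
```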

ACR and copyright protection on video and social media platforms

The digital age has exacerbated the challenges of protecting intellectual property. For video platforms, these challenges are twofold: ensuring that content is used legally and ethically and protecting the rights of content creators. Although digital rights management (DRM) systems have traditionally been used to solve these problems, they often fail to cope with the complex nature of digital media. Issues such as piracy and unauthorized use of content continue to be a major concern for content creators and distributors.

Automatic content recognition technology significantly advances security and content management. Apart from what we have mentioned, here are additional aspects that highlight its importance and application on video and streaming platforms:

  • Copyright compliance support: ACR helps content owners and distributors comply with copyright laws by accurately detecting and eliminating unauthorized use of content across platforms.
  • Future-proof content security: as the digital landscape evolves, ACR technology continually adapts to provide solutions to emerging security and content management challenges.
  • Advanced viewer analytics: ACR technology provides broadcasters and content creators on YouTube and TikTok with detailed information about viewer behavior. This data is critical to understanding audience preferences, which can guide content creation and marketing strategies.
  • Targeted advertising: by recognizing the consumed content, ACR allows for more accurate and relevant advertising placement. This results in higher levels of engagement and potentially increased revenue for platforms and advertisers.
  • Live broadcast monitoring: for live broadcasts, ACR technology can monitor content in real time, ensuring that all streamed content complies with broadcast standards and regulations.
  • Multi-platform integration: ACR technology adapts to various platforms, including YouTube, mobile devices, and online streaming services. This flexibility makes it an invaluable tool in today’s multi-screen viewing environment.

The role of ACR in targeted advertising

ACR technology is transforming advertising in a way that has never been seen before. ACR offers a personalized and engaging advertising experience by displaying relevant, interactive ads tailored to the content the audience is watching. This innovative approach benefits platforms seeking to increase advertising reach without sacrificing user satisfaction, while also empowering marketers to target their ads precisely.

By tracking what users watch, advertisers can serve ads more relevant to the content being viewed. If a user is watching a cooking show, they may see advertisements for kitchen gadgets or food products. This type of targeted advertising can be more effective than traditional advertising methods because it is more likely to interest the viewer.

The future of ACR technology

As ACR technology continues to evolve, content creators and providers need to consider several factors:

  • Improved data security: strengthening cybersecurity measures to protect user data from hacks is critical.
  • Improved algorithmic transparency: providing transparency into how algorithms work and how data influences content recommendations can build trust among users.
  • Promoting data ethics: developing and adhering to ethical data collection and use principles will be key to maintaining user trust and compliance with regulatory requirements.
  • Investments in technology modernization: continued investment in improving the accuracy and efficiency of ACR technology will help overcome its current limitations.

Conclusion

Automatic Content Recognition (ACR) technology is at the forefront of significant changes in media consumption, balancing technological innovation with consumer trust. As platforms continue to embrace ACR, the future of media consumption looks increasingly tailored to individual preferences, offering highly personalized and interactive experiences.

FAQ about ACR

How does automatic content recognition work?

ACR works by analyzing the unique “fingerprint” or “signature” of a piece of content, such as an audio signal or visual frames, and comparing it to an extensive database of fingerprints. Once the technology detects a match, the associated metadata is extracted and displayed or used for various purposes, such as content identification, copyright protection, recommendation, ad tracking, or audience insights.

What data does ACR collect?

ACR can collect data such as the platform type (linear TV, MVPD, CTV, or VOD), location data for desktop and mobile screens, IP addresses, and browsing behavior, including content preferences, average viewing time, completion rate, and ad views.

Why is ACR data important?

ACR data matters because it enables software and devices to identify and understand the nature of multimedia content such as audio, video, and image files. This helps prevent illegal copying and distribution and supports better-targeted advertising.

What is ACR in technology?

ACR technology works by sampling a piece of content and comparing that sample to a content repository to identify any matches using digital fingerprints or watermarks. Applications of this technology include video hosting platforms such as YouTube using ACR to identify and remove copyrighted material, and mobile applications such as Shazam using ACR to identify a song by processing a short piece of music.

YouTube Content ID system: what is it, and how does it work?

Social networks and video hosting sites distribute a huge variety of content, and their popularity has led to a rise in crimes connected to intellectual property theft. It is practically impossible to track your own creation manually among the entire mass of audio and video. But YouTube’s developers have found a way to address the problem. This article will tell you what Content ID is on YouTube and how the system functions.

The genesis of YouTube Content ID

The momentum for creating the Content ID system came from complaints by major music labels about the illegal use of copyrighted music on YouTube. These complaints threatened to escalate into lawsuits from Universal Music, Sony Music, and other music giants against the largest video hosting sites for providing a platform to unscrupulous users and, in effect, pandering to pirates. Thus, in 2007, the Content ID system was born.

Later, media networks joined the program; for them it is no less essential to protect well-known video bloggers and their content from being copied and exploited by third parties for profit. YouTube currently works with many partners whose music and video content is shielded by Content ID.

What is the YouTube Content ID system?

What is YouTube Content ID? Content ID is YouTube’s digital fingerprinting system for recognizing and managing copyrighted content. When a distributor such as TuneCore delivers music to YouTube, the Content ID system automatically generates an asset. Each asset is stored in YouTube’s Content ID database, which scans all new and existing videos for matching content upon upload.

Each asset can exist in the database only once. If two different users attempt to claim the same content in the same territory, this is treated as an ownership conflict and must be resolved before the content can be successfully managed on YouTube.

An asset may have (see the sketch after this list):

  • Content file: the actual copyrighted content, such as a music video.
  • Metadata: data about the content, such as its title, authors, etc.
  • License information: details of where you own the rights to the content and how much of it you own (for example, if you own the content in certain territories rather than all territories, and/or if other artists and contributors share ownership of the creative credit for the content).
  • Policies: instructions that tell YouTube what to do if it finds matches to your content.
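A rough sketch of how the components above could be modeled as a single asset record; the field names and values are illustrative and are not YouTube’s actual schema.

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Asset:
    """Illustrative model of a Content ID asset; names and values are invented."""
    content_file: str             # reference to the copyrighted media itself
    metadata: Dict[str, str]      # title, authors, and similar descriptive data
    owned_territories: List[str]  # where the rights holder owns the content
    ownership_share: float        # fraction owned when rights are shared
    match_policy: str             # "track", "block", or "monetize"

asset = Asset(
    content_file="storage://assets/music-video-123.mp4",
    metadata={"title": "Example Song", "artist": "Example Artist"},
    owned_territories=["US", "DE"],
    ownership_share=0.5,
    match_policy="monetize",
)
print(asset.match_policy)  # monetize
```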

How the Content ID system works

Even now, it’s difficult for most of us to imagine how YouTube’s developers created a system that tracks all the content posted on the video hosting site. From the outside this may seem impossible, because the number of videos on YouTube is so large that it would take more than one human lifetime to watch even a small fraction of them. There is no way to do this without an automated system.

Copyright-protected content in the system is examined by software bots, which capture the unique “fingerprints” of the track and store them in a database.

All videos uploaded by users are automatically scanned; the system’s bots read their fingerprints and compare them with those already in the database. In this way, the system can detect not only a composition or video that completely matches one registered in YouTube’s Content ID system but also one that differs in speed or playback time. This means that even covers and distorted-sounding tracks will be found, so there is no point in trying to hide a stolen melody by changing the playback speed.
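To illustrate why changing the playback speed does not defeat fingerprint matching, here is a simplified sketch: the uploaded fingerprint sequence is compared against the reference under several playback-rate hypotheses, and the best-scoring rate wins. The fingerprint values and distance measure are invented for the example.

```python
from typing import List, Tuple

def resample(seq: List[int], rate: float) -> List[int]:
    """Stretch or compress a fingerprint sequence to simulate a speed change."""
    n = max(1, round(len(seq) / rate))
    return [seq[min(len(seq) - 1, int(i * rate))] for i in range(n)]

def distance(a: List[int], b: List[int]) -> float:
    """Mean absolute difference over the overlapping prefix."""
    n = min(len(a), len(b))
    return sum(abs(a[i] - b[i]) for i in range(n)) / n

def best_match(query: List[int], reference: List[int],
               rates: Tuple[float, ...] = (0.8, 0.9, 1.0, 1.1, 1.25)) -> Tuple[float, float]:
    """Try several playback-rate hypotheses; return (best_rate, best_distance)."""
    candidates = [(rate, distance(resample(query, rate), reference)) for rate in rates]
    return min(candidates, key=lambda pair: pair[1])

reference = list(range(100))            # fingerprint of the registered work
uploaded = resample(reference, 1.25)    # the same work played 1.25x faster
print(best_match(uploaded, reference))  # best distance is found near rate 0.8
```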

Roles and responsibilities in Content ID

Content ID detects copyrighted content and offers several options to copyright holders and creators. When copyrighted material is detected in a video, copyright holders can choose how to respond. They can choose to track the video, allowing it to remain publicly available while gaining valuable insight into its performance.

Alternatively, they can block the video, preventing it from spreading and ensuring their content is not used without permission. In addition, rights holders can take advantage of the monetization option, allowing them to share in the revenue generated by advertising displayed alongside their content.

Advantages and importance of Content ID

Content creators may also gain meaningful advantages from Content ID. By using copyrighted material with proper permission, or by adhering to the policies set by copyright holders, creators can improve the quality and appeal of their videos. Including relevant copyrighted content can help creators connect with their audiences on a deeper level by providing additional context, entertainment value, and creative opportunities. However, creators must balance the use of copyrighted material with creating authentic content of their own.

Challenges and criticisms of Content ID

While Content ID has undoubtedly revolutionized copyright management in the music industry, it has not been without its challenges and controversies.

One recurring problem involves false claims and disputes. In the automated world of Content ID, there have been cases where copyrighted material has been misidentified or legitimate uses of copyrighted content have been flagged as infringing. These false positives can lead to disputes between content creators and copyright holders, resulting in content removals or monetization conflicts that may take a long time to resolve.

Platforms implementing content identification have had to strike a delicate balance between copyright protection and fair use. Determining what constitutes fair use when copyrighted material is used for criticism, commentary, or teaching can be difficult. As a result, some content creators have become embroiled in disputes over the legitimate use of copyrighted music in their videos.

Some critics contend that the system lacks transparency, making it challenging for content creators to understand why their content was flagged or how to resolve disagreements.

The future of content management on YouTube

Content ID has had a significant effect not only in the present but also in shaping the future of music rights management in the digital era. Its impact reaches beyond the music and video business, sparking important conversations about topics such as copyright protection, intellectual property rights, and content development.

The rise of Content ID sparked an upheaval in the domain of copyright protection. By demonstrating technology’s potential to protect creators’ rights and generate revenue, it paved the way for a shift in perspective toward technology-driven copyright management. This development has also ignited thought-provoking discussions about the relevance of traditional approaches to copyright protection in the digital landscape.

Content ID started a transformation in the music and video industry. This transformation has also had a significant impact on the ongoing discussion about digital copyright and intellectual property protection. Content ID continues to be a vivid example of how technology can help platforms and creators protect their works and thrive in the constantly expanding digital landscape.

Conclusion

YouTube is not just a platform for posting and viewing videos from users all over the planet. It is a full-fledged, multifunctional video service that helps popularize video content and makes working with it convenient, profitable, and safe. YouTube Content ID shows how such a platform can protect the rights of authors as well as of bona fide users who legally use music and video materials.

EU Digital Services Act: definition and changes in the world of UGC platforms

The DSA is a comprehensive legal framework for digital service providers in the European Union (EU), designed to ensure open and safe online conditions. The goal of the European DSA is to create a standard set of rules for EU member states to govern the transparency and accountability of online platforms.

Background and Development of the Digital Services Act

The legislative journey of the DSA

Even though the law applies only in the EU, its consequences will reverberate globally, because firms will have to change their policies to comply. The main goal of the EU DSA is to create a safer online environment. Platforms are required to find ways to control or remove posts that relate to illicit goods or services or contain unlawful content, and to give users the ability to report such content. The law prohibits targeting advertising based on a person’s sexual orientation, religion, ethnicity, or political beliefs and also limits advertising targeted at children. Online platforms must also be transparent about how their recommendation algorithms work.

Additional rules apply to so-called “very large online platforms”. They are required to give users the option to opt out of recommendation and profiling systems, to share data with researchers and regulators, to cooperate in crisis response efforts, and to undergo external, independent audits.

Historical context

The European Parliament adopted the DSA in July 2022. The EU does not require full compliance from smaller companies right away; the list of very large online platforms was approved in April 2023, and those services were given four months to adapt their policies. Very large online platforms are those with more than 45 million European users. Currently, 19 services fall into this category, including:

  • Facebook
  • Instagram
  • LinkedIn
  • Pinterest
  • Snap Inc.
  • TikTok
  • Twitter / X
  • YouTube

What is the EU Digital Services Act?

In this digital age, governments and regulators are actively working to bring order to our online lives and move the Internet into a more regulated environment.

Both the European Union Digital Services Act (DSA) and the UK Online Safety Act (OSA) aim to strike a balance between promoting innovation and protecting the Internet for future generations.

The UK’s Online Safety Act has just completed its passage through Parliament and is in the final stages of receiving royal assent. The deadline for compliance is mid-2024.

While both the OSA and DSA aim to create a safer digital space, the two bills are not carbon copies of each other. They vary in scope, specificity, and obligations imposed on digital platforms.

“The Digital Services Act regulates the obligations of digital services that act as intermediaries in connecting consumers with goods, services, and content. This includes, but is not limited to, online marketplaces.”

Key objectives and components of the Act

In particular, the European Digital Services Act must:

  • Provide better protection for online users’ rights. This includes provisions allowing users to challenge decisions made by platforms about their content, data portability, and notification and removal mechanisms for illegal content.
  • Harmonize regulations across the EU. The DSA aims to establish harmonized rules on content moderation, advertising transparency, algorithm transparency, online marketplaces, and online advertising.
  • Increase internet platform accountability and openness. By making social media, e-commerce, and internet intermediaries accountable for the services and material they offer, the DSA introduces tougher regulations. This includes taking appropriate action to stop harmful activities, unlawful content, and false information from appearing online.
  • Promote collaboration among EU member states to combat disinformation, illegal content, and other cyber threats. To strengthen this effort, stricter enforcement measures, such as fines and penalties for non-compliance, are being introduced.
  • Strengthen market surveillance. The EU DSA proposes the creation of new national Digital Services Coordinators and introduces new oversight measures for platforms with substantial market power.

How the Digital Services Act Works

Accountability for unlawful content: Online platforms must control the spread of illegal content. This includes content that incites violence, hostility, or discrimination, infringes intellectual property rights, or violates privacy or consumer protection regulations. Illegality is determined by the law of the affected Member State.

Increased transparency: Online platforms will be required to provide clear and transparent information about the advertisements they display on their platforms. This includes information about who paid for the ad, the targeting criteria, and performance metrics. There are also broader information requirements for service providers at all levels.

New rules for large online platforms: Large online platforms (whose users comprise more than 10% of the EU population) will be subject to additional regulations, including transparency obligations, data sharing requirements, and audit requirements.

New powers of national authorities: National authorities will have new powers to enforce the rules set out in the DSA, including the power to impose fines and sanctions on non-compliant platforms.

Impact on tech companies and users

Now that the law has come into force, users in the EU will be able to see that content on 19 listed digital platforms is moderated and understand how this happens.

“For the first time, users will be given complete information about why the content was moderated, removed, or banned, ensuring transparency,” an EU official told reporters.

The official added that, by February of next year, consumers and consumer rights groups will also be able to use various mechanisms to appeal decisions when their content is moderated.

But Renda explained that most changes would be invisible to users: “Those changes that are visible and rely too heavily on end-user notification are likely to either be a bit of a hassle or irrelevant. On some platforms, notification banners will be posted until the law is clarified.”

Challenges and criticisms

Lawmakers worldwide are eagerly awaiting the chance to adopt their own platform regulations. We advise them to wait a few years before passing rules similar to the DSA. There is plenty of other regulatory work to be done. The US, for example, is in dire need of an actual national privacy law. We could also use significant legal reforms to enable “competitive interoperability,” permitting new technologies to interoperate with, build on, and attract users away from today’s incumbents. There is also room for serious legal discussion and reform concerning more ambitious “middleware” or “protocols, not platforms” approaches to content moderation. Any “DSA 2.0” in other nations will be better served if it builds on the demonstrated successes and inevitable failures of individual DSA provisions once the law is up and running.

Comparison with global digital regulations

DSA and similar legislation in other regions

Several lessons can be learned from the DSA that are worth considering in other countries.

To the credit of the DSA’s drafters, many of its content moderation and transparency controls reflect long-standing concerns of international civil society. The DSA also avoided rigid “turnaround time” requirements like those adopted in Germany, required under the EU Terrorist Content Regulation, and proposed in other countries, including Nigeria, which mandate removal within 24 hours of notice.

Lawmakers in other countries should consider the DSA’s approach but also be aware of the possible harm from unnecessary global fragmentation in the details of such laws. Platforms of any size, especially smaller ones, would have to comply with similar but not identical requirements across countries, wasting operational resources, harming competition, and risking further Internet balkanization. One solution to this problem could be the modular approach proposed by Susan Ness and Chris Riley.
Following this approach, legislators could adopt some standardized legal language or requirements to ensure international uniformity, while tailoring their own regulations where there is room for national variation.

Future of the Digital Services Act

Online platforms operating in the EU will be required to publish the number of their active users by February 17, 2023. This information will be published in a public section of their online interface and must be updated at least once every six months.

If a platform or search engine has over 45 million users (10% of the European population), the European Commission will designate the service as a “very large online platform” or “very large online search engine.” These services are given four months from their designation to comply with DSA obligations, including conducting and submitting their first annual risk assessment to the European Commission. Among other things, when such platforms recommend content, users must be able to change the criteria used and opt out of personalized recommendations, and the platforms must publish their terms and conditions in the official languages of all Member States where they offer their services.

Long-term impact of the DSA

EU Member States will have to appoint Digital Service Coordinators (DSCs) by February 17, 2024. The DSC will be the national body responsible for ensuring national coordination and promoting the practical and consistent application and enforcement of the DSA. February 17, 2024, is also the date all regulated entities must comply with all DSA rules.

As we have seen with GDPR and other laws, companies that violate these rules will likely be subject to significant fines and penalties. Over time, affected companies will adapt their practices to achieve compliance. Data protection, user privacy, and consent-based marketing can be expected to become increasingly essential for companies that want to grow and maintain good relationships with their customers.

The role of the DSA in shaping future digital policies

It may take time, but changes in digital markets must be accompanied by increased transparency and by the encouragement of competition and innovation. This will benefit consumers and small companies and force large platforms to work harder to provide the products and services that people actually want, rather than simply relying on their size, revenue, lobbying power, and market dominance to stay on top. These changes will likely have meaningful global implications as the scope of privacy law expands.

Anyone can upload videos to a wide variety of video services, and uploads can arrive at a rate of thousands per second. What is uploaded there cannot be tracked manually. Platforms, however, are responsible for the material they host. WebKyte’s ContentCore for video platforms facilitates the identification of copyrighted and criminal content among user-generated uploads.

Conclusion

The DSA is an essential regulator of the EU’s digital market. It ensures that online platforms are held accountable for the content they display, regardless of where they are based. Given the EU’s growing influence, companies need a compliance strategy. With its potential to greatly shape the digital economy not just within the EU but also globally, US companies operating in the EU must be prepared for the implementation of new, comprehensive legal requirements soon.

A guide to the UK Online Safety Act: what it is and how video platforms can comply

The Online Safety Bill is a new set of laws protecting children and adults online. It will force social media services and video-sharing platforms to be more accountable for their users’ safety on their platforms.

Background of the UK Online Safety Act

The bill, a rather bloated and tangled version of its original self, was dropped from the legislative agenda following Boris Johnson’s removal in July and has now passed its final report stage, meaning the House of Commons has one last opportunity to debate its content and vote to approve it.

Nevertheless, the legislation must still pass through the House of Lords before receiving royal assent and becoming law. Although the bill’s final timetable has yet to be published, if it is not passed by April 2023, the bill will lapse entirely under parliamentary rules, and the process would have to start again in a new parliamentary session.

What is the UK Online Safety Act (Bill)?

The UK Online Safety Bill is designed to ensure that different types of online services are free from harmful content while also safeguarding freedom of expression. The bill seeks to protect internet users from potentially harmful material and to prevent children from accessing dangerous content. It does this by imposing requirements on how social media and other online platforms assess and remove illegal material and content they deem harmful. According to the government, the legislation represents “a commitment to making the UK the safest place in the world to be online.”

Detailed explanation of the Act

Internet search engines and online platforms that let people generate and share content are covered by the legislation. This includes discussion forums, certain online games, and websites that distribute or showcase content.

Parts of the legislation mirror rules in the EU’s newly passed Digital Services Act (DSA), which prohibits targeting users online based on their faith, gender, or sexual orientation and requires large online platforms to disclose what measures they take to combat disinformation or propaganda.

The UK communications regulator, Ofcom, will be appointed as the regulator of the online safety regime and will be given a range of powers to gather the information it needs to support its oversight and enforcement work.

Differences from previous online safety laws

The EU Digital Services Act and the UK Online Safety Act share the same goal of regulating the digital world, but each has different characteristics.

The DSA takes a comprehensive approach, addressing a wide range of online user concerns, while the OSA focuses more narrowly on combating illegal content that causes serious harm. In addition, the OSA emphasizes proactive monitoring, as opposed to the DSA’s notice-and-removal response procedures.

How the act protects online users

The bill would make social media companies legally accountable for keeping children and young people safe online.

It will protect children by requiring social media platforms to:

  • Quickly remove illegal content or prevent it from appearing at all. This includes removing content that promotes self-harm.
  • Prevent children from accessing harmful and age-inappropriate content.
  • Enforce age limits and age verification measures.
  • Publish risk assessments that provide greater transparency about the threats and dangers children face on major social media platforms.
  • Provide clear and accessible ways for parents and children to report problems online when they arise.


The UK Online Safety Act would protect adults in three ways through the “triple shield.”

All services in question will need to take steps to prevent their services from being used for illegal activities and to remove illegal content when it does appear.

Category 1 services (the largest services with the highest level of risk) must remove content that is prohibited by their own terms of service.

Category 1 services must also provide adult users with tools that give them greater control over the content they see and the people they interact with.

The bill now includes adult user empowerment duties, with a list of categories of content that will be identified as harmful and for which users must have access to tools to control their exposure. This definition includes the encouragement, promotion, or instruction of suicide, self-harm, and eating disorders, as well as content that is abusive or incites hatred toward people with protected characteristics. Given recent events, such as the removal (subsequently reversed) of suicide prevention prompts on Twitter (now X) in December 2022, the LGA welcomes the specific inclusion of suicide and self-harm in the Bill.

Responsibilities of digital platforms

Over 200 sections of the UK Online Safety Bill outline the duties of digital platforms regarding the content that is published on their channels. It is a thorough piece of law. These platforms have a “duty of care” under the law, which makes the internet a safer place for users—especially younger ones.


By establishing age restrictions and age verification processes, this law would shield children from age-inappropriate content. It would also hold internet service providers more accountable by requiring the prompt removal of illegal content.


The UK initially sought to be a pioneer in addressing digital safety issues, particularly children’s exposure to inappropriate content online. However, after various delays, the European Union took the lead by implementing the Digital Services Act in August.


Proposed initially more than four years ago, the bill shifts the focus from cracking down on “legal but harmful” content to prioritizing the protection of children and the eradication of illegal content online. Technology Minister Michelle Donelan touted the UK Online Safety Bill as “game-changing” legislation in line with the government’s ambitions to make the UK the safest place online.

Penalties for non-compliance

Three years and four ministers after the UK government first published the Online Harms white paper, the basis of the current Online Safety Bill, the Conservative Party’s ambitious attempt to regulate the Internet has returned to Parliament after numerous revisions.

If the bill becomes law, it will apply to any service or site that has users in the UK or targets the UK as a market, even if it is not based in the country. Failure to comply with the proposed rules would expose companies to penalties of up to 10% of annual global turnover or £18 million ($22 million), whichever is greater.
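As a quick illustration of the “whichever is greater” rule, this small sketch computes the maximum possible fine for a given annual turnover; the figures are purely illustrative.

```python
def max_fine_gbp(annual_global_turnover_gbp: float) -> float:
    """Maximum penalty under the bill: 10% of annual global turnover
    or £18 million, whichever is greater."""
    return max(0.10 * annual_global_turnover_gbp, 18_000_000)

print(max_fine_gbp(50_000_000))     # 18000000 (the £18M floor applies)
print(max_fine_gbp(1_000_000_000))  # 100000000.0 (10% of turnover)
```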

Critiques and controversies

Since the bill was first presented, people across the political spectrum have frequently argued that the legislation would undermine the effectiveness of encryption in private communications, reduce internet safety for UK residents and businesses, and threaten freedom of expression. That’s because the government added a new clause over the summer that requires tech companies to scan end-to-end encrypted messages for child sexual abuse material (CSAM) so it can be reported to the authorities. However, the only way to guarantee that a message contains no illegal material is to use client-side scanning and inspect the contents of messages before they are encrypted.

In an open letter signed by 70 organizations, cybersecurity experts, and elected officials after Prime Minister Rishi Sunak announced he was bringing the bill back to Parliament, the signatories argued that “encryption is critical to keeping internet users protected online, to build financial security through a business-friendly UK economy that can weather the cost of living crisis and ensure national security.”

“UK businesses will have less protection for their data flows than their peers in the United States or the European Union, making them more vulnerable to cyber-attacks and intellectual property theft,” the letter notes.

Balancing online safety with freedom of expression

Matthew Hodgson, co-founder of Element, a decentralized UK messaging app, said that while there is no doubt platforms need to provide tools to protect users from content they do not want to see, whether offensive or simply unwanted, the idea of effectively requiring backdoors into private content such as encrypted messages, in case it turns out to be harmful, is controversial.

“The second you put in any kind of backdoor that can be used to break the encryption, it will be exploited by attackers,” he said. “And by opening it up as a means for corrupt actors or villains of any stripe to be able to subvert encryption, you might as well have no encryption at all, and the whole thing would collapse.”

“The two statements are completely contradictory, and unfortunately, those in power do not always understand the contradiction,” he said, adding that the UK could end up in a situation similar to Australia’s, where the government passed legislation allowing law enforcement agencies to require businesses to hand over user information and data, even when protected by encryption.

Hodgson argues that the UK government should not promote privacy-destroying infrastructure but rather prevent it from becoming a reality that more authoritarian regimes might adopt, using the UK as a moral example.

Response from tech companies and civil liberties groups

There are also concerns about how some provisions of the UK Online Safety Bill will be enforced. Francesca Reason, a lawyer in the regulatory and corporate defense group at law firm Birketts LLP, said many tech companies are concerned about the more demanding requirements that could be imposed on them.

Reason said there were also issues of practicality and empathy that would need to be addressed. For example, is the government going to prosecute a vulnerable teenager for posting self-harm images online?

Comparative perspective

It is worth comparing the UK Online Safety Bill with its international equivalents, as legislators in several jurisdictions have sought to regulate content moderation on social media platforms. These proposed legislative measures provide a helpful set of criteria by which to evaluate the Safety Bill.

These comparators help identify the different degrees to which governments have chosen to intervene in the monitoring and moderation of content on these services. The US and EU models focus on design choices that improve the user experience by making processes and procedures transparent and accessible. The Indian and Brazilian models, by contrast, are much more explicitly focused on extending content rules into private, peer-to-peer services. The UK Government has stated its preference for the first approach, but it has yet to be fully developed in the Bill.

Implementation and enforcement

Platforms will be required to show that they have processes in place to meet the requirements set out in the bill. Ofcom will assess how effectively these processes protect internet users from harm.

Ofcom will be able to take action against companies that fail to comply with their new duties. Offenders can be fined up to £18 million or 10 percent of their annual global turnover, whichever is greater. Criminal proceedings can be brought against senior managers who fail to respond to Ofcom’s information requests. Ofcom will also be able to hold companies and senior managers (where at fault) criminally liable if a provider fails to comply with Ofcom enforcement notices concerning specific child safety duties or child sexual abuse and exploitation on its services.

In the most severe cases, with the agreement of the courts, Ofcom can require payment providers, advertisers, and internet service providers to stop working with a site, preventing it from generating revenue or being accessed from the UK.

Tips platforms can give users for staying safe online under the new regulations

  • Do not post personal information online, such as your address, email address, or mobile phone number.
  • Think carefully before posting photos or videos of yourself. Once a photo is online, most people can see and download it; it is no longer just yours.
  • Keep your privacy settings as high as possible.
  • Never give out your passwords.
  • Don’t add people you don’t know as friends.

Conclusion

The new rules introduced by the Online Safety Act are significant, and businesses will have to spend a lot of extra time, money, and resources to ensure compliance, especially given the severe consequences of violating these laws.

Due to the stringent enforcement powers and consequences of violating these laws, it is critical that Internet service providers quickly take steps to understand their responsibilities under the Online Safety Act and modify their processes to comply with it.

There are many video platforms where anyone can upload videos. Sometimes there are thousands of such uploads per second, and it is impossible to track manually what is uploaded. However, platforms are responsible for the content they store. ContentCore by WebKyte for video platforms helps identify copyrighted and criminal content among user-generated uploads.

It’s best to speak to IT and data protection professionals if you need advice on this topic and how to prepare for the consequences when the Online Safety Act comes into effect.