A Strengthened EU ‘Code of Practice on Disinformation’ will require big tech companies to take more action to ensure that purveyors of disinformation do not benefit from advertising revenues.
Broader Range Of Commitments
The strengthened ‘Code of Practice on Disinformation’ will aim to achieve the objectives of the European Commission’s Guidance presented in May 2021, by setting a broader range of 44 commitments and 128 specific measures to counter online disinformation.
What Commitments?
The bolstered code aims to galvanize (via the threat of penalties for non-compliance) more action from tech companies to tackle online issues including:
– Transparency of political advertising, i.e. introducing better transparency measures, allowing users to easily recognise political ads by providing more effective labelling, committing to reveal the sponsor, ad-spend and display period.
– Ensuring the integrity of services by measures such as acting to reduce manipulative behaviour used to spread disinformation, e.g. fake accounts, bot-driven amplification, impersonation, and malicious deep fakes.
– Empowering users by protecting them from disinformation, giving them enhanced tools to recognise, understand and flag disinformation, to access authoritative sources, and through media literacy initiatives.
– Empowering researchers by providing better support to research on disinformation, e.g. by ensuring automated access to non-personal, anonymised, aggregated or manifestly made public data.
– Empowering the fact-checking community across all EU Member States and languages, ensuring that platforms will make a more consistent use of fact-checking on their service.
– Setting up a Transparency Centre, accessible to all citizens, and a permanent Taskforce to keep the Code future-proof and fit-for-purpose.
– A strengthened monitoring framework.
Which Tech Companies Have Signed Up?
34 signatories have already joined the revision process of the 2018 Code. The EC has noted that they come from a broad cross-section of the online ecosystem and include companies from the advertising ecosystem, advertisers, ad-tech companies, fact-checkers, emerging or specialised platforms, civil society, and third-party organisations with specific expertise on disinformation.
Specifically, and significantly, they include Google, Meta, TikTok, Microsoft, Twitter, and Clubhouse. Twitter is reported to have signed up to the updated code.
Penalties
Companies that do not comply with the strengthened code could face penalties of up to 6 per cent of their global turnover.
Part Of A Broader Framework
The 2022 strengthened Code of Practice on Disinformation will become part of a broader regulatory framework, in combination with the legislation on Transparency and Targeting of Political Advertising and the Digital Services Act.
Deepfakes: A Growing Problem
Deepfakes have been a growing problem in recent years. For example, in April 2022, researchers from the Stanford Internet Observatory reported finding more than one thousand deepfake ‘virtual’ employees on the LinkedIn platform.
The invasion of Ukraine by Russia has also emphasised the threat posed by deepfakes. For example, in March 2022, deepfake videos of both Russian President Vladimir Putin and Ukrainian President Volodymyr Zelensky started appearing online, with the deepfake of President Zelensky designed to distort public perception of the invasion.
What Does This Mean For Your Business?
Disinformation and deepfakes have plagued big tech (social media platforms) in recent years. Their effects may have influenced political outcomes and, as demonstrated in Russia's invasion of Ukraine, could be used to distort facts in ways with serious implications. The events of the Trump era and Capitol Hill in the U.S. also emphasise some of the dangers of disinformation and deepfakes. It is no surprise, therefore, that governments (especially in the case of the EU) are keen to introduce more regulation to put greater pressure on big tech and social media companies to be more proactive in tackling this threat. The significant fines for non-compliance may also give the code the teeth it needs to have a chance of being more effective. Social media companies especially will have expected this strengthened code and are likely to be all-too-aware of the greater focus in recent years on them, on how they police their platforms, and on how they protect their users.