Social media are powerful tools for scammers to target victims. Facebook, Twitter, and other social media platforms have agreed to abide by tougher EU standards for policing online postings, the WSJ reports. The EU's new code of practice on disinformation aims to prevent advertising from appearing alongside posts deemed intentionally false or misleading, and platforms will be expected to give users more tools for identifying such content online.
The platforms have volunteered to abide by the new code before elements of it become mandatory under the Digital Services Act. The Digital Services Act is set to introduce a variety of requirements for online platforms, including standards for taking down illegal content, a ban on targeted advertising aimed at children, and new obligations to vet third-party sellers. Very large platforms, defined as those serving more than 45 million users in the EU, would also be expected to complete risk assessments and allow regulators to access the algorithms they use to determine what content users see.
The new EU rules, as proposed, will affect how social media companies such as Facebook, Twitter, and TikTok respond to concerns about harmful posts or decisions to lock a user's account.
In the U.S., the Biden administration has put its support behind antitrust legislation that seeks to restrict the power of dominant tech companies. A proposal in the U.K. aims to force companies to address harmful content, such as material related to eating disorders or self-harm.