Google has sparked controversy by rejecting new European Union (EU) regulations that would require tech companies to incorporate fact-checking features into their platforms, including search results and YouTube. The decision comes as the EU seeks to strengthen the Digital Services Act (DSA) to combat online misinformation more effectively.

Context: from voluntary commitments to law

In 2022, the EU issued a voluntary code of practice on disinformation, calling on technology companies to take measures against the spread of false information. Those commitments are now being formalized under the DSA, making them legally binding. The rules ask platforms to collaborate with fact-checkers across the EU, provide fact-checking in all official EU languages, label political advertisements, and deal with fake accounts and harmful content such as deepfakes. More than 40 platforms, including Microsoft, TikTok, Twitch, and Meta, initially signed the voluntary code. Enforcement remains inconsistent, however, with some platforms showing limited compliance.

Google opposes mandatory fact-checking

Google has publicly criticized the EU's push to make fact-checking mandatory. In a letter to the European Commission, Kent Walker, Google's global affairs chief, stated that the requirements are "inappropriate and ineffective" for the company's services. Walker argued that Google's existing content moderation system, which he said proved successful during recent elections, remains sufficient. He also announced that Google will withdraw from its voluntary fact-checking commitments before they are formalized under the DSA, underscoring the company's opposition to changing its content moderation approach.

A broader debate within the industry

Google's stance reflects a wider debate about the role technology platforms should play in managing online information. As the EU moves toward stricter regulation, other companies are reacting as well.
Meta recently scaled back its fact-checking efforts in the US, and X (formerly Twitter) has loosened its content moderation policies under Elon Musk's ownership. This trend raises the question of whether tech companies are willing, or even able, to take responsibility for moderating online content.

The EU's views and challenges

The European Fact-Checking Network has criticized platforms for their lax implementation of the commitments they signed. The EU stresses the need for stronger measures to limit the harm of misinformation, particularly given its impact on elections, public health, and social stability. Legislators are currently deciding which parts of the voluntary code will carry legal force under the DSA, with new provisions expected to take effect next month.

The role of US politics

The tech industry's resistance to EU regulation also intersects with US politics. Prominent technology CEOs, including Google's Sundar Pichai, are reported to have sought support from political leaders such as Donald Trump in resisting EU regulatory pressure. This lobbying effort highlights the global stakes of the EU's regulatory push.

The road ahead

The debate over misinformation and the role of tech companies remains unresolved. While the EU seeks stricter enforcement, major tech companies argue that mandatory fact-checking may not be practical or effective. As regulations evolve, balancing freedom of speech, corporate responsibility, and public trust remains a contentious issue. Google's refusal to comply with the proposed EU rules underscores the difficulty of addressing misinformation on a global scale. The question of who should govern online content remains unanswered, leaving a significant gap in the fight against misinformation.