Rotary Club Meeting at Palazzo Trinci, Foligno

The intersection of information and power has long been a focal point of political science and sociology, but in the mid-2020s, it has evolved into a critical ethical crisis. This tension was recently highlighted during a public forum in Foligno, Italy, where members of the community gathered at Palazzo Trinci for a discussion organized by the Rotary Club, led by president Marina Busti. While the meeting was local, the theme, "Information and Power: The Ethical Challenge of Our Time," mirrors a global struggle to redefine truth, accountability, and agency in an era dominated by algorithmic curation and generative artificial intelligence.

As a financial journalist who has spent nearly two decades observing the shift from industrial capital to data capital, I have watched the “information asymmetry” that once only plagued financial markets migrate into every facet of civic life. When a small group of corporations or political entities controls the flow of information, they do not merely influence opinion; they shape the perceived reality of the global population. This concentration of power creates a precarious environment where the ethical management of data is no longer a corporate social responsibility goal, but a requirement for systemic stability.

The challenge is compounded by the rapid integration of Large Language Models (LLMs) and synthetic media into the public discourse. We are moving beyond a period of simple “misinformation”—where false data is spread—into an era of “epistemic fragmentation,” where the very tools we use to verify the truth are the same tools used to fabricate it. For business leaders, policymakers, and civic organizations, the question is no longer how to distribute information, but how to ensure that the infrastructure of information remains trustworthy.

The Architecture of Information Asymmetry

In economic terms, information is a primary asset. Traditionally, those with the best information held the most power. However, the current digital economy has shifted this dynamic from the possession of information to the control of the filters through which information is accessed. Today, the power lies with the entities that design the algorithms determining what a user sees, reads, and believes.

This asymmetry is most evident in the “black box” nature of proprietary algorithms. When algorithmic systems determine creditworthiness, job eligibility, or the visibility of political speech, the lack of transparency becomes an ethical failure. The power to categorize and rank human behavior without public oversight creates a new form of invisible governance. This is why the discussions held in venues like Palazzo Trinci are vital: they represent a grassroots demand for the democratization of information and a return to transparent, human-centric discourse.

From a global market perspective, this concentration of power poses a risk to competition. When a few dominant platforms control the discovery of new businesses or products, the “barrier to entry” is no longer just capital, but algorithmic favor. This has led to increased scrutiny from regulators worldwide who argue that the control of information is now a matter of antitrust concern, as data monopolies can stifle innovation by simply altering a search ranking or a recommendation feed.

Generative AI and the Erosion of Shared Reality

The emergence of generative AI has accelerated the ethical challenge of information power. We are now witnessing the industrialization of persuasion. AI can generate hyper-personalized content designed to exploit specific psychological vulnerabilities, making the manipulation of public opinion more efficient and less detectable than ever before.

The primary danger is not necessarily the “deepfake” that fools a few people, but the “liar’s dividend.” This occurs when the mere existence of synthetic media allows powerful actors to dismiss real, verified evidence as being AI-generated. When the public can no longer distinguish between a genuine recording and a fabrication, the default response often becomes a total distrust of all information. This cynicism is the ultimate goal of those who seek to exercise power through chaos rather than consensus.

For the business community, this erosion of truth introduces significant operational risks. Corporate reputation, once built over decades of consistent action, can now be dismantled in hours by a viral, AI-generated falsehood. The cost of verifying information has risen sharply, forcing companies to invest heavily in “information integrity” teams and blockchain-based provenance tools to prove the authenticity of their communications.

Regulatory Guardrails: The EU AI Act and Beyond

Governments are attempting to catch up with these technological shifts through legislative frameworks. The most comprehensive attempt to date is the European Union’s AI Act, which represents a landmark shift toward a risk-based approach to technology regulation. Rather than banning AI, the Act categorizes AI applications by the level of risk they pose to fundamental rights and safety.


Under this framework, AI systems deemed to pose an unacceptable risk—such as those used for social scoring by governments—are prohibited. Systems categorized as high-risk, including those used in critical infrastructure, education, and law enforcement, are subject to strict obligations regarding data quality, transparency, and human oversight. The legislation includes significant penalties for non-compliance, with fines that can reach up to €35 million or 7% of total worldwide annual turnover, whichever is higher, for the most severe violations. This regulatory structure is detailed in the official EU AI Act portal.
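The Act's risk-based logic can be sketched as a simple lookup from use case to tier and obligation. This is an illustration only: the tier assignments for social scoring, critical infrastructure, education, and law enforcement follow the examples above, while the "limited" and "minimal" tiers and any real compliance determination would require consulting the Act's own annexes, not a table like this.

```python
# Illustrative sketch of the AI Act's risk-based approach: map an AI
# use case to a risk tier and the obligations attached to that tier.
# Tier assignments here mirror the examples discussed in the text;
# this is not a substitute for the regulation's actual annexes.

RISK_TIERS = {
    "social_scoring": "unacceptable",      # prohibited outright
    "critical_infrastructure": "high",     # strict obligations
    "education": "high",
    "law_enforcement": "high",
    "chatbot": "limited",                  # transparency duties
    "spam_filter": "minimal",
}

OBLIGATIONS = {
    "unacceptable": "prohibited",
    "high": "data quality, transparency, and human oversight requirements",
    "limited": "disclosure that the user is interacting with AI",
    "minimal": "no additional obligations",
}

def classify(use_case: str) -> tuple[str, str]:
    """Return (risk tier, obligations) for a given AI use case."""
    tier = RISK_TIERS.get(use_case, "unclassified")
    return tier, OBLIGATIONS.get(tier, "manual legal review required")
```

The point of the sketch is the structure of the regulation, not its content: obligations attach to the tier, not to the individual application, which is what makes the framework extensible to new AI systems.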

While the EU AI Act provides a blueprint, the global nature of the internet means that fragmented regulation is often ineffective. The challenge remains in creating an international standard for “information provenance.” This would involve a digital “watermark” for all AI-generated content, allowing users to instantly identify the origin of a piece of media. Without such a global standard, the power to manipulate information will simply migrate to jurisdictions with the fewest ethical constraints.
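The mechanics of such a provenance "watermark" can be illustrated with a minimal sketch: a publisher tags a media file with a keyed signature over its hash, and any verifier can later confirm the file has not been altered. Real provenance standards such as C2PA use public-key signatures and far richer manifests; the HMAC scheme and function names below are simplifying assumptions for illustration only.

```python
import hashlib
import hmac

# Minimal sketch of content provenance: a publisher signs a media
# file's hash with a key; verifiers recompute the tag and compare.
# Real-world standards use public-key cryptography so that anyone
# can verify without holding the signing key.

def sign_content(content: bytes, key: bytes) -> str:
    """Produce a provenance tag: an HMAC over the content's SHA-256 hash."""
    digest = hashlib.sha256(content).digest()
    return hmac.new(key, digest, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    """Check that the content still matches its provenance tag."""
    expected = sign_content(content, key)
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)
```

Even this toy version shows why provenance must be attached at creation time: once content circulates untagged, no amount of downstream analysis can cryptographically establish its origin.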

The Ethical Imperative for Civic and Corporate Leadership

The responsibility for maintaining a healthy information ecosystem cannot fall solely on regulators. There is a profound ethical imperative for civic organizations and corporate leaders to champion “information literacy.” The goal is to move the public from passive consumption to active verification.

Rotary conference, 28 December 2022, Palazzo Trinci, Foligno

For business leaders, this means adopting a policy of radical transparency. In an age of suspicion, the only antidote is the provision of verifiable evidence. This includes moving away from opaque “black box” AI models toward “explainable AI” (XAI), where the reasoning behind an algorithmic decision can be audited and understood by a human being. When a company can explain why a decision was made, it restores the trust that is currently being eroded by automated systems.
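What "explainable" means in practice can be shown with a minimal sketch: a linear scoring model whose output decomposes into per-feature contributions that an auditor can inspect. The feature names, weights, and threshold below are hypothetical, and real XAI work on complex models uses attribution techniques (such as SHAP-style values) rather than a model this simple.

```python
# A minimal sketch of an auditable, "explainable" decision: a linear
# scoring model whose result can be broken down into the contribution
# of each input feature. All names and numbers are illustrative.

def explain_decision(features: dict, weights: dict, threshold: float) -> dict:
    """Score an application and return the per-feature reasoning."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    return {
        "approved": score >= threshold,
        "score": score,
        "contributions": contributions,  # the auditable explanation
    }

# Hypothetical applicant: each contribution shows exactly why the
# decision came out the way it did.
result = explain_decision(
    features={"income": 2.0, "debt": 1.5},
    weights={"income": 1.0, "debt": -0.8},
    threshold=0.5,
)
```

The design choice this illustrates is the one the paragraph argues for: the explanation is a first-class output of the system, not a post-hoc justification bolted on after the fact.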

Civic organizations, such as the Rotary Club, play a crucial role by providing physical spaces for nuanced, face-to-face deliberation. In a digital world designed to silo us into “echo chambers” via confirmation bias algorithms, the act of meeting in a public square to discuss ethics is a subversive and necessary act. It re-establishes the human element of power—the ability to listen, disagree, and reach a consensus based on shared values rather than shared data points.

Key Takeaways for the Information Age

  • Algorithmic Power: Influence has shifted from those who own the information to those who control the algorithms that filter it.
  • The Liar’s Dividend: The rise of synthetic media allows bad actors to dismiss real evidence as “fake,” leading to systemic epistemic instability.
  • Regulatory Shift: The EU AI Act introduces a risk-based framework with fines up to 7% of global turnover to enforce transparency and safety.
  • Corporate Responsibility: Trust is now a competitive advantage; companies must transition to “explainable AI” to maintain legitimacy.
  • Civic Resistance: Local, human-centric forums are essential for breaking the algorithmic silos that fragment public discourse.

The Path Forward: Toward an Ethics of Truth

The challenge of information and power is not a technical problem to be solved with more code, but a philosophical problem to be solved with better values. If we treat information merely as a commodity to be optimized for engagement, we will continue to see the erosion of the social contract. If, instead, we treat information as a public good—essential for the functioning of a free society—we can begin to build systems that prioritize accuracy over velocity and transparency over profit.


The struggle described in Foligno is a microcosm of a global necessity. Whether in a town hall in Italy or a boardroom in London, the objective remains the same: to ensure that power is exercised not through the manipulation of the truth, but through the transparent and ethical application of knowledge.

The next critical checkpoint for this global effort will be the continued rollout of the EU AI Act’s implementation phases throughout 2026, which will determine how “high-risk” systems are audited in real-world settings. As these regulations take hold, the global business community will be forced to decide whether they will meet the minimum legal requirements or lead the way in establishing a new gold standard for digital ethics.

Do you believe that algorithmic transparency is possible under current corporate structures, or is a complete overhaul of data ownership required? Share your thoughts in the comments below or join the conversation on our social channels.
