Today, firms across nearly every industry deploy machine learning algorithms to shape human decisions, creating a structural tension between commercial opacity and democratic transparency. In many business applications, advanced algorithms are technically complex and privately owned, placing them beyond the reach of legal regimes and public scrutiny. Yet without warning, these algorithms may erode democratic norms, inflict financial losses, and extend harms to stakeholders. Because their inner workings and applications are generally incomprehensible to outsiders and protected as trade secrets, they can be almost completely shielded from public oversight. One solution to this conflict between algorithmic opacity and democratic transparency is an effective mechanism that incentivizes firms to disclose information about their algorithms.

The pressing problem of algorithmic opacity stems from a regulatory void: US disclosure regulations fail to consider the informational needs of stakeholders in the age of AI. In a world of privately owned algorithms, advanced algorithms have become a primary source of decision-making power and have produced various perils for the public and for firms themselves, particularly in the capital market. Because the current disclosure framework does not address the informational needs created by algorithmic opacity, algorithmic disclosure under corporate securities law should serve as a tool to promote algorithmic accountability and foster the social interest in sustainability.

First, advanced machine learning algorithms have been widely applied in AI systems across many critical industries, including financial services, medical services, and transportation. Second, despite the growing pervasiveness of algorithms, the law, particularly intellectual property law, continues to encourage algorithmic opacity. Although trade secrecy protection for algorithms helps firms build competitive advantage, it has proven deleterious for society, compromising democratic norms such as privacy, equality, and safety through invisible algorithms that no one can scrutinize. Third, although the emerging perils of algorithmic opacity are far more catastrophic and unwieldy than before, the current disclosure framework under corporate securities law fails to consider stakeholders' informational needs regarding advanced algorithms in AI systems.

In this vein, viewed through the lens of the US Securities and Exchange Commission (SEC) disclosure framework, there should be a new disclosure framework for AI systems based on machine learning algorithms, one that accounts for the technical traits of advanced algorithms, the potential dangers of AI systems, and regulatory governance mechanisms in light of increasing AI incidents. To reach this goal, a set of disclosure topics, key disclosure reports, and new principles for reducing algorithmic opacity, including stakeholder consideration, sustainability consideration, comprehensible disclosure, and minimum necessary disclosure, can ultimately strike a balance between the democratic value of transparency and private interests in opacity.

— Sylvia Lu
