Lessons Learned from Financial Services Regulation for the Era of Advanced AI

At the 2025 Central Park AI Forum, former CFTC Commissioners Kristin Johnson and Dan Berkovitz explored what a century of financial regulation can teach policymakers confronting the rise of artificial intelligence. Both argued that technology-neutral principles remain essential but that AI’s potential impact will stretch how those principles are applied.

Berkovitz opened by reminding the audience that landmark financial-markets laws have emerged after crises, from the Grain Futures Act after World War I to the Depression-era securities laws of the 1930s to Dodd-Frank after the 2008 financial crisis. “It’s very difficult to get prospective legislation, forward looking ahead, anticipating issues, and the political will to address them,” he said. “But after a crisis, there’s motivation.” The challenge for AI, he added, will be whether regulators can act before the next disruption rather than after it.

He traced how financial oversight has historically balanced principles-based mandates, such as the obligation to act “in the public interest,” with more prescriptive rules. Recent court decisions, he said, have questioned whether old statutes can be interpreted to cover new technologies. That tension, Berkovitz suggested, is directly relevant to the debate over AI: should policymakers rely on existing frameworks or create new, specific authorities?

Johnson agreed that flexibility is vital, citing the Howey case to illustrate how legal principles are applied to unforeseen situations, while noting that AI is different: it functions as infrastructure, not merely as a product. It already underpins trading, data aggregation, surveillance, and enforcement, and regulators themselves use AI to identify anomalies in trading.

To show how such questions evolve, Berkovitz revisited the CFTC’s attempt to regulate automated trading after the 2010 flash crash. The original Reg AT proposal would have required registration of algorithmic traders and imposed controls on users and even programmers. The agency ultimately took a more principles-based route, reaffirming that exchanges and intermediaries bear responsibility for orderly markets. That episode, he said, previewed a central challenge posed by AI: when a system itself generates outcomes, whose intent matters?

Johnson urged closer interagency and international coordination—citing the forward-looking work streams of IOSCO—and suggested task forces that could keep standards consistent and current. She also raised the idea of AI supervising AI, in which regulators deploy automated systems of their own to oversee firms. Without new approaches, she warned, oversight will remain backward-looking.

In closing, Berkovitz argued that a stronger exchange of expertise between government and industry is essential to attracting AI talent into public service. Johnson agreed, noting that regulators have long trained many of the market’s top legal minds and that the same must hold true for AI.

The panel ended on a shared theme we saw across the 2025 Central Park AI Forum: regulation should remain principles-based and nimble enough to stay relevant.