<aside>

**Post IV of the Causal Discovery Series by https://diksha-shrivastava13.github.io/**

</aside>

Dear Reader,

Welcome to the adventure!

If I do my job even with 0.1% perfection, you should be left with a background of approaches running in the back of your mind to solve the problem of causal discovery. To recap quickly: you saw me attempt to build an agent-based product for BMZ, meant to support complex world-policy decisions that can result in maximal gain. While some creative hacks gave unreasonably satisfying results, we came across a gaping hole in AI capability: models cannot understand the subtle, implicit connections in the abstract data of complex, evolving systems. And as the complexity increases, multi-hop reasoning becomes increasingly fruitless.

I’ve since had the pleasure of talking with some really kind researchers at HuggingFace and Google DeepMind, and have had this problem validated.

However, in the last post, I mentioned that attempting to develop this ability within current agents could accidentally increase the X-risk tremendously.

Now we’ll attempt to understand how that conclusion was reached.

Cheers!

Diksha

(Yes, I’m unreasonably cheerful while discussing this topic. Writing about the past takes the edge off the panic of those moments of understanding.)

## The Problem with Holistic, Interrelated Systems

During the development of the product at BMZ, I was a little disappointed at times: the problem required a merged vector-graph solution, but whatever I developed would only ever stay at the ministry, unlike my product at SAP, which could be distributed. I was doing good research but had no outlet for it. A friend then calmed me down and said, “You’re solving a good problem; now try to generalise it.” That changed things, because these systems exist everywhere.

So, what do I mean by holistic, interrelated systems?

I’ve been told before that the terminology is hard to follow for anyone not intimately familiar with the use-case. So, before we look at practical use-cases, let me take you back to the 60-player Chess (or Taikyoku Shogi) analogy we’ve been using.

  1. Each of the 60 armies represents a subsystem with its own unique goals, strategies, and trade-offs. It operates internally on its own, yet remains part of the larger system.
  2. Within each subsystem, the pieces are interconnected through dependencies. A change to any single piece—whether a loss or gain—can trigger shifts across the entire subsystem, affecting every other piece.
  3. The actions of one subsystem can impact all others. When an army makes a move, the ripple effects influence the strategies and outcomes of every other army within the overarching system.
  4. Individual pieces from one army can form cross-system connections with pieces from other armies. These interdependencies mean that any action—no matter how small—can send rippling effects through the entire network, influencing every dependency in the entire overarching system.
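The four properties above can be sketched as a toy dependency graph: pieces belong to subsystems, dependency edges run both within and across subsystems, and a single change ripples out to everything reachable from it. A minimal sketch (all names here are illustrative, not from the analogy itself):

```python
from collections import defaultdict, deque

class InterrelatedSystem:
    """Toy model of the Taikyoku Shogi analogy: subsystems, pieces,
    intra- and cross-system dependencies, and ripple effects."""

    def __init__(self):
        self.subsystem_of = {}              # piece -> the army it belongs to (point 1)
        self.influences = defaultdict(set)  # piece -> pieces it affects (points 2 & 4)

    def add_piece(self, piece, subsystem):
        self.subsystem_of[piece] = subsystem

    def add_dependency(self, src, dst):
        self.influences[src].add(dst)

    def ripple(self, changed_piece):
        """Every piece reachable from one change, found by BFS (points 3 & 4)."""
        affected, queue = set(), deque([changed_piece])
        while queue:
            piece = queue.popleft()
            for other in self.influences[piece]:
                if other not in affected:
                    affected.add(other)
                    queue.append(other)
        return affected

system = InterrelatedSystem()
for piece, army in [("a1", "A"), ("a2", "A"), ("b1", "B"), ("c1", "C")]:
    system.add_piece(piece, army)
system.add_dependency("a1", "a2")  # intra-subsystem link within army A
system.add_dependency("a2", "b1")  # cross-system link into army B
system.add_dependency("b1", "c1")  # the ripple continues into a third army

print(sorted(system.ripple("a1")))  # one small change reaches A, B, and C
```

The point the sketch makes explicit: nothing in the edge list marks a dependency as "important", so the only way to know what a change to `a1` touches is to traverse the whole network — which is exactly what makes these systems hard for models reasoning hop by hop.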
