<aside>

**Post VII of the Causal Discovery Series by https://diksha-shrivastava13.github.io/**

</aside>

Dear Reader,

At this point we have completed reviewing my work from 2024, and now we turn our eyes to early 2025. In this section, I discuss aspects of open-endedness, explain why I chose “causal discovery”, and set up the premise for the next research statement. I also discuss the parallels between Theoretical ML and Theoretical Physics when it comes to Causality.

This is the part where the series turns more theoretical and philosophical, in contrast to the empirical approach we have taken so far. That shift is largely a result of researching independently, as well as the difficulty of communicating why this direction must be pursued.

As always, enjoy! And if by the end, I seem to be crazy—you’re not alone.

I’ve tried to follow the logic as rigorously as possible whenever I move from established literature to speculation. However, digesting and accepting these risks is a long and hard process. It requires continuous thought over long periods of time.

So far, I’ve found at least three research scientists who think strongly in the same direction, and I suggest you read the following works for more background:

Superintelligent Agents Pose Catastrophic Risks: Can Scientist AI...

Causal Discovery in Astrophysics: Unraveling Supermassive Black...

Provably safe systems: the only path to controllable AGI

The Mathematical Universe


Discussion of ‘Open-Endedness is Essential for Artificial Superhuman Intelligence’

Open-Endedness is Essential for Artificial Superhuman Intelligence

In this position paper, the authors argue that the ingredients are now in place to achieve open-endedness in AI systems with respect to a human observer. They further claim that such open-endedness is an essential property of any artificial superhuman intelligence (ASI).

3.1 Reinforcement Learning