to AI45, a safety ecosystem platform developed by Shanghai Artificial Intelligence Laboratory.
The platform is guided by the AI-45° Law: over the long term, AI safety and performance should ideally advance in parallel along a 45° line. Short-term fluctuations are permissible, but the trajectory should neither fall below 45° (as it does at present) nor exceed it (which would constrain development).
Multiple technical pathways may achieve the AI-45° Law. We are exploring a causality-centered approach, the "Causal Ladder of Trustworthy AGI", spanning three progressive layers: the Approximate Alignment Layer, the Intervenable Layer, and the Reflectable Layer.