
DataCanvas Has Two Papers Accepted to ICLR, Advancing Core Research in AI Explainability and Dynamic Causal Inference

Release date: 2025.04.27

A new milestone for DataCanvas in the global AI arena! The DataCanvas research team’s two original works — “A Solvable Attention for Neural Scaling Laws” and “DyCAST: Learning Dynamic Causal Structure from Time Series” — have been officially accepted by the International Conference on Learning Representations (ICLR), one of the world’s top three AI conferences. These achievements represent progress in two fundamental directions: understanding neural networks and modeling dynamic causal systems, marking a significant leap in DataCanvas’s innovation at the AI infrastructure level and its influence on the international academic stage.


Top Conference Recognition Showcases DataCanvas’s AI Research Strength

ICLR, along with NeurIPS and ICML, is recognized as one of the three most prestigious academic conferences in the field of artificial intelligence. Founded in 2013 by deep learning pioneers Yoshua Bengio and Yann LeCun, ICLR has become the go-to platform for AI researchers worldwide to present milestone achievements. Known for its focus on deep learning fundamentals, rigorous academic standards, and open, collaborative community, ICLR holds the second-highest h5-index among AI publications on Google Scholar. In 2025, ICLR received a record 11,565 submissions with an acceptance rate of 32.08%, making it an intense proving ground for foundational AI research.


This year’s double acceptance highlights DataCanvas’s robust research capabilities in core areas of AI. Notably, this is not the company’s first recognition at top-tier AI conferences; earlier accepted papers include:

2022 (ICLR): Implicit Bias of Adversarial Training for Deep Neural Networks

2023 (NeurIPS): Implicit Bias of (Stochastic) Gradient Descent for Rank-1 Linear Neural Network

2024 (AAAI): Effects of Momentum in Implicit Bias of Gradient Flow for Diagonal Linear Networks


From Theoretical Foundations to System-Level Capabilities: Full-Stack Innovation

The two ICLR 2025 papers reflect a systematic approach in DataCanvas’s AI software research — driving progress through both “theoretical interpretability” and “dynamic causal inference” to push AI toward a more reliable, intelligent next-generation paradigm.

On the theoretical side: Breaking down Transformer scaling laws to address large-model efficiency bottlenecks. “A Solvable Attention for Neural Scaling Laws” is the first to show, from a computational perspective and under certain conditions, that scaling laws arise inevitably in simplified self-attention, the core of the Transformer architecture, and how those laws are shaped by factors such as the data distribution. The results indicate that self-attention follows scaling laws similar to those of other models, consistent with findings from large-scale experiments.
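To give a concrete sense of the kind of scaling law discussed here, the toy calculation below is our own illustration (not taken from the paper): when the data spectrum decays as a power law, the error that a model keeping only the top N directions cannot capture also decays as a power law in N, with an exponent set by how fast the spectrum falls off.

```python
# Toy illustration (not the paper's derivation): a power-law data spectrum
# lambda_i ~ i^(-a) leaves a residual error that itself decays as a power law
# in the model size N, with exponent a - 1 determined by the data distribution.
import numpy as np

a = 1.5                                               # assumed spectral decay rate
lams = np.arange(1, 1_000_001, dtype=float) ** (-a)   # synthetic spectrum

def residual_error(n):
    """Error carried by the directions a size-n model cannot capture."""
    return lams[n:].sum()

sizes = np.array([100, 300, 1_000, 3_000, 10_000])
errors = np.array([residual_error(n) for n in sizes])

# Fit a straight line in log-log space; the slope estimates the scaling exponent.
slope, _ = np.polyfit(np.log(sizes), np.log(errors), 1)
print(f"measured exponent ~ {-slope:.2f}, spectrum predicts a - 1 = {a - 1:.2f}")
```

Running it prints a measured exponent close to a − 1, echoing the qualitative point above that scaling behavior is dictated by properties of the data distribution.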


On the systems side: Using NeuralODE to model dynamic causal networks and open the black box of complex systems. “DyCAST: Learning Dynamic Causal Structure from Time Series” is the first temporal causal-inference method to model both the relationships among variables within a causal graph at the same time step and the relationships between graphs across different time steps. Built on NeuralODE, the method outperforms or matches state-of-the-art approaches across multiple datasets.
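To make the setting concrete, the self-contained sketch below is our own illustration, not the DyCAST algorithm, and names such as `GraphODE` are hypothetical: a tiny neural ODE evolves two weight matrices over time, one for same-step (contemporaneous) edges and one for cross-step (lagged) edges, and the resulting time-varying graph generates a multivariate time series. DyCAST tackles the inverse problem of recovering such dynamic structure from the observed series.

```python
# Illustrative sketch only: a neural ODE drives the drift of a causal graph's
# weights, and the time-varying graph generates a series with both same-step
# and lagged dependencies. This is a forward simulator, not a learning method.
import torch
import torch.nn as nn

D, STEPS, DT = 4, 50, 0.1   # variables, series length, Euler step size

class GraphODE(nn.Module):
    """dW/dt = f_theta(W, t): a small MLP defines the drift of graph weights."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * d * d + 1, 64), nn.Tanh(),
            nn.Linear(64, 2 * d * d),
        )

    def forward(self, w_flat, t):
        return self.net(torch.cat([w_flat, t.view(1)]))

def simulate(ode, d=D, steps=STEPS, dt=DT, seed=0):
    torch.manual_seed(seed)
    w = 0.1 * torch.randn(2 * d * d)   # packs A (same-step) and B (lagged)
    x_prev = torch.zeros(d)
    series = []
    with torch.no_grad():
        for k in range(steps):
            t = torch.tensor(k * dt)
            w = w + dt * ode(w, t)                       # Euler step of the ODE
            A = torch.tril(0.1 * torch.tanh(w[: d * d].view(d, d)), diagonal=-1)
            B = 0.1 * torch.tanh(w[d * d:].view(d, d))   # small weights keep the series stable
            noise = 0.05 * torch.randn(d)
            # x_k = A x_k + B x_{k-1} + noise  =>  x_k = (I - A)^{-1} (B x_{k-1} + noise)
            x = torch.linalg.solve(torch.eye(d) - A, B @ x_prev + noise)
            series.append(x)
            x_prev = x
    return torch.stack(series)                           # shape: (steps, d)

if __name__ == "__main__":
    print(simulate(GraphODE(D)).shape)   # torch.Size([50, 4])
```

Keeping the same-step matrix strictly lower triangular is one simple way to keep the contemporaneous graph acyclic; an actual structure-learning method would instead have to discover both matrices, and their evolution, from data.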


If the two lines of research are deeply integrated, they could help establish an AI framework that is theoretically verifiable and causally traceable:

Quantifying model performance limits via scaling laws to avoid blind parameter expansion, ensuring training controllability (a fitting-and-extrapolation sketch follows this list).

Revealing the decision-making logic of AI through dynamic causal networks to address “black-box anxiety” in sensitive fields like healthcare and finance, ensuring decision explainability.

Combining theoretical modeling with dynamic inference to enhance AI stability in complex open problems like climate change forecasting and supply chain risk management, ensuring system generalization.
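As a hedged illustration of the first point above, the sketch below (our own example, with made-up loss numbers) fits the commonly used form L(N) = L∞ + c·N^(−α) to a handful of hypothetical measurements and then estimates how much loss reduction another doubling of parameters would buy, which is the kind of check that helps avoid blind parameter expansion.

```python
# Hypothetical planning sketch: fit L(N) = L_inf + c * N^(-alpha) to a few
# (parameter count, validation loss) points, then predict the payoff of
# doubling the parameter count before committing the compute.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n, l_inf, c, alpha):
    return l_inf + c * n ** (-alpha)

# Made-up measurements generated from a known law plus a little noise.
n_obs = np.array([1e7, 3e7, 1e8, 3e8, 1e9])
loss_obs = scaling_law(n_obs, 1.8, 40.0, 0.27) + np.random.default_rng(0).normal(0, 0.005, 5)

params, _ = curve_fit(scaling_law, n_obs, loss_obs, p0=[1.0, 10.0, 0.3], maxfev=10_000)
l_inf, c, alpha = params

current, doubled = 1e9, 2e9
gain = scaling_law(current, *params) - scaling_law(doubled, *params)
print(f"irreducible loss ~ {l_inf:.2f}, exponent ~ {alpha:.2f}")
print(f"predicted loss reduction from doubling parameters: {gain:.4f}")
```

If the predicted gain is small relative to the estimated irreducible loss, further scaling alone is unlikely to pay off, which is exactly the kind of controllability the first point refers to.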

The continued recognition of DataCanvas’s research at top international conferences underscores its sustained strength in AI innovation. Moving forward, the company will keep deepening its work in deep learning theory, large-model optimization, and causal inference, injecting fresh innovation into the global AI community.