- Learning is fundamentally an inverse problem
- Neural network is learnable numerical computation


Deep Manifold Part 1: Anatomy of Neural Network Manifold, arXiv:2409.17592
Deep Manifold Part 2: Neural Network Mathematics, arXiv:2512.06563
In the history of neural networks, Deep Manifold is the first to provide a mathematical account of the neural network itself: its geometry, algebra, equations, stochasticity, fixed points, and boundary conditions (each detailed below).
Deep Manifold further proposes that neural networks are:
Neural Network: A Powerful Learning Network
A neural network possesses only learnability; it has no inherent capacity for reasoning or intelligence. All such abilities are learned.
Property-lessness
Neural network computation reduces to the most primitive counting operations (addition and subtraction). It is precisely this property-lessness that unifies everything into a single computational framework (a toy illustration follows the list below):
Stacked Piecewise Manifolds
Reduce high-order nonlinearity in the data and create many convergence pathways
Coordinate Change
The simplest basis function, which changes itself to learn the data ("data fitting")
Forward and Inverse Combined Iteration
Enables a coordinate change at each iteration, increasing data-learning efficiency
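
To make the counting claim above concrete, here is a minimal sketch assuming integer-quantized weights (an assumption made purely for illustration; the papers do not prescribe this): every multiply-accumulate of a dense layer unrolls into pure additions and subtractions.

```python
import numpy as np

def mul_as_counting(a, w_int):
    """Multiply activation a by an integer weight using only repeated
    addition or subtraction: multiplication reduced to counting."""
    acc = 0.0
    for _ in range(abs(w_int)):
        acc = acc + a if w_int > 0 else acc - a
    return acc

def dense_forward_counting(x, W_int, b):
    """One dense layer + ReLU in which every multiply-accumulate is
    unrolled into counting steps (pure adds and subtracts)."""
    out = np.array(b, dtype=float)
    for j in range(W_int.shape[1]):
        for i in range(W_int.shape[0]):
            out[j] += mul_as_counting(x[i], int(W_int[i, j]))
    return np.maximum(out, 0.0)

x = np.array([0.5, 1.0])
W_int = np.array([[2, -1], [3, 0]])  # hypothetical integer-quantized weights
b = np.array([0.0, 0.1])
print(dense_forward_counting(x, W_int, b))  # -> [4. 0.]
```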
Neural Network Geometry
Connected and stacked piecewise-smooth manifolds jointly form the geometric structure of the representation space. Node covers act as the local units of these piecewise-smooth manifolds, and their orientations change at every iteration. These piecewise-smooth manifolds are differentiable and integrable.
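
One common way to see this geometry, sketched here for a ReLU network (my construction, not necessarily the papers' exact one): each node's activation pattern selects one affine chart, the charts tile the input space, and their orientations move whenever the weights are updated.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(2, 8)), rng.normal(size=8)  # hypothetical 2->8 ReLU layer

def activation_pattern(x):
    """Which side of each node's hyperplane x falls on; each distinct
    pattern indexes one affine chart of the stacked piecewise manifold."""
    return tuple((x @ W1 + b1 > 0).astype(int))

pts = rng.uniform(-3, 3, size=(20000, 2))   # sample the input plane
charts = {activation_pattern(p) for p in pts}
print(f"{len(charts)} piecewise-linear charts found in [-3, 3]^2")
```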
Neural Network Algebra
The coordinate system evolves with each iteration; counting serves as the most primitive algebraic unit, expressed through the iterated-integral structure of forward propagation; at the same time, activations themselves carry no inherent attributes.
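
A minimal sketch of the evolving coordinate system (the gradient and target below are hypothetical toys, not the papers' formulation): the hidden layer supplies the coordinates in which the input is expressed, and each training iteration moves those coordinates.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(2, 3))      # hidden layer = the current coordinate system
x = np.array([1.0, -0.5])

for step in range(3):
    h = np.maximum(x @ W, 0.0)   # forward: express x in the current coordinates
    grad = np.outer(x, h - 1.0)  # toy gradient toward a hypothetical target h = 1
    W -= 0.1 * grad              # the update moves the coordinate system itself
    print(f"iteration {step}: coordinates of x -> {np.round(h, 3)}")
```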
Neural Network Equations
The fixed-point residual is the most fundamental equation of a neural network; the residual itself serves as a boundary condition, and its Lagrangian form further yields the iterative method for computing the fixed point, thereby giving the neural network dynamic, iterative computability.
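
As a generic illustration of the fixed-point residual and its iterative computation (the map f below is my own example, not taken from the papers): gradient descent w <- w - eta * grad L(w) has exactly this form, with residual -eta * grad L(w).

```python
import numpy as np

def f(x):
    return np.cos(x)   # a contraction near its fixed point (my example)

x, r = 1.0, 1.0
for k in range(200):
    r = f(x) - x           # fixed-point residual: zero exactly at x* = f(x*)
    if abs(r) < 1e-10:     # the residual doubles as the stopping boundary condition
        break
    x = x + r              # Picard iteration x <- f(x), driven by the residual
print(f"x* ~ {x:.10f} after {k} iterations, residual {r:.2e}")
```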
Neural Network Stochasticity
Stochasticity is expressed through inequalities and group statistics based on summation; this statistical structure enables neural networks to learn the inherently stochastic real world and naturally form stochastic fixed points.
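
A sketch of a stochastic fixed point in the simplest setting I could choose (1-D linear regression with mini-batch SGD; the papers treat a far more general case): the batch mean is the summation-based group statistic, and the iterate never settles exactly but satisfies an inequality band around the solution.

```python
import numpy as np

rng = np.random.default_rng(2)
w, true_w, lr, batch = 0.0, 2.0, 0.1, 32

for step in range(500):
    x = rng.normal(size=batch)
    y = true_w * x + 0.3 * rng.normal(size=batch)  # inherently noisy data
    grad = np.mean((w * x - y) * x)  # group statistic: a summation over the batch
    w -= lr * grad

# w never settles exactly: it hovers inside an inequality band |w - 2| < eps,
# a stochastic fixed point rather than a static one.
print(f"w = {w:.3f}, |w - true_w| = {abs(w - true_w):.3f}")
```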
Neural Network Fixed Points
Iterations traverse billions of piecewise-smooth manifolds, giving rise to innumerable fixed points and convergence paths within foundation models. Because the training data themselves contain high-order nonlinearity, curvature (second derivatives), and moderate perturbations, the model can distinguish the correct convergence direction toward a fixed point.
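
Even in one dimension the coexistence of many fixed points and convergence paths is visible; the nonconvex loss below is my own toy example, far removed from a foundation model:

```python
import numpy as np

def grad(w):                 # gradient of the toy loss sin(3w) + 0.1 w^2
    return 3 * np.cos(3 * w) + 0.2 * w

rng = np.random.default_rng(3)
endpoints = set()
for w0 in rng.uniform(-6, 6, size=20):  # 20 different convergence paths
    w = w0
    for _ in range(2000):
        w -= 0.01 * grad(w)             # plain gradient descent
    endpoints.add(round(float(w), 3))
print(f"{len(endpoints)} distinct fixed points reached: {sorted(endpoints)}")
```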
Neural Network Boundary Conditions
Boundary conditions are the sole source of iterative direction and determine the convergence path during training. When a foundation model lacks static fixed points, symmetric, weak, and discrete boundary conditions become necessary to guide the convergence of a high-order nonlinear system.
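
Reading the training labels as discrete boundary conditions (one plausible interpretation of the text, not the papers' definition), here is a least-squares sketch in which the residual against the boundary data is the only quantity that orients the iteration:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([2.0, 4.0, 6.0])   # labels: the discrete boundary conditions

w = 0.0
for step in range(100):
    residual = w * x - y                 # mismatch against the boundary data
    direction = np.sum(residual * x)     # the sole source of the update direction
    w -= 0.01 * direction
print(f"w = {w:.4f}  (the direction vanishes once the boundary is satisfied)")
```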
1960s: Theory of fixed point classes
1970s: KeyBlock Theory
1980s: Discontinuous Deformation Analysis
1990s: Numerical Manifold Method
2000s: Contact Theory (inequality theory)
The theory of fixed-point classes resolved the existence, uniqueness, and stability of solutions to differential equations, more than two centuries after Newton introduced calculus.
1990s: Numerical Manifold Method
Professor Shiing-Shen Chern, widely regarded as the father of modern differential geometry, served as one of the three members of Gen-hua Shi's PhD dissertation committee (UC Berkeley), providing academic validation for the mathematical framework of the Numerical Manifold Method and laying the foundation for the subsequent development of the theory. Chern's only question was: "Can stacked piecewise manifolds be extended to any complex domain?" He would have been delighted to see the progress in neural networks, as their geometry can be understood as stacked piecewise manifolds.

