[Docathon][Add CN Doc No.28-29] #6382

Merged · 11 commits · Jan 16, 2024
24 changes: 24 additions & 0 deletions docs/api/paddle/amp/debugging/check_layer_numerics_cn.rst
@@ -0,0 +1,24 @@
.. _cn_api_paddle_amp_debugging_check_layer_numerics:

check_layer_numerics
-------------------------------

.. py:function:: paddle.amp.debugging.check_layer_numerics(func)

This decorator checks the numerical values of a layer's input and output data.


Parameters
:::::::::

- **func** (callable) – The function to be decorated.

Returns
:::::::::
Returns a decorated function (callable). The new function adds numerical checking on top of the original function.
Collaborator
Suggested change
Returns a decorated function (callable). The new function adds numerical checking on top of the original function.
A decorated function (callable). The new function adds numerical checking on top of the original function.



Code example
::::::::::::

COPY-FROM: paddle.amp.debugging.check_layer_numerics
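
The COPY-FROM directive pulls the runnable example from the English docstring. For orientation only, a minimal usage sketch (not part of this diff; the layer, shapes, and the NaN/Inf wording are illustrative assumptions) could look like this:

    import paddle

    class SimpleNet(paddle.nn.Layer):
        def __init__(self):
            super().__init__()
            self.linear = paddle.nn.Linear(4, 4)

        # Decorating forward so that this layer's inputs and outputs
        # are checked for numerical problems (e.g. NaN/Inf).
        @paddle.amp.debugging.check_layer_numerics
        def forward(self, x):
            return self.linear(x)

    net = SimpleNet()
    out = net(paddle.randn([2, 4]))
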
30 changes: 30 additions & 0 deletions docs/api/paddle/incubate/nn/fused_linear_activation_cn.rst
@@ -0,0 +1,30 @@
.. _cn_api_paddle_incubate_nn_functional_fused_linear_activation:
Collaborator
This file was added at the wrong path~ It should be:
docs/api/paddle/incubate/nn/functional/fused_linear_activation_cn.rst


fused_linear_activation
-------------------------------

.. py:function:: paddle.incubate.nn.functional.fused_linear_activation(x, y, bias, trans_x=False, trans_y=False, activation=None)

Fully connected linear and activation transformation operator. This method requires CUDA version 11.6 or later.


Parameters
:::::::::

- **x** (Tensor) – The input Tensor to be multiplied.
- **y** (Tensor) – The weight Tensor to be multiplied. Its rank must be 2.
- **bias** (Tensor) – The input bias Tensor, which is added to the result of the matrix multiplication.
Collaborator
Suggested change: in the original Chinese, use 偏置 rather than 偏差 as the term for "bias", and add a space before "Tensor".

- **trans_x** (bool, optional) – Whether to transpose x before the multiplication.
- **trans_y** (bool, optional) – Whether to transpose y before the multiplication.
- **activation** (str, optional) – Currently, the available activation functions are limited to "GELU" (Gaussian Error Linear Unit) and "ReLU" (Rectified Linear Unit). These activation functions are applied to the output of the bias addition. Default: None.
Collaborator
Would "These activation functions are applied to the output after the bias has been added" be a better way to phrase this?~

Collaborator
[screenshot]
Remember to indent the parameters~

Contributor Author
Got it!


Returns
:::::::::

The return type is Tensor.
Collaborator
@ooooo-create ooooo-create Dec 22, 2023
Use the "return type + description" format, for example:
Tensor, the transformed Tensor



Code example
::::::::::::

COPY-FROM: paddle.incubate.nn.functional.fused_linear_activation
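
As above, the runnable example is taken via COPY-FROM. A minimal call sketch (an assumption-laden illustration, not part of this diff: it presumes a GPU build with CUDA >= 11.6, float16 inputs, and lowercase activation strings such as "relu"; check the docstring for the exact accepted values):

    import paddle
    from paddle.incubate.nn.functional import fused_linear_activation

    # x: [batch, in_features]; y is the rank-2 weight [in_features, out_features]
    x = paddle.randn([4, 64], dtype="float16")
    y = paddle.randn([64, 128], dtype="float16")
    bias = paddle.randn([128], dtype="float16")

    # Fused matmul + bias + activation, roughly equivalent to
    # paddle.nn.functional.relu(paddle.matmul(x, y) + bias)
    out = fused_linear_activation(x, y, bias, activation="relu")
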