Commit 05e2e09

Erf (#7512)
* [Term Entry] PyTorch Tensor Operations: .erf()
* Rename docs/content/pytorch/concepts/tensor-operations/terms/erf/erf.md to content/pytorch/concepts/tensor-operations/terms/erf/erf.md
* Update content/pytorch/concepts/tensor-operations/terms/erf/erf.md
* Update content/pytorch/concepts/tensor-operations/terms/erf/erf.md
* Update content/pytorch/concepts/tensor-operations/terms/erf/erf.md
1 parent f390949 commit 05e2e09

File tree

1 file changed

+151
-0
lines changed
  • content/pytorch/concepts/tensor-operations/terms/erf

Lines changed: 151 additions & 0 deletions
@@ -0,0 +1,151 @@
---
Title: 'erf()'
Description: 'Computes the error function element-wise for each element in the input tensor'
Subjects:
  - 'Data Science'
  - 'Machine Learning'
Tags:
  - 'Functions'
  - 'Math'
  - 'PyTorch'
  - 'Tensors'
CatalogContent:
  - 'learn-pytorch'
  - 'paths/machine-learning'
---

The **`.erf()`** function computes the Gauss error function element-wise for each element in the input tensor. The error function appears frequently in probability, statistics, and partial differential equations, particularly in the context of the normal distribution.
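
For reference, the error function that `.erf()` evaluates is defined as:

```math
\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_{0}^{x} e^{-t^{2}} \, dt
```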

## Syntax

```pseudo
torch.erf(input, *, out=None) → Tensor
```

**Parameters:**

- `input` (Tensor): The input tensor containing the values for which to compute the error function
- `out` (Tensor, optional): The output tensor to store the result

**Return value:**

A new tensor with the same shape as `input`, containing the computed error function values for each element.

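The snippet below is a minimal sketch of the optional `out` argument, which writes the result into a preallocated tensor instead of allocating a new one (the sample values are illustrative):

```py
import torch

x = torch.tensor([0.0, 0.5, 1.0])
out = torch.empty_like(x)  # Preallocate a tensor with matching shape and dtype

torch.erf(x, out=out)  # The result is written into `out` and also returned
print(out)
```
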
## Example 1: Basic `.erf()` Usage

This example demonstrates the basic implementation of the `.erf()` function with a simple 1D tensor:

```py
import torch

# Create a 1D tensor with sample values
input_tensor = torch.tensor([0.0, 1.0, -1.0, 2.0, -2.0])
print("Input tensor:", input_tensor)

# Compute the error function
result = torch.erf(input_tensor)
print("Error function result:", result)
```

The output of this code is:

```shell
Input tensor: tensor([ 0.,  1., -1.,  2., -2.])
Error function result: tensor([ 0.0000,  0.8427, -0.8427,  0.9953, -0.9953])
```

The error function produces values between -1 and 1, with `erf(0) = 0`, and the function approaches ±1 for large positive or negative values.

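PyTorch also exposes the same computation as the tensor method `Tensor.erf()` and the in-place variant `Tensor.erf_()`; a minimal sketch:

```py
import torch

t = torch.tensor([0.0, 1.0, -1.0])

# Method form: equivalent to torch.erf(t)
print(t.erf())

# In-place form: overwrites t with erf(t)
t.erf_()
print(t)
```
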
## Example 2: Batch Processing with `.erf()`

This example shows how to apply the error function to multi-dimensional tensors for batch processing scenarios commonly used in machine learning:

```py
import torch

# Create a batch of 2D tensors for processing multiple samples
batch_tensor = torch.tensor([
    [[-0.5, 0.8, 1.2], [0.3, -0.9, 1.5]],
    [[0.7, -0.3, 2.1], [-1.1, 0.6, -0.4]]
])
print("Batch tensor shape:", batch_tensor.shape)
print("Input batch:\n", batch_tensor)

# Apply error function to the entire batch
erf_result = torch.erf(batch_tensor)
print("Error function applied to batch:\n", erf_result)
```

The output of this code is:

```shell
Batch tensor shape: torch.Size([2, 2, 3])
Input batch:
 tensor([[[-0.5000,  0.8000,  1.2000],
         [ 0.3000, -0.9000,  1.5000]],

        [[ 0.7000, -0.3000,  2.1000],
         [-1.1000,  0.6000, -0.4000]]])
Error function applied to batch:
 tensor([[[-0.5205,  0.7421,  0.9103],
         [ 0.3286, -0.7969,  0.9661]],

        [[ 0.6778, -0.3286,  0.9970],
         [-0.8802,  0.6039, -0.4284]]])
```

This example demonstrates how `.erf()` processes each element independently while maintaining the tensor's original shape, making it ideal for neural network operations where batch processing is essential.

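Because `.erf()` is also differentiable, it can be used directly inside autograd graphs. The short sketch below (with illustrative values) compares the gradient computed by autograd with the analytical derivative 2/√π · e^(−x²):

```py
import torch

x = torch.tensor([0.0, 0.5, 1.0], requires_grad=True)
y = torch.erf(x).sum()
y.backward()

# d/dx erf(x) = 2/sqrt(pi) * exp(-x^2)
analytical = 2 / torch.pi ** 0.5 * torch.exp(-x.detach() ** 2)

print("Autograd gradient:    ", x.grad)
print("Analytical derivative:", analytical)
```
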
## Example 3: `.erf()` in Activation Functions

This example illustrates using the error function as part of the GELU (Gaussian Error Linear Unit) activation function, which is commonly used in transformer models and modern deep learning architectures:

```py
import torch
import torch.nn as nn

# Create a custom GELU activation using erf()
def gelu_erf(x):
    # GELU implementation using error function
    # GELU(x) = 0.5 * x * (1 + erf(x / sqrt(2)))
    return 0.5 * x * (1.0 + torch.erf(x / torch.sqrt(torch.tensor(2.0))))

# Sample input representing neural network activations
activations = torch.tensor([-2.0, -1.0, 0.0, 1.0, 2.0])
print("Input activations:", activations)

# Apply GELU using erf
gelu_output = gelu_erf(activations)
print("GELU output using erf:", gelu_output)

# Compare with PyTorch's built-in GELU
pytorch_gelu = nn.GELU()
builtin_output = pytorch_gelu(activations)
print("PyTorch GELU output:", builtin_output)
print("Difference:", torch.abs(gelu_output - builtin_output))
```

The output of this code is:

```shell
Input activations: tensor([-2., -1.,  0.,  1.,  2.])
GELU output using erf: tensor([-0.0455, -0.1587,  0.0000,  0.8413,  1.9545])
PyTorch GELU output: tensor([-0.0455, -0.1587,  0.0000,  0.8413,  1.9545])
Difference: tensor([0.0000, 0.0000, 0.0000, 0.0000, 0.0000])
```

This example shows how `.erf()` slots naturally into activation function implementations: the handwritten GELU matches PyTorch's built-in `nn.GELU()` to within floating-point precision.
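The reason `erf()` appears in GELU is the identity Φ(x) = 0.5 · (1 + erf(x / √2)), where Φ is the standard normal CDF. A quick sketch of that relationship, using `torch.distributions.Normal` and its `cdf()` method (the sample values are illustrative):

```py
import torch

x = torch.tensor([-1.5, 0.0, 0.7, 2.0])

# Standard normal CDF computed two ways
phi_from_erf = 0.5 * (1.0 + torch.erf(x / 2.0 ** 0.5))
phi_from_dist = torch.distributions.Normal(0.0, 1.0).cdf(x)

print("From erf():       ", phi_from_erf)
print("From Normal.cdf():", phi_from_dist)
print("Match:", torch.allclose(phi_from_erf, phi_from_dist))
```
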
## Frequently Asked Questions

### 1. What is torch erf?

`torch.erf()` is PyTorch's implementation of the mathematical error function (also known as the Gauss error function). It computes the error function element-wise for input tensors.

### 2. How do you get the value out of a tensor in PyTorch?

To extract values from a PyTorch tensor, you can use `.item()` for single-element tensors, `.tolist()` to convert to Python lists, `.numpy()` to convert to NumPy arrays, or standard indexing like `tensor[0]` for specific elements. For example: `value = torch.erf(torch.tensor(1.0)).item()` returns the scalar value.

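A brief sketch of these extraction patterns applied to `.erf()` results (values shown in the comments are approximate):

```py
import torch

single = torch.erf(torch.tensor(1.0)).item()            # Python float, approximately 0.8427
as_list = torch.erf(torch.tensor([0.0, 1.0])).tolist()  # Python list of floats
first = torch.erf(torch.tensor([0.0, 1.0]))[0]          # Still a tensor; call .item() for the value

print(single, as_list, first.item())
```
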
### 3. What does torch tensor() do?
`torch.tensor()` creates a new tensor from data such as lists, NumPy arrays, or scalar values. It copies the data and allows you to specify data type and device.
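
As a small illustration of `torch.tensor()` (a sketch; the `dtype` and `device` choices here are just examples):

```py
import torch

# Build a tensor from a Python list, choosing the data type and device explicitly
data = torch.tensor([0.0, 0.5, 1.0], dtype=torch.float32, device="cpu")
print(torch.erf(data))
```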
