Worked Examples of torch batch_norm
Kong Liangqian

1. BN

As the name suggests, batch norm normalizes across a batch (i.e., across multiple samples). The examples below use torch's batchnorm layers to illustrate the computation.

1.1 nn.BatchNorm1d(BN)

The value passed to nn.BatchNorm1d must equal the size of the second dimension of the input, i.e., the number of channels.
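A quick sketch of this shape requirement: `BatchNorm1d(6)` accepts a `(N, 6)` input, while a mismatched channel count is rejected at run time (the exact error message below is PyTorch's, not guaranteed to stay stable across versions).

```python
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(6)

# A (batch, channels) input works when channels == num_features
out = bn(torch.rand(51, 6))
print(out.shape)  # torch.Size([51, 6])

# A mismatched channel count raises at run time
try:
    bn(torch.rand(51, 5))
except RuntimeError as err:
    print("channel mismatch:", err)
```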

1.1.1 Two-dimensional input

Take a set of data as an example: 51 points, each with 6 features — (x, y, yaw, v_yaw, acc, kappa).

import torch
import torch.nn as nn

in_put = torch.rand((51, 6)) # 51 points, 6 features: (x, y, yaw, v_yaw, acc, kappa)

# Create a BatchNorm1d layer; 6 is the number of features
batch_norm = nn.BatchNorm1d(6)

# Apply batch normalization to the data
normalized_data = batch_norm(in_put)

## The statistics are computed per feature, across all 51 points:
# all the x values are normalized together, all the y values together, and so on
print(f"x: mean: {normalized_data[:,0].mean():.2f}, var: {normalized_data[:,0].var():.2f}")
print(f"y: mean: {normalized_data[:,1].mean():.2f}, var: {normalized_data[:,1].var():.2f}")
print(f"yaw: mean: {normalized_data[:,2].mean():.2f}, var: {normalized_data[:,2].var():.2f}")
print(f"v_yaw: mean: {normalized_data[:,3].mean():.2f}, var: {normalized_data[:,3].var():.2f}")
print(f"acc: mean: {normalized_data[:,4].mean():.2f}, var: {normalized_data[:,4].var():.2f}")
print(f"kappa: mean: {normalized_data[:,5].mean():.2f}, var: {normalized_data[:,5].var():.2f}")

The output shows the statistics of the first feature (all x values) across the 51 points, of the second feature (all y values), and so on:

x: mean: 0.00, var: 1.02
y: mean: 0.00, var: 1.02
yaw: mean: -0.00, var: 1.02
v_yaw: mean: 0.00, var: 1.02
acc: mean: -0.00, var: 1.02
kappa: mean: -0.00, var: 1.02

Every normalized feature ends up with mean 0 and variance 1. (The printed variance is 1.02 rather than exactly 1 because .var() computes the unbiased variance — a factor of 51/(51-1) ≈ 1.02 — while BatchNorm normalizes with the biased variance.)
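The computation can be verified by hand: in training mode, BatchNorm1d subtracts the per-feature mean and divides by the square root of the biased per-feature variance plus eps. This is a minimal sketch assuming the layer's affine weight and bias are at their default initialization (1 and 0), so they drop out of the comparison.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand((51, 6))

bn = nn.BatchNorm1d(6)
y = bn(x)

# Manual batch norm: per-feature mean and *biased* variance over the batch,
# stabilized by the layer's eps (default 1e-5)
mean = x.mean(dim=0)
var = x.var(dim=0, unbiased=False)
y_manual = (x - mean) / torch.sqrt(var + bn.eps)

print(torch.allclose(y, y_manual, atol=1e-5))  # True
```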

1.1.2 Three-dimensional input

import torch
import torch.nn as nn

in_put = torch.rand((25, 51, 6)) # 25 trajectories, 51 points each, 6 features: (x, y, yaw, v_yaw, acc, kappa)

# Create a BatchNorm1d layer; 51 is the number of channels (the second dimension)
batch_norm = nn.BatchNorm1d(51)

# Apply batch normalization to the data
normalized_data = batch_norm(in_put)

## Mean and variance of the i-th point across all 25 trajectories;
## note that each point has 6 features
print(f"normalized_data[:,0,:].shape={normalized_data[:,0,:].shape}")

print(f"first point mean: {normalized_data[:,0,:].mean()}, var:{normalized_data[:,0,:].var()}")
print(f"second point mean: {normalized_data[:,1,:].mean()}, var:{normalized_data[:,1,:].var()}")
print(f"third point mean: {normalized_data[:,2,:].mean()}, var:{normalized_data[:,2,:].var()}")

The output shows the mean and variance of the first point across the 25 trajectories, then of the second point, the third point, and so on:

normalized_data[:,0,:].shape=torch.Size([25, 6])

first point mean: -1.5894572324981482e-09, var:1.0066999197006226
second point mean: 1.2715657859985185e-08, var:1.0066999197006226
third point mean: -1.5894572324981482e-09, var:1.0066999197006226
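The same manual check works for the three-dimensional case. For a (N, C, L) input — here (25, 51, 6) — BatchNorm1d normalizes each of the 51 channels over the batch and length dimensions together (dims 0 and 2). Again a sketch under the default affine initialization:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand((25, 51, 6))

bn = nn.BatchNorm1d(51)
y = bn(x)

# Each of the 51 channels is normalized over the batch (dim 0)
# and length (dim 2) dimensions jointly, with biased variance
mean = x.mean(dim=(0, 2), keepdim=True)
var = x.var(dim=(0, 2), unbiased=False, keepdim=True)
y_manual = (x - mean) / torch.sqrt(var + bn.eps)

print(torch.allclose(y, y_manual, atol=1e-5))  # True
```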

1.2 nn.BatchNorm2d(BN)

Note that the input here is 4-dimensional. The object being normalized is the i-th trajectory across all frames.

import torch
import torch.nn as nn

mean = 2
std_dev = 3
in_put = torch.rand((16, 25, 51, 6)) * std_dev + mean # 16 frames, 25 trajectories, 51 points, 6 features: (x, y, yaw, v_yaw, acc, kappa)
# in_put = torch.rand((16, 25, 51, 6)) # 16 frames, 25 trajectories, 51 points, 6 features

# Create a BatchNorm2d layer; 25 is the number of channels
batch_norm = nn.BatchNorm2d(25)

# Apply batch normalization to the data
normalized_data = batch_norm(in_put)

## The statistics here are computed per trajectory (channel), across frames and points
print(f"normalized_data[:,0].shape={normalized_data[:,0].shape}")
print(f"first traj mean: {normalized_data[:,0].mean():.4f}, var:{normalized_data[:,0].var():.4f}")
print(f"second traj mean: {normalized_data[:,1].mean():.4f}, var:{normalized_data[:,1].var():.4f}")
print(f"third traj mean: {normalized_data[:,2].mean():.4f}, var:{normalized_data[:,2].var():.4f}")

This shows the mean and variance of the i-th trajectory taken across all 16 frames:

normalized_data[:,0].shape=torch.Size([16, 51, 6])
first traj mean: -0.0000, var:1.0002
second traj mean: -0.0000, var:1.0002
third traj mean: 0.0000, var:1.0002
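For BatchNorm2d the manual check extends to dims (0, 2, 3): each of the 25 channels is normalized over the batch and both spatial dimensions. A sketch under the default settings:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand((16, 25, 51, 6)) * 3 + 2  # shifted/scaled data, as in the example above

bn = nn.BatchNorm2d(25)
y = bn(x)

# Each of the 25 channels is normalized over batch, height and width
# (dims 0, 2, 3) with the biased variance and the layer's eps
mean = x.mean(dim=(0, 2, 3), keepdim=True)
var = x.var(dim=(0, 2, 3), unbiased=False, keepdim=True)
y_manual = (x - mean) / torch.sqrt(var + bn.eps)

print(torch.allclose(y, y_manual, atol=1e-4))  # True
```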

2. GroupNorm

import torch
import torch.nn as nn
# in_put 2x4x4
in_put = torch.tensor([
    [[3.0, 4, 5, 10],
     [1, 2, 3, 4],
     [-3, -6, -4, -2],
     [3, 6, 4, 2]],
    [[3.0, 4, 5, 10],
     [1, 2, 3, 4],
     [3, 6, 4, 2],
     [-3, -6, -4, -2]]
], dtype=torch.float32)

# Split the 4 channels (second dimension) into 2 groups
group_norm = nn.GroupNorm(2, 4)

# Apply group normalization to the data
normalized_data = group_norm(in_put)

# Per-sample, per-group means are ~0 after GroupNorm
print(normalized_data[0, :2].mean())
print(normalized_data[0, 2:4].mean())
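The two printed means come out near zero because GroupNorm computes its statistics per sample and per group, independently of the rest of the batch. A manual check, again assuming the default eps and affine initialization:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.rand((2, 4, 4))

gn = nn.GroupNorm(2, 4)
y = gn(x)

# Reshape to (batch, groups, elements): each group of 2 channels x 4 values
# is normalized with its own mean and biased variance
xg = x.reshape(2, 2, -1)
mean = xg.mean(dim=2, keepdim=True)
var = xg.var(dim=2, unbiased=False, keepdim=True)
y_manual = ((xg - mean) / torch.sqrt(var + gn.eps)).reshape(x.shape)

print(torch.allclose(y, y_manual, atol=1e-5))  # True
```

Unlike BatchNorm, the result for one sample does not depend on any other sample, which is why GroupNorm behaves identically at batch size 1.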