When defining neural networks in PyTorch, you will often see .view() used in patterns like the ones below. This post explains and demonstrates its usage.

view() reshapes a Tensor, much like reshape or resize: it returns a tensor with the same elements arranged in a new shape.
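One caveat worth adding to that comparison: unlike reshape(), view() never copies data, so it only works when the requested shape is compatible with the tensor's memory layout (a contiguous tensor). On a non-contiguous tensor (e.g. after a transpose), view() raises a RuntimeError while reshape() silently falls back to a copy. A minimal sketch:

```python
import torch

t = torch.arange(6).view(2, 3)   # [[0, 1, 2], [3, 4, 5]]
tt = t.t()                       # transpose: same data, non-contiguous layout
print(tt.is_contiguous())        # False

try:
    tt.view(6)                   # view() cannot reinterpret this layout
except RuntimeError:
    print("view() failed: tensor is not contiguous")

flat = tt.reshape(6)             # reshape() makes a copy when a view is impossible
print(flat)                      # tensor([0, 3, 1, 4, 2, 5])
```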
1. Basic usage

import torch

a1 = torch.arange(0, 16)
print(a1)
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
a2 = a1.view(8, 2)
a3 = a1.view(2, 8)
a4 = a1.view(4, 4)
print(a2)
print(a3)
print(a4)
tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11],
        [12, 13],
        [14, 15]])
tensor([[ 0,  1,  2,  3,  4,  5,  6,  7],
        [ 8,  9, 10, 11, 12, 13, 14, 15]])
tensor([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]])

2. Special usage: the -1 argument (automatic size inference)
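One point the examples above do not show: view() returns a tensor that shares storage with the original rather than a copy, so writing through the view mutates the source tensor as well. A small sketch:

```python
import torch

a1 = torch.arange(0, 16)
a4 = a1.view(4, 4)

a4[0, 0] = 100                          # write through the view
print(a1[0])                            # tensor(100) -- the original changed too
print(a1.data_ptr() == a4.data_ptr())   # True: both tensors use the same storage
```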
Passing -1 as one of the arguments to view() tells PyTorch to infer that dimension's size automatically, so that the total number of elements stays unchanged.
import torch

a1 = torch.arange(0, 16)
print(a1)
tensor([ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15])
a2 = a1.view(-1, 16)
a3 = a1.view(-1, 8)
a4 = a1.view(-1, 4)
a5 = a1.view(-1, 2)
a6 = a1.view(4*4, -1)
a7 = a1.view(1*4, -1)
a8 = a1.view(2*4, -1)
print(a2)
print(a3)
print(a4)
print(a5)
print(a6)
print(a7)
print(a8)
tensor([[ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15]])
tensor([[ 0,  1,  2,  3,  4,  5,  6,  7],
        [ 8,  9, 10, 11, 12, 13, 14, 15]])
tensor([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]])
tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11],
        [12, 13],
        [14, 15]])
tensor([[ 0],
        [ 1],
        [ 2],
        [ 3],
        [ 4],
        [ 5],
        [ 6],
        [ 7],
        [ 8],
        [ 9],
        [10],
        [11],
        [12],
        [13],
        [14],
        [15]])
tensor([[ 0,  1,  2,  3],
        [ 4,  5,  6,  7],
        [ 8,  9, 10, 11],
        [12, 13, 14, 15]])
tensor([[ 0,  1],
        [ 2,  3],
        [ 4,  5],
        [ 6,  7],
        [ 8,  9],
        [10, 11],
        [12, 13],
        [14, 15]])
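Two edge cases of -1 worth knowing, sketched here with the same a1 as above: a bare view(-1) flattens the tensor to 1-D, and the inference only succeeds when the explicitly given dimensions divide the total element count evenly; otherwise view() raises a RuntimeError.

```python
import torch

a1 = torch.arange(0, 16)

flat = a1.view(-1)          # a single -1 flattens to 1-D
print(flat.shape)           # torch.Size([16])

try:
    a1.view(-1, 3)          # 16 elements cannot be split into rows of 3
except RuntimeError as e:
    print("view failed:", e)
```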