
Unfortunately, in the current implementation, the with torch.cuda.device(...) statement does not work this way; it can only be used to switch between CUDA devices.
You still have to use the device argument to specify which device is used (or .cuda() to move the tensor to a specific GPU), with code such as:
# allocates a tensor on GPU 1
a = torch.tensor([1., 2.], device=cuda)
So, to access cuda:1:
cuda = torch.device('cuda')
with torch.cuda.device(1):
# allocates a tensor on GPU 1
a = torch.tensor([1., 2.], device=cuda)
And to access cuda:2:
cuda = torch.device('cuda')
with torch.cuda.device(2):
# allocates a tensor on GPU 2
a = torch.tensor([1., 2.], device=cuda)
However, a tensor constructed without the device argument is still a CPU tensor, even inside the context:
cuda = torch.device('cuda')
with torch.cuda.device(1):
# allocates a tensor on CPU
a = torch.tensor([1., 2.])
To sum up:
No, unfortunately the with-device statement, as currently implemented, cannot be used in the way you describe in your question.
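Putting the workaround together: a minimal sketch of the explicit-device pattern, assuming PyTorch is installed. It falls back to the CPU when a second GPU is not available, so it runs on any machine; the name gpu1 is just an illustrative variable:

```python
import torch

# Build the target device explicitly instead of relying on the context manager.
# Fall back to the CPU if there is no second GPU on this machine.
gpu1 = torch.device('cuda:1') if torch.cuda.device_count() > 1 else torch.device('cpu')

# Pass the device at construction time...
a = torch.tensor([1., 2.], device=gpu1)
# ...or move an existing CPU tensor with .to()
b = torch.tensor([1., 2.]).to(gpu1)

print(a.device, b.device)  # both report the same device
```

Passing the device object around explicitly like this is what makes the placement unambiguous, regardless of any surrounding torch.cuda.device context.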
Here are some more examples from the documentation:
cuda = torch.device('cuda') # Default CUDA device
cuda0 = torch.device('cuda:0')
cuda2 = torch.device('cuda:2') # GPU 2 (these are 0-indexed)
x = torch.tensor([1., 2.], device=cuda0)
# x.device is device(type='cuda', index=0)
y = torch.tensor([1., 2.]).cuda()
# y.device is device(type='cuda', index=0)
with torch.cuda.device(1):
# allocates a tensor on GPU 1
a = torch.tensor([1., 2.], device=cuda)
# transfers a tensor from CPU to GPU 1
b = torch.tensor([1., 2.]).cuda()
# a.device and b.device are device(type='cuda', index=1)
# You can also use ``Tensor.to`` to transfer a tensor:
b2 = torch.tensor([1., 2.]).to(device=cuda)
# b.device and b2.device are device(type='cuda', index=1)
c = a + b
# c.device is device(type='cuda', index=1)
z = x + y
# z.device is device(type='cuda', index=0)
# even within a context, you can specify the device
# (or give a GPU index to the .cuda call)
d = torch.randn(2, device=cuda2)
e = torch.randn(2).to(cuda2)
f = torch.randn(2).cuda(cuda2)
# d.device, e.device, and f.device are all device(type='cuda', index=2)