Table of Contents
- Autograd mechanics
- Excluding subgraphs from backward
- How autograd encodes the history
- In-place operations on Variables
- In-place correctness checks
- CUDA semantics
- Best practices
- Use pinned memory buffers
- Use nn.DataParallel instead of multiprocessing
- Extending PyTorch
- Extending torch.autograd
- Extending torch.nn
- Multiprocessing best practices
- Sharing CUDA tensors
- Best practices and tips
- Avoiding and fighting deadlocks
- Reuse buffers passed through a Queue
- Asynchronous multiprocess training (e.g. Hogwild)
- Serialization semantics
- Package Reference
- References
Autograd mechanics
It's not strictly necessary to understand all of this, but we recommend getting familiar with it, as it will help you write more efficient, cleaner programs, and can aid you in debugging.
Excluding subgraphs from backward
Every Variable has two flags: requires_grad and volatile. They both allow for fine-grained exclusion of subgraphs from gradient computation and can increase efficiency.
>>> x = Variable(torch.randn(5, 5))
>>> y = Variable(torch.randn(5, 5))
>>> z = Variable(torch.randn(5, 5), requires_grad=True)
>>> a = x + y
>>> a.requires_grad
False
>>> b = a + z
>>> b.requires_grad
True
model = torchvision.models.resnet18(pretrained=True)
for param in model.parameters():
    param.requires_grad = False
# Replace the last fully-connected layer
# Parameters of newly constructed modules have requires_grad=True by default
model.fc = nn.Linear(512, 100)

# Optimize only the classifier
optimizer = optim.SGD(model.fc.parameters(), lr=1e-2, momentum=0.9)
>>> regular_input = Variable(torch.randn(5, 5))
>>> volatile_input = Variable(torch.randn(5, 5), volatile=True)
>>> model = torchvision.models.resnet18(pretrained=True)
>>> model(regular_input).requires_grad
True
>>> model(volatile_input).requires_grad
False
>>> model(volatile_input).volatile
True
>>> model(volatile_input).creator is None
True
How autograd encodes the history
In-place operations on Variables
In-place correctness checks
CUDA semantics
torch.cuda keeps track of the currently selected GPU, and all CUDA tensors you allocate will be created on it. The selected device can be changed with the torch.cuda.device context manager.
x = torch.cuda.FloatTensor(1)
# x.get_device() == 0
y = torch.FloatTensor(1).cuda()
# y.get_device() == 0

with torch.cuda.device(1):
    # allocates a tensor on GPU 1
    a = torch.cuda.FloatTensor(1)
    # transfers a tensor from CPU to GPU 1
    b = torch.FloatTensor(1).cuda()
    # a.get_device() == b.get_device() == 1

    c = a + b
    # c.get_device() == 1

    z = x + y
    # z.get_device() == 0

    # even within a context, you can give a GPU id to the .cuda call
    d = torch.randn(2).cuda(2)
    # d.get_device() == 2
Best practices
Use pinned memory buffers
Host to GPU copies are much faster when they originate from pinned (page-locked) memory. CPU tensors and storages expose a pin_memory() method, which returns a copy of the object with its data placed in a pinned region.
Also, once you pin a tensor or storage, you can use asynchronous GPU copies. Just pass an additional async=True argument to the cuda() call. This can be used to overlap data transfers with computation.
You can make the DataLoader return batches placed in pinned memory by passing pin_memory=True to its constructor.
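A rough sketch of this pattern is shown below (the TensorDataset and random data are placeholders; the async=True flag matches the API described above and was later renamed to non_blocking=True in PyTorch 0.4+):

import torch
from torch.utils.data import DataLoader, TensorDataset

# Pin a CPU tensor, then issue an asynchronous copy to the current GPU.
x = torch.randn(64, 128).pin_memory()
x_gpu = x.cuda(async=True)  # overlaps the transfer with later computation

# A DataLoader built with pin_memory=True returns batches that already live
# in pinned memory, so the same fast asynchronous copy path applies.
dataset = TensorDataset(torch.randn(1000, 128), torch.randn(1000, 1))
loader = DataLoader(dataset, batch_size=64, pin_memory=True)
for data, target in loader:
    data, target = data.cuda(async=True), target.cuda(async=True)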
Use nn.DataParallel instead of multiprocessing
Most use cases involving batched inputs and multiple GPUs should default to using DataParallel to utilize more than one GPU. Even with the GIL, a single Python process can saturate multiple GPUs.
As of version 0.1.9, large numbers of GPUs (8+) might not be fully utilized. However, this is a known issue that is under active development. As always, test your use case.
There are significant caveats to using CUDA models with multiprocessing; unless care is taken to meet the data handling requirements exactly, it is likely that your program will have incorrect or undefined behavior.
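For reference, here is a minimal sketch of the DataParallel pattern recommended above (the wrapped module and batch size are arbitrary placeholders):

import torch
import torch.nn as nn
from torch.autograd import Variable

# DataParallel scatters each input batch across the visible GPUs and
# gathers the outputs back on the default device.
model = nn.DataParallel(nn.Linear(128, 10).cuda())
input = Variable(torch.randn(32, 128).cuda())
output = model(input)  # shape (32, 10)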
Extending PyTorch
This note covers how to extend torch.nn and torch.autograd, and how to write custom C extensions using our C libraries.
Extending torch.autograd
# Inherit from Function
class Linear(Function):
    # bias is an optional argument
    def forward(self, input, weight, bias=None):
        self.save_for_backward(input, weight, bias)
        output = input.mm(weight.t())
        if bias is not None:
            output += bias.unsqueeze(0).expand_as(output)
        return output

    # This function has only a single output, so it gets only one gradient
    def backward(self, grad_output):
        # This is a pattern that is very convenient - at the top of backward
        # unpack saved_tensors and initialize all gradients w.r.t. inputs to
        # None. Thanks to the fact that additional trailing Nones are
        # ignored, the return statement is simple even when the function has
        # optional inputs.
        input, weight, bias = self.saved_tensors
        grad_input = grad_weight = grad_bias = None

        # These needs_input_grad checks are optional and are there only to
        # improve efficiency. If you want to make your code simpler, you can
        # skip them. Returning gradients for inputs that don't require it is
        # not an error.
        if self.needs_input_grad[0]:
            grad_input = grad_output.mm(weight)
        if self.needs_input_grad[1]:
            grad_weight = grad_output.t().mm(input)
        if bias is not None and self.needs_input_grad[2]:
            grad_bias = grad_output.sum(0).squeeze(0)

        return grad_input, grad_weight, grad_bias
Now, to make it easier to use these custom operations, we recommend wrapping them in small helper functions:
def linear(input, weight, bias=None):
    # First parentheses create a Function object. Any arguments given here
    # will be passed to __init__. Second parentheses will invoke the __call__
    # operator, that will then use forward() to compute the result and
    # return it.
    return Linear()(input, weight, bias)
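As a quick, hypothetical usage sketch (the shapes are arbitrary), the wrapper then behaves like any other differentiable function:

input = Variable(torch.randn(3, 4), requires_grad=True)
weight = Variable(torch.randn(5, 4), requires_grad=True)
bias = Variable(torch.randn(5), requires_grad=True)
output = linear(input, weight, bias)  # shape (3, 5)
output.sum().backward()               # gradients flow through the custom backward()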
You probably want to check whether the backward method you implemented actually computes the derivatives of your function. You can verify it numerically using small finite differences:
from torch.autograd import gradcheck

# gradcheck takes a tuple of tensors as input, checks whether the gradients
# evaluated with these tensors are close enough to numerical
# approximations, and returns True if they all verify this condition.
input = (Variable(torch.randn(20,20).double(), requires_grad=True),)
test = gradcheck.gradcheck(Linear(), input, eps=1e-6, atol=1e-4)
print(test)
Extending torch.nn
class Linear(nn.Module):
    def __init__(self, input_features, output_features, bias=True):
        super(Linear, self).__init__()
        self.input_features = input_features
        self.output_features = output_features

        # nn.Parameter is a special kind of Variable, that will get
        # automatically registered as Module's parameter once it's assigned
        # as an attribute. Parameters and buffers need to be registered, or
        # they won't appear in .parameters() (doesn't apply to buffers), and
        # won't be converted when e.g. .cuda() is called. You can use
        # .register_buffer() to register buffers.
        # nn.Parameters can never be volatile and, different than Variables,
        # they require gradients by default.
        # Stored as (output_features, input_features) so that forward can
        # compute input.mm(weight.t()).
        self.weight = nn.Parameter(torch.Tensor(output_features, input_features))
        if bias:
            self.bias = nn.Parameter(torch.Tensor(output_features))
        else:
            # You should always register all possible parameters, but the
            # optional ones can be None if you want.
            self.register_parameter('bias', None)

        # Not a very smart way to initialize weights
        self.weight.data.uniform_(-0.1, 0.1)
        if self.bias is not None:
            self.bias.data.uniform_(-0.1, 0.1)

    def forward(self, input):
        # See the autograd section for explanation of what happens here.
        # Note: this Linear is the custom autograd Function implemented above.
        return Linear()(input, self.weight, self.bias)
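A brief, hypothetical usage sketch showing that the parameters assigned in __init__ were registered automatically (feature sizes are arbitrary):

module = Linear(input_features=4, output_features=5)
# Both weight and bias appear in .parameters(), so optimizers and
# .cuda() calls will pick them up.
print([p.size() for p in module.parameters()])  # [torch.Size([5, 4]), torch.Size([5])]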
Multiprocessing best practices
Sharing CUDA tensors
Best practices and tips
Avoiding and fighting deadlocks
Reuse buffers passed through a Queue
Asynchronous multiprocess training (e.g. Hogwild)
import torch.multiprocessing as mp
from model import MyModel

def train(model):
    # Construct data_loader, optimizer, etc.
    for data, labels in data_loader:
        optimizer.zero_grad()
        loss_fn(model(data), labels).backward()
        optimizer.step()  # This will update the shared parameters

if __name__ == '__main__':
    num_processes = 4
    model = MyModel()
    # NOTE: this is required for the ``fork`` method to work
    model.share_memory()
    processes = []
    for rank in range(num_processes):
        p = mp.Process(target=train, args=(model,))
        p.start()
        processes.append(p)
    for p in processes:
        p.join()
Serialization semantics
Package Reference
References
Reference link: https://pytorch-cn.readthedocs.io/zh/latest/