Long awaited! PyTorch v0.4.0 released! Memory optimizations, Windows support, and the Tensor/Variable merge.
The pytorch-unstable-0.4.0 build I had been using has finally landed as a stable release. The changes are substantial, so anyone who upgrades will need to migrate their code!
Migrating from older versions (pre-0.4.0) to the new 0.4.0 release, migration guide: http://pytorch.org/2018/04/22/0_4_0-migration-guide.html
Windows users can install the latest PyTorch with the following commands (if your CUDA version differs, just change the CUDA version number in the command):
conda install pytorch cuda91 -c pytorch
pip3 install torchvision
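A quick sanity check after installing (a minimal sketch; the printed version string and the CUDA result depend on your particular setup):

>>> import torch
>>> torch.__version__          # should report 0.4.0 after a successful install
'0.4.0'
>>> torch.cuda.is_available()  # True only if a usable CUDA build/GPU is present
True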
What's new
Tensor/Variable merged!
The volatile flag is removed (its replacement is shown after the requires_grad example below)!
Windows support!
Zero-dimensional tensors (Zero-dimensional Tensors) supported (see the first sketch after this list)!
24 brand-new probability distributions (see the distributions sketch after this list)!
Entropy and other new statistics!
Distributed model training!
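To make the zero-dimensional tensors item concrete, here is a minimal sketch based on the 0.4.0 migration guide: indexing a 1-D tensor now returns a 0-dim tensor instead of a Python number, and the new .item() extracts the plain value:

>>> v = torch.ones(3)
>>> s = v[0]       # indexing now yields a 0-dim tensor, not a float
>>> s.dim()
0
>>> s.item()       # use .item() to get back a plain Python number
1.0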
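And a minimal sketch of the expanded torch.distributions package (Normal is just one example; log_prob values vary with the random sample, while the entropy of a standard normal is fixed):

>>> from torch.distributions import Normal
>>> dist = Normal(torch.tensor([0.0]), torch.tensor([1.0]))
>>> x = dist.rsample()       # reparameterized sample, supports backprop
>>> logp = dist.log_prob(x)  # log-density at the sample
>>> dist.entropy()           # one of the newly added statistics
tensor([ 1.4189])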
Key changes
Tensor and Variable have been merged.
torch.autograd.Variable and torch.Tensor are now the same class. To be precise, torch.Tensor can track autograd history just like the old Variable, and is used in exactly the same way. Wrapping a tensor in Variable still works, but it simply returns a torch.Tensor; in other words, wrapping makes no difference.
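A quick illustration of the wrapping point (a minimal sketch; Variable here is kept only for backward compatibility):

>>> from torch.autograd import Variable
>>> x = Variable(torch.ones(2))  # old-style wrapping still works...
>>> type(x)                      # ...but it simply returns a torch.Tensor
<class 'torch.Tensor'>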
The type of a Tensor has changed:
>>> x = torch.DoubleTensor([1, 1, 1])
>>> print(type(x))  # was torch.DoubleTensor
<class 'torch.Tensor'>
>>> print(x.type())  # OK: 'torch.DoubleTensor'
'torch.DoubleTensor'
>>> print(isinstance(x, torch.DoubleTensor))  # OK: True
True
requires_grad is now available directly on Tensors:
>>> x = torch.ones(1)  # create a tensor with requires_grad=False (default)
>>> x.requires_grad
False
>>> y = torch.ones(1)  # another tensor with requires_grad=False
>>> z = x + y
>>> # both inputs have requires_grad=False, so does the output
>>> z.requires_grad
False
>>> # then autograd won't track this computation. let's verify!
>>> z.backward()
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
>>>
>>> # now create a tensor with requires_grad=True
>>> w = torch.ones(1, requires_grad=True)
>>> w.requires_grad
True
>>> # add to the previous result that has requires_grad=False
>>> total = w + z
>>> # the total sum now requires grad!
>>> total.requires_grad
True
>>> # autograd can compute the gradients as well
>>> total.backward()
>>> w.grad
tensor([ 1.])
>>> # and no computation is wasted to compute gradients for x, y and z, which don't require grad
>>> z.grad == x.grad == y.grad == None
True
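Two related idioms from the migration guide: requires_grad can be flipped in place with .requires_grad_(), and the removed volatile flag is replaced by the torch.no_grad() context manager:

>>> x = torch.ones(1).requires_grad_()  # in-place toggle, new in 0.4.0
>>> x.requires_grad
True
>>> with torch.no_grad():               # replaces the removed volatile=True
...     y = x * 2
...
>>> y.requires_grad
False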
For the full list of changes, see the official release notes!