This rule raises an issue when a torch.autograd.Variable is instantiated.
The PyTorch Variable API has been deprecated. The behavior of Variables is now provided by PyTorch tensors and can be controlled with the
requires_grad parameter.
Since the Variable API already returns plain tensors, replacing it should not cause any breaking changes.
Replace the call to torch.autograd.Variable with a call to torch.tensor and set the requires_grad parameter
to True if needed.
Noncompliant code example:

```python
import torch

x = torch.autograd.Variable(torch.tensor([1.0]), requires_grad=True)  # Noncompliant
x2 = torch.autograd.Variable(torch.tensor([1.0]))  # Noncompliant
```
Compliant solution:

```python
import torch

x = torch.tensor([1.0], requires_grad=True)
x2 = torch.tensor([1.0])
```
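To see that autograd behaves the same way after the migration, here is a minimal sketch (assuming PyTorch is installed) that computes a gradient through a tensor created with requires_grad=True:

```python
import torch

# Create the tensor directly instead of wrapping it in a Variable.
x = torch.tensor([1.0], requires_grad=True)

# Autograd tracks operations on the tensor just as it did for Variables.
y = (x ** 2).sum()
y.backward()

# d(x^2)/dx evaluated at x = 1.0 is 2.0
print(x.grad.item())  # → 2.0
```

Gradient tracking, which previously required wrapping a tensor in a Variable, is now enabled directly through the requires_grad parameter of torch.tensor.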