tvm-commits mailing list archives

From GitBox <>
Subject [GitHub] [incubator-tvm] liaopeiyuan opened a new issue #5602: [Torch] tvm's interaction with pytorch_geometric
Date Fri, 15 May 2020 07:52:44 GMT

liaopeiyuan opened a new issue #5602:

   I'm trying to compile a graph neural network model written with PyTorch and an extension called `torch_geometric`, but it seems that tvm has limited support for the external libraries it uses, such as torch-scatter, torch-sparse, torch-cluster and torch-spline-conv. I'm very new to tvm, so I'm not 100% sure I'm using it correctly, but here's the code that triggers the exception:
   import tvm
   from tvm import relay
   import numpy as np
   import os.path as osp
   from tvm.contrib.download import download_testdata
   # PyTorch imports
   import torch
   import torch.nn.functional as F
   from torch_geometric.datasets import Planetoid
   import torch_geometric.transforms as T
   from torch_geometric.nn import GCNConv
   class Net(torch.nn.Module):
       def __init__(self):
           super(Net, self).__init__()
           self.conv1 = GCNConv(1433, 16, cached=False)
           self.conv2 = GCNConv(16, 7, cached=False)
       def forward(self, x, edge_index):
           c1 = self.conv1(x, edge_index)
           rc1 = F.relu(c1)
           d1 = F.dropout(rc1, training=self.training)
           c2 = self.conv2(d1, edge_index)
           r = F.log_softmax(c2, dim=1)
           return r
   dataset = 'Cora'
   path = osp.join('..', 'data', dataset)
   dataset = Planetoid(path, dataset, transform=T.NormalizeFeatures())
   data = dataset[0]
   device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
   model = Net().to(device)
   inp = (data.x.to(device), data.edge_index.to(device))
   scripted_model = torch.jit.trace(model, inp).eval()
   input_name = 'input0'
   shape_list = [(input_name, data.x.size())]
   mod, params = relay.frontend.from_pytorch(scripted_model, shape_list)
   It seems that a range of ops are not currently supported, including:
    ['aten::_set_item', 'prim::ImplicitTensorToNum', 'aten::__range_length', 'aten::numel',
'aten::__is__', 'prim::unchecked_cast', 'aten::index', 'aten::dim', 'prim::dtype', 'aten::scatter_add_',
'aten::__isnot__', 'aten::index_put_', 'aten::index_select']
   I believe #5133 already addresses `prim::ImplicitTensorToNum`, and functions like `aten::scatter_add_` are specific to external libraries like torch-scatter.
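   For reference, the scatter-add pattern these libraries rely on accumulates values from a source array into an output array at positions given by an index array. A minimal NumPy sketch of the 1-D semantics (an illustration only, not the torch-scatter or TVM implementation):

   ```python
   import numpy as np

   def scatter_add(out, index, src):
       # Accumulate src[i] into out[index[i]], mirroring the 1-D
       # behavior of torch.Tensor.scatter_add_ along dim 0.
       # np.add.at handles repeated indices by accumulating.
       np.add.at(out, index, src)
       return out

   out = scatter_add(np.zeros(4),
                     np.array([0, 1, 1, 3]),
                     np.array([1.0, 2.0, 3.0, 4.0]))
   # out is now [1., 5., 0., 4.]  (positions 1 and 1 accumulate)
   ```

   The repeated-index accumulation is what makes the op awkward to express with plain elementwise or gather operators, which may be part of why frontend support is limited.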
   I'm wondering whether support for libraries like these aligns with the current direction of development. Thanks!

This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
