tvm-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-tvm] jroesch commented on a change in pull request #5324: [Runtime][Relay][Cleanup] Clean up for memory pass to enable heterogenous execution support.
Date Tue, 14 Apr 2020 06:53:44 GMT
jroesch commented on a change in pull request #5324: [Runtime][Relay][Cleanup] Clean up for
memory pass to enable heterogenous execution support.
URL: https://github.com/apache/incubator-tvm/pull/5324#discussion_r407905267
 
 

 ##########
 File path: include/tvm/relay/attrs/memory.h
 ##########
 @@ -27,10 +27,37 @@
 #include <tvm/ir/attrs.h>
 #include <tvm/relay/expr.h>
 #include <string>
+#include <vector>
 
 namespace tvm {
 namespace relay {
 
+std::vector<TensorType> FlattenTupleType(const Type& type);
+std::vector<Expr> FromTupleType(const Type& type, const Expr& expr);
+Expr ToTupleType(const Type& t, const Array<Expr>& exprs);
+
+/*!
+ * \brief Options for allocating storage.
+ */
+struct AllocStorageAttrs : public tvm::AttrsNode<AllocStorageAttrs> {
+  DataType dtype;
+  int device_id;
+  int device_type;
+
+  TVM_DECLARE_ATTRS(AllocStorageAttrs, "relay.attrs.AllocStorageAttrs") {
+    TVM_ATTR_FIELD(dtype)
+      .describe(
+         "The dtype of the tensor to allocate.")
+      .set_default(DataType::Float(32, 1));
+    TVM_ATTR_FIELD(device_id)
 
 Review comment:
  @tqchen and I talked about supporting a mapping from virtual contexts to logical contexts, i.e., scheduling for n devices but mapping from 1 to n at runtime. This lets us do a mixture of static and dynamic scheduling. Now that I think about it, we probably want to logically group the allocations by device id either way, to ensure we place them on the right device (i.e., which device do I place an allocation on when I have n GPUs?).
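  
  For illustration only (not code from this PR), such grouping could key on the (device_type, device_id) pair that the attrs already carry; `StorageInfo` and `GroupByDevice` below are made-up names for this sketch:
  
  ```cpp
  #include <cstdint>
  #include <map>
  #include <utility>
  #include <vector>
  
  // Placeholder for whatever per-allocation record the memory pass produces.
  struct StorageInfo {
    int device_type;
    int device_id;
    int64_t size;
  };
  
  // Bucket allocations per (device_type, device_id) so a later step can place
  // each group on the physical device its consumers run on.
  std::map<std::pair<int, int>, std::vector<StorageInfo>> GroupByDevice(
      const std::vector<StorageInfo>& allocs) {
    std::map<std::pair<int, int>, std::vector<StorageInfo>> groups;
    for (const auto& s : allocs) {
      groups[{s.device_type, s.device_id}].push_back(s);
    }
    return groups;
  }
  ```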
   
   I want to store the context directly, but TVMContext doesn't extend Object, i.e., it is not a valid attribute type.
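   
   For context: TVMContext is a plain C struct (a typedef of DLContext) that does not derive from Object, which is why the diff stores device_type and device_id as plain ints. A minimal sketch of rebuilding the context where it is needed (the helper name is mine, not from the PR):
   
   ```cpp
   #include <tvm/runtime/c_runtime_api.h>
   
   // Reassemble a TVMContext from the two integer attribute fields.
   inline TVMContext ContextFromAttrs(int device_type, int device_id) {
     TVMContext ctx;
     ctx.device_type = static_cast<DLDeviceType>(device_type);
     ctx.device_id = device_id;
     return ctx;
   }
   ```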

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
