Subject: [GitHub] [tvm-rfcs] manupa-arm commented on a change in pull request #9: [RFC] TVM Unified Static Memory Planning
Date: Wed, 18 Aug 2021 16:39:53 -0000

manupa-arm commented on a change in pull request #9:
URL: https://github.com/apache/tvm-rfcs/pull/9#discussion_r690530825

File path: rfcs/0009_Unified_Static_Memory_Planning.md

    Feature Name: Unified Static Memory Planner
    Start Date: 2021 June 1
    RFC PR: #0009
    GitHub Issue: https://github.com/apache/tvm/issues/8404

# Background

Currently, given an ML model, TVM will primarily generate three main artifacts:

* A1 : executor configuration : the description of the sequential execution of operators
    1. If the "executor" is "graph", this would be a JSON
    2. If the "executor" is "aot", this would be a main function describing the call graph of operators
    3. If the "executor" is "vm", this would be a series of VM bytecode instructions
* A2 : library of operators (in the form of runtime.Module)
* A3 : compiled parameters of the model

A1 is generally created by lowering the "main" Relay function, and A2 is created by lowering fused Relay primitive functions → TIR PrimFuncs → C or LLVM artifacts of the operator library.

### Is there some sort of memory planning already being performed?

Yes, there is.

For A1, the inter-(fused)-operator tensors are visible in the "main" Relay function. There currently exists a Relay-level pass known as "GraphPlanMemory" that works on the Relay IR to share the space used by tensors which are not live simultaneously and are visible between (fused) operators. Currently, the said pass uses the Shared Memory Buffer Object memory planning scheme (see https://blog.tensorflow.org/2020/10/optimizing-tensorflow-lite-runtime.html) to perform the planning.

For A2, the operators are lowered to TIR PrimFuncs. There exists a pass called StorageRewrite that does more or less the same thing as "GraphPlanMemory", but on TIR, for the tensors that are visible within (fused) operators and are not live simultaneously.
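To make the buffer-sharing idea concrete, below is a minimal, illustrative first-fit planner over tensors with known sizes and coarse liveness intervals. This is a sketch only: it is not the GraphPlanMemory or StorageRewrite implementation, and the tensor names, sizes and liveness values are made up for exposition.

```
#include <stdio.h>
#include <stddef.h>

/* Illustrative only: tensors with compile-time-known sizes and a
 * coarse liveness interval expressed in "operator index" units. */
typedef struct {
    const char *name;
    size_t size;      /* bytes */
    int first_use;    /* first operator that touches the tensor */
    int last_use;     /* last operator that touches the tensor */
    size_t offset;    /* assigned offset into the shared pool */
} Tensor;

static int live_together(const Tensor *a, const Tensor *b) {
    return !(a->last_use < b->first_use || b->last_use < a->first_use);
}

/* First-fit: bump the candidate offset past every already-placed tensor
 * that is simultaneously live and overlaps the candidate address range. */
static size_t place(const Tensor *t, const Tensor *placed, int n_placed) {
    size_t cand = 0;
    int moved = 1;
    while (moved) {
        moved = 0;
        for (int j = 0; j < n_placed; ++j) {
            size_t lo = placed[j].offset, hi = lo + placed[j].size;
            if (live_together(t, &placed[j]) && cand < hi && cand + t->size > lo) {
                cand = hi;
                moved = 1;
            }
        }
    }
    return cand;
}

int main(void) {
    /* Hypothetical sizes/liveness, loosely inspired by the TIR example below. */
    Tensor tensors[] = {
        {"conv_out", 401408, 0, 1, 0},
        {"pad_tmp",  430592, 1, 2, 0},
        {"relu_out", 401408, 2, 3, 0},  /* never live with conv_out: can reuse its space */
    };
    size_t pool_size = 0;
    for (int i = 0; i < 3; ++i) {
        tensors[i].offset = place(&tensors[i], tensors, i);
        size_t end = tensors[i].offset + tensors[i].size;
        if (end > pool_size) pool_size = end;
    }
    for (int i = 0; i < 3; ++i)
        printf("%-8s -> offset %zu\n", tensors[i].name, tensors[i].offset);
    printf("pool size: %zu bytes (vs %zu if nothing is shared)\n",
           pool_size, (size_t)(401408 + 430592 + 401408));
    return 0;
}
```

In this toy example the third tensor reuses the first tensor's space, so the pool ends up smaller than the sum of all tensor sizes; the same idea underlies both passes described above.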
# Motivation

For embedded use-cases, it's widely accepted that aggressive memory optimizations are vital. Initially, we are looking at enabling memory planning for embedded use-cases using the AoT executor.

Therefore, there exist two main shortcomings of the current approach:

* The memory used by intermediary tensors within operators is not shared with the memory used by inter-operator tensors.

Example TIR:
```
primfn(placeholder_3: handle, placeholder_4: handle, placeholder_5: handle, T_cast_1: handle) -> ()
  attr = {"global_symbol": "fused_nn_conv2d_add_fixed_point_multiply_clip_cast_cast_21", "tir.noalias": True}
  buffers = {T_cast: Buffer(T_cast_2: Pointer(int16), int16, [1, 56, 56, 128], []),
             placeholder_2: Buffer(placeholder_6: Pointer(int32), int32, [1, 1, 1, 128], []),
             placeholder: Buffer(placeholder_7: Pointer(int16), int16, [1, 56, 56, 128], []),
             placeholder_1: Buffer(placeholder_8: Pointer(int16), int16, [3, 3, 128, 1], [])}
  buffer_map = {placeholder_3: placeholder, placeholder_4: placeholder_1, placeholder_5: placeholder_2, T_cast_1: T_cast} {
  attr [PaddedInput: Pointer(int16)] "storage_scope" = "global";
  allocate(PaddedInput, int16, [430592]);
  attr [DepthwiseConv2d: Pointer(int32)] "storage_scope" = "global";
  allocate(DepthwiseConv2d, int32, [401408]) {
    for (i1: int32, 0, 58) {
      for (i2: int32, 0, 58) {
        for (i3: int32, 0, 128) {
          PaddedInput[(((i1*7424) + (i2*128)) + i3)] = @tir.if_then_else(((((1 <= i1) && (i1 < 57)) && (1 <= i2)) && (i2 < 57)), (int16*)placeholder_7[((((i1*7168) + (i2*128)) + i3) - 7296)], 0i16, dtype=int16)
        }
```

The above TIR snippet shows that the two intra-operator buffers, PaddedInput and DepthwiseConv2d, are not visible for optimization by the Relay-level GraphPlanMemory approach.

* Assumption of local optimization: performing sharing inside the operator first and subsequently sharing that workspace with inter-operator tensors would be sub-optimal.

Thus, for the embedded use-cases, we'd need a unified static memory planner that performs memory planning of all tensors holistically to achieve the best memory utilization.

# Goals

G1. There should be no TVMBackendAlloc(/Free)Workspace calls generated for tir.allocates whose size can be evaluated at compile time.

Currently, the TVM codegen and the AoT executor rely on TVMB(A/F)W calls to increment/decrement a pointer into a user-provided workspace buffer. By the end of this set of work, if the backend uses Unified Static Memory Planning, there should be no TVMB(A/F)W calls; rather, the correct offset into the user-provided buffer should be codegen'd for allocates whose size argument can be evaluated at compile time. Dynamically sized allocates will remain untouched and will be lowered as usual.
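To illustrate the intent of G1, the following is a hand-written sketch (not actual TVM codegen output) of how the two compile-time-sized allocates from the TIR example above could be lowered as fixed offsets into a user-provided workspace buffer; the function name and offset values are hypothetical.

```
#include <stdint.h>

/* Hypothetical offsets a static memory planner could have computed at
 * compile time for the two intra-operator buffers in the TIR above. */
#define PADDED_INPUT_OFFSET      0
#define DEPTHWISE_CONV2D_OFFSET  861184  /* 430592 x sizeof(int16_t) */

int32_t fused_conv2d_sketch(const int16_t *input, int32_t *output,
                            uint8_t *workspace /* user-provided buffer */) {
    /* Before USMP (simplified):
     *   int16_t *PaddedInput = TVMBackendAllocWorkspace(...);
     *   int32_t *DepthwiseConv2d = TVMBackendAllocWorkspace(...);
     * After USMP, the same allocates become fixed offsets: */
    int16_t *PaddedInput     = (int16_t *)(workspace + PADDED_INPUT_OFFSET);
    int32_t *DepthwiseConv2d = (int32_t *)(workspace + DEPTHWISE_CONV2D_OFFSET);

    /* ... operator body uses PaddedInput / DepthwiseConv2d exactly as before ... */
    (void)input; (void)output; (void)PaddedInput; (void)DepthwiseConv2d;
    return 0;
}
```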
G2. The static memory planning algorithm should be changeable.

There are a variety of memory planning algorithms in discussion with different tradeoffs (see https://discuss.tvm.apache.org/t/discussion-alignment-memory-planning/9730 and https://blog.tensorflow.org/2020/10/optimizing-tensorflow-lite-runtime.html). Depending on the topology and schedules of intermediary buffers, it should be easy to change the memory planning algorithm. However, the current design ties the algorithm intimately to the IR constructs, making it harder to modularize or change the algorithm without inventing a whole new pass. In reality, the outcome of USMP's algorithm is a set of offsets within a given workspace buffer. Moreover, to produce that, it should only need to know the size of each tensor and their relative liveness. Therefore, the algorithm interface to USMP should be kept simple so that more algorithms can be added.

G3. Multiple pool support (including constants)

Ideally, the user would expect to provide these buffers at the granularity of the memories they'd want to pin them to. E.g., if there are two RW memories, DRAM and SRAM, the buffers need to be identified and pooled by the compiler. Similarly, for constant data, we need a mechanism to allow the user to pin them to appropriate memories; addresses in the IR would simply be offsets into the constant buffer(s) provided by the user.

# Guide-level explanation

## U1: Most simple use case

### TVMC

```
tvmc compile my_model.tflite --executor=aot --output-format=mlf --target=c
```

### Codegen'd artifacts

```
//Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model = {
    ...
    .entrypoint = &entrypoint,
}

static uint8_t workspace_buffer[WORKSPACE_BUFFER_SIZE];
static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = ;

static int32_t entrypoint(TVMInputs_my_model* inputs,
                          TVMOutputs_my_model* outputs,
                          TVMContext* context){
    return my_model_main(inputs->input0,
                         outputs->output0,
                         workspace_buffer,
                         parameters_buffer,
                         context->resource_handle);
}
```
```
// metadata.h

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model;
```

### User Application

```
// The User Application
extern const TVMModel my_model;
int main(...) {
    ...
    TVMInputs_my_model inputs = {my_data};
    TVMOutputs_my_model outputs = {output_space};
    TVMExecute(&my_model,
               &inputs,
               &outputs,
               NULL);
}
```
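For completeness, one possible shape of the TVMExecute call used above is sketched below, assuming a TVMModel that simply carries the generated entrypoint as a function pointer. The RFC does not prescribe these exact signatures, so treat them as an assumption.

```
#include <stdint.h>

/* Illustrative only: one way TVMExecute could dispatch to the model's
 * generated entrypoint, assuming TVMModel carries a function pointer as
 * sketched in the codegen'd artifacts. Signatures here are hypothetical. */
typedef struct TVMContext TVMContext;

typedef struct {
    int32_t (*entrypoint)(void *inputs, void *outputs, TVMContext *context);
} TVMModel;

static inline int32_t TVMExecute(const TVMModel *model, void *inputs,
                                 void *outputs, TVMContext *context) {
    return model->entrypoint(inputs, outputs, context);
}
```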
## U2: User wants to share workspaces

### TVMC

```
tvmc compile my_model_1.tflite
     --executor=aot
     --output-format=mlf
     --target=accel,c
     --with-workspace-buffer="name=sram;target=c,accel"

tvmc compile my_model_2.tflite
     --executor=aot
     --output-format=mlf
     --target=accel,c
     --with-workspace-buffer="name=sram;target=c,accel"
```

### Codegen'd Artifacts

```
//Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model_1 = {
    ...
    .entrypoint = &entrypoint,
}

static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = ;

static int32_t entrypoint(TVMInputs_my_model_1* inputs,
                          TVMOutputs_my_model_1* outputs,
                          TVMContext* context){
    return my_model_1_main(inputs->input0,
                           outputs->output0,
                           parameters_buffer,
                           context->workspaces.sram,
                           context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* sram;
} TVMWorkspaces_my_model_1;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model_1;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model_1;
```
```
//Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model_2 = {
    ...
    .entrypoint = &entrypoint,
}

static const uint8_t parameters_buffer[PARAMETERS_BUFFER_SIZE] = ;

static int32_t entrypoint(TVMInputs_my_model_2* inputs,
                          TVMOutputs_my_model_2* outputs,
                          TVMContext* context){
    return my_model_2_main(inputs->input0,
                           outputs->output0,
                           parameters_buffer,
                           context->workspaces.sram,
                           context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* sram;
} TVMWorkspaces_my_model_2;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model_2;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model_2;
```

### User Application

```
// The User Application
extern const TVMModel my_model_1;
extern const TVMModel my_model_2;

// Please calculate the maximum of TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE and
// TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE and define it as
// TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE (see the sketch after this code block).
// Alternatively, the user could use malloc (if permitted and desired) to compute the max at runtime.
static uint8_t workspace_buffer[TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE];

int main(...) {
    ...
    TVMContext context;
    TVMInputs_my_model_1 inputs_1 = {my_data_1};
    TVMOutputs_my_model_1 outputs_1 = {output_space_1};
    TVMWorkspaces_my_model_1 workspaces1 = {
        .sram = workspace_buffer,
    };
    TVMSetWorkspaces(&context, &workspaces1);
    TVMExecute(&my_model_1, &inputs_1, &outputs_1, &context);
    ...
    TVMInputs_my_model_2 inputs_2 = {my_data_2};
    TVMOutputs_my_model_2 outputs_2 = {output_space_2};
    TVMWorkspaces_my_model_2 workspaces2 = {
        .sram = workspace_buffer,
    };
    TVMSetWorkspaces(&context, &workspaces2);
    TVMExecute(&my_model_2, &inputs_2, &outputs_2, &context);
    ...
}
```
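Following the comment in the snippet above, the shared buffer can be sized at compile time to the larger of the two generated size macros. A minimal sketch is shown below; the stand-in values and the TVM_MAX helper are assumptions so that the snippet compiles on its own, while the two per-model size macros normally come from the generated metadata.h files.

```
#include <stdint.h>

/* Stand-in values; in practice these come from the generated metadata.h. */
#ifndef TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE
#define TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE 65536
#endif
#ifndef TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE
#define TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE 98304
#endif

/* Shared buffer sized to the larger of the two per-model requirements. */
#define TVM_MAX(a, b) ((a) > (b) ? (a) : (b))
#define TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE \
    TVM_MAX(TVM_MY_MODEL_1_SRAM_WORKSPACE_BUFFER_SIZE, \
            TVM_MY_MODEL_2_SRAM_WORKSPACE_BUFFER_SIZE)

static uint8_t workspace_buffer[TVM_MY_MODELS_COMMON_WORKSPACE_BUFFER_SIZE];
```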
## U3: User wants to pin buffers to different memories

### TVMC

```
tvmc compile my_model.tflite
     --executor=aot
     --target=accel,c
     --with-workspace-buffer="name=dtcm;target=c;size=1000"   # Here the size is more of a hint/guide provided to USMP
     --with-workspace-buffer="name=sram;target=c,accel"
     --with-parameter-buffer="name=itcm;target=c;size=5000"   # Here the size is more of a hint/guide provided to USMP
     --with-parameter-buffer="name=flash;target=c,accel"
```

### Codegen'd Artifacts

```
//Codegen'd artifacts in metadata.c (lib0.c)
const TVMModel my_model = {
    ...
    .entrypoint = &entrypoint,
}

static int32_t entrypoint(TVMInputs_my_model* inputs,
                          TVMOutputs_my_model* outputs,
                          TVMContext* context){
    return my_model_main(inputs->input0,
                         outputs->output0,
                         context->workspaces.dtcm,
                         context->workspaces.sram,
                         context->parameters.itcm,
                         context->parameters.flash,
                         context->resource_handle);
}
```
```
// metadata.h

#define TVM_MY_MODEL_DTCM_WORKSPACE_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_SRAM_WORKSPACE_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_ITCM_PARAMETER_BUFFER_SIZE xxxx
#define TVM_MY_MODEL_FLASH_PARAMETER_BUFFER_SIZE xxxx

typedef struct {
    uint8_t* dtcm;
    uint8_t* sram;
} TVMWorkspaces_my_model;

typedef struct {
    uint8_t* itcm;
    uint8_t* flash;
} TVMParameters_my_model;

typedef struct {
    uint8_t* input0;
} TVMInputs_my_model;

typedef struct {
    uint8_t* output0;
} TVMOutputs_my_model;
```

### User Application

```
// The User Application
extern const TVMModel my_model;
__attribute__((section("ITCM"))) const uint8_t my_model_params_1[TVM_MY_MODEL_ITCM_PARAMETER_BUFFER_SIZE] = ;
__attribute__((section("FLASH"), aligned(16))) const uint8_t my_model_params_2[TVM_MY_MODEL_FLASH_PARAMETER_BUFFER_SIZE] = ;
__attribute__((section("DTCM"))) static uint8_t workspace_buffer_1[TVM_MY_MODEL_DTCM_WORKSPACE_BUFFER_SIZE];
__attribute__((section("SRAM"), aligned(16))) static uint8_t workspace_buffer_2[TVM_MY_MODEL_SRAM_WORKSPACE_BUFFER_SIZE];

int main(...) {
    ...
    TVMContext context;
    TVMInputs_my_model inputs = {input};
    TVMOutputs_my_model outputs = {output};
    TVMWorkspaces_my_model workspaces = {
        .dtcm = workspace_buffer_1,
        .sram = workspace_buffer_2,
    };
    TVMParameters_my_model parameters = {
        .itcm = my_model_params_1,
        .flash = my_model_params_2,
    };
    TVMSetWorkspaces(&context, &workspaces);
    TVMSetParameters(&context, &parameters);
    TVMExecute(&my_model, &inputs, &outputs, &context);
}
```

# Reference-level explanation

## Overview

This should be an IRModule (TIR) → IRModule (TIR) pass.

Inputs:
* AoT TIR PrimFunc (the control function describing the call graph to operators)

Review comment:

Hi @areusch @csullivan,

Thanks for the clarification and it makes sense for us.