From issues-return-8416-archive-asf-public=cust-asf.ponee.io@systemml.apache.org Wed May 16 22:07:05 2018
Mailing-List: contact issues-help@systemml.apache.org; run by ezmlm
Reply-To: dev@systemml.apache.org
Delivered-To: mailing list issues@systemml.apache.org
Date: Wed, 16 May 2018 20:07:00 +0000 (UTC)
From: "LI Guobao (JIRA)"
To: issues@systemml.apache.org
Subject: [jira] [Updated] (SYSTEMML-2299) API design of the paramserv function

     [ https://issues.apache.org/jira/browse/SYSTEMML-2299?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

LI Guobao updated SYSTEMML-2299:
--------------------------------
Description: 
The objective of the "paramserv" built-in function is to update an initial or existing model with a given configuration. An initial function signature would be:

{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, val_labels=Y_val, upd="fun1", agg="fun2", mode="LOCAL", utype="BSP", freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", hyperparams=params, checkpointing="NONE"){code}

We are interested in providing the model (a struct-like data structure consisting of the weights, the biases and the hyperparameters), the training features and labels, the validation features and labels, the batch update function (i.e., the gradient calculation function), the update strategy (e.g.
sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or mini-batch), the gradient aggregation function, the number of epochs, the batch size, the degree of parallelism, the data partition scheme, a list of additional hyperparameters, as well as the checkpointing strategy. The function returns the trained model as a struct.

*Inputs*:
* model: a list consisting of the weight and bias matrices
* features: training features matrix
* labels: training label matrix
* val_features: validation features matrix
* val_labels: validation label matrix
* upd: the name of the gradient calculation function
* agg: the name of the gradient aggregation function
* mode (options: LOCAL, REMOTE_SPARK): the execution backend where the parameter server runs
* utype (options: BSP, ASP, SSP): the update strategy
* freq (options: EPOCH, BATCH): the frequency of updates
* epochs: the number of epochs
* batchsize [optional] (default value: 64): the batch size; if the update frequency is "EPOCH", this argument is ignored
* k: the degree of parallelism
* scheme (options: disjoint_contiguous, disjoint_round_robin, disjoint_random, overlap_reshuffle): the data partition scheme, i.e., how the data is distributed across workers
* hyperparams [optional]: a list of additional hyperparameters, e.g., learning rate, momentum
* checkpointing (options: NONE (default), EPOCH, EPOCH10) [optional]: the checkpoint strategy; a checkpoint can be set for each epoch or every 10 epochs

*Output*:
* model': a list consisting of the updated weight and bias matrices


was:
The objective of the "paramserv" built-in function is to update an initial or existing model with a given configuration.
An initial function signature would be:

{code:java}
model'=paramserv(model, features=X, labels=Y, val_features=X_val, val_labels=Y_val, upd="fun1", agg="fun2", mode="LOCAL", utype="BSP", freq="BATCH", epochs=100, batchsize=64, k=7, scheme="disjoint_contiguous", hyperparams=params, checkpointing="NONE"){code}

We are interested in providing the model (a struct-like data structure consisting of the weights, the biases and the hyperparameters), the training features and labels, the validation features and labels, the batch update function (i.e., the gradient calculation function), the update strategy (e.g. sync, async, hogwild!, stale-synchronous), the update frequency (e.g. epoch or mini-batch), the gradient aggregation function, the number of epochs, the batch size, the degree of parallelism, the data partition scheme, a list of additional hyperparameters, as well as the checkpointing strategy. The function returns the trained model as a struct.
*Inputs*:
* model: a list consisting of the weight and bias matrices
* features: training features matrix
* labels: training label matrix
* val_features: validation features matrix
* val_labels: validation label matrix
* upd: the name of the gradient calculation function
* agg: the name of the gradient aggregation function
* mode (options: LOCAL, REMOTE_SPARK): the execution backend where the parameter server runs
* utype (options: BSP, ASP, SSP): the update strategy
* freq (options: EPOCH, BATCH): the frequency of updates
* epochs: the number of epochs
* batchsize [optional]: the batch size; if the update frequency is "EPOCH", this argument is ignored
* k: the degree of parallelism
* scheme (options: disjoint_contiguous, disjoint_round_robin, disjoint_random, overlap_reshuffle): the data partition scheme, i.e., how the data is distributed across workers
* hyperparams [optional]: a list of additional hyperparameters, e.g., learning rate, momentum
* checkpointing (options: NONE (default), EPOCH, EPOCH10) [optional]: the checkpoint strategy; a checkpoint can be set for each epoch or every 10 epochs

*Output*:
* model': a list consisting of the updated weight and bias matrices


> API design of the paramserv function
> ------------------------------------
>
>                 Key: SYSTEMML-2299
>                 URL: https://issues.apache.org/jira/browse/SYSTEMML-2299
>             Project: SystemML
>          Issue Type: Sub-task
>            Reporter: LI Guobao
>            Assignee: LI Guobao
>            Priority: Major
>             Fix For: SystemML 1.2
>


--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
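To make the proposed semantics of utype="BSP", freq="BATCH" and scheme="disjoint_contiguous" concrete, here is a minimal sketch of one BSP batch-update cycle. This is plain Python purely for illustration: paramserv update/aggregation functions would be written in DML, and every name below (disjoint_contiguous, upd, agg, paramserv_bsp) is a hypothetical stand-in, not a SystemML API.

```python
# Illustrative sketch (NOT SystemML code) of the cycle the proposal describes:
# k workers each hold a contiguous, disjoint slice of the data; on every
# batch step all workers compute a gradient on their local batch ("upd"),
# the server aggregates the gradients and applies them ("agg"), and the
# updated model is broadcast back before the next step (the BSP barrier).
import numpy as np

def disjoint_contiguous(X, y, k):
    """Split rows into k contiguous, non-overlapping partitions."""
    idx = np.array_split(np.arange(X.shape[0]), k)
    return [(X[i], y[i]) for i in idx]

def upd(model, X_batch, y_batch):
    """Per-worker gradient, here for least-squares linear regression."""
    err = X_batch @ model["w"] - y_batch
    return {"w": X_batch.T @ err / len(y_batch)}

def agg(model, grads, lr=0.1):
    """Server-side aggregation: average worker gradients, apply one SGD step."""
    g = np.mean([g["w"] for g in grads], axis=0)
    return {"w": model["w"] - lr * g}

def paramserv_bsp(model, X, y, k=2, epochs=50, batchsize=8):
    parts = disjoint_contiguous(X, y, k)
    for _ in range(epochs):
        for start in range(0, min(len(yp) for _, yp in parts), batchsize):
            # BSP barrier: collect the gradient of *every* worker before updating
            grads = [upd(model, Xp[start:start + batchsize],
                         yp[start:start + batchsize]) for Xp, yp in parts]
            model = agg(model, grads)
    return model

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
model = paramserv_bsp({"w": np.zeros(3)}, X, y)
# model["w"] now approximates w_true
```

Under ASP the barrier would disappear (the server applies each worker's gradient as soon as it arrives), and under SSP workers may run ahead of one another up to a bounded staleness; freq="EPOCH" would move the agg call out of the inner batch loop.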