spark-dev mailing list archives

From WangJianfei <wangjianfe...@otcaix.iscas.ac.cn>
Subject Why don't we implement some adaptive learning rate methods, such as Adadelta and Adam?
Date Wed, 30 Nov 2016 08:51:43 GMT
Hi devs:
    Adaptive learning rate methods such as Adadelta and Adam normally converge
faster than standard SGD, so why don't we implement them?
See this link for more details:
http://sebastianruder.com/optimizing-gradient-descent/index.html#adadelta
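For concreteness, here is a minimal sketch of the Adam update rule (Kingma & Ba, 2014)
in plain Scala, just to show the per-coordinate adaptive step that plain SGD lacks.
The names AdamState and adamStep are illustrative only and are not part of any MLlib API.

// Minimal Adam update sketch on plain arrays (illustrative, not MLlib code).
case class AdamState(m: Array[Double], v: Array[Double], t: Int)

def adamStep(
    w: Array[Double],      // current weights
    grad: Array[Double],   // gradient of the loss at w
    s: AdamState,
    lr: Double = 0.001,
    beta1: Double = 0.9,
    beta2: Double = 0.999,
    eps: Double = 1e-8): (Array[Double], AdamState) = {
  val t = s.t + 1
  val m = new Array[Double](w.length)
  val v = new Array[Double](w.length)
  val wNew = new Array[Double](w.length)
  var i = 0
  while (i < w.length) {
    // Exponential moving averages of the gradient and the squared gradient.
    m(i) = beta1 * s.m(i) + (1 - beta1) * grad(i)
    v(i) = beta2 * s.v(i) + (1 - beta2) * grad(i) * grad(i)
    // Bias-corrected estimates.
    val mHat = m(i) / (1 - math.pow(beta1, t))
    val vHat = v(i) / (1 - math.pow(beta2, t))
    // Per-coordinate adaptive step size, unlike the single global rate of SGD.
    wNew(i) = w(i) - lr * mHat / (math.sqrt(vHat) + eps)
    i += 1
  }
  (wNew, AdamState(m, v, t))
}

Starting from m = v = 0 and t = 0 and calling adamStep once per mini-batch gradient
reproduces the update described at the link above.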




