spark-dev mailing list archives

From WangJianfei <>
Subject Why don't we implement some adaptive learning rate methods, such as AdaDelta and Adam?
Date Wed, 30 Nov 2016 08:51:43 GMT
Hi devs:
    Normally, adaptive learning rate methods converge faster than standard
SGD, so why don't we implement them?
See the link for more details.
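For illustration of what such a method looks like (this is a minimal NumPy sketch, not Spark code or any existing Spark API), here is the Adam update rule: it keeps running estimates of the gradient's first and second moments and scales each step by them, which is what gives the faster convergence mentioned above. The function name `adam_step` and all hyperparameter defaults are the commonly cited ones, not anything from MLlib.

```python
import numpy as np

def adam_step(theta, grad, m, v, t,
              lr=0.1, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update; returns new parameters and moment estimates.

    Illustrative sketch only -- not a Spark MLlib API.
    """
    m = beta1 * m + (1 - beta1) * grad       # first-moment (mean) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias-corrected moments
    v_hat = v / (1 - beta2 ** t)
    # Per-parameter adaptive step: large v_hat shrinks the step
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v

# Toy usage: minimize f(theta) = theta^2 starting from theta = 5.0
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 201):
    grad = 2 * theta
    theta, m, v = adam_step(theta, grad, m, v, t)
print(theta)  # close to the minimum at 0
```

Because the step size adapts per parameter, Adam needs far less learning-rate tuning than plain SGD, which is the usual argument for offering it alongside SGD in an optimizer library.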

Sent from the Apache Spark Developers List mailing list archive.

