systemml-dev mailing list archives

From Krishna Kalyan <krishnakaly...@gmail.com>
Subject Running Performance Test workloads
Date Wed, 06 Dec 2017 23:52:03 GMT
Hello,
I have been running some performance tests on an AWS EMR cluster
(1 master, 2 slaves, instance type m3.xlarge).

In case you want to run some performance workloads on the 1.0.0 release,
please follow the steps below.

- Set SYSTEMML_HOME, SPARK_HOME environment variables in your cluster
- Download and extract the release (systemml-1.0.0-bin.tgz)
- The following extra files need to be copied into the corresponding
systemml-1.0.0-bin folders:

/home/hadoop/systemml/conf/*
/home/hadoop/systemml/bin/*
/home/hadoop/systemml/scripts/perftest/extractTestData.dml
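The setup steps above can be sketched as a shell snippet. The paths are the ones from this cluster, and the SPARK_HOME value is an assumption (the usual EMR location); adjust both to your environment. The copies are guarded so the snippet only acts if the source checkout is present.

```shell
# Point the perftest scripts at the extracted release and at Spark.
# SPARK_HOME below is an assumption (typical EMR path) -- adjust as needed.
export SYSTEMML_HOME=/home/hadoop/systemml-1.0.0-bin
export SPARK_HOME=/usr/lib/spark

# Copy the extra files into the matching folders of the release,
# if the source checkout exists at /home/hadoop/systemml.
SRC=/home/hadoop/systemml
if [ -d "$SRC/conf" ]; then
  cp "$SRC"/conf/* "$SYSTEMML_HOME"/conf/
  cp "$SRC"/bin/*  "$SYSTEMML_HOME"/bin/
  cp "$SRC"/scripts/perftest/extractTestData.dml \
     "$SYSTEMML_HOME"/scripts/perftest/
fi
```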

Using the jars in the lib folder gave me an error (not sure why); I had to
manually build the latest version and copy SystemML.jar to the
systemml-1.0.0-bin/target folder.

/home/hadoop/systemml/target/*
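The rebuild step could look like the sketch below. It assumes Maven is installed and that the SystemML source checkout sits at /home/hadoop/systemml; the `-DskipTests` flag is just to speed up the build.

```shell
# Rebuild SystemML from source and drop the jar into the release's
# target folder. Guarded: only runs if mvn and the checkout exist.
RELEASE=/home/hadoop/systemml-1.0.0-bin
if command -v mvn >/dev/null 2>&1 && [ -d /home/hadoop/systemml ]; then
  cd /home/hadoop/systemml
  mvn clean package -DskipTests
  mkdir -p "$RELEASE"/target
  cp target/SystemML.jar "$RELEASE"/target/
fi
```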

Make sure that your temp folders are empty
rm -r /home/hadoop/systemml-1.0.0-bin/temp_perftest/
hdfs dfs -rmr /user/hadoop/*
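A slightly safer form of the cleanup, with guards so it is a no-op when the folders are already gone. Note that `hdfs dfs -rmr` is deprecated in favor of `hdfs dfs -rm -r`, and that this wipes everything under /user/hadoop, so make sure nothing else lives there.

```shell
# Remove leftover local and HDFS state from previous perftest runs.
TEMP_PERFTEST=/home/hadoop/systemml-1.0.0-bin/temp_perftest
if [ -d "$TEMP_PERFTEST" ]; then
  rm -r "$TEMP_PERFTEST"
fi
# -rmr is deprecated; -rm -r is the current form. This deletes
# everything under /user/hadoop -- double-check the path first.
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -rm -r '/user/hadoop/*'
fi
```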

And finally:
./run_perftest.py (hybrid-spark)
This will run the full test suite for all algorithms on 10MB data.

To run in singlenode mode instead:
./run_perftest.py --config-dir /home/hadoop/systemml-1.0.0-bin/temp
--temp-dir /home/hadoop/systemml-1.0.0-bin/temp --file-system-type local
--exec-type singlenode

Regards,
Krishna
