systemml-issues mailing list archives

From "Matthias Boehm (JIRA)" <>
Subject [jira] [Created] (SYSTEMML-1772) Perftest: MultiLogReg 100M x 1K, sparse fails with OOM
Date Sat, 15 Jul 2017 00:31:00 GMT
Matthias Boehm created SYSTEMML-1772:

             Summary: Perftest: MultiLogReg 100M x 1K, sparse fails with OOM
                 Key: SYSTEMML-1772
             Project: SystemML
          Issue Type: Bug
            Reporter: Matthias Boehm

Our perftest MultiLogReg 100M x 1K, sparse fails with the following OOM when run with a 20GB
driver budget.

java.lang.OutOfMemoryError: GC overhead limit exceeded
17/07/14 13:42:04 WARN hdfs.BlockReaderFactory: I/O error constructing remote block reader. Premature EOF: no length prefix available
	at org.apache.hadoop.hdfs.protocolPB.PBHelper.vintPrefixed(
	at org.apache.hadoop.hdfs.RemoteBlockReader2.newBlockReader(
	at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReader(
	at org.apache.hadoop.hdfs.BlockReaderFactory.getRemoteBlockReaderFromTcp(
	at org.apache.hadoop.hdfs.DFSInputStream.blockSeekTo(
	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(
	at java.util.concurrent.ThreadPoolExecutor.runWorker(
	at java.util.concurrent.ThreadPoolExecutor$

Thanks for catching this issue [~acs_s]. The root cause can be seen in the following HOP characteristics
and the generated runtime plan, which contains a CP mmchain operation for hop 456:

17/07/14 16:48:50 INFO recompile.Recompiler: EXPLAIN RECOMPILE 
GENERIC (lines 207-208):
--(432) TRead X [100000000,1000,1000,1000,999978303] [0,0,23270 -> 23270MB], SPARK
--(439) r(t) (432) [1000,100000000,1000,1000,999978303] [23270,0,11444 -> 34714MB], SPARK
--(431) TRead P [100000000,2,1000,1000,200000000] [0,0,1526 -> 1526MB], CP
--(436) rix (431) [100000000,1,1000,1000,-1] [1526,0,763 -> 2289MB], CP
--(1276) u(sprop) (436) [100000000,1,1000,1000,-1] [763,0,763 -> 1526MB], CP
--(429) TRead ssX_V [1000,1,1000,1000,1000] [0,0,0 -> 0MB], CP
--(437) ba(+*) (432,429) [100000000,1,1000,1000,-1] [23270,0,763 -> 24033MB], SPARK
--(1275) b(*) (1276,437) [100000000,1,1000,1000,-1] [1526,0,763 -> 2289MB], CP
--(456) ba(+*) (439,1275) [1000,1,1000,1000,-1] [12207,0,0 -> 12207MB], CP
--(457) TWrite HV (456) [1000,1,1000,1000,-1] [0,0,0 -> 0MB], CP
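
For readers unfamiliar with this output, each hop line can be decoded mechanically. The following sketch parses the line for hop 456, assuming the bracketed fields follow SystemML's EXPLAIN convention of [rows, cols, rowsInBlock, colsInBlock, nnz] followed by the memory estimates [input, intermediate, output -> total] and the execution type; the field names in the snippet are illustrative, not SystemML internals.

```python
import re

# One hop line from the EXPLAIN RECOMPILE output above.
line = "--(456) ba(+*) (439,1275) [1000,1,1000,1000,-1] [12207,0,0 -> 12207MB], CP"

# Assumed layout: --(id) op (input ids) [dims/blocks/nnz] [in,inter,out -> totalMB], exec
pattern = (r"--\((\d+)\) (\S+) \(([\d,]+)\) "
           r"\[([-\d,]+)\] \[(\d+),(\d+),(\d+) -> (\d+)MB\], (\w+)")
hop_id, op, inputs, dims, mem_in, mem_inter, mem_out, mem_total, exec_type = \
    re.match(pattern, line).groups()

print(f"hop {hop_id}: {op} on hops ({inputs}) -> {exec_type}, "
      f"input estimate {mem_in}MB of {mem_total}MB total")
```

For hop 456 this shows a 12207MB input estimate scheduled in CP, which is the operation that blows past the remaining driver budget.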

The final matrix multiplication for {{t(X) tmp}} fits in CP and satisfies the mmchain pattern.
However, mmchain avoids the transpose (assuming that X must fit in memory given that t(X)
fits in memory). Given our MCSR and CSR representations, this is not necessarily true, because
there each row carries a certain sparse row overhead independent of its number of non-zeros.
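
To make the per-row overhead concrete, here is a back-of-the-envelope estimate for this X (100M x 1K, ~1e9 non-zeros). The per-entry and per-row constants are assumptions for illustration, not SystemML's exact internal formulas:

```python
MB = 1024 * 1024

def csr_size_mb(nrows, nnz):
    """CSR: one values array (8B doubles), one column-index array (4B ints),
    and one row-pointer array (4B per row + 1)."""
    return (8 * nnz + 4 * nnz + 4 * (nrows + 1)) / MB

def mcsr_size_mb(nrows, nnz, row_overhead=124):
    """MCSR: per-row arrays add a fixed overhead per row (object headers,
    array headers, length fields) on top of ~12B per non-zero; the
    124B/row constant is an assumed illustrative value."""
    return (12 * nnz + row_overhead * nrows) / MB

nnz = 999978303
# t(X): 1000 rows, so the per-row overhead is negligible (~11.4GB)
print(round(csr_size_mb(1000, nnz)))          # -> 11444
# X: 100M rows, so the per-row overhead adds ~11.8GB (~23.3GB)
print(round(mcsr_size_mb(100_000_000, nnz)))  # -> 23269
```

With these assumed constants, t(X) comes out near the 11444MB estimate for hop 439 while X lands close to the 23270MB estimate for hop 432, i.e., above the 20GB driver budget even though t(X) alone would fit.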

We should consider this scenario during execution type selection and send the entire pattern
to SPARK in these cases, which is a good idea anyway because the first matrix multiplication
already runs in SPARK. If the additional broadcast and blocksize constraints are met, we compile
a SPARK mmchain; otherwise, two subsequent SPARK matrix multiplications.
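
The proposed selection logic can be sketched as follows; the function and argument names are hypothetical, not SystemML's actual compiler API, but the decision order mirrors the fix described above (test the in-memory size of X itself, including sparse row overhead, rather than t(X)):

```python
def choose_exec_type(x_inmem_mb, budget_mb, fits_broadcast, fits_blocksize):
    """Pick an operator for the mmchain pattern t(X) (w * (X v)).
    x_inmem_mb must be the estimate for X itself, incl. sparse row overhead."""
    if x_inmem_mb <= budget_mb:
        return "CP mmchain"
    # X does not fit in the driver: keep the entire pattern in SPARK.
    if fits_broadcast and fits_blocksize:
        return "SPARK mmchain"
    return "two SPARK matrix multiplications"

# X at ~23GB vs. the 20GB driver budget from this perftest run:
print(choose_exec_type(23270, 20 * 1024, True, True))  # -> SPARK mmchain
```

Two subsequent SPARK matrix multiplications remain the fallback when the broadcast or blocksize constraints rule out a fused SPARK mmchain.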

This message was sent by Atlassian JIRA
