spark-reviews mailing list archives

From avulanov <...@git.apache.org>
Subject [GitHub] spark pull request: [SPARK-10599] Lower communication for block ma...
Date Tue, 15 Sep 2015 21:36:17 GMT
Github user avulanov commented on the pull request:

    https://github.com/apache/spark/pull/8757#issuecomment-140554085
  
    Thank you for the update. Indeed, the tests now finish in finite time. Let's add @mengxr to the discussion.
    
    Distributed matrix multiplication makes sense when it is faster than doing it on a single node. Let's assume that we have square blocks and that `block*block` takes time `Tblock` on a single machine. I prepared two tests (a short sketch of both setups follows the list):
      - Block-diagonal matrix multiplication `(M * M)`, where `M` is `NxN` blocks. Single-machine multiplication time will be `N*Tblock`. The optimal distributed time would be `Tblock` if the number of nodes is at least `N`. This seems to be embarrassingly parallel.
      - Columnar and row matrix multiplication `(M * M^T)`, where `M` has `1` column block and `N` row blocks. Single-machine multiplication time will be `N*N*Tblock`.
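
    Here is a minimal sketch of what the two setups look like with MLlib's `BlockMatrix` (block contents, sizes, and names are illustrative assumptions, not the benchmark code itself):

```scala
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.linalg.{DenseMatrix, Matrix}
import org.apache.spark.mllib.linalg.distributed.BlockMatrix

// Sketch of the two test setups; `b` is the block side and `n` the number of
// blocks per dimension, so one block multiply costs roughly `Tblock`.
object BlockMatrixSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("block-matrix-sketch"))
    val b = 1000
    val n = 5

    // Test 1: block-diagonal M (NxN blocks, only the diagonal blocks are present).
    val diagBlocks = sc.parallelize(0 until n).map { i =>
      ((i, i), DenseMatrix.rand(b, b, new java.util.Random(i)): Matrix)
    }
    val diag = new BlockMatrix(diagBlocks, b, b)
    val diagProduct = diag.multiply(diag) // ~N independent block multiplies

    // Test 2: columnar M (N row blocks, 1 column block), multiplied by its transpose.
    val colBlocks = sc.parallelize(0 until n).map { i =>
      ((i, 0), DenseMatrix.rand(b, b, new java.util.Random(i)): Matrix)
    }
    val col = new BlockMatrix(colBlocks, b, b)
    val outerProduct = col.multiply(col.transpose) // ~N*N block multiplies

    println(s"diag product blocks: ${diagProduct.blocks.count()}")
    println(s"outer product blocks: ${outerProduct.blocks.count()}")
    sc.stop()
  }
}
```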
    
    I've done a benchmark of single-node multiplication: for example, it takes 0.04s to multiply a 1000x1000 matrix and 16.55s for a 10000x10000 one with OpenBLAS on 2x Xeon X5650 @ 2.67GHz. More results are here: https://github.com/avulanov/scala-blas.
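
    For reference, a rough sketch of the kind of single-node timing behind these numbers, using netlib-java backed by OpenBLAS (the snippet is illustrative, not the exact scala-blas harness, which also does warm-up and repeated runs):

```scala
import com.github.fommil.netlib.BLAS

// Time one double-precision matrix multiply C = A * B on a single node.
// With a native backend (e.g. OpenBLAS) this is the `Tblock` cost discussed above.
object SingleNodeGemm {
  def main(args: Array[String]): Unit = {
    val n = 1000
    val rnd = new scala.util.Random(42)
    val a = Array.fill(n * n)(rnd.nextDouble())
    val b = Array.fill(n * n)(rnd.nextDouble())
    val c = new Array[Double](n * n)

    val start = System.nanoTime()
    // C = 1.0 * A * B + 0.0 * C, column-major layout, no transposes
    BLAS.getInstance().dgemm("N", "N", n, n, n, 1.0, a, n, b, n, 0.0, c, n)
    val seconds = (System.nanoTime() - start) / 1e9
    println(f"$n%dx$n%d dgemm took $seconds%.3f s")
  }
}
```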
    
    For the following distributed experiments, I am using 6 nodes with the same CPU: 5 workers and 1 master.
    #### Block-diagonal matrix multiplication:
    
    Size | Block | Time, s | Est. single-node time, s
    ------------ | ------------- | --------- | ---------
    1000x1000 | block:1000 | 0.539322901 | 0.04
    2000x2000 | block:1000 | 0.594227124 | 0.08
    3000x3000 | block:1000 | 0.541293169 | 0.12
    4000x4000 | block:1000 | 0.520753395 | 0.16
    5000x5000 | block:1000 | 0.702532957 | 0.2  
    
    Size | Block | Time, s | Est. single-node time, s
    ------------ | ------------- | --------- | ---------
    10000x10000 | block:10000 | 27.565218631 | 16.55
    20000x20000 | block:10000  | 28.363953039 | 33.1
    30000x30000 | block:10000 | 114.133834717 | 49.65
    40000x40000 | block:10000 | 117.701914787 | 66.2
    50000x50000 | block:10000 | 141.827804904 | 82.75
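
    (The estimate column is `N*Tblock` for the `N` diagonal blocks, e.g. 30000x30000 with block size 10000 is 3 * 16.55s = 49.65s.)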
    
    For some reason, the distributed operations are slower than the single-node estimate, even though they should parallelize well. Do you know the reason for that?
    
    #### Column and row matrix multiplication
    
    Size | Block | Time, s | Est. single-node time, s
    ------------ | ------------- | --------- | ---------
    1000x1000 | block:1000 |  0.281162649 | 0.04
    2000x1000 | block:1000 |  0.461582522 | 0.16
    3000x1000 | block:1000 |  0.520122422 | 0.36
    4000x1000 | block:1000 |  0.560923767 | 0.64
    5000x1000 | block:1000 |  0.887406721 | 1
    
    Distributed operations become faster than a single node as the columnar matrix gets bigger. The test did not finish for a block size of 10000 because of an out-of-free-space exception, even though I used an 18GB tmpfs as both `spark.local.dir` and the Java temp directory. It seems that the shuffle is really large. Should it be that big?
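
    For completeness, this is roughly how the temp storage is wired up (the mount point below is just a placeholder):

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Point both the Spark shuffle/spill directory and the executors' Java temp
// directory at the 18GB tmpfs mount.
val conf = new SparkConf()
  .setAppName("blockmatrix-benchmark")
  .set("spark.local.dir", "/mnt/tmpfs")
  .set("spark.executor.extraJavaOptions", "-Djava.io.tmpdir=/mnt/tmpfs")
val sc = new SparkContext(conf)
```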
    
    Link to the tests: https://github.com/avulanov/blockmatrix-benchmark/blob/master/src/blockmatrix.scala

