hadoop-hdfs-user mailing list archives

From Zbigniew Kostrzewa <kostrz...@9livesdata.com>
Subject [YARN] does reserved memory count as used?
Date Wed, 01 Mar 2017 08:28:31 GMT
Hi all,

I have a question about reservations in YARN when using the Fair Scheduler. I
have set up a small cluster of 3 nodes with 8GB RAM and 4 vcpus each. I have
submitted a single Spark job (SparkPi with 100000 iterations, to be exact), and
the web UI reports all memory (24GB) as used, but it also marks 6GB of memory
as reserved. If I understand the docs correctly, reservations are made on
existing free resources (which do not currently fit the application's needs),
not on resources that will become available in the future. So if 6GB is marked
as reserved, I would expect used memory to be 18GB rather than 24GB.
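
For reference, the job was submitted roughly like this (typed from memory,
path abbreviated, so treat it as a sketch rather than the exact command):

  # SparkPi from the Spark examples jar, yarn-cluster mode, 100000 iterations
  spark-submit --master yarn --deploy-mode cluster \
    --class org.apache.spark.examples.SparkPi \
    $SPARK_HOME/lib/spark-examples-*.jar 100000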

Could anyone shed some light on how reservations actually work in YARN? How is
it possible that all memory is marked as used and yet 6GB is still marked as
reserved?

A few details about the cluster:
OS: CentOS 7.3
Java: 1.8
Hadoop: 2.6.5 (configured with external shuffle service)
Spark: 1.6.2 (configured with dynamic allocation; rough config excerpt below)
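
In case it matters, the shuffle service and dynamic allocation are set up
roughly as follows (typed from memory, so exact values may differ slightly):

  yarn-site.xml:
    yarn.resourcemanager.scheduler.class =
        org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler
    yarn.nodemanager.resource.memory-mb = 8192
    yarn.nodemanager.aux-services = mapreduce_shuffle,spark_shuffle
    yarn.nodemanager.aux-services.spark_shuffle.class =
        org.apache.spark.network.yarn.YarnShuffleService

  spark-defaults.conf:
    spark.shuffle.service.enabled    true
    spark.dynamicAllocation.enabled  true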

Regards,
Zbyszek


