hadoop-mapreduce-user mailing list archives

From "Mich Talebzadeh" <m...@peridale.co.uk>
Subject RE: Swap requirements
Date Wed, 25 Mar 2015 23:14:35 GMT
Yes, I believe that is the case.

This is very common, going back to the days of tuning the maximum shared memory parameter
(SHMMAX) on Solaris and the like. Large applications tend to have processes with large virtual
address spaces. This is typically the result of attaching to large shared memory segments used
by applications, and of large copy-on-write (COW) segments that get mapped but sometimes never
actually touched. The net effect is that on a host supporting multiple applications, the virtual
address space requirements grow quite large, typically exceeding physical memory. Consequently,
a fair amount of swap disk needs to be configured to support these applications with large
virtual address spaces running concurrently. In the old days the rule of thumb was roughly
1.2 × the shared memory segment size, or 1.2 × RAM.
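You can see this gap on any Linux host by comparing a process's virtual size (VSZ) with its
resident set size (RSS); a minimal check, where the PID 12345 is purely illustrative:

    # VSZ (virtual size) is often far larger than RSS (resident set),
    # since mapped shared-memory and COW segments may never be touched.
    ps -o pid,vsz,rss,comm -p 12345

    # The same figures straight from /proc:
    grep -E 'VmSize|VmRSS' /proc/12345/status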

HTH

Mich Talebzadeh

http://talebzadehmich.wordpress.com

Publications due shortly:

Creating in-memory Data Grid for Trading Systems with Oracle TimesTen and Coherence Cache

NOTE: The information in this email is proprietary and confidential. This message is for the
designated recipient only; if you are not the intended recipient, you should destroy it immediately.
Any information in this message shall not be understood as given or endorsed by Peridale Ltd,
its subsidiaries or their employees, unless expressly so stated. It is the responsibility
of the recipient to ensure that this email is virus free; therefore neither Peridale Ltd,
its subsidiaries nor their employees accept any responsibility.

From: max scalf [mailto:oracle.blog3@gmail.com] 
Sent: 25 March 2015 23:05
To: user@hadoop.apache.org
Subject: Re: Swap requirements

Thank you, Harsh. Can you please explain what you mean when you said "just simple virtual memory
used by the process"? Doesn't virtual memory mean swap?

On Wednesday, March 25, 2015, Harsh J <harsh@cloudera.com> wrote:

The suggestion (regarding swappiness) is not about disabling swap so much as about 'not using
swap until really necessary'. When you run a constantly memory-consuming service such as HBase,
you'd ideally want the RAM to serve up as much as it can, which is what setting that swappiness
value helps do (otherwise the OS begins swapping well before its available physical RAM is
near full).
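For reference, a minimal sketch of applying that swappiness setting on a Linux node; the value
1 is the commonly recommended choice for HBase/Hadoop hosts, not something fixed by this thread:

    # Check the current value (many distributions default to 60):
    cat /proc/sys/vm/swappiness

    # Avoid swapping until memory is genuinely under pressure:
    sysctl -w vm.swappiness=1

    # Persist the setting across reboots:
    echo 'vm.swappiness=1' >> /etc/sysctl.conf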

The vmem-pmem ratio is something else entirely. The vmem of a process does not mean swap space
usage; it is just the virtual memory used by the process. I'd recommend disabling YARN's vmem
checks on today's OSes (but keep the pmem checks on). You can read some more on this at http://www.quora.com/Why-do-some-applications-use-significantly-more-virtual-memory-on-RHEL-6-compared-to-RHEL-5
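Concretely, a sketch of the relevant yarn-site.xml properties (these are the standard YARN
property names; 2.1 is the shipped default for the ratio):

    <!-- Disable the virtual-memory check; vmem is not swap usage,
         so the check gives false positives on modern OSes. -->
    <property>
      <name>yarn.nodemanager.vmem-check-enabled</name>
      <value>false</value>
    </property>

    <!-- Keep the physical-memory check on. -->
    <property>
      <name>yarn.nodemanager.pmem-check-enabled</name>
      <value>true</value>
    </property>

    <!-- Consulted only while the vmem check is enabled: a container
         is killed if its vmem exceeds ratio x its pmem allocation. -->
    <property>
      <name>yarn.nodemanager.vmem-pmem-ratio</name>
      <value>2.1</value>
    </property>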

On Thu, Mar 26, 2015 at 3:37 AM, Abdul I Mohammed <oracle.blog3@gmail.com> wrote:

Thanks Mich... any idea about the yarn.nodemanager.vmem-pmem-ratio parameter...

If data nodes do not require swap, then what about the above parameter? What is it used
for in YARN?

-- 

Harsh J