1) The memory on my machine is
user@localhost:~$ free -m
             total       used       free     shared    buffers     cached
Mem:        127932      31882      96049          0       1876      18229
-/+ buffers/cache:      11776     116156
Swap:       130043        369     129674

2) Mappers and reducers: I tried increasing the number of reducers to 4, and also tried other values (2, 3, and more than 4). But I am running Cascalog queries, which by default drop to a single reducer when I use global sort/count/max operations.
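For non-global parts of a query, the reducer count can be raised through the job configuration. A minimal sketch using Cascalog's `with-job-conf` (here `my-query` is a placeholder, not a query from this thread):

```clojure
;; Sketch only: my-query stands in for an actual Cascalog query.
;; Note that a fully global aggregation (e.g. a total count or a
;; global sort) still collapses to one reducer regardless of this.
(use 'cascalog.api)

(with-job-conf {"mapred.reduce.tasks" 4}
  (?- (stdout) my-query))
```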

3) mapred.job.reuse.jvm.num.tasks = -1
   io.sort.mb = 610
   mapred.child.java.opts, mapred.map.child.java.opts, and mapred.reduce.child.java.opts: these three properties I have not set.
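If those heap properties were to be set, they would go in mapred-site.xml. A minimal sketch with illustrative values (not recommendations for this cluster); note that the map/reduce-specific properties override the generic one when present:

```xml
<!-- mapred-site.xml: illustrative values only -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapred.map.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
<property>
  <name>mapred.reduce.child.java.opts</name>
  <value>-Xmx2048m</value>
</property>
```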

4) The attachment shows my cpuinfo.

On 11 Aug 2014, at 07:53, hadoop hive <hadoophive@gmail.com> wrote:

How much memory does it have, and how many mappers and reducers have you set, with how much heap size?

On Aug 11, 2014 11:17 AM, "Sindhu Hosamane" <sindhuht@gmail.com> wrote:

So I see 2 datanodes up and running when I run the jps command.
The machine on which I set up this 2-datanode Hadoop is very powerful: it has 96 cores.
But I still don't get significant performance from the 2 datanodes. How do I make sure both datanodes are being used?
Or why does performance not improve with 2 datanodes on the same machine, even though it is a powerful machine?
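One way to check whether both datanodes are actually being used is HDFS's own reporting commands (assuming a standard Hadoop install on the PATH; these need a running cluster):

```shell
# List live datanodes with per-node used/remaining capacity;
# both nodes should appear and show non-trivial usage after a job.
hadoop dfsadmin -report

# Show where blocks of a file actually landed; block replicas
# spread across both datanodes indicate both are serving data.
hadoop fsck / -files -blocks -locations | head -50
```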

Because before tweaking Hadoop with those mapred properties to improve performance, I want to know whether I get any performance benefit from 2 datanodes at all (since I am working on a powerful server).

Any advice would be helpful.