From: Harsh J
Date: Sat, 16 Jul 2011 16:43:09 +0530
Subject: Re: could only be replicated to 0 nodes, instead of 1
To: hdfs-user@hadoop.apache.org

The actual check is whether at least five blocks' worth of space remains available.

On Sat, Jul 16, 2011 at 1:52 PM, Thomas Anderson wrote:
> Harsh,
>
> Thanks, you are right. The problem stemmed from the tmp directory not
> being large enough. After changing the tmp dir to another location, the
> problem went away.
>
> But I remember the default block size in HDFS is 64MB, so shouldn't it
> at least allow one file, whose actual size on local disk is smaller
> than 1KB, to be uploaded?
>
> Thanks again for the advice.
>
> On Fri, Jul 15, 2011 at 7:49 PM, Harsh J wrote:
>> Thomas,
>>
>> Your problem might lie simply with the virtual nodes' DNs using /tmp,
>> which is backed by tmpfs -- and that somehow causes the reported free
>> space to show up as 0 in reports to the NN (master).
>>
>> tmpfs                 101M   44K  101M   1% /tmp
>>
>> This causes your trouble: the NN can't choose a suitable DN to write
>> to, because it determines that none has at least a block size worth
>> of space (64MB by default) available for writes.
>>
>> You can resolve it as follows:
>>
>> 1. Stop DFS completely.
>>
>> 2. Create a directory somewhere under the root filesystem (I use
>> Cloudera's distro, and its default configured location for data files
>> is /var/lib/hadoop-0.20/cache/, if you need an idea for a location)
>> and set it as your hadoop.tmp.dir in core-site.xml on all the nodes.
>>
>> 3. Reformat your NameNode (hadoop namenode -format, say Y) and restart
>> DFS. Things _should_ be OK now.
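>>
>> After the restart, one way to sanity-check that the DNs are reporting
>> real capacity to the NN is the dfsadmin report (a quick sketch; the
>> exact output layout varies a bit between versions):
>>
>>   hadoop dfsadmin -report | grep -i 'remaining'
>>
>> Every live DN should show a non-zero "DFS Remaining" value there.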
>>
>> Config example (core-site.xml):
>>
>>   <property>
>>     <name>hadoop.tmp.dir</name>
>>     <value>/var/lib/hadoop-0.20/cache</value>
>>   </property>
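>>
>> If you'd rather not hang everything off hadoop.tmp.dir, you can instead
>> point the storage directories at explicit locations in hdfs-site.xml.
>> The paths below are only placeholders -- use whatever suits your boxes
>> (dfs.name.dir matters on the NN, dfs.data.dir on the DNs):
>>
>>   <property>
>>     <name>dfs.name.dir</name>
>>     <value>/var/lib/hadoop-0.20/cache/dfs/name</value>
>>   </property>
>>   <property>
>>     <name>dfs.data.dir</name>
>>     <value>/var/lib/hadoop-0.20/cache/dfs/data</value>
>>   </property>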
>>
>> Let us know if this still doesn't get your dev cluster up and running
>> for action :)
>>
>> On Fri, Jul 15, 2011 at 4:40 PM, Thomas Anderson wrote:
>>> When partitioning, I remember only / and swap were specified for all
>>> nodes during creation. So I think /tmp is also mounted under /, which
>>> should have a size of around 9GB. The total hard disk size specified
>>> is 10GB.
>>>
>>> df -kh shows:
>>>
>>> server01:
>>> /dev/sda1             9.4G  2.3G  6.7G  25% /
>>> tmpfs                 5.0M  4.0K  5.0M   1% /lib/init/rw
>>> tmpfs                 5.0M     0  5.0M   0% /var/run/lock
>>> tmpfs                 101M  132K  101M   1% /tmp
>>> udev                  247M     0  247M   0% /dev
>>> tmpfs                 101M     0  101M   0% /var/run/shm
>>> tmpfs                  51M  176K   51M   1% /var/run
>>>
>>> server02:
>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>> tmpfs                 5.0M  4.0K  5.0M   1% /lib/init/rw
>>> tmpfs                 5.0M     0  5.0M   0% /var/run/lock
>>> tmpfs                 101M   44K  101M   1% /tmp
>>> udev                  247M     0  247M   0% /dev
>>> tmpfs                 101M     0  101M   0% /var/run/shm
>>> tmpfs                  51M  176K   51M   1% /var/run
>>>
>>> server03:
>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>> tmpfs                 5.0M  4.0K  5.0M   1% /lib/init/rw
>>> tmpfs                 5.0M     0  5.0M   0% /var/run/lock
>>> tmpfs                 101M   44K  101M   1% /tmp
>>> udev                  247M     0  247M   0% /dev
>>> tmpfs                 101M     0  101M   0% /var/run/shm
>>> tmpfs                  51M  176K   51M   1% /var/run
>>>
>>> server04:
>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>> tmpfs                 5.0M  4.0K  5.0M   1% /lib/init/rw
>>> tmpfs                 5.0M     0  5.0M   0% /var/run/lock
>>> tmpfs                 101M   44K  101M   1% /tmp
>>> udev                  247M     0  247M   0% /dev
>>> tmpfs                 101M     0  101M   0% /var/run/shm
>>> tmpfs                  51M  176K   51M   1% /var/run
>>>
>>> server05:
>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>> tmpfs                 5.0M  4.0K  5.0M   1% /lib/init/rw
>>> tmpfs                 5.0M     0  5.0M   0% /var/run/lock
>>> tmpfs                 101M   44K  101M   1% /tmp
>>> udev                  247M     0  247M   0% /dev
>>> tmpfs                 101M     0  101M   0% /var/run/shm
>>> tmpfs                  51M  176K   51M   1% /var/run
>>>
>>> In addition, the output of du -sk /tmp/hadoop-user/dfs is:
>>>
>>> server02:
>>> 8       /tmp/hadoop-user/dfs/
>>>
>>> server03:
>>> 8       /tmp/hadoop-user/dfs/
>>>
>>> server04:
>>> 8       /tmp/hadoop-user/dfs/
>>>
>>> server05:
>>> 8       /tmp/hadoop-user/dfs/
>>>
>>> On Fri, Jul 15, 2011 at 7:01 PM, Harsh J wrote:
>>>> (P.S. I asked that because, if you look at your NN's live nodes
>>>> table, the reported space is all 0.)
>>>>
>>>> What's the output of:
>>>>
>>>> du -sk /tmp/hadoop-user/dfs on all your DNs?
>>>>
>>>> On Fri, Jul 15, 2011 at 4:01 PM, Harsh J wrote:
>>>>> Thomas,
>>>>>
>>>>> Is your /tmp mount point also under /, or is it separate? Your
>>>>> dfs.data.dir is /tmp/hadoop-user/dfs/data on all DNs, and if that is
>>>>> separately mounted, what's the available space on it?
>>>>>
>>>>> (It's a bad idea in production to keep things like dfs.name.dir and
>>>>> dfs.data.dir at their defaults under /tmp, though --
>>>>> reconfigure+restart as necessary.)
>>>>>
>>>>> On Fri, Jul 15, 2011 at 3:47 PM, Thomas Anderson wrote:
>>>>>> 1.) The disk usage (with df -kh) on the namenode (server01):
>>>>>>
>>>>>> Filesystem            Size  Used Avail Use% Mounted on
>>>>>> /dev/sda1             9.4G  2.3G  6.7G  25% /
>>>>>>
>>>>>> and on the datanodes (server02 ~ server05):
>>>>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>>>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>>>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>>>>> /dev/sda1             9.4G  2.2G  6.8G  25% /
>>>>>>
>>>>>> 2.) How can I check whether a datanode is busy? The environment is
>>>>>> only for testing, so no other user processes are running at that
>>>>>> moment. It is also a fresh installation, so only the packages Hadoop
>>>>>> requires are installed, i.e. hadoop and the JDK.
>>>>>>
>>>>>> 3.) dfs.block.size is not set in hdfs-site.xml on either the
>>>>>> datanodes or the namenode, because this setup is only for testing. I
>>>>>> thought it would use the default value, which should be 512?
>>>>>>
>>>>>> 4.) What might be a good way to quickly check whether the network is
>>>>>> unstable? I check the health page, e.g. server01:50070/dfshealth.jsp,
>>>>>> where the live nodes are up and Last Contact varies each time I
>>>>>> reload the page.
>>>>>>
>>>>>> Node      Last Contact  Admin State  Configured Capacity (GB)  Used (GB)  Non DFS Used (GB)  Remaining (GB)  Used (%)  Remaining (%)  Blocks
>>>>>> server02  2             In Service   0.1                       0          0                  0.1             0.01      99.96          0
>>>>>> server03  0             In Service   0.1                       0          0                  0.1             0.01      99.96          0
>>>>>> server04  1             In Service   0.1                       0          0                  0.1             0.01      99.96          0
>>>>>> server05  2             In Service   0.1                       0          0                  0.1             0.01      99.96          0
>>>>>>
>>>>>> 5.) Only the command `hadoop fs -put /tmp/testfile test` is issued,
>>>>>> just to test whether the installation works. The file (e.g. testfile)
>>>>>> is removed first (hadoop fs -rm test/testfile), then uploaded again
>>>>>> with the put command.
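>>>>>>
>>>>>> In other words, the whole test is roughly this pair of commands,
>>>>>> using the same paths as above:
>>>>>>
>>>>>>   hadoop fs -rm test/testfile
>>>>>>   hadoop fs -put /tmp/testfile test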
>>>>>>
>>>>>> The logs are listed below:
>>>>>>
>>>>>> namenode:
>>>>>> server01: http://pastebin.com/TLpDmmPx
>>>>>>
>>>>>> datanodes:
>>>>>> server02: http://pastebin.com/pdE5XKfi
>>>>>> server03: http://pastebin.com/4aV7ECCV
>>>>>> server04: http://pastebin.com/tF7HiRZj
>>>>>> server05: http://pastebin.com/5qwSPrvU
>>>>>>
>>>>>> Please let me know if more information needs to be provided.
>>>>>>
>>>>>> I really appreciate your suggestions.
>>>>>>
>>>>>> Thank you.
>>>>>>
>>>>>> On Fri, Jul 15, 2011 at 4:54 PM, Brahma Reddy wrote:
>>>>>>> Hi,
>>>>>>>
>>>>>>> This exception (could only be replicated to 0 nodes, instead of 1)
>>>>>>> means that no DataNode is available to the NameNode.
>>>>>>>
>>>>>>> These are the cases in which a DataNode may not be available to the
>>>>>>> NameNode:
>>>>>>>
>>>>>>> 1) The DataNode disk is full.
>>>>>>>
>>>>>>> 2) The DataNode is busy with block reports and block scanning.
>>>>>>>
>>>>>>> 3) The block size is set to a negative value (dfs.block.size in
>>>>>>> hdfs-site.xml).
>>>>>>>
>>>>>>> 4) The primary DataNode goes down while a write is in progress (any
>>>>>>> network fluctuation between the NameNode and DataNode machines).
>>>>>>>
>>>>>>> 5) Whenever we append a partial chunk and call sync, the client
>>>>>>> should keep the previous data in its buffer for subsequent
>>>>>>> partial-chunk appends. For example, after appending "a" and calling
>>>>>>> sync, when I append again the buffer should contain "ab". On the
>>>>>>> server side, when the chunk is not a multiple of 512, it compares
>>>>>>> the CRC of the data present in the block file against the CRC
>>>>>>> present in the meta file; but while constructing the CRC for the
>>>>>>> data in the block, it always compares only up to the initial offset.
>>>>>>>
>>>>>>> For more analysis, please check the DataNode logs.
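>>>>>>>
>>>>>>> A quick way to scan them is something like this (just a sketch; the
>>>>>>> log location depends on your install, e.g. $HADOOP_HOME/logs for a
>>>>>>> tarball install):
>>>>>>>
>>>>>>>   grep -iE 'warn|error|exception' $HADOOP_HOME/logs/*datanode*.log | tail -n 50
>>>>>>>
>>>>>>> That usually surfaces disk-full or connectivity problems quickly.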
>>>>>>>
>>>>>>> Warm Regards
>>>>>>>
>>>>>>> Brahma Reddy
>>>>>>>
>>>>>>> -----Original Message-----
>>>>>>> From: Thomas Anderson [mailto:t.dt.aanderson@gmail.com]
>>>>>>> Sent: Friday, July 15, 2011 9:09 AM
>>>>>>> To: hdfs-user@hadoop.apache.org
>>>>>>> Subject: could only be replicated to 0 nodes, instead of 1
>>>>>>>
>>>>>>> I have a fresh Hadoop 0.20.2 installed on VirtualBox 4.0.8 with JDK
>>>>>>> 1.6.0_26. The problem is that when trying to put a file into HDFS,
>>>>>>> it throws the error `org.apache.hadoop.ipc.RemoteException:
>>>>>>> java.io.IOException: File /path/to/file could only be replicated to
>>>>>>> 0 nodes, instead of 1'; however, there is no problem creating a
>>>>>>> folder, as the ls command prints:
>>>>>>>
>>>>>>> Found 1 items
>>>>>>> drwxr-xr-x   - user supergroup          0 2011-07-15 11:09 /user/user/test
>>>>>>>
>>>>>>> I also tried flushing the firewall (removing all iptables
>>>>>>> restrictions), but the error is still thrown when uploading a file
>>>>>>> from the local fs (hadoop fs -put /tmp/x test).
>>>>>>>
>>>>>>> The name node log shows:
>>>>>>>
>>>>>>> 2011-07-15 10:42:43,491 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>>> BLOCK* NameSystem.registerDatanode: node registration from
>>>>>>> aaa.bbb.ccc.22:50010 storage DS-929017105-aaa.bbb.ccc.22-50010-1310697763488
>>>>>>> 2011-07-15 10:42:43,495 INFO org.apache.hadoop.net.NetworkTopology:
>>>>>>> Adding a new node: /default-rack/aaa.bbb.ccc.22:50010
>>>>>>> 2011-07-15 10:42:44,169 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>>> BLOCK* NameSystem.registerDatanode: node registration from
>>>>>>> aaa.bbb.ccc.35:50010 storage DS-884574392-aaa.bbb.ccc.35-50010-1310697764164
>>>>>>> 2011-07-15 10:42:44,170 INFO org.apache.hadoop.net.NetworkTopology:
>>>>>>> Adding a new node: /default-rack/aaa.bbb.ccc.35:50010
>>>>>>> 2011-07-15 10:42:44,507 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>>> BLOCK* NameSystem.registerDatanode: node registration from
>>>>>>> aaa.bbb.ccc.11:50010 storage DS-1537583073-aaa.bbb.ccc.11-50010-1310697764488
>>>>>>> 2011-07-15 10:42:44,507 INFO org.apache.hadoop.net.NetworkTopology:
>>>>>>> Adding a new node: /default-rack/aaa.bbb.ccc.11:50010
>>>>>>> 2011-07-15 10:42:45,796 INFO org.apache.hadoop.hdfs.StateChange:
>>>>>>> BLOCK* NameSystem.registerDatanode: node registration from
>>>>>>> 140.127.220.25:50010 storage DS-1500589162-aaa.bbb.ccc.25-50010-1310697765386
>>>>>>> 2011-07-15 10:42:45,797 INFO org.apache.hadoop.net.NetworkTopology:
>>>>>>> Adding a new node: /default-rack/aaa.bbb.ccc.25:50010
>>>>>>>
>>>>>>> And all datanodes have messages similar to:
>>>>>>>
>>>>>>> 2011-07-15 10:42:46,562 INFO
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: using
>>>>>>> BLOCKREPORT_INTERVAL of 3600000msec Initial delay: 0msec
>>>>>>> 2011-07-15 10:42:47,163 INFO
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
>>>>>>> blocks got processed in 3 msecs
>>>>>>> 2011-07-15 10:42:47,187 INFO
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic
>>>>>>> block scanner.
>>>>>>> 2011-07-15 11:19:42,931 INFO
>>>>>>> org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0
>>>>>>> blocks got processed in 1 msecs
>>>>>>>
>>>>>>> The command `hadoop fsck /` displays:
>>>>>>>
>>>>>>> Status: HEALTHY
>>>>>>>  Total size:    0 B
>>>>>>>  Total dirs:    3
>>>>>>>  Total files:   0 (Files currently being written: 1)
>>>>>>>  Total blocks (validated):      0
>>>>>>>  Minimally replicated blocks:   0
>>>>>>>  Over-replicated blocks:        0
>>>>>>>  Under-replicated blocks:       0
>>>>>>>  Mis-replicated blocks:         0
>>>>>>>  Default replication factor:    3
>>>>>>>  Average block replication:     0.0
>>>>>>>  Corrupt blocks:                0
>>>>>>>  Missing replicas:              0
>>>>>>>  Number of data-nodes:          4
>>>>>>>
>>>>>>> The settings in conf include:
>>>>>>>
>>>>>>> - Master node:
>>>>>>> core-site.xml
>>>>>>>   <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>hdfs://lab01:9000/</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>> hdfs-site.xml
>>>>>>>   <property>
>>>>>>>     <name>dfs.replication</name>
>>>>>>>     <value>3</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>> - Slave nodes:
>>>>>>> core-site.xml
>>>>>>>   <property>
>>>>>>>     <name>fs.default.name</name>
>>>>>>>     <value>hdfs://lab01:9000/</value>
>>>>>>>   </property>
>>>>>>>
>>>>>>> hdfs-site.xml
>>>>>>>   <property>
>>>>>>>     <name>dfs.replication</name>
>>>>>>>     <value>3</value>
>>>>>>>   </property>
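>>>>>>>
>>>>>>> Nothing else is overridden, so (if I understand the defaults
>>>>>>> correctly) the storage directories should resolve to:
>>>>>>>
>>>>>>>   dfs.name.dir = ${hadoop.tmp.dir}/dfs/name -> /tmp/hadoop-user/dfs/name
>>>>>>>   dfs.data.dir = ${hadoop.tmp.dir}/dfs/data -> /tmp/hadoop-user/dfs/data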
>>>>>>>
>>>>>>> Am I missing any configuration? Or is there any other place I can check?
>>>>>>>
>>>>>>> Thanks.
>>>>>
>>>>> --
>>>>> Harsh J
>>>>
>>>> --
>>>> Harsh J
>>
>> --
>> Harsh J

--
Harsh J