Date: Tue, 28 Jun 2011 06:52:18 +0000 (UTC)
From: "Brahma Reddy Battula (JIRA)"
To: hdfs-issues@hadoop.apache.org
Reply-To: hdfs-issues@hadoop.apache.org
Message-ID: <516530426.914.1309243938292.JavaMail.tomcat@hel.zones.apache.org>
In-Reply-To: <127267612.6783.1308144887487.JavaMail.tomcat@hel.zones.apache.org>
Subject: [jira] [Commented] (HDFS-2076) ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

[
https://issues.apache.org/jira/browse/HDFS-2076?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13056347#comment-13056347 ]

Brahma Reddy Battula commented on HDFS-2076:
--------------------------------------------

Can you please check the DataNode logs? They will show what is happening on the DN side.

> ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(1
> -----------------------------------------------------------------------------
>
>                 Key: HDFS-2076
>                 URL: https://issues.apache.org/jira/browse/HDFS-2076
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: data-node
>    Affects Versions: 0.20.2
>        Environment: hadoop-hdfs
>            Reporter: chakali ranga swamy
>
> The DataNode log shows socket and DataStreamer problems, and I am unable to upload a text file to DFS. I deleted the tmp folders (dfs and mapred), reformatted with "hadoop namenode -format", and ran start-all.sh.
> The dfs folder now contains: data node, name node, secondary namenode. The mapred folder is empty.
>
> About disk space:
> linux-8ysi:/etc/hadoop/hadoop-0.20.2 # df -h
> Filesystem            Size  Used Avail Use% Mounted on
> /dev/sda5              25G   16G  7.4G  69% /
> udev                  987M  212K  986M   1% /dev
> /dev/sda7              42G  5.5G   34G  14% /home
>
> From http://localhost:50070/dfshealth.jsp:
> NameNode 'localhost:54310'
> Started: Wed Jun 15 04:13:14 IST 2011
> Version: 0.20.2, r911707
> Compiled: Fri Feb 19 08:07:34 UTC 2010 by chrisdo
> Upgrades: There are no upgrades in progress.
>
> Cluster Summary
> 10 files and directories, 0 blocks = 10 total. Heap Size is 15.5 MB / 966.69 MB (1%)
> Configured Capacity : 24.61 GB
> DFS Used : 24 KB
> Non DFS Used : 17.23 GB
> DFS Remaining : 7.38 GB
> DFS Used% : 0 %
> DFS Remaining% : 29.99 %
> Live Nodes : 1
> Dead Nodes : 0
>
> NameNode Storage:
> Storage Directory              Type             State
> /tmp/Testinghadoop/dfs/name    IMAGE_AND_EDITS  Active
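The advice above can be sketched as a quick check against the DataNode log. On Hadoop 0.20.x, a very common cause of "could only be replicated to 0 nodes" right after a "hadoop namenode -format" is a namespaceID mismatch: the NameNode gets a fresh namespaceID while the DataNode keeps its old storage directory, so the DataNode refuses to serve blocks. The log path and the sample log line below are hypothetical stand-ins for illustration, not taken from this report:

```shell
# Hedged sketch: grep the DataNode log for the classic post-format failure.
# DN_LOG and the sample line written into it are assumptions, not real data.
DN_LOG=/tmp/sample-datanode.log
cat > "$DN_LOG" <<'EOF'
ERROR org.apache.hadoop.hdfs.server.datanode.DataNode: java.io.IOException: Incompatible namespaceIDs in /tmp/Testinghadoop/dfs/data
EOF
if grep -q "Incompatible namespaceIDs" "$DN_LOG"; then
  echo "Likely cause: NameNode was reformatted while the DataNode kept its old storage directory."
fi
```

If the grep matches on a real log, the usual remedy on 0.20.x is to stop the DataNode, remove its data directory (the blocks on that node are lost), and start it again so it re-registers under the new namespaceID.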
> ----------------------------------------
> core-site.xml
> ----------------------------------------
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>hadoop.tmp.dir</name>
>     <value>/tmp/Testinghadoop/</value>
>     <description>A base for other temporary directories.</description>
>   </property>
>   <property>
>     <name>fs.default.name</name>
>     <value>hdfs://localhost:54310</value>
>     <description>The name of the default file system. A URI whose
>     scheme and authority determine the FileSystem implementation. The
>     uri's scheme determines the config property (fs.SCHEME.impl) naming
>     the FileSystem implementation class. The uri's authority is used to
>     determine the host, port, etc. for a filesystem.</description>
>   </property>
> </configuration>
> ----------------------------------------
> hdfs-site.xml
> ----------------------------------------
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>dfs.permissions</name>
>     <value>true</value>
>     <description>
>     If "true", enable permission checking in HDFS.
>     If "false", permission checking is turned off,
>     but all other behavior is unchanged.
>     Switching from one parameter value to the other does not change the mode,
>     owner, or group of files or directories.
>     </description>
>   </property>
>   <property>
>     <name>dfs.replication</name>
>     <value>1</value>
>     <description>Default block replication.
>     The actual number of replications can be specified when the file is created.
>     The default is used if replication is not specified at create time.
>     </description>
>   </property>
> </configuration>
> ----------------------------------------
> mapred-site.xml
> ----------------------------------------
> <?xml version="1.0"?>
> <configuration>
>   <property>
>     <name>mapred.job.tracker</name>
>     <value>localhost:54311</value>
>     <description>The host and port that the MapReduce job tracker runs
>     at. If "local", then jobs are run in-process as a single map
>     and reduce task.</description>
>   </property>
> </configuration>
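Given the hadoop.tmp.dir above, HDFS storage defaults to /tmp/Testinghadoop/dfs, with the NameNode and DataNode each recording a namespaceID in their VERSION file. A hedged sketch for comparing the two (the directory layout is an assumption based on the 0.20 defaults, not confirmed in this report):

```shell
# Hedged sketch: compare the namespaceID recorded by the NameNode and the
# DataNode. After a "hadoop namenode -format" the two can diverge, and a
# DataNode with the old ID cannot accept blocks. Paths are assumed defaults.
check_namespace_ids() {
  dfs_dir=$1
  for role in name data; do
    version_file="$dfs_dir/$role/current/VERSION"
    if [ -f "$version_file" ]; then
      # Prints e.g. "name: namespaceID=123" when the VERSION file exists
      echo "$role: $(grep '^namespaceID=' "$version_file")"
    fi
  done
}

check_namespace_ids /tmp/Testinghadoop/dfs
```

If the two printed IDs differ, that would explain the DataNode rejecting writes even though dfsadmin reports it as live; clearing the DataNode's data directory and restarting it is the usual fix on 0.20.x.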
> ----------------------------------------------------------------------------------
> Please give suggestions about this error:
> ----------------------------------------------------------------------------------
> linux-8ysi:/etc/hadoop/hadoop-0.20.2/conf # hadoop fsck /
> RUN_JAVA
> /usr/java/jre1.6.0_25/bin/java
> .Status: HEALTHY
>  Total size: 0 B
>  Total dirs: 7
>  Total files: 1 (Files currently being written: 1)
>  Total blocks (validated): 0
>  Minimally replicated blocks: 0
>  Over-replicated blocks: 0
>  Under-replicated blocks: 0
>  Mis-replicated blocks: 0
>  Default replication factor: 1
>  Average block replication: 0.0
>  Corrupt blocks: 0
>  Missing replicas: 0
>  Number of data-nodes: 1
>  Number of racks: 1
> The filesystem under path '/' is HEALTHY
>
> linux-8ysi:/etc/hadoop/hadoop-0.20.2/conf # hadoop dfsadmin -report
> RUN_JAVA
> /usr/java/jre1.6.0_25/bin/java
> Configured Capacity: 26425618432 (24.61 GB)
> Present Capacity: 7923564544 (7.38 GB)
> DFS Remaining: 7923539968 (7.38 GB)
> DFS Used: 24576 (24 KB)
> DFS Used%: 0%
> Under replicated blocks: 0
> Blocks with corrupt replicas: 0
> Missing blocks: 0
> -------------------------------------------------
> Datanodes available: 1 (1 total, 0 dead)
> Name: 127.0.0.1:50010
> Decommission Status : Normal
> Configured Capacity: 26425618432 (24.61 GB)
> DFS Used: 24576 (24 KB)
> Non DFS Used: 18502053888 (17.23 GB)
> DFS Remaining: 7923539968 (7.38 GB)
> DFS Used%: 0%
> DFS Remaining%: 29.98%
> Last contact: Wed Jun 15 05:54:00 IST 2011
>
> I got this error:
> ----------------------------
> linux-8ysi:/etc/hadoop/hadoop-0.20.2 # hadoop dfs -put spo.txt In
> RUN_JAVA
> /usr/java/jre1.6.0_25/bin/java
> 11/06/15 04:50:18 WARN hdfs.DFSClient: DataStreamer Exception: org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Unknown Source)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
> 11/06/15 04:50:18 WARN hdfs.DFSClient: Error Recovery for block null bad datanode[0] nodes == null
> 11/06/15 04:50:18 WARN hdfs.DFSClient: Could not get block locations. Source file "/user/root/In/spo.txt" - Aborting...
> put: java.io.IOException: File /user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
> 11/06/15 04:50:18 ERROR hdfs.DFSClient: Exception closing file /user/root/In/spo.txt : org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Unknown Source)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File /user/root/In/spo.txt could only be replicated to 0 nodes, instead of 1
>         at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>         at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>         at sun.reflect.GeneratedMethodAccessor5.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>         at java.security.AccessController.doPrivileged(Native Method)
>         at javax.security.auth.Subject.doAs(Unknown Source)
>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>         at org.apache.hadoop.ipc.Client.call(Client.java:740)
>         at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>         at $Proxy0.addBlock(Unknown Source)
>         at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>         at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
>         at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
>         at java.lang.reflect.Method.invoke(Unknown Source)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>         at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>         at $Proxy0.addBlock(Unknown Source)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:2937)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:2819)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2000(DFSClient.java:2102)
>         at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2288)
>
> Regards,
> Ranga Swamy
> 8904524975

--
This message is automatically generated by JIRA.
For more information on JIRA, see: http://www.atlassian.com/software/jira