hadoop-general mailing list archives

From: Something Something <mailinglist...@gmail.com>
Subject: Re: Error: could only be replicated to 0 nodes, instead of 1
Date: Mon, 22 Mar 2010 17:39:33 GMT
Everything in the datanode log looks normal to me.  Here it is.  Let me know if
you see any problem.  Thanks.

2010-03-22 09:55:02,578 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting DataNode
STARTUP_MSG:   host = training-vm/127.0.0.1
STARTUP_MSG:   args = []
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build =
https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r
911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
2010-03-22 09:55:18,091 INFO org.apache.hadoop.hdfs.server.common.Storage:
Storage directory /home/training/hadoop/dfs/data is not formatted.
2010-03-22 09:55:18,093 INFO org.apache.hadoop.hdfs.server.common.Storage:
Formatting ...
2010-03-22 09:55:18,527 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Registered
FSDatasetStatusMBean
2010-03-22 09:55:18,533 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Opened info server at 50010
2010-03-22 09:55:18,560 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Balancing bandwith is
1048576 bytes/s
2010-03-22 09:55:33,986 INFO org.mortbay.log: Logging to
org.slf4j.impl.Log4jLoggerAdapter(org.mortbay.log) via
org.mortbay.log.Slf4jLog
2010-03-22 09:55:34,386 INFO org.apache.hadoop.http.HttpServer: Port
returned by webServer.getConnectors()[0].getLocalPort() before open() is -1.
Opening the listener on 50075
2010-03-22 09:55:34,386 INFO org.apache.hadoop.http.HttpServer:
listener.getLocalPort() returned 50075
webServer.getConnectors()[0].getLocalPort() returned 50075
2010-03-22 09:55:34,386 INFO org.apache.hadoop.http.HttpServer: Jetty bound
to port 50075
2010-03-22 09:55:34,387 INFO org.mortbay.log: jetty-6.1.14
2010-03-22 09:57:00,445 INFO org.mortbay.log: Started
SelectChannelConnector@0.0.0.0:50075
2010-03-22 09:57:00,480 INFO org.apache.hadoop.metrics.jvm.JvmMetrics:
Initializing JVM Metrics with processName=DataNode, sessionId=null
2010-03-22 09:57:15,652 INFO org.apache.hadoop.ipc.metrics.RpcMetrics:
Initializing RPC Metrics with hostName=DataNode, port=50020
2010-03-22 09:57:15,676 INFO org.apache.hadoop.ipc.Server: IPC Server
Responder: starting
2010-03-22 09:57:15,713 INFO org.apache.hadoop.ipc.Server: IPC Server
listener on 50020: starting
2010-03-22 09:57:15,738 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 0 on 50020: starting
2010-03-22 09:57:15,766 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 1 on 50020: starting
2010-03-22 09:57:15,836 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: dnRegistration =
DatanodeRegistration(localhost:50010, storageID=, infoPort=50075,
ipcPort=50020)
2010-03-22 09:57:15,880 INFO org.apache.hadoop.ipc.Server: IPC Server
handler 2 on 50020: starting
2010-03-22 09:57:15,988 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: New storage id
DS-3114065-127.0.0.1-50010-1269277035840 is assigned to data-node
127.0.0.1:50010
2010-03-22 09:57:16,022 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: DatanodeRegistration(
127.0.0.1:50010, storageID=DS-3114065-127.0.0.1-50010-1269277035840,
infoPort=50075, ipcPort=50020)In DataNode.run, data =
FSDataset{dirpath='/home/training/hadoop/dfs/data/current'}
2010-03-22 09:57:16,032 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: using BLOCKREPORT_INTERVAL
of 3600000msec Initial delay: 0msec
2010-03-22 09:57:16,243 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got
processed in 16 msecs
2010-03-22 09:57:16,258 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: Starting Periodic block
scanner.
2010-03-22 10:34:44,492 INFO
org.apache.hadoop.hdfs.server.datanode.DataNode: BlockReport of 0 blocks got
processed in 3 msecs
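
The log shows the datanode starting up and registering at 127.0.0.1:50010 without
any errors.  As an extra sanity check (a minimal sketch, assuming a stock 0.20.2
install with commands run from $HADOOP_HOME), these commands show whether the
namenode actually sees a live datanode:

  $ jps                            # should list DataNode alongside NameNode
  $ bin/hadoop dfsadmin -report    # should report 1 live datanode with nonzero capacity

If dfsadmin -report shows zero live datanodes, the "replicated to 0 nodes" error
is expected even though the DataNode process itself is running.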




On Mon, Mar 22, 2010 at 10:33 AM, Jay Booth <jaybooth@gmail.com> wrote:

> Usually when I see this in pseudo-distributed, it means the datanode isn't
> up.  Check logs?
>
> On Mon, Mar 22, 2010 at 1:11 PM, Something Something <
> mailinglists19@gmail.com> wrote:
>
> > After upgrading to hadoop-0.20.2, I started seeing these messages in the
> > log:
> >
> >
> > 2010-03-22 09:56:28,393 INFO org.apache.hadoop.ipc.Server: IPC Server
> > handler 1 on 9000, call addBlock(/home/training/hadoop/mapred/system/jobtracker.info,
> > DFSClient_-1956918169) from 127.0.0.1:41067: error: java.io.IOException:
> > File /home/training/hadoop/mapred/system/jobtracker.info could only be
> > replicated to 0 nodes, instead of 1
> > java.io.IOException: File /home/training/hadoop/mapred/system/jobtracker.info
> > could only be replicated to 0 nodes, instead of 1
> >        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
> >        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
> >        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
> >        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
> >        at java.lang.reflect.Method.invoke(Method.java:597)
> >        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
> >        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
> >        at java.security.AccessController.doPrivileged(Native Method)
> >        at javax.security.auth.Subject.doAs(Subject.java:396)
> >        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
> >
> >
> > I am running in pseudo-distributed mode.  Before upgrading I used to see
> > these messages occasionally, but they were harmless.  Now I see them when I
> > try to copy files to DFS using -copyFromLocal, and the files no longer get
> > copied :(
> >
> > It's quite possible this has nothing to do with the Hadoop upgrade and that
> > something changed in my environment instead, but any help will be
> > appreciated.  Thanks.
> >
>
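
For reference, the failing copy quoted above amounts to something like the
following (the local and DFS paths here are only illustrative):

  $ bin/hadoop fs -copyFromLocal /tmp/sample.txt /user/training/sample.txt
  $ bin/hadoop fs -ls /user/training

With no live datanode, the -copyFromLocal step fails with the same "could only
be replicated to 0 nodes, instead of 1" IOException that shows up in the
namenode log.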
