hadoop-common-user mailing list archives

From Sanford Rockowitz <rockow...@minsoft.com>
Subject Re: exceptions copying files into HDFS
Date Sun, 12 Dec 2010 17:39:46 GMT
Varadharajan,

I should have been more explicit in pointing out that I am trying to run 
in pseudo-distributed mode.  The dfs.replication value in hdfs-site.xml 
is 1.
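
Concretely, that means hdfs-site.xml carries just the usual single-replica 
stanza, something like:

   <property>
     <name>dfs.replication</name>
     <value>1</value>
   </property>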

I had looked at Michael Noll's tutorial, and the only differences I saw 
from the Apache basic tutorial for pseudo-distributed mode are:
  - running Hadoop as user hadoop rather than as the logged-in user
  - explicit port and Hadoop directory name settings in the 
configuration files
  - the files being copied into HDFS for the example

I have changed the configuration files to use Noll's ports and explicit 
directory names.  The result is unchanged.
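
For completeness, the settings taken from that tutorial are along these 
lines (the paths and ports here are from memory of the tutorial, shown 
only for illustration):

   core-site.xml:
     <property>
       <name>hadoop.tmp.dir</name>
       <value>/app/hadoop/tmp</value>
     </property>
     <property>
       <name>fs.default.name</name>
       <value>hdfs://localhost:54310</value>
     </property>

   mapred-site.xml:
     <property>
       <name>mapred.job.tracker</name>
       <value>localhost:54311</value>
     </property>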

Since all I am trying to do is run the command:
   hadoop fs -put conf conf input

it seemed to me that for testing purposes all I needed to do was start 
the HDFS daemons using the command:
   start-dfs.sh
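
(As a sanity check that HDFS actually came up, I assume something like
   hadoop dfsadmin -report
should show the datanode registered with non-zero configured capacity.)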

JobTracker and TaskTracker should be irrelevant for my test case.
However, I have started all the daemons with the command
   start-all.sh

The result is unchanged.

In reading the logs it seems to me that the key error is in the datanode 
log, where I find
    SocketException: operation not supported
on SocketAdaptor.getReceiveBufferSize() calls within 
DataXceiver.writeBlock().
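
(The datanode log I am reading is the one under $HADOOP_HOME/logs, i.e. 
something like
   $HADOOP_HOME/logs/hadoop-rock-datanode-ritter.log
assuming the default hadoop-<user>-datanode-<hostname>.log naming.)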

Thanks for your comments.

Sanford





On 12/12/2010 1:16 AM, Varadharajan Mukundan wrote:
> Hi,
>
>> jps reports DataNode, NameNode, and SecondaryNameNode as running:
>>
>> rock@ritter:/tmp/hadoop-rock>  jps
>> 31177 Jps
>> 29909 DataNode
>> 29751 NameNode
>> 30052 SecondaryNameNode
> On the master node, the output of "jps" will contain a TaskTracker,
> JobTracker, NameNode, SecondaryNameNode and DataNode (optional, depending on
> your config), and your slaves will have a TaskTracker and DataNode in their jps
> output. If you need more help on configuring Hadoop, I recommend you take
> a look at
> http://www.michael-noll.com/tutorials/running-hadoop-on-ubuntu-linux-single-node-cluster/
>
>
>
>
>> Here are the contents of the Hadoop node tree.  The only things that look
>> like log files are the dncp_block_verification.log.curr files, and those
>> are empty.
>> Note the presence of the in_use.lock files, which suggests that this node is
>> indeed being used.
>
> The logs will be in the "logs" directory in $HADOOP_HOME (the Hadoop home
> directory). Are you looking for logs in this directory?
>
>

