hadoop-mapreduce-user mailing list archives

From "AMARNATH, Balachandar" <BALACHANDAR.AMARN...@airbus.com>
Subject Hadoop cluster setup - could not see second datanode
Date Tue, 05 Mar 2013 11:34:41 GMT
Thanks for the information,

Now I am trying to set up HDFS on 2 nodes: one machine acting as both namenode and datanode, and a separate datanode. I use the following configuration in my hdfs-site.xml:

<configuration>

  <property>
    <name>fs.default.name</name>
    <value>localhost:9000</value>
  </property>

  <property>
    <name>dfs.data.dir</name>
    <value>/home/bala/data</value>
  </property>

  <property>
    <name>dfs.name.dir</name>
    <value>/home/bala/name</value>
  </property>
</configuration>
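For comparison, the usual form of this setting is below. This is a sketch, not the poster's verified fix: it assumes the namenode runs on machine1, and that with `fs.default.name` set to `localhost:9000` the second datanode would try to reach a namenode on its own machine. Conventionally `fs.default.name` lives in core-site.xml rather than hdfs-site.xml, and its value carries the `hdfs://` scheme and the namenode's real hostname:

```xml
<!-- core-site.xml (sketch; "machine1" is an assumed namenode hostname) -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- use the namenode's hostname, not localhost, so remote
         datanodes can register with it -->
    <value>hdfs://machine1:9000</value>
  </property>
</configuration>
```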


On the namenode, I have added the datanode hostnames (machine1 and machine2).
When I run 'start-all.sh', the logs show the datanode starting on both machines,
but when I open the web UI on the namenode I see only one live node (the namenode,
which is also configured as a datanode).

Any hint here would help.


With regards
Bala





From: Mahesh Balija [mailto:balijamahesh.mca@gmail.com]
Sent: 05 March 2013 14:15
To: user@hadoop.apache.org
Subject: Re: Hadoop file system

You can use HDFS alone in distributed mode to fulfill your requirement.
HDFS has a FileSystem Java API through which your client can interact with HDFS.
HDFS works best when you have a small number of large files rather than many
small files.
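As a sketch of the FileSystem API mentioned above (the hostname, port, and path are assumptions; it needs the Hadoop client jars on the classpath and a running HDFS, so it is illustrative rather than a tested program):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URI;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // hostname/port are assumptions; match your fs.default.name
        FileSystem fs = FileSystem.get(URI.create("hdfs://machine1:9000"), conf);

        // write a small file (true = overwrite if it exists)
        Path path = new Path("/user/bala/hello.txt");
        try (FSDataOutputStream out = fs.create(path, true)) {
            out.write("hello from the Java client\n".getBytes("UTF-8"));
        }

        // read it back through the same API
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(fs.open(path), "UTF-8"))) {
            System.out.println(in.readLine());
        }

        fs.close();
    }
}
```

The same `FileSystem` handle also exposes `listStatus`, `delete`, and `rename`, so plain file storage and access work without any MapReduce job.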

Best,
Mahesh Balija,
Calsoft Labs.
On Tue, Mar 5, 2013 at 10:43 AM, AMARNATH, Balachandar <BALACHANDAR.AMARNATH@airbus.com>
wrote:

Hi,

I am new to HDFS. In my Java application, I need to perform a similar operation over a large
number of files, and I would like to store those files on distributed machines. I don't think
I will need the map-reduce paradigm, but I would still like to use HDFS for file storage and
access. Is it possible (or a good idea) to use HDFS as a standalone system? And are Java APIs
available for working with HDFS so that I can read and write in a distributed environment?
Any thoughts here would be helpful.


With thanks and regards
Balachandar




The information in this e-mail is confidential. The contents may not be disclosed or used
by anyone other than the addressee. Access to this e-mail by anyone else is unauthorised.

If you are not the intended recipient, please notify Airbus immediately and delete this e-mail.

Airbus cannot accept any responsibility for the accuracy or completeness of this e-mail as
it has been sent over public networks. If you have any concerns over the content of this message
or its Accuracy or Integrity, please contact Airbus immediately.

All outgoing e-mails from Airbus are checked using regularly updated virus scanning software
but you should take whatever measures you deem to be appropriate to ensure that this message
and any attachments are virus free.



