hadoop-hdfs-user mailing list archives

From "Yuvrajsinh Chauhan" <yuvraj.chau...@elitecore.com>
Subject RE: HDFS Installation / Configuration
Date Tue, 17 Jul 2012 13:25:39 GMT
Dear Tariq / Bijoy,

Please ignore my previous mail. I can now see the directories using the
following commands:

[hadoop@rac1 bin]$ ./hadoop fs -ls /
Found 5 items
drwxr-xr-x   - hadoop hadoop              0 2012-07-16 14:23 /test
drwxr-xr-x   - hadoop hadoop              0 2012-07-17 13:29 /test1
drwxr-xr-x   - hadoop supergroup          0 2012-07-17 18:03 /user
drwxr-xr-x   - hadoop supergroup          0 2012-07-17 14:11 /usr
drwxr-xr-x   - hadoop supergroup          0 2012-07-17 17:41 /yuvi

I can see this data from both the nodes.

Now I am exploring more by reading the help.
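For reference, the built-in help can be listed like this (a sketch; the exact output and command set vary by Hadoop version):

```shell
# List all FsShell commands and their usage
./hadoop fs -help

# Show help for a single command, e.g. ls
./hadoop fs -help ls
```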

Regards,

Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG
Elitecore Technologies Pvt. Ltd.
904, Silicon Tower || Off C.G.Road
Behind Pariseema Building || Ahmedabad || INDIA
[GSM]: +91 9727746022


-----Original Message-----
From: Yuvrajsinh Chauhan [mailto:yuvraj.chauhan@elitecore.com] 
Sent: 17 July 2012 18:49
To: hdfs-user@hadoop.apache.org
Subject: RE: HDFS Installation / Configuration

Dear Tariq,

The values of both properties are already configured in the hdfs-site.xml file.

<property>
  <name>dfs.name.dir</name>
  <!-- Directories created with proper Hadoop user permissions (R/W) -->
  <value>/usr/local/hadoopstorage/namenode</value>
  <final>true</final>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/usr/local/hadoopstorage/datanode</value>
  <final>true</final>
</property>
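One way to confirm these directories are actually in use (a sketch, assuming the paths configured above and a formatted, running cluster) is to inspect them on the local filesystem:

```shell
# On the NameNode: metadata (fsimage, edits) lives under dfs.name.dir
ls /usr/local/hadoopstorage/namenode/current

# On a DataNode: block files (blk_*) live under dfs.data.dir
ls /usr/local/hadoopstorage/datanode/current
```

Note that the DataNode directory holds raw block files, not a mirror of the HDFS directory tree, which is why directories created with `-mkdir` are not visible as folders at the OS level.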
============================================================================
Command for creating a Directory

[hadoop@rac1 bin]$ ./hadoop fs -mkdir /yuvi
[hadoop@rac1 bin]$
[hadoop@rac1 bin]$ ./hadoop fs -ls
ls: Cannot access .: No such file or directory.
[hadoop@rac1 bin]$
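The `ls: Cannot access .` error above usually just means the user's HDFS home directory does not exist yet: with no path argument, `hadoop fs -ls` lists `/user/<username>`. A possible fix (assuming the `hadoop` user):

```shell
# Create the hadoop user's home directory in HDFS
./hadoop fs -mkdir /user/hadoop

# A bare -ls now resolves to /user/hadoop and succeeds
./hadoop fs -ls
```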

But when I go to that path on the local filesystem, I cannot see any
directory. However, I can see the directories from the GUI.
Also, I cannot find any errors in the logs on either node.



Regards,

Yuvrajsinh Chauhan || Sr. DBA || CRESTEL-PSG Elitecore Technologies Pvt.
Ltd.
904, Silicon Tower || Off C.G.Road
Behind Pariseema Building || Ahmedabad || INDIA
[GSM]: +91 9727746022


-----Original Message-----
From: Mohammad Tariq [mailto:dontariq@gmail.com]
Sent: 17 July 2012 18:22
To: hdfs-user@hadoop.apache.org
Subject: Re: HDFS Installation / Configuration

Hi Yuvrajsinh,

        There is absolutely nothing to be sorry for. Have you added the
following properties in your 'hdfs-site.xml' file?
- dfs.name.dir
- dfs.data.dir

By default, these properties point to the /tmp directory. It is
advisable to create two directories on your local FS and assign their
full paths as the values of the properties specified above. These are the
locations where your metadata and actual data will be stored.
(Another important reason to set these properties is that the /tmp
directory is emptied on each restart, so all the data and HDFS namespace
info would be lost.) Hope this helps.
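A quick way to check that the two properties are actually picked up (a sketch, assuming the standard conf/ layout of a Hadoop 1.x install; later releases renamed them to dfs.namenode.name.dir and dfs.datanode.data.dir):

```shell
# Show each property name together with its configured value
grep -A1 'dfs.name.dir\|dfs.data.dir' conf/hdfs-site.xml
```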

Regards,
    Mohammad Tariq


On Tue, Jul 17, 2012 at 6:02 PM, Yuvrajsinh Chauhan
<yuvraj.chauhan@elitecore.com> wrote:
> Dear Tariq,
>
> All Web GUIs are working fine. I am able to make directories
> using the ./hadoop fs -mkdir command (e.g. ./hadoop fs -mkdir /test).
> But where are these files getting created? And is the same directory
> created on both the nodes?
>
> I can see the folder created via the NameNode's "Browse the filesystem"
> link, but the same is not visible at the OS level.
>
> Sorry, But I am new in HDFS. So please excuse me for silly questions.
>
>
> Regards,
> Yuvrajsinh Chauhan
>
> -----Original Message-----
> From: Mohammad Tariq [mailto:dontariq@gmail.com]
> Sent: 17 July 2012 17:19
> To: hdfs-user@hadoop.apache.org
> Subject: Re: HDFS Installation / Configuration
>
> Hello Yuvrajsinh,
>
>         Hadoop provides web interfaces with which we can see the
> status of our cluster and check that everything is OK. Simply point your
> web browser to http://namenode_host:50070 (for HDFS status) and to
> http://jobtracker_host:50030 (for MapReduce status). Apart from this,
> try a few basic shell commands to see if everything is
> working fine (like bin/hadoop fs -ls /, bin/hadoop fs -mkdir /testdir,
> etc.). Also try to run the wordcount program once.
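The wordcount run mentioned above could be sketched like this (assuming the examples jar bundled with a 1.x release; the jar name varies by version):

```shell
# Put some sample input into HDFS
bin/hadoop fs -mkdir /wc-in
bin/hadoop fs -put conf/*.xml /wc-in

# Run the bundled wordcount example and inspect the result
bin/hadoop jar hadoop-examples-*.jar wordcount /wc-in /wc-out
bin/hadoop fs -cat /wc-out/part-*
```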
>
> Regards,
>     Mohammad Tariq
>
>
> On Tue, Jul 17, 2012 at 3:34 PM, Yuvrajsinh Chauhan 
> <yuvraj.chauhan@elitecore.com> wrote:
>> Dear All,
>>
>> I have completed all installation & configuration. I have setup HDFS 
>> between two nodes.
>> Currently my Data node and Task Tracker services are running on both 
>> the nodes.
>>
>> Now, please let me know how I can test this FS.
>>
>> Also, I want to format an additional partition for HDFS. Please provide
>> the steps for this activity.
>>
>> Thanks.
>>
>> Regards,
>> Yuvrajsinh Chauhan
>>
>>
>> -----Original Message-----
>> From: Harsh J [mailto:harsh@cloudera.com]
>> Sent: 01 May 2012 18:15
>> To: hdfs-user@hadoop.apache.org
>> Subject: Re: HDFS Installation / Configuration
>>
>> Hey Yuvrajsinh,
>>
>> Have you tried / taken the time to follow the official setup guides?
>>
>> For a single node, start with
>> http://hadoop.apache.org/common/docs/stable/single_node_setup.html,
>> followed by
>> http://hadoop.apache.org/common/docs/stable/cluster_setup.html
>> for a fully-distributed cluster (multi-node) setup.
>>
>> From the community, Michael Noll maintains excellent notes on setting up
>> clusters on his tutorials page: http://www.michael-noll.com/tutorials/
>>
>> If you do not want MapReduce, just ignore the steps that relate to it.
>>
>> On Tue, May 1, 2012 at 6:00 PM, Yuvrajsinh Chauhan 
>> <yuvraj.chauhan@elitecore.com> wrote:
>>> All,
>>>
>>>
>>>
>>> I'm new to this community. I want to install HDFS on a Linux box.
>>> I would appreciate it if anyone could share installation steps, the
>>> download location of the binaries, performance parameters, etc. Thanks
>>> in advance.
>>>
>>>
>>>
>>> Regards,
>>>
>>>
>>>
>>> Yuvrajsinh Chauhan || CRESTEL || Sr. DBA
>>>
>>> Elitecore Technologies Pvt. Ltd.
>>>
>>>
>>
>>
>>
>> --
>> Harsh J
>>
>


