hadoop-mapreduce-user mailing list archives

From Arpit Agarwal <aagar...@hortonworks.com>
Subject Re: about replication
Date Thu, 22 Aug 2013 04:29:12 GMT
Most likely there is a stale pid file. Something like
\tmp\hadoop-*datanode.pid. You could try deleting it and then restarting
the datanode.
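
For example, from the Cygwin shell (a rough sketch; the exact pid file
name and location can vary with the install, so list /tmp first):

$ ls /tmp/hadoop-*-datanode.pid       # locate the pid file
$ cat /tmp/hadoop-*-datanode.pid      # see which pid it records
$ ps -p 4076                          # 4076 is the pid from your log; no
                                      #   output means the pid is stale
$ rm /tmp/hadoop-*-datanode.pid       # remove the stale file
$ ./hadoop-daemon.sh start datanode   # then restart the datanode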

I haven't read the entire thread, so you may have looked at this already.

-Arpit


On Wed, Aug 21, 2013 at 9:22 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:

> The datanode is continuously trying to connect to the namenode, but it fails.
>
> When I try to run the "jps" command, it says:
> $ ./jps.exe
> 4584 NameNode
> 4016 Jps
>
> And when I ran "./start-dfs.sh", it said:
>
> $ ./start-dfs.sh
> namenode running as process 3544. Stop it first.
> DFS-1: datanode running as process 4076. Stop it first.
> localhost: secondarynamenode running as process 4792. Stop it first.
>
> These two outputs are contradictory.
> Please find the attached logs.
>
> Should I attach the conf files as well?
>
> regards
>
>
>
> On Wed, Aug 21, 2013 at 5:28 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>
>> Your DN is still not running. Showing me the logs would be helpful.
>>
>> Warm Regards,
>> Tariq
>> cloudfront.blogspot.com
>>
>>
>> On Wed, Aug 21, 2013 at 5:11 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>
>>> I followed the URL and did the steps mentioned in it. I have deployed on
>>> the Windows platform.
>>>
>>> Now I am able to browse http://localhost:50070 (namenode);
>>> however, I am not able to browse http://localhost:50030.
>>>
>>> please refer below
>>>
>>> [image: Inline image 1]
>>>
>>> I have modified all the config files as mentioned and formatted the HDFS
>>> file system as well.
>>> please suggest
>>>
>>> regards
>>>
>>>
>>>
>>> On Tue, Aug 20, 2013 at 4:14 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>
>>>> Thanks. I followed this URL:
>>>> http://blog.sqltrainer.com/2012/01/installing-and-configuring-apache.html
>>>> Let me follow the URL you gave for the pseudo-distributed setup, and
>>>> then I will switch to distributed mode.
>>>>
>>>> regards
>>>> irfan
>>>>
>>>>
>>>>
>>>> On Tue, Aug 20, 2013 at 3:23 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>>>
>>>>> You are welcome. Which link have you followed for the
>>>>> configuration? Your *core-site.xml* is empty. Remove the property
>>>>> *fs.default.name* from *hdfs-site.xml* and add it to *core-site.xml*.
>>>>> Remove *mapred.job.tracker* as well; it is required in
>>>>> *mapred-site.xml*.
>>>>>
>>>>> I would suggest you do a pseudo-distributed setup first, in order to
>>>>> get yourself familiar with the process, and then proceed to the
>>>>> distributed mode. You can visit this link
>>>>> <http://cloudfront.blogspot.in/2012/07/how-to-configure-hadoop.html#.UhM8d2T0-4I>
>>>>> if you need some help. Let me know if you face any issue.
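>>>>>
>>>>> For example, a minimal sketch of the two files (the host/port values
>>>>> here are placeholders; adjust them to your own setup):
>>>>>
>>>>> <!-- core-site.xml -->
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>fs.default.name</name>
>>>>>     <value>hdfs://localhost:9000</value>
>>>>>   </property>
>>>>> </configuration>
>>>>>
>>>>> <!-- mapred-site.xml -->
>>>>> <configuration>
>>>>>   <property>
>>>>>     <name>mapred.job.tracker</name>
>>>>>     <value>localhost:9001</value>
>>>>>   </property>
>>>>> </configuration>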
>>>>>
>>>>> HTH
>>>>>
>>>>> Warm Regards,
>>>>> Tariq
>>>>> cloudfront.blogspot.com
>>>>>
>>>>>
>>>>> On Tue, Aug 20, 2013 at 2:56 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>
>>>>>> Thanks, Tariq, for the response.
>>>>>> As discussed last time, I have sent you all the config files in my
>>>>>> setup.
>>>>>> Can you please go through them?
>>>>>>
>>>>>> please let me know
>>>>>>
>>>>>> regards
>>>>>> irfan
>>>>>>
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Aug 20, 2013 at 1:22 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>>>>>
>>>>>>> I'm sorry for being unresponsive. I was out of touch for some time
>>>>>>> because of Ramzan and Eid. Resuming work today.
>>>>>>>
>>>>>>> What's the current status?
>>>>>>>
>>>>>>> Warm Regards,
>>>>>>> Tariq
>>>>>>> cloudfront.blogspot.com
>>>>>>>
>>>>>>>
>>>>>>> On Mon, Aug 19, 2013 at 7:18 PM, manish dunani <manishd207@gmail.com> wrote:
>>>>>>>
>>>>>>>> First of all, read the concepts. I hope you will like it:
>>>>>>>>
>>>>>>>> https://www.frcrc.org/sites/default/files/HadoopTutorialPart1.pdf
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> On Mon, Aug 19, 2013 at 9:45 AM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>
>>>>>>>>> please suggest
>>>>>>>>>
>>>>>>>>> regards
>>>>>>>>> irfan
>>>>>>>>>
>>>>>>>>>
>>>>>>>>>
>>>>>>>>> On Tue, Aug 13, 2013 at 12:56 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>
>>>>>>>>>> Hey Tariq,
>>>>>>>>>> I am still stuck.
>>>>>>>>>> Can you please suggest?
>>>>>>>>>>
>>>>>>>>>> regards
>>>>>>>>>> irfan
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>> On Thu, Aug 8, 2013 at 5:56 AM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>
>>>>>>>>>>> please suggest
>>>>>>>>>>>
>>>>>>>>>>> regards
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>> On Wed, Aug 7, 2013 at 9:49 AM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>
>>>>>>>>>>>> The attachment got quarantined.
>>>>>>>>>>>> Resending in txt format; please rename it to conf.rar.
>>>>>>>>>>>>
>>>>>>>>>>>> regards
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>> On Wed, Aug 7, 2013 at 9:41 AM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>
>>>>>>>>>>>>> thanks.
>>>>>>>>>>>>>
>>>>>>>>>>>>> If I run the jps command on the namenode:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Administrator@DFS-DC /cygdrive/c/Java/jdk1.7.0_25/bin
>>>>>>>>>>>>> $ ./jps.exe
>>>>>>>>>>>>> 3164 NameNode
>>>>>>>>>>>>> 1892 Jps
>>>>>>>>>>>>>
>>>>>>>>>>>>> The same command on the datanode:
>>>>>>>>>>>>>
>>>>>>>>>>>>> Administrator@DFS-1 /cygdrive/c/Java/jdk1.7.0_25/bin
>>>>>>>>>>>>> $ ./jps.exe
>>>>>>>>>>>>> 3848 Jps
>>>>>>>>>>>>>
>>>>>>>>>>>>> jps does not list any process for the datanode; however, in the
>>>>>>>>>>>>> web browser I can see one live datanode.
>>>>>>>>>>>>> Please find the attached conf rar file of the namenode.
>>>>>>>>>>>>>
>>>>>>>>>>>>> regards
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>> On Wed, Aug 7, 2013 at 1:52 AM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>>>>>>>>>>>>
>>>>>>>>>>>>>> OK, we'll start fresh. Could you please show me your latest
>>>>>>>>>>>>>> config files?
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> BTW, are your daemons running fine? Use jps to verify that.
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> Warm Regards,
>>>>>>>>>>>>>> Tariq
>>>>>>>>>>>>>> cloudfront.blogspot.com
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 10:59 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> I have created the directories "wksp_data" and "wksp_name" on
>>>>>>>>>>>>>>> both the datanode and the namenode,
>>>>>>>>>>>>>>> made the respective changes in the "hdfs-site.xml" file,
>>>>>>>>>>>>>>> formatted the namenode, and
>>>>>>>>>>>>>>> started the dfs.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> But I am still not able to browse the file system through the
>>>>>>>>>>>>>>> web browser.
>>>>>>>>>>>>>>> Please refer below.
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> Is anything still missing?
>>>>>>>>>>>>>>> please suggest
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> [image: Inline image 1]
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 10:35 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> Do these directories need to be created on all datanodes and
>>>>>>>>>>>>>>>> namenodes?
>>>>>>>>>>>>>>>> Further, does hdfs-site.xml need to be updated on both
>>>>>>>>>>>>>>>> datanodes and namenodes for these new directories?
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> regards
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 5:30 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Create 2 directories manually, corresponding to the values
>>>>>>>>>>>>>>>>> of the dfs.name.dir and dfs.data.dir properties, and change the
>>>>>>>>>>>>>>>>> permissions of these directories to 755. When you start pushing
>>>>>>>>>>>>>>>>> data into your HDFS, data will start going inside the directory
>>>>>>>>>>>>>>>>> specified by dfs.data.dir, and the associated metadata will go
>>>>>>>>>>>>>>>>> inside dfs.name.dir. Remember, you store data in HDFS, but it
>>>>>>>>>>>>>>>>> eventually gets stored on your local/native FS. But you cannot
>>>>>>>>>>>>>>>>> see this data directly on your local/native FS.
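>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> For example, a minimal sketch (the paths below are placeholders
>>>>>>>>>>>>>>>>> for illustration; use your own locations):
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> <!-- hdfs-site.xml -->
>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>>>>>>>>>>>   <value>/hadoop/wksp_name</value>
>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>   <name>dfs.data.dir</name>
>>>>>>>>>>>>>>>>>   <value>/hadoop/wksp_data</value>
>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> $ mkdir -p /hadoop/wksp_name /hadoop/wksp_data
>>>>>>>>>>>>>>>>> $ chmod 755 /hadoop/wksp_name /hadoop/wksp_data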
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> Warm Regards,
>>>>>>>>>>>>>>>>> Tariq
>>>>>>>>>>>>>>>>> cloudfront.blogspot.com
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 5:26 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> thanks.
>>>>>>>>>>>>>>>>>> However, I need this to be working on a Windows environment,
>>>>>>>>>>>>>>>>>> as a project requirement.
>>>>>>>>>>>>>>>>>> I will add/work on Linux later.
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> So, at this stage, is c:\\wksp the HDFS file system, or do I
>>>>>>>>>>>>>>>>>> need to create it from the command line?
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> please suggest
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> regards,
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 5:19 PM, Mohammad Tariq <dontariq@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Hello Irfan,
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Sorry for being unresponsive. I got stuck with some important
>>>>>>>>>>>>>>>>>>> work.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> The HDFS web UI doesn't provide the ability to create a file
>>>>>>>>>>>>>>>>>>> or directory. You can browse HDFS, view files, download files,
>>>>>>>>>>>>>>>>>>> etc., but operations like create, move, and copy are not
>>>>>>>>>>>>>>>>>>> supported.
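>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> For example, from the command line (a small sketch; /wksp is
>>>>>>>>>>>>>>>>>>> just an illustrative HDFS path):
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> $ ./hadoop dfs -mkdir /wksp     # create a directory in HDFS
>>>>>>>>>>>>>>>>>>> $ ./hadoop dfs -ls /            # verify it is there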
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> These values look fine to me.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> One suggestion, though: try getting a Linux machine (if
>>>>>>>>>>>>>>>>>>> possible), or at least use a VM. I personally feel that using
>>>>>>>>>>>>>>>>>>> Hadoop on Windows is always messy.
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> Warm Regards,
>>>>>>>>>>>>>>>>>>> Tariq
>>>>>>>>>>>>>>>>>>> cloudfront.blogspot.com
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 5:09 PM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> thanks.
>>>>>>>>>>>>>>>>>>>> When I browse the file system, I get the following;
>>>>>>>>>>>>>>>>>>>> I haven't seen any "make directory" option there.
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> Do I need to create it from the command line?
>>>>>>>>>>>>>>>>>>>> Further, in the hdfs-site.xml file I have given the
>>>>>>>>>>>>>>>>>>>> following entries. Are they correct?
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>>>   <name>dfs.data.dir</name>
>>>>>>>>>>>>>>>>>>>>   <value>c:\\wksp</value>
>>>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>>>> <property>
>>>>>>>>>>>>>>>>>>>>   <name>dfs.name.dir</name>
>>>>>>>>>>>>>>>>>>>>   <value>c:\\wksp</value>
>>>>>>>>>>>>>>>>>>>> </property>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> please suggest
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> [image: Inline
image 1]
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 12:40 PM, manish dunani <manishd207@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> *You are wrong at this:*
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
>>>>>>>>>>>>>>>>>>>>> $ ./hadoop dfs -copyFromLocal
>>>>>>>>>>>>>>>>>>>>> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar /wksp
>>>>>>>>>>>>>>>>>>>>> copyFromLocal: File
>>>>>>>>>>>>>>>>>>>>> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar does not exist.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2/bin
>>>>>>>>>>>>>>>>>>>>> $ ./hadoop dfs -copyFromLocal
>>>>>>>>>>>>>>>>>>>>> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar.gz /wksp
>>>>>>>>>>>>>>>>>>>>> copyFromLocal: File
>>>>>>>>>>>>>>>>>>>>> /cygdrive/c/Users/Administrator/Desktop/hadoop-1.1.2.tar.gz does not exist.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Because you wrote both paths as local paths, and you do not
>>>>>>>>>>>>>>>>>>>>> need to copy Hadoop into HDFS. Hadoop is already working.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Just check it out in the browser after starting your single
>>>>>>>>>>>>>>>>>>>>> node cluster:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> localhost:50070
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Then go to the "Browse the filesystem" link in it.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> If there is no directory, then make a directory there; that
>>>>>>>>>>>>>>>>>>>>> is your HDFS directory. Then copy any text file there (no need
>>>>>>>>>>>>>>>>>>>>> to copy Hadoop there), because you are going to do processing
>>>>>>>>>>>>>>>>>>>>> on the data in that text file. That's what Hadoop is used for;
>>>>>>>>>>>>>>>>>>>>> first you need to make that clear in your mind, and then you
>>>>>>>>>>>>>>>>>>>>> will do it. Otherwise it is not possible.
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> *Try this:*
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> Administrator@DFS-DC /cygdrive/c/hadoop-1.1.2/hadoop-1.1.2
>>>>>>>>>>>>>>>>>>>>> $ ./bin/hadoop dfs -copyFromLocal
>>>>>>>>>>>>>>>>>>>>> /full/local/path/to/ur/file /hdfs/directory/path
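>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> For instance (sample.txt is just a hypothetical file name
>>>>>>>>>>>>>>>>>>>>> used for illustration):
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> $ ./bin/hadoop dfs -copyFromLocal \
>>>>>>>>>>>>>>>>>>>>>     /cygdrive/c/Users/Administrator/Desktop/sample.txt /wksp
>>>>>>>>>>>>>>>>>>>>> $ ./bin/hadoop dfs -ls /wksp    # verify the copy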
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> On Tue, Aug 6, 2013 at 11:49 AM, Irfan Sayed <irfu.sayed@gmail.com> wrote:
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Thanks. Yes, I am a newbie.
>>>>>>>>>>>>>>>>>>>>>> However, I need a Windows setup.
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> Let me surely refer to the doc and link which you sent, but
>>>>>>>>>>>>>>>>>>>>>> I need this to be working.
>>>>>>>>>>>>>>>>>>>>>> Can you please help?
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>> regards
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>> --
>>>>>>>>>>>>>>>>>>>>> MANISH DUNANI
>>>>>>>>>>>>>>>>>>>>> -THANX
>>>>>>>>>>>>>>>>>>>>> +91 9426881954, +91 8460656443
>>>>>>>>>>>>>>>>>>>>> manishd207@gmail.com
>>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>>
>>>>>>>>>>>>>>
>>>>>>>>>>>>>
>>>>>>>>>>>>
>>>>>>>>>>>
>>>>>>>>>>
>>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>> --
>>>>>>>> Regards
>>>>>>>>
>>>>>>>> *Manish Dunani*
>>>>>>>> *Contact No*: +91 9408329137
>>>>>>>> *skype id*: manish.dunani
>>>>>>>>
>>>>>>>>
>>>>>>>>
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>

