flume-user mailing list archives

From shekhar sharma <shekhar2...@gmail.com>
Subject Re: Hadoop restart issue
Date Thu, 05 Jul 2012 10:21:08 GMT
What's the reason?

On Thu, Jul 5, 2012 at 3:45 PM, vijay k <k.vijay52@gmail.com> wrote:

> Thanks a lot for the quick response, it's working fine now.
>
>
> On Thu, Jul 5, 2012 at 3:36 PM, shekhar sharma <shekhar2581@gmail.com> wrote:
>
>> Is the NameNode in safe mode? Run the command below to bring Hadoop
>> out of safe mode:
>> dfsadmin -safemode leave
>>
>> Please also post this question to the Hadoop user list.
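The safe-mode advice above can be sketched end to end. This is a hedged sketch assuming the Hadoop 0.20/1.x shell and HADOOP_HOME=/usr/local/hadoop, as in the listings later in this thread:

```shell
cd /usr/local/hadoop

# Check whether the NameNode is currently in safe mode
bin/hadoop dfsadmin -safemode get

# Force the NameNode out of safe mode. Use with care: safe mode
# normally lifts on its own once enough blocks have reported in.
bin/hadoop dfsadmin -safemode leave

# Alternatively, block until the NameNode leaves safe mode by itself
bin/hadoop dfsadmin -safemode wait
```

Note that forcing `leave` does not fix the underlying cause; if no DataNodes are reporting in, writes will still fail afterwards.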
>>
>> Regards,
>> Som
>>
>>
>> On Thu, Jul 5, 2012 at 2:42 PM, vijay k <k.vijay52@gmail.com> wrote:
>>
>>> Hi Users list,
>>>
>>> I ran the following scripts to restart the cluster:
>>>
>>> 1. stop-dfs.sh
>>> 2. stop-mapred.sh
>>> 3. start-dfs.sh
>>> 4. start-mapred.sh
>>>
>>> master:
>>> hduser@md-trngpoc1:/usr/local/hadoop/bin$ jps
>>> 14454 Jps
>>> 18207 NameNode
>>> 18537 JobTracker
>>> 18410 SecondaryNameNode
>>>
>>> slave:
>>> hduser@md-trngpoc2:/usr/local/hadoop/bin$ jps
>>> 28038 Jps
>>> 23199 TaskTracker
>>> 23125 DataNode
>>>
>>> Here, when I try to view a Hive table, I am not able to view it and
>>> get the following error:
>>>
>>> Error:
>>> ===============
>>>
>>> java.io.IOException: File
>>> /tmp/hive-hduser/hive_2012-07-05_11-38-26_060_2347748706666350421/-ext-10000/Invoice_Details.txt
>>> could only be replicated to 0 nodes, instead
>>> of 1
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1271)
>>>         at
>>> org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:422)
>>>         at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
>>>         at
>>> sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
>>>         at java.lang.reflect.Method.invoke(Method.java:597)
>>>         at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:959)
>>>         at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:955)
>>>         at java.security.AccessController.doPrivileged(Native Method)
>>>         at javax.security.auth.Subject.doAs(Subject.java:396)
>>>         at org.apache.hadoop.ipc.Server$Handler.run(Server.java:953)
>>>
>>> dfsadmin -report
>>> =========================
>>>
>>> hduser@md-trngpoc1:/usr/local/hadoop$ bin/hadoop dfsadmin -report
>>> Configured Capacity: 0 (0 KB)
>>> Present Capacity: 0 (0 KB)
>>> DFS Remaining: 0 (0 KB)
>>> DFS Used: 0 (0 KB)
>>> DFS Used%: �%
>>> Under replicated blocks: 6802
>>> Blocks with corrupt replicas: 0
>>> Missing blocks: 0
>>>
>>> -------------------------------------------------
>>> Datanodes available: 0 (1 total, 1 dead)
>>>
>>> Name: 10.5.114.101:50010
>>> Decommission Status : Normal
>>> Configured Capacity: 0 (0 KB)
>>> DFS Used: 0 (0 KB)
>>> Non DFS Used: 0 (0 KB)
>>> DFS Remaining: 0(0 KB)
>>> DFS Used%: 100%
>>> DFS Remaining%: 0%
>>> Last contact: Wed Jul 04 23:58:44 IST 2012
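The report above shows one registered DataNode that is dead ("Datanodes available: 0 (1 total, 1 dead)"), which is consistent with the "could only be replicated to 0 nodes" error. A hedged sketch of restarting and re-checking the DataNode on the slave, assuming the standard Hadoop 1.x daemon scripts; the log filename follows Hadoop's hadoop-<user>-datanode-<host>.log convention and is an assumption based on the hostnames in this thread:

```shell
# On the slave (md-trngpoc2 / 10.5.114.101): restart only the DataNode daemon
cd /usr/local/hadoop
bin/hadoop-daemon.sh stop datanode
bin/hadoop-daemon.sh start datanode

# Check the DataNode log for why it died or failed to register with the
# NameNode (a common cause is a namespaceID mismatch in dfs.data.dir)
tail -n 50 logs/hadoop-hduser-datanode-md-trngpoc2.log

# Back on the master: confirm the node now registers as live
bin/hadoop dfsadmin -report
```

If the log shows a namespaceID mismatch, the DataNode's storage directory is out of sync with the NameNode; that can often be repaired on the DataNode side without reformatting the NameNode.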
>>>
>>> Please suggest how to resolve this issue without reformatting the
>>> NameNode.
>>>
>>> Thanks,
>>> Vijay
>>>
>>>
>>>
>>
>>
>
