hbase-user mailing list archives

From Jean-Daniel Cryans <jdcry...@apache.org>
Subject Re: Use of the test cluster in HBase
Date Tue, 12 Apr 2011 23:43:54 GMT
The namenode does this:

    long remaining = node.getRemaining() -
                     (node.getBlocksScheduled() * blockSize);
    // check the remaining capacity of the target machine
    if (blockSize * FSConstants.MIN_BLOCKS_FOR_WRITE > remaining)

MIN_BLOCKS_FOR_WRITE defaults to 5.
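
In other words, a datanode is only chosen as a write target if it reports
at least 5x the block size in remaining capacity. A rough sketch of the
arithmetic (the 512MB block size and 1GB of free space are assumed numbers,
just for illustration, not taken from your test):

    // Assumed values for illustration only.
    long blockSize = 512L * 1024 * 1024;   // dfs.block.size raised to 512MB
    long remaining = 1024L * 1024 * 1024;  // datanode reports 1GB remaining
    // Same shape as the check above, with MIN_BLOCKS_FOR_WRITE = 5: the
    // node is skipped unless it has room for 5 full blocks (here, 2.5GB).
    if (blockSize * 5 > remaining) {
      System.out.println("node not chosen: not enough space");
    }

With the default 64MB blocks the bar is only 320MB, which would explain why
the same test passes until the block size is raised.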

J-D

On Tue, Apr 12, 2011 at 4:35 PM, Jason Rutherglen
<jason.rutherglen@gmail.com> wrote:
> Hmm... There's no physical limitation, is there an artificial setting?
>
> On Tue, Apr 12, 2011 at 4:27 PM, Jean-Daniel Cryans <jdcryans@apache.org> wrote:
>> It says:
>>
>> 2011-04-12 16:16:17,157 DEBUG [IPC Server handler 7 on 51372]
>> namenode.ReplicationTargetChooser(408): Node
>> /default-rack/127.0.0.1:22967 is not chosen because the node does not
>> have enough space
>>
>> J-D
>>
>> On Tue, Apr 12, 2011 at 4:24 PM, Jason Rutherglen
>> <jason.rutherglen@gmail.com> wrote:
>>> Ah, I had changed conf/log4j.properties.  So I changed
>>> src/test/resources/log4j.properties, and now the -output file's much
>>> more verbose.  I'm not sure I understand what's going on, however.
>>>
>>> I'll try to make sense out of the log:
>>>
>>> http://pastebin.com/MrQJcbJr
>>>
>>> On Tue, Apr 12, 2011 at 3:38 PM, Stack <stack@duboce.net> wrote:
>>>> You changed the src/test/resources/log4j.properties?
>>>>
>>>> Not sure why changing the block size would make a difference, or why
>>>> it would even care.
>>>>
>>>> St.Ack
>>>>
>>>> On Tue, Apr 12, 2011 at 2:38 PM, Jason Rutherglen
>>>> <jason.rutherglen@gmail.com> wrote:
>>>>> Thanks, I'm only seeing the error when I change the block size, either
>>>>> via DFSClient.create or via the Configuration dfs.block.size property.
>>>>>
>>>>> When I changed the log4j.properties to
>>>>> 'log4j.logger.org.apache.hadoop=WARN' I'm not seeing anything
>>>>> additional in the output in the target/surefire-reports directory.
>>>>>
>>>>> On Tue, Apr 12, 2011 at 12:59 PM, Gary Helmling <ghelmling@gmail.com> wrote:
>>>>>> Depends what the log4j.properties file that your code is picking
>>>>>> up says.  mvn test or IDE "run" test classes should pick up
>>>>>> src/test/resources/log4j.properties, which will log to stderr.  If
>>>>>> that's how you're running you could tweak the hadoop logging level
>>>>>> to see if it shows anything more.  Change:
>>>>>>
>>>>>> log4j.logger.org.apache.hadoop=WARN
>>>>>>
>>>>>> to INFO or DEBUG.
>>>>>>
>>>>>> Also, mvn test will redirect the log output to
>>>>>> target/surefire-reports/org.apache.hadoop.hbase....-output.txt
>>>>>>
>>>>>>
>>>>>>
>>>>>> On Tue, Apr 12, 2011 at 12:43 PM, Jason Rutherglen <jason.rutherglen@gmail.com> wrote:
>>>>>>
>>>>>>> Where does MiniDFSCluster store the logs?  I don't see a location,
>>>>>>> assuming it's different than stdout/err.
>>>>>>>
>>>>>>> On Tue, Apr 12, 2011 at 11:26 AM, Stack <stack@duboce.net> wrote:
>>>>>>> > The datanodes are not starting?  Anything about that in the log?
>>>>>>> > St.Ack
>>>>>>> >
>>>>>>> > On Tue, Apr 12, 2011 at 11:13 AM, Jason Rutherglen
>>>>>>> > <jason.rutherglen@gmail.com> wrote:
>>>>>>> >> I'm running into an error when setting the DFS block size to be
>>>>>>> >> larger than the default.  The following code is used to create
>>>>>>> >> the test cluster:
>>>>>>> >>
>>>>>>> >> Configuration conf = new Configuration();
>>>>>>> >> MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
>>>>>>> >> FileSystem fileSys = cluster.getFileSystem();
>>>>>>> >>
>>>>>>> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>>>>>>> >> /index/_0_0.tib could only be replicated to 0 nodes, instead of 1
>>>>>>> >>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1363)
>>>>>>> >>        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:449)
>>>>>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>> >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>> >>        at java.lang.reflect.Method.invoke(Method.java:616)
>>>>>>> >>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>>>>>> >>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:961)
>>>>>>> >>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:957)
>>>>>>> >>        at java.security.AccessController.doPrivileged(Native Method)
>>>>>>> >>        at javax.security.auth.Subject.doAs(Subject.java:416)
>>>>>>> >>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:955)
>>>>>>> >>
>>>>>>> >>        at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>>>>>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>>>>>> >>        at $Proxy4.addBlock(Unknown Source)
>>>>>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>>>>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>>>>>> >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>>>>>> >>        at java.lang.reflect.Method.invoke(Method.java:616)
>>>>>>> >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>>>>>> >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>>>>>> >>        at $Proxy4.addBlock(Unknown Source)
>>>>>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3184)
>>>>>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3058)
>>>>>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2276)
>>>>>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2511)
>>>>>>> >>
>>>>>>> >
>>>>>>>
>>>>>>
>>>>>
>>>>
>>>
>>
>
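
For reference, a minimal sketch of the kind of block size override that
trips this check (the 512MB value is an assumption; any value well above
the 64MB default raises the capacity bar the same way):

    Configuration conf = new Configuration();
    // Assumed override for illustration; the namenode will then want
    // 5 * 512MB = 2.5GB of apparent free space per target datanode.
    conf.setLong("dfs.block.size", 512L * 1024 * 1024);
    MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
    FileSystem fileSys = cluster.getFileSystem();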
