hbase-user mailing list archives

From Stack <st...@duboce.net>
Subject Re: Use of the test cluster in HBase
Date Tue, 12 Apr 2011 22:38:34 GMT
You changed the src/test/resources/log4j.properties?

Not sure why changing the block size would make a difference, or why it
would even care.

St.Ack

On Tue, Apr 12, 2011 at 2:38 PM, Jason Rutherglen
<jason.rutherglen@gmail.com> wrote:
> Thanks, I'm only seeing the error when I change the block size, either
> via DFSClient.create or via the Configuration dfs.block.size property.
>
> When I changed the log4j.properties to
> 'log4j.logger.org.apache.hadoop=WARN' I'm not seeing anything
> additional in the output in the target/surefire-reports directory.
>
> On Tue, Apr 12, 2011 at 12:59 PM, Gary Helmling <ghelmling@gmail.com> wrote:
>> Depends on what the log4j.properties file your code is picking up says.
>> mvn test or IDE "run" test classes should pick up
>> src/test/resources/log4j.properties, which will log to stderr.  If that's
>> how you're running you could tweak the hadoop logging level to see if it
>> shows anything more.  Change:
>>
>> log4j.logger.org.apache.hadoop=WARN
>>
>> to INFO or DEBUG.
>>
>> Also, mvn test will redirect the log output to
>> target/surefire-reports/org.apache.hadoop.hbase....-output.txt
>>
>>
>>
>> On Tue, Apr 12, 2011 at 12:43 PM, Jason Rutherglen <
>> jason.rutherglen@gmail.com> wrote:
>>
>>> Where does MiniDFSCluster store the logs?  I don't see a location,
>>> assuming it's different than stdout/err.
>>>
>>> On Tue, Apr 12, 2011 at 11:26 AM, Stack <stack@duboce.net> wrote:
>>> > The datanodes are not starting?  Anything about that in the log?
>>> > St.Ack
>>> >
>>> > On Tue, Apr 12, 2011 at 11:13 AM, Jason Rutherglen
>>> > <jason.rutherglen@gmail.com> wrote:
>>> >> I'm running into an error when setting the DFS block size to be larger
>>> >> than the default.  The following code is used to create the test
>>> >> cluster:
>>> >>
>>> >> Configuration conf = new Configuration();
>>> >> MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
>>> >> FileSystem fileSys = cluster.getFileSystem();
>>> >>
>>> >> org.apache.hadoop.ipc.RemoteException: java.io.IOException: File
>>> >> /index/_0_0.tib could only be replicated to 0 nodes, instead of 1
>>> >>        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getAdditionalBlock(FSNamesystem.java:1363)
>>> >>        at org.apache.hadoop.hdfs.server.namenode.NameNode.addBlock(NameNode.java:449)
>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> >>        at java.lang.reflect.Method.invoke(Method.java:616)
>>> >>        at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:508)
>>> >>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:961)
>>> >>        at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:957)
>>> >>        at java.security.AccessController.doPrivileged(Native Method)
>>> >>        at javax.security.auth.Subject.doAs(Subject.java:416)
>>> >>        at org.apache.hadoop.ipc.Server$Handler.run(Server.java:955)
>>> >>
>>> >>        at org.apache.hadoop.ipc.Client.call(Client.java:740)
>>> >>        at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:220)
>>> >>        at $Proxy4.addBlock(Unknown Source)
>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
>>> >>        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
>>> >>        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
>>> >>        at java.lang.reflect.Method.invoke(Method.java:616)
>>> >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:82)
>>> >>        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:59)
>>> >>        at $Proxy4.addBlock(Unknown Source)
>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.locateFollowingBlock(DFSClient.java:3184)
>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.nextBlockOutputStream(DFSClient.java:3058)
>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream.access$2200(DFSClient.java:2276)
>>> >>        at org.apache.hadoop.hdfs.DFSClient$DFSOutputStream$DataStreamer.run(DFSClient.java:2511)
>>> >>
>>> >
>>>
>>
>
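[Editor's note] For readers reproducing this setup, a minimal sketch of the block-size change under discussion, assuming the 0.20-era Hadoop API visible in the thread (the four-argument MiniDFSCluster constructor and the dfs.block.size property); the 128 MB value and the waitActive() call are illustrative additions, not taken from the thread:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.hdfs.MiniDFSCluster;

// Sketch: raise the DFS block size before starting the test cluster.
// Note: in this era of HDFS, dfs.block.size should be a multiple of
// io.bytes.per.checksum (512 by default) or writes can fail.
Configuration conf = new Configuration();
conf.setLong("dfs.block.size", 128L * 1024 * 1024); // e.g. 128 MB instead of the 64 MB default

// Two datanodes, formatted filesystem, default racks.
MiniDFSCluster cluster = new MiniDFSCluster(conf, 2, true, null);
cluster.waitActive(); // "replicated to 0 nodes" often means datanodes have not yet registered
FileSystem fileSys = cluster.getFileSystem();
```

This requires the Hadoop test jars on the classpath and is meant to run inside a JUnit test, so it is not standalone.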
