hbase-user mailing list archives

From Nkechi Achara <nkach...@googlemail.com>
Subject Re: Example of spinning up a Hbase mock style test for integration testing in scala
Date Mon, 14 Mar 2016 22:22:11 GMT
Hi Ted,

I believe it is an issue with long file name lengths on Windows: when I
attempt to get to the directory it is trying to replicate the block to, I
receive the ever-annoying error:

The filename or extension is too long.

Does anyone know how to fix this?
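
The only workaround I can think of so far is to point the testing utility
at a much shorter base directory before starting the cluster, so the
generated dfscluster_... paths stay under the Windows path-length limit.
A minimal sketch, assuming the HBase version in use honours the
test.build.data.basedirectory property (I have not verified this yet):

  import org.apache.hadoop.hbase.HBaseTestingUtility

  // Keep the auto-generated test-data/<uuid>/dfscluster_<uuid>/... tree
  // near the drive root instead of under the project's target/ directory.
  System.setProperty("test.build.data.basedirectory", "C:\\tmp")
  val util = new HBaseTestingUtility()
  util.startMiniCluster()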


On 14 March 2016 at 18:42, Ted Yu <yuzhihong@gmail.com> wrote:

> You can inspect the output from 'mvn dependency:tree' to see if any
> incompatible hadoop dependency exists.
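> For example, to narrow the output to just the hadoop artifacts, something
> like:
>
>   mvn dependency:tree -Dincludes=org.apache.hadoop
>
> should show where each hadoop jar is being pulled in from.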
>
> FYI
>
> On Mon, Mar 14, 2016 at 10:26 AM, Parsian, Mahmoud <mparsian@illumina.com>
> wrote:
>
>> Hi Keech,
>>
>> Please post your sample test, its run log, version of Hbase , hadoop, …
>> And make sure that hadoop-core-1.2.1.jar is not on your classpath (it
>> causes many errors!).
>>
>> Best,
>> Mahmoud
>> From: Nkechi Achara <nkachara@googlemail.com>
>> Date: Monday, March 14, 2016 at 10:14 AM
>> To: "user@hbase.apache.org" <user@hbase.apache.org>, Mahmoud Parsian
>> <mparsian@illumina.com>
>>
>> Subject: Re: Example of spinning up a Hbase mock style test for
>> integration testing in scala
>>
>>
>> Thanks Mahmoud,
>>
>> This is what I am using, but as the previous reply stated, I am receiving
>> an exception when starting the cluster.
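>>
>> Stripped down, my test setup is essentially this (table name and column
>> family are placeholders):
>>
>>   import org.apache.hadoop.hbase.{HBaseTestingUtility, TableName}
>>
>>   val util = new HBaseTestingUtility()
>>   util.startMiniCluster()   // brings up ZK, a mini DFS and HBase
>>   val table = util.createTable(TableName.valueOf("test"), "cf")
>>   // ... run assertions against table ...
>>   util.shutdownMiniCluster()
>>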
>> Thinking about it, it looks to be more of a build problem with my HBase
>> mini cluster, as I am receiving the following error:
>>
>> 16/03/14 12:29:00 WARN datanode.DataNode: IOException in
>> BlockReceiver.run():
>>
>> java.io.IOException: Failed to move meta file for ReplicaBeingWritten,
>> blk_1073741825_1001, RBW
>>
>>   getNumBytes()     = 7
>>
>>   getBytesOnDisk()  = 7
>>
>>   getVisibleLength()= 7
>>
>>   getVolume()       =
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current
>>
>>   getBlockFile()    =
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825
>>
>>   bytesAcked=7
>>
>>   bytesOnDisk=7 from
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\rbw\blk_1073741825_1001.meta
>> to
>> C:\Users\unknown\Documents\trs\target\test-data\780d11ca-27b8-4004-bed8-480bc9903125\dfscluster_d292c05b-0190-43b1-83b2-bebf483c8b3c\dfs\data\data1\current\BP-1081755239-10.66.90.86-1457954925705\current\finalized\subdir0\subdir0\blk_1073741825_1001.meta
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:615)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.BlockPoolSlice.addBlock(BlockPoolSlice.java:250)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsVolumeImpl.addBlock(FsVolumeImpl.java:229)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeReplica(FsDatasetImpl.java:1119)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.finalizeBlock(FsDatasetImpl.java:1100)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.finalizeBlock(BlockReceiver.java:1293)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.BlockReceiver$PacketResponder.run(BlockReceiver.java:1233)
>>
>> at java.lang.Thread.run(Thread.java:745)
>>
>> Caused by: 3: The system cannot find the path specified.
>>
>> at org.apache.hadoop.io.nativeio.NativeIO.renameTo0(Native Method)
>>
>> at org.apache.hadoop.io.nativeio.NativeIO.renameTo(NativeIO.java:830)
>>
>> at
>> org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.moveBlockFiles(FsDatasetImpl.java:613)
>>
>> ... 7 more
>>
>> 16/03/14 12:29:00 INFO datanode.DataNode: Starting CheckDiskError Thread
>>
>> Thanks,
>>
>> Keech
>>
>> On 14 Mar 2016 6:10 pm, "Parsian, Mahmoud" <mparsian@illumina.com> wrote:
>> Hi Keech,
>>
>> You may use the org.apache.hadoop.hbase.HBaseTestingUtility class to
>> start ZooKeeper and an HBase cluster, and then run your unit and
>> integration tests against it.
>> I am using this with JUnit and it works very well, but I am using Java
>> only.
>>
>> Best regards,
>> Mahmoud Parsian
>>
>>
>> On 3/13/16, 11:52 PM, "Nkechi Achara" <nkachara@googlemail.com> wrote:
>>
>> >Hi,
>> >
>> >I am trying to find an example of how to spin up an HBase server in a mock
>> >or integration style, so I can test my code locally in my IDE.
>> >I have tried fake-hbase and the HBase testing utility, and I receive
>> >errors, especially when trying to start the cluster.
>> >Has anyone got any examples in Scala of how to do this?
>> >
>> >Thanks,
>> >
>> >Keech
>>
>>
>
