hbase-user mailing list archives

From NNever <nnever...@gmail.com>
Subject Re: Can coprocessor operate HDFS directly?
Date Tue, 14 Feb 2012 14:35:14 GMT
Thanks, Sanel.
I tried using

*FileSystem fs = FileSystem.get(HBaseConfiguration.create());*
*fs.delete(new Path(...));*

in the coprocessor's preDelete method.
There was no exception, but the target-path file still exists after that
code runs.
I don't know why...

It's late at night here now. I'll try again tomorrow morning to see whether
I did something wrong... Thanks for your reply!
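
For reference, here is a minimal sketch of what I am trying (the class name
and target path are placeholders, and I am assuming the 0.92-era
RegionObserver signatures). Two things I want to verify tomorrow:
FileSystem.get(HBaseConfiguration.create()) silently falls back to the local
filesystem when fs.default.name is not in the loaded configuration, and
fs.delete() returns false instead of throwing when nothing is removed, so
using the environment's configuration and checking the return value may
explain what I'm seeing:

import java.io.IOException;

import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.client.Delete;
import org.apache.hadoop.hbase.coprocessor.BaseRegionObserver;
import org.apache.hadoop.hbase.coprocessor.ObserverContext;
import org.apache.hadoop.hbase.coprocessor.RegionCoprocessorEnvironment;
import org.apache.hadoop.hbase.regionserver.wal.WALEdit;

public class FileCleanupObserver extends BaseRegionObserver {

  @Override
  public void preDelete(ObserverContext<RegionCoprocessorEnvironment> e,
      Delete delete, WALEdit edit, boolean writeToWAL) throws IOException {
    // Use the region server's own configuration so we reach the same HDFS
    // the cluster runs on, not whatever the default classpath resolves to.
    FileSystem fs = FileSystem.get(e.getEnvironment().getConfiguration());

    // Placeholder path; in the real observer this would be read from the
    // qualifier of the row being deleted.
    Path target = new Path("/data/files/example.dat");

    // delete() returns false (no exception) when nothing was removed,
    // e.g. if the path does not exist on the filesystem we resolved.
    boolean removed = fs.delete(target, false); // false = non-recursive
    if (!removed) {
      // worth logging: wrong URI, wrong filesystem, or already gone
    }
  }
}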

2012/2/14 Sanel Zukan <sanelz@gmail.com>

> AFAIK it is possible; just make sure the regionservers can see the Hadoop
> jars (which is true by default). Actually, you can call anything from
> these methods ;)
>
> On Tue, Feb 14, 2012 at 9:15 AM, NNever <nneverwei@gmail.com> wrote:
> > As we know, in HBase coprocessor methods such as prePut we can operate
> > on an HTable through the ObserverContext<RegionCoprocessorEnvironment>...
> > But in many situations there are tables with a qualifier that records a
> > file URI. Then, when we delete a row and trigger some ops in the
> > coprocessor, we also need to delete the real file in HDFS via the
> > recorded URI...
> >
> > So my question is: *can we use the Hadoop API to operate on HDFS from a
> > coprocessor*?
> > If it's possible, what will the code look like?
> >
> > Thanks!
>
