hadoop-mapreduce-user mailing list archives

From John Lilley <john.lil...@redpoint.net>
Subject RE: Accessing HDFS
Date Mon, 15 Jul 2013 21:27:12 GMT
Thanks!  They are fine; I was just confused after seeing them discussed in forums.
John


-----Original Message-----
From: Harsh J [mailto:harsh@cloudera.com] 
Sent: Friday, July 05, 2013 8:01 PM
To: <user@hadoop.apache.org>
Subject: Re: Accessing HDFS

These APIs (ClientProtocol, DFSClient) are not for public access.
Please do not use them in production. The only APIs we guarantee not to change incompatibly
are the FileContext and FileSystem APIs. They provide much of what you want; if not, log
a JIRA.

On Fri, Jul 5, 2013 at 11:40 PM, John Lilley <john.lilley@redpoint.net> wrote:
> I've seen mentioned that you can access HDFS via ClientProtocol, as in:
>
> ClientProtocol namenode = DFSClient.createNamenode(conf); 
> LocatedBlocks lbs = namenode.getBlockLocations(path, start, length);
>
>
>
> But we use:
>
> fs = FileSystem.get(URI, conf);
>
> filestatus = fs.getFileStatus(path);
>
> fs.getFileBlockLocations(filestatus, start, length);
>
>
>
> As a YARN application and/or DFS client, are there times when I should 
> use the ClientProtocol directly?
>
>
>
> Thanks
>
> John
>
>



--
Harsh J
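
[Editor's note: the supported pattern John describes can be sketched as a complete program. This is only an illustration of the public FileSystem API discussed above; the hdfs:// URI and the file path are placeholder assumptions, not values from the thread, and running it requires the Hadoop client libraries and a reachable cluster.]

```java
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.BlockLocation;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BlockLocationsExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Placeholder NameNode URI -- substitute your own cluster address.
        FileSystem fs = FileSystem.get(URI.create("hdfs://namenode:8020"), conf);

        // Placeholder file path.
        Path path = new Path("/data/input.txt");
        FileStatus status = fs.getFileStatus(path);

        // Ask for block locations covering the whole file, as in the
        // quoted message, using the stable public FileSystem API.
        BlockLocation[] blocks =
            fs.getFileBlockLocations(status, 0, status.getLen());
        for (BlockLocation block : blocks) {
            System.out.printf("offset=%d length=%d hosts=%s%n",
                block.getOffset(), block.getLength(),
                String.join(",", block.getHosts()));
        }
        fs.close();
    }
}
```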
