hbase-user mailing list archives

From Zizon Qiu <zzd...@gmail.com>
Subject Re: Is hadoop 1.0.0 + HBase 0.90.5 the best combination for production cluster?
Date Mon, 09 Jan 2012 02:35:45 GMT
It should be the same as the hbase daemon user.

The check performed by the datanode is implemented as follows, inside an RPC
call. The "current user" refers to the remote user, which in this case should
be the same as your hbase user:

  private void checkBlockLocalPathAccess() throws IOException {
    checkKerberosAuthMethod("getBlockLocalPathInfo()");
    String currentUser =
        UserGroupInformation.getCurrentUser().getShortUserName();
    if (!currentUser.equals(this.userWithLocalPathAccess)) {
      throw new AccessControlException(
          "Can't continue with getBlockLocalPathInfo() "
              + "authorization. The user " + currentUser
              + " is not allowed to call getBlockLocalPathInfo");
    }
  }
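
For illustration, the server-side hdfs-site.xml could look something like the
following (assuming here that your HBase daemons run as the user "hbase";
substitute whatever user they actually run as):

  <property>
    <name>dfs.block.local-path-access.user</name>
    <!-- the user the HBase daemons run as; "hbase" is only an example -->
    <value>hbase</value>
  </property>
  <property>
    <name>dfs.client.read.shortcircuit</name>
    <!-- also needs to be true on the client side, i.e. in hbase-site.xml -->
    <value>true</value>
  </property>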

On Sun, Jan 8, 2012 at 11:45 PM, Yves Langisch <yves@langisch.ch> wrote:

> I have no special settings for hadoop security. Is it necessary to
> specify dfs.block.local-path-access.user then? If yes, is it the same user
> my hadoop daemon is running under?
>
> Thanks
> Yves
>
> On Jan 6, 2012, at 4:27 PM, Stack wrote:
>
> > On Thu, Jan 5, 2012 at 10:48 PM, zizon <zzdtsv@gmail.com> wrote:
> >> <property>
> >>  <name>dfs.client.read.shortcircuit</name>
> >>  <value>true</value>
> >>  <description>set this to true to enable DFSClient short circuit
> >> read</description>
> >> </property>
> >>
> >
> > You must set the above on both client and server side; i.e. in
> > hbase-site.xml and in hdfs-site.xml.
> >
> >> <property>
> >>  <name>dfs.block.local-path-access.user</name>
> >>  <value>hadoop</value>
> >>  <description>add users that need to perform short circuit reads here;
> >>  the datanode will do a security check before the read</description>
> >> </property>
> >>
> >
> > This you set server-side only.  Adjust the value 'hadoop' accordingly.
> >
> > I believe the only way to tell whether local reads are working is jstacking
> > your hbase processes and looking for local block accesses.
> >
> > We'll fix our documentation to include the above.
> >
> > St.Ack
> >
> >>
> >> On Fri, Jan 6, 2012 at 2:25 PM, Yves Langisch <yves@langisch.ch> wrote:
> >>
> >>> How can you enable the mentioned local-read-optimization for
> >>> hadoop-1.0.0?
> >>> I could not find any related information.
> >>>
> >>> -
> >>> Yves
> >>>
> >>> On Jan 6, 2012, at 6:24 AM, Arun C Murthy wrote:
> >>>
> >>>> I know we've done integration testing with hadoop-1.0.0 and
> >>>> hbase-0.90.4 and things work well, not sure about hbase-0.90.5 (I don't
> >>>> imagine there are issues, but ymmv).
> >>>>
> >>>> In fact, you get a nice perf boost with hadoop-1.0.0 for hbase if you
> >>>> enable the local-read-optimization.
> >>>>
> >>>> Arun
> >>>>
> >>>> On Jan 4, 2012, at 10:19 PM, praveenesh kumar wrote:
> >>>>
> >>>>> Don't know about Hadoop 1.0.0, but Hadoop 0.20.205 and HBase 0.90.5
> >>>>> are playing with each other just fine.
> >>>>>
> >>>>> Thanks,
> >>>>> Praveenesh
> >>>>>
> >>>>> On Thu, Jan 5, 2012 at 11:36 AM, Weihua JIANG <weihua.jiang@gmail.com>
> >>>>> wrote:
> >>>>>
> >>>>>> Hi all,
> >>>>>>
> >>>>>> Hadoop 1.0.0 and HBase 0.90.5 are released. I am curious whether
> >>>>>> hadoop 1.0.0 is compatible with HBase 0.90.5. And is this combination
> >>>>>> the best choice for a production cluster now?
> >>>>>>
> >>>>>> Thanks
> >>>>>> Weihua
> >>>>>>
> >>>>
> >>>
> >>>
> >
>
>
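
As for the jstack tip quoted above: a rough way to check is to jstack a region
server while it is serving reads and grep the thread dump for the local read
path, e.g. "jstack <regionserver pid> | grep BlockReaderLocal". BlockReaderLocal
is my assumption about the class name used for local reads; if your build names
it differently, grep for whatever DFSClient uses on the local read path.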
