Subject: Re: Dual Hadoop/HBase configuration through same client
From: Ted Yu
To: user@hbase.apache.org
Date: Sun, 28 Apr 2013 06:38:01 +0800

Shahab:
Can you enable Kerberos-based security in the other cluster?
Exporting information from a secure cluster to an insecure cluster doesn't seem right.

Cheers

On Sun, Apr 28, 2013 at 12:54 AM, Shahab Yunus wrote:

> Interesting lead, thanks.
>
> Meanwhile, I was also thinking of using distcp. With the help of hftp we
> can overcome the Hadoop version mismatch issue as well, but I think the
> mismatch in security configuration will still be a problem. I tried it as
> follows, where the source has Kerberos configured and the destination
> didn't, but it failed with the exception below. This was kicked off from
> the destination server, of course.
>
> hadoop distcp hftp:/// hdfs:///
>
> org.apache.hadoop.ipc.RemoteException(java.io.IOException): Security
> enabled but user not authenticated by filter
>   at org.apache.hadoop.ipc.RemoteException.valueOf(RemoteException.java:97)
>   at org.apache.hadoop.hdfs.HftpFileSystem$LsParser.startElement(HftpFileSyste...
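For reference, a distcp-over-hftp invocation of the shape attempted above would look like the following. Every hostname, port, and path here is a hypothetical placeholder; hftp reads go through the NameNode's HTTP port (50070 by default in this Hadoop generation), while the writable destination uses the regular hdfs:// RPC port:

```shell
# Read from the source cluster over read-only HFTP and write into the
# destination cluster's HDFS. All hosts, ports, and paths below are
# hypothetical placeholders.
hadoop distcp \
  hftp://source-nn.example.com:50070/user/shahab/export \
  hdfs://dest-nn.example.com:8020/user/shahab/import
```

Note that hftp is read-only, so it can only ever appear on the source side; the destination must be a writable scheme such as hdfs://.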
>
> Regards,
> Shahab
>
>
> On Sat, Apr 27, 2013 at 2:51 AM, Damien Hardy wrote:
>
> > Hello
> >
> > Maybe you should look at the Export tool's source code, as it can
> > export HBase data to a distant HDFS space (by setting a full hdfs://
> > URL in the command-line option for outputdir):
> >
> > https://github.com/apache/hbase/blob/trunk/hbase-server/src/main/java/org/apache/hadoop/hbase/mapreduce/Export.java
> >
> > Cheers,
> >
> >
> > 2013/4/27 Shahab Yunus
> >
> > > Thanks Ted for the response. But the issue is that I want to read
> > > from one cluster and write to another. If I have two clients, how
> > > will they communicate with each other? Essentially, what I am trying
> > > to do here is inter-cluster data copy/exchange. Any other ideas or
> > > suggestions? Even if both servers have no security, or one has
> > > Kerberos, or both have authentication, how do I exchange data
> > > between them?
> > >
> > > I was actually not expecting that I cannot load multiple Hadoop or
> > > HBase configurations into 2 different Configuration objects in one
> > > application. As mentioned, I have tried overwriting properties as
> > > well, but the security/authentication properties are overwritten
> > > somehow.
> > >
> > > Regards,
> > > Shahab
> > >
> > >
> > > On Fri, Apr 26, 2013 at 7:43 PM, Ted Yu wrote:
> > >
> > > > Looks like the easiest solution is to use separate clients, one
> > > > for each cluster you want to connect to.
> > > >
> > > > Cheers
> > > >
> > > > On Sat, Apr 27, 2013 at 6:51 AM, Shahab Yunus <shahab.yunus@gmail.com> wrote:
> > > >
> > > > > Hello,
> > > > >
> > > > > This is a follow-up to my previous post a few days back. I am
> > > > > trying to connect to 2 different Hadoop clusters' setups through
> > > > > the same client, but I am running into the issue that the config
> > > > > of one overwrites the
> > > > > other.
> > > > >
> > > > > The scenario is that I want to read data from an HBase table on
> > > > > one cluster and write it as a file on HDFS on the other.
> > > > > Individually, if I try to write to them, they both work, but
> > > > > when I try this through the same Java client, they fail.
> > > > >
> > > > > I have tried loading the core-site.xml through the addResource
> > > > > method of the Configuration class, but only the first config
> > > > > file found is picked up. I have also tried renaming the config
> > > > > files and then adding them as resources (again through the
> > > > > addResource method).
> > > > >
> > > > > The situation is compounded by the fact that one cluster is
> > > > > using Kerberos authentication and the other is not. If the
> > > > > Kerberos cluster's file is found first, then authentication
> > > > > failures occur for the other cluster when Hadoop tries to find
> > > > > client authentication information. If the 'simple' cluster's
> > > > > config is loaded first, then an 'Authentication is Required'
> > > > > error is encountered against the Kerberos cluster.
> > > > >
> > > > > I will gladly provide more information. Is it even possible,
> > > > > even if, let us say, both servers have the same security
> > > > > configuration or none? Any ideas? Thanks a million.
> > > > >
> > > > > Regards,
> > > > > Shahab
>
> --
> Damien HARDY
> IT Infrastructure Architect
> Viadeo - 30 rue de la Victoire - 75009 Paris - France
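The overwriting behavior described in this thread follows from how Hadoop's Configuration merges resources: addResource() folds each file into one key/value map, and for any given key the last-loaded resource wins, so two core-site.xml files with conflicting hadoop.security.authentication values cannot coexist in one Configuration. (The stickiness Shahab sees even with two separate Configuration objects is likely the JVM-wide login state in UserGroupInformation, which is initialized once per process.) A minimal, self-contained sketch of that last-one-wins merge, using java.util.Properties as a stand-in for Configuration; the class name and property values are illustrative only:

```java
import java.util.Properties;

public class LastOneWinsDemo {
    // Mimics Configuration.addResource(): each added resource is merged
    // over the previous ones, so the last-loaded value for a key wins.
    static Properties merge(Properties first, Properties second) {
        Properties merged = new Properties();
        merged.putAll(first);
        merged.putAll(second); // second overrides any key first also set
        return merged;
    }

    public static void main(String[] args) {
        // Stand-in for the secure cluster's core-site.xml
        Properties secureSite = new Properties();
        secureSite.setProperty("hadoop.security.authentication", "kerberos");

        // Stand-in for the unsecured cluster's core-site.xml
        Properties simpleSite = new Properties();
        simpleSite.setProperty("hadoop.security.authentication", "simple");

        // Loading both into one configuration: whichever is added last
        // wins, which is why one cluster's security setting clobbers
        // the other's.
        Properties merged = merge(secureSite, simpleSite);
        System.out.println(merged.getProperty("hadoop.security.authentication"));
        // prints "simple"
    }
}
```

This is why the suggestions in the thread converge on keeping the two clusters in fully separate Configuration objects, or in separate client processes, rather than layering both clusters' files into one configuration.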