Date: Thu, 5 Mar 2015 10:58:08 +0200
Subject: Re: File is not written on HDFS after running libhdfs C API
From: Alexandru Calin <alexandrucalin29@gmail.com>
To: user@hadoop.apache.org

Now I've also started YARN (just for the sake of trying anything); the configs for mapred-site.xml and yarn-site.xml are the ones from the Apache website. A *jps* command shows:

11257 NodeManager
11129 ResourceManager
11815 Jps
10620 NameNode
10966 SecondaryNameNode

On Thu, Mar 5, 2015 at 10:48 AM, Azuryy Yu <azuryyyu@gmail.com> wrote:

> Can you share your core-site.xml here?
>
> On Thu, Mar 5, 2015 at 4:32 PM, Alexandru Calin <alexandrucalin29@gmail.com> wrote:
>
>> No change at all. I've added them at the start and at the end of the CLASSPATH;
>> either way it still writes the file on the local fs. I've also restarted Hadoop.
>>
>> On Thu, Mar 5, 2015 at 10:22 AM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>
>>> Yes, you should do it :)
>>>
>>> On Thu, Mar 5, 2015 at 4:17 PM, Alexandru Calin <alexandrucalin29@gmail.com> wrote:
>>>
>>>> Wow, you are so right! It's on the local filesystem! Do I have to
>>>> manually specify hdfs-site.xml and core-site.xml in the CLASSPATH variable?
>>>> Like this:
>>>> CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop/core-site.xml ?
>>>>
>>>> On Thu, Mar 5, 2015 at 10:04 AM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>>>
>>>>> You need to include core-site.xml as well, and I think you can find
>>>>> '/tmp/testfile.txt' on your local disk instead of HDFS.
>>>>>
>>>>> If so, my guess is right: because you don't include core-site.xml,
>>>>> your filesystem scheme defaults to file://, not hdfs://.
>>>>>
>>>>> On Thu, Mar 5, 2015 at 3:52 PM, Alexandru Calin <alexandrucalin29@gmail.com> wrote:
>>>>>
>>>>>> I am trying to run the basic libhdfs example; it compiles OK and
>>>>>> actually runs OK, executing the whole program, but I cannot see the
>>>>>> file on HDFS.
>>>>>>
>>>>>> It is said here that you have to include *the right configuration
>>>>>> directory containing hdfs-site.xml*.
>>>>>>
>>>>>> My hdfs-site.xml:
>>>>>>
>>>>>> <configuration>
>>>>>>     <property>
>>>>>>         <name>dfs.replication</name>
>>>>>>         <value>1</value>
>>>>>>     </property>
>>>>>>     <property>
>>>>>>         <name>dfs.namenode.name.dir</name>
>>>>>>         <value>file:///usr/local/hadoop/hadoop_data/hdfs/namenode</value>
>>>>>>     </property>
>>>>>>     <property>
>>>>>>         <name>dfs.datanode.data.dir</name>
>>>>>>         <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
>>>>>>     </property>
>>>>>> </configuration>
>>>>>>
>>>>>> I generate my classpath with this:
>>>>>>
>>>>>> #!/bin/bash
>>>>>> export CLASSPATH=/usr/local/hadoop/
>>>>>> declare -a subdirs=("hdfs" "tools" "common" "yarn" "mapreduce")
>>>>>> for subdir in "${subdirs[@]}"
>>>>>> do
>>>>>>     for file in $(find /usr/local/hadoop/share/hadoop/$subdir -name "*.jar")
>>>>>>     do
>>>>>>         export CLASSPATH=$CLASSPATH:$file
>>>>>>     done
>>>>>> done
>>>>>>
>>>>>> and I also add export CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop,
>>>>>> where my *hdfs-site.xml* resides.
>>>>>>
>>>>>> My LD_LIBRARY_PATH = /usr/local/hadoop/lib/native:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server
>>>>>>
>>>>>> Code:
>>>>>>
>>>>>> #include "hdfs.h"
>>>>>> #include <fcntl.h>   /* O_WRONLY, O_CREAT */
>>>>>> #include <stdio.h>
>>>>>> #include <stdlib.h>
>>>>>> #include <string.h>
>>>>>>
>>>>>> int main(int argc, char **argv) {
>>>>>>
>>>>>>     hdfsFS fs = hdfsConnect("default", 0);
>>>>>>     const char* writePath = "/tmp/testfile.txt";
>>>>>>     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
>>>>>>     if (!writeFile) {
>>>>>>         printf("Failed to open %s for writing!\n", writePath);
>>>>>>         exit(-1);
>>>>>>     }
>>>>>>     printf("\nfile opened\n");
>>>>>>     char* buffer = "Hello, World!";
>>>>>>     tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
>>>>>>     printf("\nWrote %d bytes\n", (int)num_written_bytes);
>>>>>>     if (hdfsFlush(fs, writeFile)) {
>>>>>>         printf("Failed to 'flush' %s\n", writePath);
>>>>>>         exit(-1);
>>>>>>     }
>>>>>>     hdfsCloseFile(fs, writeFile);
>>>>>>     hdfsDisconnect(fs);
>>>>>>     return 0;
>>>>>> }
>>>>>>
>>>>>> It compiles and runs without error, but I cannot see the file on HDFS.
>>>>>>
>>>>>> I have Hadoop 2.6.0 on Ubuntu 14.04 64bit.
>>>>>>
>>>>>> Any ideas on this?
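
For reference, the setting that decides whether a path like /tmp/testfile.txt resolves to HDFS or to the local filesystem is fs.defaultFS in core-site.xml, which is why the file ends up on local disk when that file is not on the classpath. A minimal sketch of a single-node core-site.xml follows; the hdfs://localhost:9000 address is an assumption and has to match wherever the NameNode actually listens:

<!-- core-site.xml (sketch for a single-node setup; adjust host/port) -->
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>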
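
Rather than globbing every jar by hand, the classpath for a libhdfs program can also be built from the hadoop command itself. This is only a sketch, assuming the /usr/local/hadoop layout from the thread and a hadoop build whose classpath subcommand supports the --glob option (the expansion matters because the JVM embedded by libhdfs does not expand wildcard classpath entries on its own). Note that the configuration directory, not the individual XML files, is what goes on the classpath:

#!/bin/bash
# Sketch: build CLASSPATH and LD_LIBRARY_PATH for a libhdfs program.
# Assumes Hadoop is installed under /usr/local/hadoop and that
# `hadoop classpath --glob` is available in this build.
HADOOP_HOME=/usr/local/hadoop
export CLASSPATH="$HADOOP_HOME/etc/hadoop:$("$HADOOP_HOME/bin/hadoop" classpath --glob)"
export LD_LIBRARY_PATH="$HADOOP_HOME/lib/native:$LD_LIBRARY_PATH"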
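
Another quick way to confirm that the default filesystem is the culprit is to bypass "default" and connect to the NameNode explicitly. The sketch below assumes a NameNode at localhost:9000; substitute whatever fs.defaultFS points at. After running it, hdfs dfs -ls /tmp should show the file if the write really went to HDFS:

#include "hdfs.h"
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Connect to an explicit NameNode instead of "default".
       "localhost" and 9000 are assumptions; use the host/port
       from fs.defaultFS in your core-site.xml. */
    hdfsFS fs = hdfsConnect("localhost", 9000);
    if (!fs) {
        fprintf(stderr, "hdfsConnect to localhost:9000 failed\n");
        return 1;
    }

    const char *writePath = "/tmp/testfile.txt";
    hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY | O_CREAT, 0, 0, 0);
    if (!writeFile) {
        fprintf(stderr, "Failed to open %s for writing\n", writePath);
        hdfsDisconnect(fs);
        return 1;
    }

    const char *buffer = "Hello, World!";
    tSize written = hdfsWrite(fs, writeFile, (const void *)buffer, strlen(buffer) + 1);
    printf("Wrote %d bytes to %s\n", (int)written, writePath);

    hdfsFlush(fs, writeFile);      /* flush before closing */
    hdfsCloseFile(fs, writeFile);
    hdfsDisconnect(fs);
    return 0;
}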