hadoop-user mailing list archives

From Alexandru Calin <alexandrucali...@gmail.com>
Subject Re: File is not written on HDFS after running libhdfs C API
Date Thu, 05 Mar 2015 08:32:13 GMT
No change at all. I've added them at both the start and the end of the
CLASSPATH; either way it still writes the file to the local filesystem. I've
also restarted Hadoop.
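
If the classpath route keeps failing, one way to take the config lookup out
of the picture entirely is to pin the NameNode explicitly with the
hdfsBuilder API instead of hdfsConnect("default", 0). A minimal sketch,
assuming a NameNode at localhost:9000 (adjust to whatever your fs.defaultFS
points at):

#include "hdfs.h"
#include <stdio.h>

int main(void) {
    /* Name the NameNode directly so the file:// fallback cannot kick in.
     * Host "localhost" and port 9000 are assumptions for a single-node
     * setup; use your actual NameNode address. */
    struct hdfsBuilder *bld = hdfsNewBuilder();
    hdfsBuilderSetNameNode(bld, "localhost");
    hdfsBuilderSetNameNodePort(bld, 9000);

    hdfsFS fs = hdfsBuilderConnect(bld);  /* consumes the builder */
    if (!fs) {
        fprintf(stderr, "Failed to connect to HDFS\n");
        return 1;
    }
    printf("Connected to hdfs://localhost:9000\n");
    hdfsDisconnect(fs);
    return 0;
}

If this variant writes to HDFS while the "default" one does not, the problem
is almost certainly the configuration files not being picked up.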

On Thu, Mar 5, 2015 at 10:22 AM, Azuryy Yu <azuryyyu@gmail.com> wrote:

> Yes, you should do it :)
>
> On Thu, Mar 5, 2015 at 4:17 PM, Alexandru Calin <
> alexandrucalin29@gmail.com> wrote:
>
>> Wow, you are so right! It's on the local filesystem! Do I have to
>> manually specify hdfs-site.xml and core-site.xml in the CLASSPATH
>> variable? Like this:
>> CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop/core-site.xml
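>>
>> (Side note: Hadoop's Configuration class loads core-site.xml and
>> hdfs-site.xml as classpath *resources*, so what needs to be on the
>> CLASSPATH is the directory that contains them, not the individual file
>> paths, which classpath lookup ignores. Something like:
>>
>> export CLASSPATH=/usr/local/hadoop/etc/hadoop:$CLASSPATH
>>
>> using the conf directory mentioned later in this thread.)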
>>
>> On Thu, Mar 5, 2015 at 10:04 AM, Azuryy Yu <azuryyyu@gmail.com> wrote:
>>
>>> You need to include core-site.xml as well, and I think you'll find
>>> '/tmp/testfile.txt' on your local disk instead of on HDFS.
>>>
>>> If so, my guess is right: because you don't include core-site.xml, your
>>> filesystem scheme defaults to file://, not hdfs://.
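>>>
>>> For reference, a minimal core-site.xml for a single-node setup might
>>> look like this (the localhost:9000 address is an assumption; match it
>>> to your NameNode):
>>>
>>> <configuration>
>>>     <property>
>>>         <name>fs.defaultFS</name>
>>>         <value>hdfs://localhost:9000</value>
>>>     </property>
>>> </configuration>
>>>
>>> Without this file on the classpath, the client uses the built-in
>>> default filesystem, which is the local file:// one.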
>>>
>>>
>>>
>>> On Thu, Mar 5, 2015 at 3:52 PM, Alexandru Calin <
>>> alexandrucalin29@gmail.com> wrote:
>>>
>>>> I am trying to run the basic libhdfs example. It compiles and actually
>>>> runs OK, executing the whole program, but I cannot see the file on HDFS.
>>>>
>>>> The documentation here <http://hadoop.apache.org/docs/r1.2.1/libhdfs.html>
>>>> says that you have to include *the right configuration directory
>>>> containing hdfs-site.xml*.
>>>>
>>>> My hdfs-site.xml:
>>>>
>>>> <configuration>
>>>>     <property>
>>>>         <name>dfs.replication</name>
>>>>         <value>1</value>
>>>>     </property>
>>>>     <property>
>>>>       <name>dfs.namenode.name.dir</name>
>>>>       <value>file:///usr/local/hadoop/hadoop_data/hdfs/namenode</value>
>>>>     </property>
>>>>     <property>
>>>>       <name>dfs.datanode.data.dir</name>
>>>>       <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
>>>>       <value>file:///usr/local/hadoop/hadoop_store/hdfs/datanode</value>
>>>>     </property>
>>>> </configuration>
>>>>
>>>> I generate my classpath with this:
>>>>
>>>> #!/bin/bash
>>>> # Put every Hadoop jar on the CLASSPATH.
>>>>
>>>> export CLASSPATH=/usr/local/hadoop/
>>>> declare -a subdirs=("hdfs" "tools" "common" "yarn" "mapreduce")
>>>> for subdir in "${subdirs[@]}"
>>>> do
>>>>         for file in $(find /usr/local/hadoop/share/hadoop/$subdir -name "*.jar")
>>>>         do
>>>>                 export CLASSPATH=$CLASSPATH:$file
>>>>         done
>>>> done
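>>>>
>>>> As an aside, newer Hadoop builds can generate this list for you,
>>>> assuming your version's classpath subcommand supports the --glob flag:
>>>>
>>>> export CLASSPATH=$(/usr/local/hadoop/bin/hadoop classpath --glob)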
>>>>
>>>> and I also add export CLASSPATH=$CLASSPATH:/usr/local/hadoop/etc/hadoop,
>>>> which is where my *hdfs-site.xml* resides.
>>>>
>>>> My LD_LIBRARY_PATH is
>>>> /usr/local/hadoop/lib/native:/usr/lib/jvm/java-7-openjdk-amd64/jre/lib/amd64/server
>>>> Code:
>>>>
>>>> #include "hdfs.h"
>>>> #include <stdio.h>
>>>> #include <string.h>
>>>> #include <stdlib.h>
>>>>
>>>> int main(int argc, char **argv) {
>>>>
>>>>     hdfsFS fs = hdfsConnect("default", 0);
>>>>     const char* writePath = "/tmp/testfile.txt";
>>>>     hdfsFile writeFile = hdfsOpenFile(fs, writePath, O_WRONLY|O_CREAT, 0, 0, 0);
>>>>     if (!writeFile) {
>>>>         printf("Failed to open %s for writing!\n", writePath);
>>>>         exit(-1);
>>>>     }
>>>>     printf("\nfile opened\n");
>>>>     char* buffer = "Hello, World!";
>>>>     tSize num_written_bytes = hdfsWrite(fs, writeFile, (void*)buffer, strlen(buffer)+1);
>>>>     printf("\nWrote %d bytes\n", (int)num_written_bytes);
>>>>     if (hdfsFlush(fs, writeFile)) {
>>>>         printf("Failed to 'flush' %s\n", writePath);
>>>>         exit(-1);
>>>>     }
>>>>     hdfsCloseFile(fs, writeFile);
>>>>     hdfsDisconnect(fs);
>>>>     return 0;
>>>> }
>>>>
>>>> It compiles and runs without error, but I cannot see the file on HDFS.
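>>>>
>>>> A quick sanity check: ask the connected filesystem for its working
>>>> directory, which shows what "default" actually resolved to. A minimal
>>>> sketch:
>>>>
>>>> #include "hdfs.h"
>>>> #include <stdio.h>
>>>>
>>>> int main(void) {
>>>>     hdfsFS fs = hdfsConnect("default", 0);
>>>>     if (!fs) return 1;
>>>>     char cwd[1024];
>>>>     /* On a real HDFS connection this typically prints a full
>>>>      * hdfs:// URI; a bare local path or file:// URI means the
>>>>      * client fell back to the local filesystem. */
>>>>     if (hdfsGetWorkingDirectory(fs, cwd, sizeof(cwd)))
>>>>         printf("working directory: %s\n", cwd);
>>>>     hdfsDisconnect(fs);
>>>>     return 0;
>>>> }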
>>>>
>>>> I have Hadoop 2.6.0 on Ubuntu 14.04, 64-bit.
>>>>
>>>> Any ideas on this?
>>>>
>>>>
>>>>
>>>
>>
>
