hadoop-hdfs-user mailing list archives

From Sergey Bartunov <sbos....@gmail.com>
Subject Problems with namenode on pseudo-distributed configuration
Date Sat, 02 Jul 2011 10:42:17 GMT
Hello. I'm running Hadoop 0.20.203 in pseudo-distributed mode with the
standard configuration from
http://hadoop.apache.org/common/docs/stable/single_node_setup.html
Everything worked fine, but today I booted my Ubuntu 10.10 machine and
suddenly got "could only be replicated to 0 nodes, instead of 1". I
found several workarounds on the internet and simply removed all the
Hadoop directories, re-formatted the namenode, and restarted Hadoop.
After that I could upload several files to HDFS, but none of my code worked.

I always opened files on HDFS by this code:

          // input is a path string; the returned FileSystem depends on its URI scheme
          FileSystem inputFS = FileSystem.get(URI.create(input), configuration);
          Path inputPath = new Path(input);
          inputFS.doSomeThings(inputPath);

and relative paths to files on HDFS, e.g. "some/data", were treated as
HDFS paths (resolved to "/user/sbos/some/data"), but now inputFS has
type LocalFileSystem.
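As far as I understand it, FileSystem.get chooses the filesystem implementation from the URI's scheme, and a relative path like "some/data" has no scheme at all, so Hadoop falls back to whatever fs.default.name says. A quick check with plain java.net.URI (nothing Hadoop-specific; the hdfs://localhost:9000 authority below is just the value from the single-node guide) illustrates this:

```java
import java.net.URI;

public class SchemeCheck {
    public static void main(String[] args) {
        // A relative path carries no scheme, so FileSystem.get
        // must consult the fs.default.name setting instead.
        URI relative = URI.create("some/data");
        System.out.println(relative.getScheme()); // prints "null"

        // An explicit hdfs:// URI always selects the HDFS filesystem,
        // regardless of what fs.default.name is set to.
        URI absolute = URI.create("hdfs://localhost:9000/user/sbos/some/data");
        System.out.println(absolute.getScheme()); // prints "hdfs"
    }
}
```

So if fs.default.name resolves to file:/// (the built-in default), FileSystem.get would hand back a LocalFileSystem exactly as I'm seeing.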

Could someone advise me how to force Hadoop to treat my paths as HDFS
paths, or just help me understand what happened?
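For reference, my conf/core-site.xml should contain the default-filesystem entry from the single-node setup guide (the localhost:9000 value is the guide's, copied here from memory). If this file were missing from the classpath of my program, fs.default.name would silently fall back to file:/// :

```xml
<!-- conf/core-site.xml, as given in the single-node setup guide -->
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
```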
