hadoop-common-user mailing list archives

From dcave555 <dcave...@gmail.com>
Subject writing file
Date Sat, 13 Oct 2007 15:05:55 GMT

Hello all

I am new to Hadoop.

I am trying to write a file to a single-node cluster, and I get this
exception when I close the output stream:

java.io.IOException: CreateProcess: df -k C:\usr\local\hadoop-datastore\hadoop-hadoop\dfs\tmp error=2
	at java.lang.ProcessImpl.create(Native Method)
	at java.lang.ProcessImpl.<init>(Unknown Source)
	at java.lang.ProcessImpl.start(Unknown Source)
	at java.lang.ProcessBuilder.start(Unknown Source)
	at java.lang.Runtime.exec(Unknown Source)
	at java.lang.Runtime.exec(Unknown Source)
	at org.apache.hadoop.fs.DF.doDF(DF.java:60)
	at org.apache.hadoop.fs.DF.<init>(DF.java:53)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.confChanged(LocalDirAllocator.java:198)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.getLocalPathForWrite(LocalDirAllocator.java:235)
	at org.apache.hadoop.fs.LocalDirAllocator$AllocatorPerContext.createTmpFileForWrite(LocalDirAllocator.java:276)
	at org.apache.hadoop.fs.LocalDirAllocator.createTmpFileForWrite(LocalDirAllocator.java:155)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.newBackupFile(DFSClient.java:1475)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.openBackupStream(DFSClient.java:1442)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.writeChunk(DFSClient.java:1600)
	at org.apache.hadoop.fs.FSOutputSummer.writeChecksumChunk(FSOutputSummer.java:140)
	at org.apache.hadoop.fs.FSOutputSummer.flushBuffer(FSOutputSummer.java:122)
	at org.apache.hadoop.dfs.DFSClient$DFSOutputStream.close(DFSClient.java:1739)
	at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:49)
	at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:64)
	at Test1.main(Test1.java:23)
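
From the trace, the failure is not in HDFS itself but in org.apache.hadoop.fs.DF:
before staging the written chunk in a local backup file (newBackupFile in the
trace), the client checks the free space on hadoop.tmp.dir by shelling out to
the Unix df command. On Windows, "CreateProcess: df -k ... error=2" means the
executable was not found, i.e. there is no df on the PATH (it normally comes
from Cygwin). If I understand it right, the pattern that blows up is roughly
this (my own reconstruction, not the actual Hadoop source; DfCheck is just a
test class I wrote):

import java.io.BufferedReader;
import java.io.IOException;
import java.io.InputStreamReader;

public class DfCheck {
    public static void main(String[] args) throws IOException {
        String dir = "C:\\usr\\local\\hadoop-datastore\\hadoop-hadoop\\dfs\\tmp";
        // Runtime.exec launches "df -k <dir>"; on Windows without Cygwin's
        // df.exe on the PATH this throws:
        //   java.io.IOException: CreateProcess: df -k ... error=2
        Process p = Runtime.getRuntime().exec(new String[] {"df", "-k", dir});
        BufferedReader in = new BufferedReader(new InputStreamReader(p.getInputStream()));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(line);  // DF parses this output for the free-space figure
        }
    }
}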



My test is:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataInputStream;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

// ...inside Test1.main():
Configuration configuration = new Configuration();
FileSystem fileSystem = FileSystem.get(configuration);
Path path = new Path("/testfile");

// writing:
FSDataOutputStream dataOutputStream = fileSystem.create(path);
dataOutputStream.writeUTF("hello world");
dataOutputStream.close();   // <-- the exception is thrown here (Test1.java:23)

// reading:
FSDataInputStream dataInputStream = fileSystem.open(path);
System.out.println(dataInputStream.readUTF());
dataInputStream.close();

fileSystem.close();
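
Once close() works, I plan to double-check that the file really landed in
HDFS with a small follow-up (my own addition, using the same fileSystem and
path as above):

// Sanity check after the write: /testfile should now exist in HDFS.
// (writeUTF stores "hello world" with a 2-byte length prefix, 13 bytes total.)
if (fileSystem.exists(path)) {
    System.out.println("/testfile exists");
}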

I added hadoop-site.xml to the classpath:


<configuration>
 
<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop-datastore/hadoop-hadoop</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
  <name>fs.default.name</name>
  <value>hdfs://172.16.50.13:54310</value>
  <description>The name of the default file system.  A URI whose
  scheme and authority determine the FileSystem implementation.  The
  uri's scheme determines the config property (fs.SCHEME.impl) naming
  the FileSystem implementation class.  The uri's authority is used to
  determine the host, port, etc. for a filesystem.</description>
</property>
 
<property>
  <name>mapred.job.tracker</name>
  <value>172.16.50.13:54311</value>
  <description>The host and port that the MapReduce job tracker runs
  at.  If "local", then jobs are run in-process as a single map
  and reduce task.
  </description>
</property>
 
<property>
  <name>dfs.replication</name>
  <value>1</value>
  <description>Default block replication.
  The actual number of replications can be specified when the file is
  created. The default is used if replication is not specified at create time.
  </description>
</property>
</configuration>
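
To double-check that this file is actually being picked up from the classpath
(and that I really get HDFS rather than the local filesystem), I print the
resolved settings with a small check class (ConfCheck is my own sketch, not
part of the failing test):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class ConfCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Should print hdfs://172.16.50.13:54310; if it prints the built-in
        // default instead, hadoop-site.xml is not on the classpath.
        System.out.println("fs.default.name = " + conf.get("fs.default.name"));
        System.out.println("hadoop.tmp.dir  = " + conf.get("hadoop.tmp.dir"));
        // DistributedFileSystem here means HDFS; LocalFileSystem means the
        // site config was not applied.
        System.out.println(FileSystem.get(conf).getClass().getName());
    }
}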



Please help me.
Thanks.


