hadoop-hdfs-user mailing list archives

From Geoffry Roberts <geoffry.robe...@gmail.com>
Subject Re: Hadoop 1.0 and WebHDFS
Date Thu, 02 Feb 2012 18:17:07 GMT
All,

I seem to have solved my problem.

In my hdfs-site.xml I had the following:

<property>
  <name>dfs.name.dir</name>
  <value>file:///hdfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>file:///hdfs/data</value>
</property>

The above worked in version 0.21.0, but apparently not in 1.0.

I changed them to /hdfs/name and /hdfs/data respectively and, well, at least
my name node is running.
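
For anyone who hits the same thing, the two properties now read as follows
(the /hdfs/name and /hdfs/data paths are just my layout; substitute your own):

<property>
  <name>dfs.name.dir</name>
  <value>/hdfs/name</value>
</property>
<property>
  <name>dfs.data.dir</name>
  <value>/hdfs/data</value>
</property>

In other words, plain local paths rather than file:// URIs.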

On 2 February 2012 09:48, Geoffry Roberts <geoffry.roberts@gmail.com> wrote:

> Thanks for the quick response.
>
> Here's a snippet from my hdfs-site.xml file.
>
>     <name>dfs.http.address</name>
>     <value>qq000:50070</value>
>
> qq000 is my name node. Is this correct?
>
> I have also noticed that my name node is crashing. It says my HDFS is in
> an inconsistent state. I guess I'll have to (shudder) rebuild it.
>
> The complete contents of hdfs-site.xml are below.
>
> <configuration>
> <property>
>   <name>dfs.replication</name>
>   <value>3</value>
>   <description>Default block replication.
>   The actual number of replications can be specified when the file is
> created.
>   The default is used if replication is not specified in create time.
>   </description>
> </property>
> <property>
>   <name>dfs.name.dir</name>
>   <value>file:///hdfs/name</value>
> </property>
> <property>
>   <name>dfs.data.dir</name>
>   <value>file:///hdfs/data</value>
> </property>
> <property>
>   <name>dfs.hosts</name>
>   <value>includes</value>
>   <final>true</final>
> </property>
> <property>
>   <name>dfs.hosts.exclude</name>
>   <value>excludes</value>
>   <final>true</final>
> </property>
>
> <property>
>   <name>dfs.webhdfs.enabled</name>
>   <value>true</value>
> </property>
> <property>
>     <name>dfs.http.address</name>
>     <value>qq000:50070</value>
>     <description>The name of the default file system.  Either the
>        literal string "local" or a host:port for NDFS.
>     </description>
>     <final>true</final>
> </property>
> </configuration>
>
>
>
> On 2 February 2012 09:30, Harsh J <harsh@cloudera.com> wrote:
>
>> Geoffry,
>>
>> What is your "dfs.http.address" set to? What's your NameNode's HTTP
>> address, basically? Have you tried that one?
>>
>> On Thu, Feb 2, 2012 at 10:54 PM, Geoffry Roberts
>> <geoffry.roberts@gmail.com> wrote:
>> > All,
>> >
>> > I have been using Hadoop 0.21.0 for some time now.  This past Monday I
>> > installed Hadoop 1.0.
>> >
>> > I've been reading about WebHDFS and it sounds like something I could
>> > use, but I can't seem to get it working. I could definitely use some
>> > guidance; I can find little in the way of documentation.
>> >
>> > I added the following property to hdfs-site.xml and bounced Hadoop, but
>> > nothing seems to be listening on port 50070, which, so far as I can
>> > glean, is where WebHDFS should be listening.
>> >
>> > <property>
>> >     <name>dfs.webhdfs.enabled</name>
>> >     <value>true</value>
>> > </property>
>> >
>> > Am I on the correct port? Is there anything else?
>> >
>> > Thanks
>> >
>> > --
>> > Geoffry Roberts
>> >
>>
>>
>>
>> --
>> Harsh J
>> Customer Ops. Engineer
>> Cloudera | http://tiny.cloudera.com/about
>>
>
>
>
> --
> Geoffry Roberts
>
>
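
P.S. On my original question about the port: as far as I can tell, WebHDFS
answers on the name node's HTTP port, i.e. whatever dfs.http.address points
at (qq000:50070 in my config above). A quick sanity check from the shell
would be something like

  curl -i "http://qq000:50070/webhdfs/v1/?op=LISTSTATUS"

which should come back with a JSON listing of the HDFS root once
dfs.webhdfs.enabled is true and the name node is up.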


-- 
Geoffry Roberts
