hadoop-hdfs-user mailing list archives

From Harsh J <ha...@cloudera.com>
Subject Re: Hadoop 1.0 and WebHDFS
Date Thu, 02 Feb 2012 21:40:26 GMT
Yeah, I was afraid the API availability would be your next question.
The new-API MultipleOutputs should be in the 1.0.1 micro update, but
there could still be lots of things missing.
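
If it helps, the new-API class is
org.apache.hadoop.mapreduce.lib.output.MultipleOutputs (the old-API
org.apache.hadoop.mapred.lib.MultipleOutputs is a separate class), and
using it looks roughly like the sketch below; the class name and the
"summary" output name are just placeholders:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.output.MultipleOutputs;

public class SumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

  private MultipleOutputs<Text, IntWritable> mos;

  // In the driver you would register the named output once, e.g.:
  //   MultipleOutputs.addNamedOutput(job, "summary",
  //       TextOutputFormat.class, Text.class, IntWritable.class);

  @Override
  protected void setup(Context context) {
    mos = new MultipleOutputs<Text, IntWritable>(context);
  }

  @Override
  protected void reduce(Text key, Iterable<IntWritable> values, Context context)
      throws IOException, InterruptedException {
    int sum = 0;
    for (IntWritable v : values) {
      sum += v.get();
    }
    // Writes to files named summary-r-* alongside the usual part-r-* output.
    mos.write("summary", key, new IntWritable(sum));
  }

  @Override
  protected void cleanup(Context context) throws IOException, InterruptedException {
    mos.close();
  }
}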

On Fri, Feb 3, 2012 at 3:00 AM, Geoffry Roberts
<geoffry.roberts@gmail.com> wrote:
> A downgrade! I wouldn't have guessed. Thanks
>
> Do you know if anything happened to the class MultipleOutputs?
>
> I just tried running some of my old MR code against 1.0 and it seems
> MultipleOutputs cannot be found in the new hadoop-core-1.0.0.jar.
>
>
> On 2 February 2012 10:45, Harsh J <harsh@cloudera.com> wrote:
>>
>> Note that 0.21 to 1.0 is "sort-of" a downgrade in some ways,
>> considering 1.0 is a rename of the 0.20-series. You probably want to
>> review a lot of config params since those may not be present in 1.0.
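>>
>> A few renames off the top of my head (illustrative, not exhaustive). If
>> memory serves, the 0.21-style names on the left are simply ignored by
>> 1.0, so you'd want the 0.20-style names on the right:
>>
>>   fs.defaultFS            ->  fs.default.name
>>   dfs.namenode.name.dir   ->  dfs.name.dir
>>   dfs.datanode.data.dir   ->  dfs.data.dir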
>>
>> On Thu, Feb 2, 2012 at 11:47 PM, Geoffry Roberts
>> <geoffry.roberts@gmail.com> wrote:
>> > All,
>> >
>> > I seem to have solved my problem.
>> >
>> > In my hdfs-site.xml I had the following:
>> >
>> > <property>
>> >   <name>dfs.name.dir</name>
>> >   <value>file:///hdfs/name</value>
>> > </property>
>> > <property>
>> >   <name>dfs.data.dir</name>
>> >   <value>file:///hdfs/data</value>
>> > </property>
>> >
>> > The above worked on version 0.21.0, apparently not in 1.0.
>> >
>> > I changed them to /hdfs/name and /hdfs/data respectively and, well, at
>> > least my name node is running.
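>> >
>> > For the record, the working values are just the bare paths, i.e.:
>> >
>> > <property>
>> >   <name>dfs.name.dir</name>
>> >   <value>/hdfs/name</value>
>> > </property>
>> > <property>
>> >   <name>dfs.data.dir</name>
>> >   <value>/hdfs/data</value>
>> > </property>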
>> >
>> >
>> > On 2 February 2012 09:48, Geoffry Roberts <geoffry.roberts@gmail.com>
>> > wrote:
>> >>
>> >> Thanks for the quick response.
>> >>
>> >> Here's a snippet from my hdfs-site.xml file.
>> >>
>> >>     <name>dfs.http.address</name>
>> >>     <value>qq000:50070</value>
>> >>
>> >> qq000 is my name node. Is this correct?
>> >>
>> >> I have also noticed that my name node is crashing.  It says my HDFS is
>> >> in an inconsistent state.  I guess I'll have to (shudder) rebuild it.
>> >>
>> >> The complete contents of hdfs-site.xml are below.
>> >>
>> >> <configuration>
>> >> <property>
>> >>   <name>dfs.replication</name>
>> >>   <value>3</value>
>> >>   <description>Default block replication.
>> >>   The actual number of replications can be specified when the file is
>> >> created.
>> >>   The default is used if replication is not specified in create time.
>> >>   </description>
>> >> </property>
>> >> <property>
>> >>   <name>dfs.name.dir</name>
>> >>   <value>file:///hdfs/name</value>
>> >> </property>
>> >> <property>
>> >>   <name>dfs.data.dir</name>
>> >>   <value>file:///hdfs/data</value>
>> >> </property>
>> >> <property>
>> >>   <name>dfs.hosts</name>
>> >>   <value>includes</value>
>> >>   <final>true</final>
>> >> </property>
>> >> <property>
>> >>   <name>dfs.hosts.exclude</name>
>> >>   <value>excludes</value>
>> >>   <final>true</final>
>> >> </property>
>> >>
>> >> <property>
>> >>   <name>dfs.webhdfs.enabled</name>
>> >>   <value>true</value>
>> >> </property>
>> >> <property>
>> >>     <name>dfs.http.address</name>
>> >>     <value>qq000:50070</value>
>> >>     <description>The name of the default file system.  Either the
>> >>        literal string "local" or a host:port for NDFS.
>> >>     </description>
>> >>     <final>true</final>
>> >> </property>
>> >> </configuration>
>> >>
>> >>
>> >>
>> >> On 2 February 2012 09:30, Harsh J <harsh@cloudera.com> wrote:
>> >>>
>> >>> Geoffry,
>> >>>
>> >>> What is your "dfs.http.address" set to? What's your NameNode's HTTP
>> >>> address, basically? Have you tried that one?
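>> >>>
>> >>> For what it's worth, WebHDFS is served by the NameNode's embedded HTTP
>> >>> server, i.e. whatever dfs.http.address points at (port 50070 by
>> >>> default). Once that page is reachable, a request roughly like the one
>> >>> below should come back with JSON; the path and op are only an example:
>> >>>
>> >>>   http://<namenode-host>:50070/webhdfs/v1/tmp?op=LISTSTATUS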
>> >>>
>> >>> On Thu, Feb 2, 2012 at 10:54 PM, Geoffry Roberts
>> >>> <geoffry.roberts@gmail.com> wrote:
>> >>> > All,
>> >>> >
>> >>> > I have been using hadoop 0.21.0 for some time now.  This past
>> >>> > Monday I installed hadoop 1.0.
>> >>> >
>> >>> > I've been reading about WebHDFS and it sounds like something I could
>> >>> > use, but I can't seem to get it working.  I could definitely use
>> >>> > some guidance.  I can find little in the way of documentation.
>> >>> >
>> >>> > I added the following property to hdfs-site.xml and bounced hadoop,
>> >>> > but nothing seems to be listening on port 50070, which, as far as I
>> >>> > can glean, is where WebHDFS should be listening.
>> >>> >
>> >>> > <property>
>> >>> >     <name>dfs.webhdfs.enabled</name>
>> >>> >     <value>true</value>
>> >>> > </property>
>> >>> >
>> >>> > Am I on the correct port? Is there anything else?
>> >>> >
>> >>> > Thanks
>> >>> >
>> >>> > --
>> >>> > Geoffry Roberts
>> >>> >
>> >>>
>> >>>
>> >>>
>> >>> --
>> >>> Harsh J
>> >>> Customer Ops. Engineer
>> >>> Cloudera | http://tiny.cloudera.com/about
>> >>
>> >>
>> >>
>> >>
>> >> --
>> >> Geoffry Roberts
>> >>
>> >
>> >
>> >
>> > --
>> > Geoffry Roberts
>> >
>>
>>
>>
>> --
>> Harsh J
>> Customer Ops. Engineer
>> Cloudera | http://tiny.cloudera.com/about
>
>
>
>
> --
> Geoffry Roberts
>



-- 
Harsh J
Customer Ops. Engineer
Cloudera | http://tiny.cloudera.com/about
