From: "Steve Loughran (JIRA)"
To: core-dev@hadoop.apache.org
Date: Mon, 13 Oct 2008 06:00:44 -0700 (PDT)
Subject: [jira] Commented: (HADOOP-3999) Need to add host capabilities / abilities

    [ https://issues.apache.org/jira/browse/HADOOP-3999?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12639040#action_12639040 ]

Steve Loughran commented on HADOOP-3999:
----------------------------------------

1. This would be good if it could be easily extended: rather than a hard-coded set of values, clients could add other (key, value) info for schedulers to use, such as expected availability for cycle-scavenging task trackers and other extensions that custom schedulers could pick up. It could also integrate with diagnostics. (Sketches of what such an extensible report and a matching job profile might look like are appended at the end of this message.)

2. There is a danger here of trying to build a full grid scheduler. Why a danger? It is hard to get right, and there are other tools and products that already do a lot of this. Hadoop likes to push work near the data and works best if the work is all Java.

3. Developers are surprisingly bad at estimating workload, especially if there are a few layers between you and the MR jobs. The best metric for how long-running, CPU-intensive or IO-intensive a job will be is "what it was like last time".

> Need to add host capabilities / abilities
> -----------------------------------------
>
>           Key: HADOOP-3999
>           URL: https://issues.apache.org/jira/browse/HADOOP-3999
>       Project: Hadoop Core
>    Issue Type: Improvement
>    Components: metrics
>   Environment: Any
>      Reporter: Kai Mosebach
>
> The MapReduce paradigm is limited to running jobs against the lowest common denominator of all nodes in the cluster.
> On the one hand this is intended (cloud computing: throw simple jobs in, never mind who runs them).
> On the other hand it limits the possibilities quite a lot; for instance, if data could be, or needs to be, fed to a third-party interface such as MATLAB, R or BioConductor, many more jobs could be solved via Hadoop.
> Furthermore it could be interesting to know about the OS, the architecture and the performance of a node in relation to the rest of the cluster (a performance ranking).
> For example, if a sub-cluster of nodes with very fast CPUs, or a sub-cluster of nodes with very fast disk I/O, were known, the job tracker could select those nodes according to a so-called job profile (e.g. "my job is a heavy computing job" or "a heavy disk I/O job"), which a developer can usually estimate in advance.
> To achieve this, node capabilities could be introduced and stored in the DFS, giving you
> a1.) basic information about each node (OS, architecture)
> a2.) more sophisticated info (additional software, path to the software, version)
> a3.) performance figures collected about the node (disk I/O, CPU power, memory)
> a4.) network throughput to neighbouring hosts, which might allow a network performance map of the cluster to be generated
> This would allow you to
> b1.) generate jobs that have a profile (computing-intensive, disk-I/O-intensive, net-I/O-intensive)
> b2.) generate jobs that have software dependencies (run on Linux only, run on nodes with MATLAB only)
> b3.) generate a performance map of the cluster (sub-clusters of fast disk nodes, sub-clusters of fast CPU nodes, a network-speed-relation map between nodes)
> From step b3) you could then even derive statistical information which could in turn be fed back to the DFS NameNode to decide whether certain data should be stored on fast-disk sub-clusters only (that might need to be a tool outside of Hadoop core, though).

--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.
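
As a rough illustration of point 1 in the comment above, here is a minimal Java sketch of an extensible (key, value) capability report. The class and key names (NodeCapabilityReport, "disk.seq.read.mb_s" and so on) are made up for illustration and are not part of any existing Hadoop API; the numeric values are placeholders rather than real measurements.

import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch, not an existing Hadoop class: a node-side report of
// host capabilities kept as free-form (key, value) pairs, so that custom
// schedulers can add their own entries without changing the core format.
public class NodeCapabilityReport {

    private final Map<String, String> capabilities = new LinkedHashMap<String, String>();

    // Record one capability, e.g. ("os.name", "Linux") or ("matlab.path", "/opt/matlab").
    public void put(String key, String value) {
        capabilities.put(key, value);
    }

    // Look up a capability; returns null when the node never reported it.
    public String get(String key) {
        return capabilities.get(key);
    }

    public Map<String, String> asMap() {
        return capabilities;
    }

    public static void main(String[] args) {
        NodeCapabilityReport report = new NodeCapabilityReport();
        // a1/a2: basic platform and software information
        report.put("os.name", System.getProperty("os.name"));
        report.put("os.arch", System.getProperty("os.arch"));
        report.put("software.r.path", "/usr/bin/R");       // placeholder value
        // a3/a4: figures a node-side agent might measure
        report.put("disk.seq.read.mb_s", "72");            // placeholder value
        report.put("net.throughput.rack.mb_s", "110");     // placeholder value
        // extension point from the comment: a cycle-scavenging task tracker
        // could advertise its expected availability for custom schedulers
        report.put("expected.availability.hours", "8");
        System.out.println(report.asMap());
    }
}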
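
Under the same assumption, a sketch of how a scheduler might use such a report to honour a job profile: hard software/OS requirements (b2) are matched exactly, while a numeric metric such as disk throughput (b1/b3) is used only to rank eligible nodes. Again, JobProfileMatcher and the key names are hypothetical, not part of Hadoop.

import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch, not part of Hadoop: filtering and ranking nodes
// against a job profile built from the (key, value) capabilities above.
public class JobProfileMatcher {

    // Hard requirements (b2): every (key, value) pair in the profile must
    // match the node's reported capability exactly.
    public static boolean meetsRequirements(Map<String, String> profile,
                                            Map<String, String> node) {
        for (Map.Entry<String, String> required : profile.entrySet()) {
            if (!required.getValue().equals(node.get(required.getKey()))) {
                return false;
            }
        }
        return true;
    }

    // Soft ranking (b1/b3): higher values of one numeric metric score better;
    // nodes that never reported the metric score zero.
    public static double score(String metricKey, Map<String, String> node) {
        String value = node.get(metricKey);
        return value == null ? 0.0 : Double.parseDouble(value);
    }

    public static void main(String[] args) {
        Map<String, String> profile = new HashMap<String, String>();
        profile.put("os.name", "Linux");                 // b2: OS dependency
        Map<String, String> node = new HashMap<String, String>();
        node.put("os.name", "Linux");
        node.put("disk.seq.read.mb_s", "72");            // placeholder value
        System.out.println("eligible:   " + meetsRequirements(profile, node));
        System.out.println("disk score: " + score("disk.seq.read.mb_s", node));
    }
}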