Date: Sun, 26 Aug 2012 13:32:21 -0400
Subject: Is there a way to turn off MAPREDUCE-2415?
From: Koert Kuipers
To: user@hadoop.apache.org

We have smaller nodes (4 to 6 disks), and we used to write logs to the same disk as the OS. So if that disk goes, I don't really care about the tasktracker failing. Also, the fact that logs were written to a single partition meant I could make sure they would not grow too large if someone turned on overly verbose logging for a large job.

With MAPREDUCE-2415, a job that does a massive amount of logging can fill up all of the mapred.local.dir directories, which in our case are on the same partition as the HDFS data dirs, so now faulty logging can fill up HDFS storage, which I really don't like. Any ideas?
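The only partial workaround I can think of (just a sketch on my part, and I'm not sure it's sufficient) is to cap the retained userlog size per task attempt with the MRv1 property mapred.userlog.limit.kb in mapred-site.xml; the 512000 value below is only an illustrative number, and depending on the version the truncation may only be applied when the task finishes, so it isn't a hard real-time guarantee against a disk filling up mid-task:

    <!-- Cap each task attempt's retained userlogs at roughly 500 MB (value is in KB). -->
    <!-- The default of 0 disables the cap entirely. -->
    <property>
      <name>mapred.userlog.limit.kb</name>
      <value>512000</value>
    </property>

That would limit how much a single runaway job can leave behind, but it doesn't turn off the MAPREDUCE-2415 behavior of spreading logs across the mapred.local.dir disks, which is what I'm really after.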