Subject: Hadoop 1.2.1 corrupt after restart from out of heap memory exception
From: Chih-Hsien Wu
To: user@hadoop.apache.org
Date: Wed, 23 Oct 2013 15:20:08 -0400
I uploaded data into the distributed file system. The cluster summary shows there is enough heap memory. However, whenever I try to run a Mahout 0.8 command, the system throws an out-of-heap-memory exception. I shut down the Hadoop cluster and allocated more memory to mapred.child.java.opts. When I then restarted the cluster, the namenode was corrupted. Any help is appreciated.
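For context, the heap increase described above would normally be made in mapred-site.xml; a minimal sketch, assuming a 1.x-style config (the -Xmx1024m value is illustrative, not taken from the post):

```xml
<!-- mapred-site.xml: heap options for each map/reduce child task JVM -->
<!-- The -Xmx value below is an example, not the poster's actual setting -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx1024m</value>
</property>
```

Note that this property only sizes the task JVMs spawned by the TaskTrackers; it does not affect the NameNode daemon's heap, which is governed by HADOOP_HEAPSIZE / HADOOP_NAMENODE_OPTS in hadoop-env.sh.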