Subject: Re: FSImage from uncompress to compress change
From: Yanbo Liang
To: user@hadoop.apache.org
Date: Tue, 16 Jun 2015 18:54:25 +0800

As far as I know, HDFS gets the image compression information from the image file itself when loading the fsimage, so the NameNode can correctly load an existing fsimage even if you set a different compression codec. I strongly recommend doing these operations with the same Hadoop version, and running "hdfs dfsadmin -saveNamespace" to write a new, compressed fsimage and reduce storage size.

2015-06-16 16:28 GMT+08:00 Xiaoyu Wang <wangxiaoyu1@jd.com>:

> Hi all!
> My Hadoop version is 2.0.0.
> My Hadoop configuration has dfs.image.compress=false.
> The cluster has now been running for a long time and the fsimage size
> keeps growing, so I want to compress the fsimage.
> Can I change dfs.image.compress=true and
> dfs.image.compression.codec=xx.Codec and then restart the NameNode?
> Can the uncompressed fsimage still be read? Will any data be lost?
>
> Thanks!
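The saveNamespace step recommended above can be sketched as the following sequence, run on a live cluster as the HDFS superuser; the NameNode must be in safe mode before the namespace can be saved:

```shell
# Enter safe mode so the namespace can be checkpointed consistently
hdfs dfsadmin -safemode enter

# Write a fresh fsimage to the NameNode's dfs.namenode.name.dir;
# with dfs.image.compress=true the new image is written compressed
hdfs dfsadmin -saveNamespace

# Return the cluster to normal operation
hdfs dfsadmin -safemode leave
```

Note that these commands require a running cluster and cannot be executed standalone; saveNamespace will refuse to run unless safe mode is on.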
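For reference, a minimal hdfs-site.xml sketch of the two settings discussed in this thread. The codec class below is an illustrative example, not taken from the original question (which left the codec unspecified as xx.Codec); substitute whichever CompressionCodec implementation you intend to use:

```xml
<!-- hdfs-site.xml: enable fsimage compression on the NameNode.
     The codec value here is an assumed example, not from the thread. -->
<property>
  <name>dfs.image.compress</name>
  <value>true</value>
</property>
<property>
  <name>dfs.image.compression.codec</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec</value>
</property>
```

After changing these properties, restart the NameNode; per the answer above, the previously written uncompressed fsimage is still readable because the compression information is stored in the image file itself.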