From: Vitásek, Ladislav
Date: Fri, 10 Feb 2017 09:38:07 +0100
Subject: Re: HDFS Shell tool
To: Ravi Prakash
Cc: user@hadoop.apache.org
Hello Ravi,

I am glad you like it. Why should I use WebHDFS? Our cluster sysops, including me, prefer the command line. :-)

-Vity

2017-02-09 22:21 GMT+01:00 Ravi Prakash:
> Great job Vity!
>
> Thanks a lot for sharing. Have you thought about using WebHDFS?
>
> Thanks
> Ravi
>
> On Thu, Feb 9, 2017 at 7:12 AM, Vitásek, Ladislav wrote:
>
>> Hello Hadoop fans,
>> I would like to inform you about a tool we want to share.
>>
>> We have created a new utility - HDFS Shell - to work with HDFS faster.
>>
>> https://github.com/avast/hdfs-shell
>>
>> *Feature highlights*
>> - The HDFS DFS command starts a new JVM for each command call; HDFS Shell
>> starts it only once - a great speed improvement when you work with HDFS
>> often
>> - Commands can be used in a short form - e.g. *hdfs dfs -ls /* and *ls /*
>> both work
>> - *HDFS path completion using the TAB key*
>> - You can easily add any other HDFS manipulation function
>> - Command history is persisted to a history log
>> (~/.hdfs-shell/hdfs-shell.log)
>> - Support for relative directories, plus the *cd* and *pwd* commands
>> - It can also be launched as a daemon (using UNIX domain sockets)
>> - 100% Java, and it's open source
>>
>> Your suggestions are welcome.
>>
>> -L. Vitasek aka Vity
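[Editor's note: the daemon mode mentioned above follows a common pattern — a long-lived process listens on a UNIX domain socket, so the startup cost (for hdfs-shell, the JVM) is paid once and each subsequent command is sent by a lightweight client. The sketch below illustrates that pattern in Python only; it is not avast/hdfs-shell's actual implementation, and the socket path and "ran:" reply format are invented for the demo.]

```python
import os
import socket
import threading

# Hypothetical socket path for this sketch; hdfs-shell's real path may differ.
SOCK_PATH = "/tmp/demo-shell.sock"

if os.path.exists(SOCK_PATH):
    os.unlink(SOCK_PATH)

# The daemon binds once; this is where the one-time startup cost is paid.
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(SOCK_PATH)
server.listen(1)

def handle_one_client() -> None:
    """Accept one connection, read a command, and answer it."""
    conn, _ = server.accept()
    command = conn.recv(1024).decode()
    # A real daemon would dispatch to HDFS FileSystem calls here; we just echo.
    conn.sendall(("ran: " + command).encode())
    conn.close()

t = threading.Thread(target=handle_one_client)
t.start()

# Each client connects, sends one command, and reads the reply, without
# paying any interpreter/JVM startup cost of its own.
client = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
client.connect(SOCK_PATH)
client.sendall(b"ls /")
reply = client.recv(1024).decode()
client.close()

t.join()
server.close()
os.unlink(SOCK_PATH)
print(reply)  # prints "ran: ls /"
```

The same round trip repeated in a loop stays cheap, which is the point of the daemon design compared with launching `hdfs dfs` once per command.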