hadoop-user mailing list archives

From Alexey Eremihin <a.eremi...@corp.badoo.com.INVALID>
Subject Re: Is Hadoop basically not suitable for a photo archive?
Date Mon, 04 Sep 2017 19:32:27 GMT
Hi Ralph,
In general Hadoop is able to store such data. Even HAR archives can be
used in conjunction with WebHDFS (by passing offset and length
parameters). What are your reading requirements? The filesystem metadata
is not distributed, and reading data is limited by the performance of the
HDFS NameNode. So if you would like to download files at a high request
rate, that would not work well.
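
To make the WebHDFS point concrete, here is a minimal sketch of fetching a
single photo, or just a byte range of it, over the REST API. The host, port
and file path are hypothetical placeholders; the offset and length query
parameters are the standard options of the WebHDFS OPEN operation.

# Minimal sketch: read a photo (or a slice of it) through WebHDFS.
# Host, port and file path below are hypothetical placeholders.
import requests

NAMENODE = "http://namenode.example.com:50070"  # assumed NameNode HTTP endpoint
PHOTO = "/archive/2017/photo-000123.jpg"        # assumed HDFS path

def read_file(path, offset=None, length=None):
    """Fetch (part of) an HDFS file via the WebHDFS OPEN operation."""
    params = {"op": "OPEN"}
    if offset is not None:
        params["offset"] = offset
    if length is not None:
        params["length"] = length
    # The NameNode answers with a redirect to a DataNode; requests follows it.
    resp = requests.get(NAMENODE + "/webhdfs/v1" + path, params=params)
    resp.raise_for_status()
    return resp.content

if __name__ == "__main__":
    whole = read_file(PHOTO)            # entire photo
    head = read_file(PHOTO, 0, 4096)    # first 4 KiB only
    print(len(whole), len(head))

The same offset/length mechanism is what lets a single logical file be
served out of a larger HAR part file once its position is known from the
archive index.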

On Monday, September 4, 2017, Ralph Soika <ralph.soika@imixs.com> wrote:

> Hi,
>
> I know that the small-file problem has been discussed frequently,
> not only on this mailing list.
> I have also read some books about Hadoop and I have started to
> work with it. But I still do not really understand whether Hadoop is the
> right choice for my goals.
>
> To simplify my problem domain, I would like to describe it as a photo
> archive use case:
>
> - An external application produces about 10 million photos in one year.
> The files contain important business critical data.
> - A single photo file has a size between 1 and 10 MB.
> - The photos need to be stored over several years (10-30 years).
> - The data store should support replication over several servers.
> - A checksum concept is needed to guarantee the data integrity of all
> files over a long period of time.
> - To write and read the files, a REST API is preferred (a sketch touching
> on these last two points follows right after this list).
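
On the checksum and REST points above: HDFS maintains block checksums
itself and exposes a file-level checksum through the WebHDFS
GETFILECHECKSUM operation. A minimal sketch, assuming a placeholder host
and path:

# Sketch: query a file's checksum through WebHDFS (op=GETFILECHECKSUM).
# Host, port and path are hypothetical placeholders.
import requests

NAMENODE = "http://namenode.example.com:50070"  # assumed NameNode HTTP endpoint

def file_checksum(path):
    """Return the FileChecksum JSON (algorithm, length, bytes) for an HDFS file."""
    resp = requests.get(NAMENODE + "/webhdfs/v1" + path,
                        params={"op": "GETFILECHECKSUM"})
    resp.raise_for_status()
    return resp.json()["FileChecksum"]

if __name__ == "__main__":
    print(file_checksum("/archive/2017/photo-000123.jpg"))

Periodically comparing that checksum against a copy kept outside the
cluster is one way to cover the long-term integrity requirement.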
>
> So far, Hadoop seems to be the perfect solution. But my last
> requirement seems to throw Hadoop out of the race.
>
> - The photos need to be readable with very short latency from an external
> enterprise application
>
> With Hadoop HDFS and the Web Proxy everything seems perfect. But it seems
> that most Hadoop experts advise against this usage because the size of my
> data files (1-10 MB) is well below the Hadoop block size of 64 or 128 MB.
>
> I think I understand the concepts of HAR and sequence files.
> But if I pack, for example, my files together into one large file of many
> gigabytes, it seems impossible to access a single photo from the Hadoop
> repository in a reasonable time. In my eyes it makes no sense to pack
> thousands of files into a large file just so that Hadoop jobs can handle it
> better. For simply accessing a single file from a web interface - as in my
> case - this all seems counterproductive.
>
> So my question is: Is Hadoop only feasible for archiving large web-server
> log files, and not designed to handle big archives of small files that
> contain business-critical data?
>
>
> Thanks for your advice in advance.
>
> Ralph
