hadoop-common-user mailing list archives

From "Wayne Liu" <wayne1...@gmail.com>
Subject Re: Does Hadoop support Random Read/Write?
Date Tue, 22 May 2007 16:08:39 GMT
2007/5/22, Chad Walters <chad@powerset.com>:

> The HDFS is designed for writing data in large blocks and then later
> reading
> through those blocks. So the primary usages are sequential writing and
> sequential reading.
>
Thank you, Chad. This does meet most of my needs, but I have two more requirements:

First: write a large image file (File1) into HDFS; then, given the start position and the length of the data you want, I can locate that range in File1 and read it. This is called a random read.
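For what it's worth, this kind of positioned read is just "seek to an offset, then read a fixed number of bytes." Here is a minimal local-file sketch of that semantics in Python (this is not the HDFS API; in Java, HDFS exposes the same pattern through FSDataInputStream's seek()/read(), and the file here is a throwaway stand-in for File1):

```python
# Sketch of a positioned ("random") read: seek to a byte offset and
# read `length` bytes. Local file only; HDFS offers the equivalent
# via FSDataInputStream.seek()/read() in Java.
import os
import tempfile

def random_read(path, start, length):
    """Return `length` bytes of `path` starting at byte offset `start`."""
    with open(path, "rb") as f:
        f.seek(start)
        return f.read(length)

# Demo with a temporary file standing in for "File1".
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"0123456789abcdef")

print(random_read(path, 10, 4))  # b'abcd'
os.remove(path)
```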

Second: as you know, sometimes I just want to replace a section of data in File1. So I first have to locate the section and then overwrite it with another section of data. This is called a random write.
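On an ordinary local filesystem, such an in-place overwrite is again just seek-then-write, as the sketch below shows. Note this is only an illustration of the semantics being asked about: HDFS (at least as of this thread) does not support updating already-written data in place, so the usual workaround there is to rewrite the file.

```python
# Sketch of an in-place ("random") write on a local file: seek to an
# offset and overwrite bytes without truncating the rest of the file.
# HDFS does not support this in-place update directly.
import os
import tempfile

def random_write(path, start, data):
    """Overwrite the bytes of `path` at offset `start` with `data`."""
    with open(path, "r+b") as f:  # "r+b": read/write without truncating
        f.seek(start)
        f.write(data)

# Demo: patch a 3-byte section in the middle of a throwaway file.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "wb") as f:
    f.write(b"0123456789")

random_write(path, 3, b"XYZ")
with open(path, "rb") as f:
    print(f.read())  # b'012XYZ6789'
os.remove(path)
```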

How can I solve these two problems?
Waiting for your reply, thanks a lot!
