arrow-user mailing list archives

From Jacob Quinn <quinn.jac...@gmail.com>
Subject Re: Does Arrow Support Larger-than-Memory Handling?
Date Thu, 22 Oct 2020 22:58:15 GMT
Hi Jacob,

Yes, the arrow format allows for larger-than-memory datasets. I can
describe a little of what this looks like on the Julia side of things, which
should be pretty similar in other languages.

When you write a dataset to the arrow format, either on disk or in memory,
you're laying the data + metadata down in a specific, (mostly)
self-describing format, with the actual data in particular written in
pre-determined binary formats for each supported type. Provisions are made
for writing a long table in chunks, and for writing columns with a dictionary
encoding (which can "compress" a column with low cardinality).
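To make that concrete, here's a minimal sketch of the writing side with
Arrow.jl (the `dictencode` keyword and the `Tables.partitioner` usage reflect
my understanding of the current Arrow.jl/Tables.jl APIs, and the file name is
just an example):

```julia
using Arrow, Tables

# two chunks with the same schema; Tables.partitioner presents them as a
# partitioned table, and each partition is written as its own record batch
chunk1 = (id = 1:3, group = ["a", "b", "a"])
chunk2 = (id = 4:6, group = ["b", "b", "a"])
parts = Tables.partitioner([chunk1, chunk2])

# dictencode=true requests dictionary encoding, which can "compress"
# low-cardinality columns like `group`
Arrow.write("data.arrow", parts; dictencode=true)
```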

What this allows when _reading_ is that you can "memory map" this entire region
of arrow memory to make it available to a running program, which means the
OS will "give you" the memory without necessarily loading the entire
region into RAM at once; instead it "swaps" requested regions into
RAM as necessary. For the Arrow.jl Julia package, reading a table or stream
starts with getting access to an arrow memory region: for a file, it calls
`Mmap.mmap(file)`, or you can pass a `Vector{UInt8}` directly. For
`Arrow.Stream`, only the initial schema metadata is read, and you can then
iterate the stream to get the group of columns for each record batch (chunk).
For `Arrow.Table`, all record batches are processed, with each column's chunks
"chained" together using a ChainedVector type. When the columns are
"read", they're really just custom types that wrap a specific region of the
arrow memory along with the metadata type information and length. This
means no new memory (well, practically none) is allocated when creating one
of these ArrowVectors or chaining them together, yet they still satisfy the
AbstractArray interface, which allows all sorts of operations on the data.
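A small sketch of those reading paths (the file name is hypothetical; the
calls assume the public Arrow.jl API described above):

```julia
using Arrow, Mmap

# Arrow.Table mmaps the file and processes all record batches up front;
# each column is a lazy view (or ChainedVector of views) over the mapped memory
tbl = Arrow.Table("data.arrow")
sum(tbl.id)                      # columns satisfy the AbstractArray interface

# you can also hand Arrow.Table a raw byte buffer yourself
bytes = Mmap.mmap("data.arrow")
tbl2 = Arrow.Table(bytes)

# Arrow.Stream reads only the schema up front, then yields one batch at a time
for batch in Arrow.Stream("data.arrow")
    # each batch exposes its columns via the same Tables.jl-style access
    @show length(batch.id)
end
```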

The Arrow.jl package also implements the Tables.jl interface for
`Arrow.Table`, which means, for example, you can operate on an
arrow-memory-backed DataFrame just by doing `df =
DataFrame(Arrow.Table(file))`. This builds the `Arrow.Table` as
described above, then the `DataFrame` constructor uses the memory-mapped
columns directly in its construction. You can then use all of the useful
functionality of DataFrames directly on these arrow columns. Other
Tables.jl-compatible packages are just as accessible:
`SQLite.load!(db, "arrow_data", arrow_table)` to load arrow data into an
sqlite database, `CSV.write("arrow.csv", arrow_table)` to write arrow data
out as a csv file, `MySQL.load!(db, "arrow_data", arrow_table)` to load data
into a mysql database table, and so on.
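Putting those pieces together, a sketch of the interop might look like this
(assuming DataFrames.jl and CSV.jl are installed; "data.arrow" is the
hypothetical file from the earlier sketches):

```julia
using Arrow, DataFrames, CSV

arrow_table = Arrow.Table("data.arrow")

# the DataFrame is built on top of the memory-mapped arrow columns
# (on newer DataFrames versions, passing copycols=false ensures no copies)
df = DataFrame(arrow_table)

# any other Tables.jl sink works the same way, e.g. writing back out as csv
CSV.write("arrow.csv", arrow_table)
```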

Sorry for the diatribe, but I've actually been meaning to write a bunch of
this down for some enhanced documentation for the Arrow.jl package, so
consider this a teaser!

Hope that helps!

-Jacob

On Thu, Oct 22, 2020 at 12:39 PM Jacob Zelko <jacobszelko@gmail.com> wrote:

> Hi all,
>
> Very basic question as I have seen conflicting sources. I come from the
> Julia community and was wondering if Arrow can handle larger-than-memory
> datasets? I saw this post by Wes McKinney here discussing that the tooling
> is being laid down:
>
> Table columns in Arrow C++ can be chunked, so that appending to a table is
> a zero copy operation, requiring no non-trivial computation or memory
> allocation. By designing up front for streaming, chunked tables, appending
> to existing in-memory tables is computationally inexpensive relative to
> pandas now. Designing for chunked or streaming data is also essential for
> implementing out-of-core algorithms, so we are also laying the foundation
> for processing larger-than-memory datasets.
>
> ~ *Apache Arrow and the “10 Things I Hate About pandas”*
> <https://wesmckinney.com/blog/apache-arrow-pandas-internals/>
>
> And then in the docs I saw this:
>
> The pyarrow.dataset module provides functionality to efficiently work with
> tabular, potentially larger than memory and multi-file datasets:
>
>    - A unified interface for different sources: supporting different
>    sources and file formats (Parquet, Feather files) and different file
>    systems (local, cloud).
>    - Discovery of sources (crawling directories, handle directory-based
>    partitioned datasets, basic schema normalization, ..)
>    - Optimized reading with predicate pushdown (filtering rows),
>    projection (selecting columns), parallel reading or fine-grained managing
>    of tasks.
>
> Currently, only Parquet and Feather / Arrow IPC files are supported. The
> goal is to expand this in the future to other file formats and data sources
> (e.g. database connections).
>
> ~ *Tabular Datasets* <https://arrow.apache.org/docs/python/dataset.html>
>
> The article from Wes was from 2017 and the snippet on Tabular Datasets is
> from the current documentation for pyarrow.
>
> Could anyone answer this question or at least clear up my confusion for
> me? Thank you!
> --
> Jacob Zelko
> Georgia Institute of Technology - Biomedical Engineering B.S. '20
> Corning Community College - Engineering Science A.S. '17
> Cell Number: (607) 846-8947
>
