hudi-commits mailing list archives

From GitBox <...@apache.org>
Subject [GitHub] [incubator-hudi] vinothchandar commented on a change in pull request #1267: [HUDI-403] Publish deployment guide for writing to Hudi using HoodieDeltaStreamer and Spark Data Source
Date Wed, 22 Jan 2020 16:02:34 GMT
URL: https://github.com/apache/incubator-hudi/pull/1267#discussion_r369649210
 
 

 ##########
 File path: docs/_docs/2_6_deployment.md
 ##########
 @@ -23,15 +23,25 @@ All in all, Hudi deploys with no long running servers or additional infrastructu
 using existing infrastructure and it's heartening to see other systems adopting similar approaches
as well. Hudi writing is done via Spark jobs (DeltaStreamer or custom Spark datasource jobs),
deployed per standard Apache Spark [recommendations](https://spark.apache.org/docs/latest/cluster-overview.html).
 Querying Hudi tables happens via libraries installed into Apache Hive, Apache Spark or Presto
and hence no additional infrastructure is necessary. 
 
+### DeltaStreamer
 
+[DeltaStreamer](/docs/writing_data.html#deltastreamer) is a standalone utility that incrementally
pulls upstream changes from varied sources such as DFS, Kafka and DB changelogs and ingests
them into Hudi tables. It runs as a Spark application in two modes.
+
+ - **Run Once Mode** : In this mode, DeltaStreamer performs a single ingestion round, which includes
incrementally pulling events from upstream sources and ingesting them into the Hudi table. Background
operations like cleaning old file versions and archiving the Hudi timeline are automatically
executed as part of the run. For Merge-On-Read tables, compaction is also run inline as part
of ingestion unless disabled by passing the flag "--disable-compaction". By default, compaction
runs inline for every ingestion run; this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits".
You can run this Spark application manually, or spawn it from a cron trigger or a workflow orchestrator
such as Apache Airflow.
+ - **Continuous Mode** : Here, DeltaStreamer runs an infinite loop, with each iteration performing
one ingestion round as described in **Run Once Mode**. For Merge-On-Read tables, compaction
runs asynchronously, concurrently with ingestion, unless disabled by passing the
flag "--disable-compaction". Every ingestion run triggers a compaction request asynchronously,
and this frequency can be changed by setting the property "hoodie.compact.inline.max.delta.commits".
As both ingestion and compaction run in the same Spark context, you can use the resource
allocation options in the DeltaStreamer CLI ("--delta-sync-scheduling-weight", "--compact-scheduling-weight",
"--delta-sync-scheduling-minshare", and "--compact-scheduling-minshare") to control executor
allocation between ingestion and compaction.
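
For illustration, a continuous-mode launch might be sketched as below. This is a hedged example, not verbatim from the guide: the bundle jar name, base path, table name, ordering field, properties file, and weight/minshare values are all placeholder assumptions to adjust for your environment.

```shell
# Sketch: launch DeltaStreamer in continuous mode against a Merge-On-Read table.
# All paths, names, and numeric values below are hypothetical placeholders.
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  hudi-utilities-bundle.jar \
  --table-type MERGE_ON_READ \
  --target-base-path /path/to/hudi_table \
  --target-table hudi_table \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field ts \
  --props kafka-source.properties \
  --continuous \
  --delta-sync-scheduling-weight 2 \
  --compact-scheduling-weight 1 \
  --delta-sync-scheduling-minshare 1 \
  --compact-scheduling-minshare 1
```

Here the weight and minshare options bias executor allocation toward ingestion over asynchronous compaction, as described in the bullet above.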
 
 Review comment:
   also worth noting here is the config for controlling the sync frequency.. `--min-sync-interval-seconds`
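
A hedged sketch of that flag in use; the 60-second value is an arbitrary illustration, not a recommended default, and the elided arguments are whatever your normal continuous-mode invocation uses:

```shell
# Sketch: in continuous mode, start a new sync round at most once per minute.
# The interval value is an arbitrary example.
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  hudi-utilities-bundle.jar \
  ... \
  --continuous \
  --min-sync-interval-seconds 60
```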


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services
