hawq-dev mailing list archives

From Kyle Dunn <kd...@pivotal.io>
Subject Network interconnect settings in IaaS environments
Date Fri, 16 Sep 2016 19:07:28 GMT
In an ongoing evaluation of HAWQ in Azure, we've encountered some
sub-optimal network performance. It would be great to get some additional
information about a few server parameters related to the network:

- gp_max_packet_size
   The default is documented as 8192. Why was this number chosen? Should
this value be aligned with the network infrastructure's configured MTU,
accounting for the header overhead of the chosen interconnect type?
(Azure only supports MTU 1500 and has shown better reliability using
TCP in Greenplum.)
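
For reference, here is the arithmetic I'm using when reasoning about MTU
alignment. The header sizes are the standard IPv4/UDP minimums; the idea
that gp_max_packet_size should fit within a single MTU-sized frame is my
assumption, which is part of what I'm asking about:

```shell
# Usable UDP payload under a 1500-byte MTU (a sketch, not HAWQ's actual
# framing): IPv4 header = 20 bytes, UDP header = 8 bytes, both minimums.
MTU=1500
IP_HDR=20
UDP_HDR=8
PAYLOAD=$((MTU - IP_HDR - UDP_HDR))
echo "max UDP payload per frame: ${PAYLOAD}"   # prints 1472
```

If that model is right, the 8192 default would fragment into several IP
packets on Azure, where jumbo frames are not an option.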

- gp_interconnect_type
    The docs claim UDPIFC is the default, but UDP is the observed default.
Do the recommendations around which setting to use vary in an IaaS
environment (AWS or Azure)?
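
For anyone wanting to reproduce the comparison, this is roughly how I've
been checking and overriding the setting. It assumes a psql client on the
master; whether gp_interconnect_type is session-settable in HAWQ is my
assumption, based on Greenplum behavior:

```shell
# Show the effective interconnect type (config fragment, needs a live cluster).
psql -d postgres -c "SHOW gp_interconnect_type;"

# Override for a single session to A/B test, then confirm the new value.
# Not a recommendation, just how I'm comparing UDP vs. TCP behavior.
psql -d postgres -c "SET gp_interconnect_type = tcp; SHOW gp_interconnect_type;"
```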

- gp_interconnect_queue_depth
   My naive read is that performance can be traded off against (potentially
significant) RAM utilization. Is there additional detail around tuning
this knob? How does the interaction between this and the underlying NIC
queue depth affect performance? As an example, in Azure, disabling TX
queuing (ifconfig eth0 txqueuelen 0) on the virtual NIC improved benchmark
performance, as the underlying Hyper-V host is doing its own queuing anyway.
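
To make the RAM concern concrete, here is my back-of-envelope sketch. The
multiplication model and the connection count are my assumptions, not
measurements, so please correct me if the buffering works differently:

```shell
# Hypothetical worst-case receive buffering if every connection fills its
# queue: depth x packet size x concurrent interconnect connections.
QUEUE_DEPTH=4        # gp_interconnect_queue_depth, documented default
PACKET_SIZE=8192     # gp_max_packet_size, documented default (bytes)
CONNECTIONS=1000     # hypothetical concurrent connections on one segment
BYTES=$((QUEUE_DEPTH * PACKET_SIZE * CONNECTIONS))
echo "worst-case buffered: $((BYTES / 1024 / 1024)) MiB"   # prints 31 MiB
```

Even if the real number is a fraction of that, it compounds quickly with
wide queries on large clusters, which is why I'd like to understand the
actual accounting before raising the depth.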

*Kyle Dunn | Data Engineering | Pivotal*
Direct: 303.905.3171 | Email: kdunn@pivotal.io
