hawq-dev mailing list archives

From Ma Hongxu <inte...@outlook.com>
Subject Re: Thinking of how to fix HAWQ-1381
Date Fri, 10 Mar 2017 02:05:50 GMT
Thanks, Ming & Paul.

Since 99999 segments are enough for HAWQ for now, I will enlarge the buffer
to char[7] so it can hold "-10000" plus the terminating '\0'.
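
A rough sketch of the change (field and function names are as in the code
quoted below; the snprintf() bound follows Ming's suggestion 3):

    /* "-10000" is 6 characters, plus the terminating '\0' = 7 bytes. */
    char GP_SEGMENT_ID[7];
    ...
    snprintf(extvar->GP_SEGMENT_ID, sizeof(extvar->GP_SEGMENT_ID),
             "%d", GetQEIndex());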


On 08/03/2017 12:23, Ming Li wrote:

Hi Hongxu,

I don't know the full impact of this change, but I think that to solve
this problem completely, we should:

1. Change the segment id type from int to int32 or int64, whose range does
not vary across platforms.
2. Size the buffer so it can hold the maximum (and minimum) value of that
type, plus the terminating '\0'.
3. Use snprintf() instead of sprintf() to convert the value, so the buffer
can never be overrun (see the sketch below).
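
For example, a rough sketch of all three together (the variable names here
are illustrative, not the actual HAWQ ones):

    #include <inttypes.h>   /* int32_t, PRId32 */
    #include <stdio.h>      /* snprintf */

    /* Widest int32 value is "-2147483648": 11 characters + '\0' = 12 bytes. */
    char seg_id[12];
    int32_t qe_index = (int32_t) GetQEIndex();
    snprintf(seg_id, sizeof(seg_id), "%" PRId32, qe_index);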

Thanks.

On Wed, Mar 8, 2017 at 10:29 AM, Ma Hongxu <interma@outlook.com> wrote:



Hi all
I found a HAWQ core dump issue: https://issues.apache.org/jira/browse/HAWQ-1381

Briefly:
buffer overflow here: src/backend/access/external/fileam.c:2610
sprintf(extvar->GP_SEGMENT_ID, "%d", GetQEIndex());

GetQEIndex() returns -10000 on the master, and GP_SEGMENT_ID is char[6]:
"-10000" fills all six bytes, leaving no room for the terminating '\0',
so the overflow happens.
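
(A quick way to confirm the required size: snprintf() with a NULL buffer
returns the number of characters it would write, excluding the '\0'.)

    #include <stdio.h>

    /* Prints 6: "-10000" needs 6 characters + '\0' = 7 bytes,
       one more than char[6] provides. */
    printf("%d\n", snprintf(NULL, 0, "%d", -10000));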

There are two ways to fix it:

  1.  Enlarge the GP_SEGMENT_ID buffer from char[6] to char[7].
  2.  Return some other small integer instead of -10000 on the master.

I think 1 is more straightforward, but it carries some risk (some callers
assume the buffer size). And 2 still relies on a magic number, so changing
it may affect many places.

Any suggestions? Thanks!


--
Regards,
Hongxu.

--
Regards,
Hongxu.