hawq-dev mailing list archives

From Ming Li <...@apache.org>
Subject Re: Thinking of how to fix HAWQ-1381
Date Wed, 08 Mar 2017 04:23:39 GMT
Hi Hongxu,

I am not sure about the full impact of this change, but I think that to solve
this problem completely we should:

1. Change the segment id type from int to int32 or int64, whose range does not
   vary across platforms.
2. Size the buffer so it can hold any value of that type, including the
   trailing '\0'.
3. Use snprintf() instead of sprintf() to convert the value (see the sketch
   below).
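
For illustration, a minimal sketch of points 1-3; the helper name
format_segment_id and the constant SEGMENT_ID_BUF_LEN are hypothetical,
not existing HAWQ symbols:

    #include <inttypes.h>
    #include <stdint.h>
    #include <stdio.h>

    /* INT32_MIN prints as "-2147483648": 11 characters plus the trailing '\0'. */
    #define SEGMENT_ID_BUF_LEN 12

    static void format_segment_id(char buf[SEGMENT_ID_BUF_LEN], int32_t seg_id)
    {
        /* snprintf() never writes past the buffer and always NUL-terminates. */
        snprintf(buf, SEGMENT_ID_BUF_LEN, "%" PRId32, seg_id);
    }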

Thanks.

On Wed, Mar 8, 2017 at 10:29 AM, Ma Hongxu <interma@outlook.com> wrote:

> Hi all
> I found a hawq core dump issue:
> https://issues.apache.org/jira/browse/HAWQ-1381
>
> Briefly:
> buffer overflow here: src/backend/access/external/fileam.c:2610
> sprintf(extvar->GP_SEGMENT_ID, "%d", GetQEIndex());
>
> GetQEIndex() returns -10000 on the master, and GP_SEGMENT_ID is char[6], so
> there is no room left for the trailing '\0' and the write overflows.
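>
> As a hypothetical illustration of the arithmetic (not HAWQ code): "-10000" is
> six characters, so formatting it needs seven bytes including the terminator:
>
>     #include <stdio.h>
>
>     int main(void)
>     {
>         char gp_segment_id[6];   /* same size as GP_SEGMENT_ID in fileam.c */
>         /* snprintf() returns the number of characters the value needs,
>          * excluding the '\0', so 6 here; the buffer would need 7 bytes. */
>         int needed = snprintf(gp_segment_id, sizeof(gp_segment_id), "%d", -10000);
>         printf("needed %d + 1 bytes, buffer has %zu\n",
>                needed, sizeof(gp_segment_id));
>         return 0;
>     }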
>
> There are two ways to fix it:
>
>   1.  Enlarge the GP_SEGMENT_ID buffer from char[6] to char[7].
>   2.  Return a different short integer instead of -10000 on the master.
>
> I think 1 is more straightforward, but it carries some risk (some callers
> assume the buffer size).
> And 2 still relies on a magic number, which may affect many places.
>
> Any suggestions? Thanks!
>
>
> --
> Regards,
> Hongxu.
>
