corinthia-dev mailing list archives

From Edward Zimmermann <Edward.Zimmerm...@cib.de>
Subject Re: Checking malloc success and adding perror()
Date Mon, 23 Feb 2015 11:47:19 GMT
Hi,

Been sort of out of the discussion (was on vacation last week), so excuse me, in advance,
if I bring up a point already made.

First of all: Corinthia is supposed to be C++? If so, we don't want to use malloc. If it's
plain C, then of course malloc is probably our first choice for memory allocation.

Now to the issue of x = malloc(y). This gets more complicated. Linux, for example, adopted from
AIX an overbooking model of memory allocation. They literally sell more tickets than they
have (they call it optimistic, I call it !#@?) and hope that, when a page address is finally
touched, there is enough free memory to map in. And Linux can really run out of memory, and
not just memory but address space: on 32-bit Linux I can easily demonstrate how, with little
effort, one can awaken the OOM Killer to wreak havoc via the kernel segment maps.

AIX, in all fairness, had a memory-low signal: SIGDANGER (signal 33). Well-designed AIX programs
trap this signal so they can try to recover sufficient pages to continue (if they cannot, it's
the same kind of seemingly random process sniping we see in Linux)... SIGDANGER in Linux?
Nope.
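
For what it's worth, trapping it on AIX is plain signal handling; a rough sketch (AIX only,
and what the handler actually sheds is of course up to the application):

    #include <signal.h>
    #include <string.h>

    #ifdef SIGDANGER                 /* AIX defines this; Linux does not */
    static void on_danger(int sig)
    {
        (void)sig;
        /* shed whatever caches or scratch buffers can be rebuilt later */
    }

    static void install_danger_handler(void)
    {
        struct sigaction sa;
        memset(&sa, 0, sizeof sa);
        sa.sa_handler = on_danger;
        sigaction(SIGDANGER, &sa, NULL);
    }
    #endif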

Generally, in Linux the call malloc(x) can return a non-NULL pointer even when there is no memory
to back it. We can configure the kernel to do things a bit differently via the "vm.overcommit_memory"
sysctl. See
https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/6/html/Performance_Tuning_Guide/s-memory-captun.html
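
Just to make the overcommit point concrete, a toy illustration (assumes a 64-bit Linux build;
with the default or "always overcommit" settings the allocation can succeed on a machine with
nowhere near that much free memory):

    #include <stdio.h>
    #include <stdlib.h>

    int main(void)
    {
        size_t huge = (size_t)64 * 1024 * 1024 * 1024;   /* 64 GiB */
        char *p = malloc(huge);
        if (p == NULL) {
            fprintf(stderr, "malloc refused the request\n");
            return 1;
        }
        printf("malloc handed back a non-NULL pointer for 64 GiB\n");
        /* Only touching the pages forces the kernel to back them; with
         * overcommit enabled, that is when the OOM Killer may strike.  */
        /* memset(p, 0, huge); */
        free(p);
        return 0;
    }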

Can we configure the OOM Killer to be a little nicer? Yes, but really only process by process.
And when a process gets killed it is done in a very silent way, so unless someone is "in
the know" enough to spot the clues, it can get quite mysterious why some programs stop working...
 

But let us even pretend that we do get a NULL... What watermark did we hit to get it? Can
we recover? Are there any buffers we can quickly dispose of? Recovery is not that easy!

Should we pretend that we can get a NULL? Of course; it's good programming practice. Should
we wrap malloc with an xmalloc for such testing? No. On systems where malloc might return
NULL we should have alternative strategies, per kind of object, for dealing with an allocation
failure. A routine, for example, that wants a scratch buffer of length x but could work, albeit
more slowly, with less might simply ask for a smaller one (see the sketch below). Etc. etc. etc.
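
As a sketch of that kind of per-object fallback (names purely illustrative):

    #include <stdlib.h>

    /* Try the preferred scratch-buffer size, then successively halve it down
     * to some minimum the algorithm can still live with (slower, but works). */
    static void *alloc_scratch(size_t preferred, size_t minimum, size_t *got)
    {
        if (minimum == 0)
            minimum = 1;
        for (size_t size = preferred; size >= minimum; size /= 2) {
            void *p = malloc(size);
            if (p != NULL) {
                *got = size;
                return p;
            }
        }
        *got = 0;
        return NULL;       /* even the minimum failed; caller must cope */
    }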

Optimizing with memory pools? If we have a lot of objects of the exact same size, pooling is the
standard paradigm. The most costly thing with most mallocs is not the malloc but the free, and being
able to keep a pool around saves a lot of time. Its downside, however, is that the memory gets
hogged and generally not returned until process end. We could also talk about the downsides
of garbage-collection models etc. etc...
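
The classic shape of such a fixed-size pool, just for illustration:

    #include <stdlib.h>

    /* Free-list pool for objects of one fixed size: frees go back onto the
     * list instead of to the allocator, and nothing is returned to the OS
     * until the pool itself is torn down.                                  */
    typedef struct pool_node { struct pool_node *next; } pool_node;

    typedef struct {
        size_t     obj_size;     /* must be >= sizeof(pool_node) */
        pool_node *free_list;
    } pool;

    static void *pool_alloc(pool *p)
    {
        if (p->free_list != NULL) {
            pool_node *n = p->free_list;
            p->free_list = n->next;
            return n;
        }
        return malloc(p->obj_size);
    }

    static void pool_free(pool *p, void *obj)
    {
        pool_node *n = obj;
        n->next = p->free_list;          /* keep the block; no free() here */
        p->free_list = n;
    }

    static void pool_destroy(pool *p)    /* only returns blocks back on the list */
    {
        while (p->free_list != NULL) {
            pool_node *next = p->free_list->next;
            free(p->free_list);
            p->free_list = next;
        }
    }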

Windows since version 7 has shipped a standard allocator with excellent performance. Current-generation
Linux, Solaris, BSD etc. have also gotten pretty good mallocs "out of the box" in their libc
these days. Unless we are doing some hardcore allocation patterns I don't think we should bother
with pools; these mallocs are pretty darn fast. Yes, I can write a special malloc with its
own strategy that might provide better performance (my search engine got its own tuned malloc),
but I don't think that is something Corinthia needs (we don't, I think, need to worry about timer
ticks).

I'd suggest we keep to malloc and, IF NEEDED (and only if and when needed), use a drop-in
replacement (and chances are that we'll NEVER need one, much less want one).

Part of the problem is that we might have different "best" approaches for different operating
systems (iOS, Android, Linux, BSD, ...), making "best" not really the "best" goal.

perror? No. Directly calling a function that is intended to write to a console is, in general,
a bad thing.
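
If we want diagnostics at all, I'd rather see an application-settable hook than direct console
output; a rough sketch (all names invented):

    #include <errno.h>
    #include <stdio.h>
    #include <string.h>

    /* Let the embedding application decide where diagnostics go instead of
     * the library calling perror() itself.                                 */
    typedef void (*error_fn)(const char *message, int saved_errno);

    static void default_error(const char *message, int saved_errno)
    {
        fprintf(stderr, "%s: %s\n", message, strerror(saved_errno));
    }

    static error_fn report_error = default_error;

    void set_error_handler(error_fn fn)
    {
        report_error = fn ? fn : default_error;
    }

    /* Library code then does: report_error("malloc failed", errno); */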



-----Original Message-----
From: Peter Kelly [mailto:pmkelly@apache.org]
Sent: Thursday, 19 February 2015 13:41
To: dev@corinthia.incubator.apache.org
Subject: Re: Checking malloc success and adding perror()

> On 19 Feb 2015, at 7:06 pm, Dennis E. Hamilton <dennis.hamilton@acm.org> wrote:
> 
> +1 about a cheap check and common abort procedure for starters.
> 
> I think figuring out what to do about cleanup and exception unwinding, and even what
> exception handling to use (if any) is a further platform-development issue that could be masked
> with simple still-inlineable code, but needs much more architectural thought.

I’m fine with us using wrapper functions for these which do the checks - though please let’s
use xmalloc, xcalloc, xrealloc, and xstrdup instead of DFPlatform* (it’s nothing to do with
platform abstraction, and these names are easier to type). (as a side note we can probably
cut down on prefix usage a lot as long as we don’t export symbols; this was just to avoid
name clashes with other libraries)
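
For concreteness, a minimal sketch of what such wrappers might look like (illustration only,
not a committed implementation):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Thin check-and-abort wrappers; by themselves they only turn a NULL
     * return into a fatal error.                                          */
    void *xmalloc(size_t size)
    {
        void *p = malloc(size);
        if (p == NULL) {
            fprintf(stderr, "xmalloc: out of memory (%zu bytes)\n", size);
            abort();
        }
        return p;
    }

    void *xcalloc(size_t count, size_t size)
    {
        void *p = calloc(count, size);
        if (p == NULL) {
            fprintf(stderr, "xcalloc: out of memory\n");
            abort();
        }
        return p;
    }

    void *xrealloc(void *old, size_t size)
    {
        void *p = realloc(old, size);
        if (p == NULL) {
            fprintf(stderr, "xrealloc: out of memory (%zu bytes)\n", size);
            abort();
        }
        return p;
    }

    char *xstrdup(const char *s)
    {
        size_t len = strlen(s) + 1;
        char *p = xmalloc(len);
        memcpy(p, s, len);
        return p;
    }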

In my previous mail I really just wanted to point out that by itself, this doesn’t really
solve anything - the issue is in reality far more complicated than a simple NULL pointer check.

I can think of two ways we could deal with the issue of graceful handling:

1) Allow the application to supply a callback, as Jan suggested

2) Adopt a “memory pool” type strategy where we create a memory pool object at the start
of conversion which tracks all allocations that occur between the beginning and end of a top-level
API call like DFGet, and put setjmp/longjmp-style exception handling in these API calls (roughly
sketched below).
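
A rough sketch of how approach 2 could be shaped (all names invented for illustration):

    #include <setjmp.h>
    #include <stddef.h>
    #include <stdlib.h>

    /* Every top-level API call sets a recovery point; allocations inside the
     * call go through the pool, and a failure longjmps back so the pool can
     * be released and an error returned to the caller.                      */
    typedef struct alloc_rec {
        struct alloc_rec *next;
        max_align_t       align;   /* keeps the payload after the header aligned */
    } alloc_rec;

    typedef struct {
        jmp_buf    on_failure;
        alloc_rec *allocations;    /* everything grabbed during this call */
    } mem_pool;

    static void *pool_malloc(mem_pool *pool, size_t size)
    {
        alloc_rec *rec = malloc(sizeof(alloc_rec) + size);
        if (rec == NULL)
            longjmp(pool->on_failure, 1);   /* unwind to the API boundary */
        rec->next = pool->allocations;
        pool->allocations = rec;
        return rec + 1;
    }

    static void pool_release(mem_pool *pool)
    {
        while (pool->allocations != NULL) {
            alloc_rec *next = pool->allocations->next;
            free(pool->allocations);
            pool->allocations = next;
        }
    }

    /* A top-level call would then be shaped roughly like this: */
    int api_call(mem_pool *pool /*, ... */)
    {
        if (setjmp(pool->on_failure) != 0) {
            pool_release(pool);    /* an allocation failed somewhere inside */
            return 0;              /* report failure to the caller */
        }
        /* ... do the work, allocating via pool_malloc(pool, n) ... */
        pool_release(pool);
        return 1;
    }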

The second approach is in fact already used to a limited extent with the DOM API. Every document
maintains its own memory pool for storing Node objects (and the text values of nodes)… this
is freed when the document’s retainCount drops to zero. I did this because it was much faster
than traversing the tree and releasing nodes individually (at least in comparison
to having nodes as Objective-C objects - the ObjC runtime was undoubtedly part of that overhead).

—
Dr Peter M. Kelly
pmkelly@apache.org

PGP key: http://www.kellypmk.net/pgp-key <http://www.kellypmk.net/pgp-key> (fingerprint
5435 6718 59F0 DD1F BFA0 5E46 2523 BAA1 44AE 2966)
