So, I've been learning C (through "Learn C The Hard Way") so I can contribute to various projects (PHP being one), build my own system tools, and better understand the underlying architecture and functionality of Vala and Nimrod (Vala's GObject system seems like a leaky abstraction at times, and I get stuck since I'm no good with C...).
For my own projects and the other stuff I want to play with, is it worth using libs like these? They seem to "bring C into 2013", if you will, in terms of what's possible without re-inventing the wheel.
Speaking of re-inventing the wheel, I wish I knew where to look for C libraries, something like Ruby's Gems or PHP's Composer. For now I've just been using Google. Is there a better way? And does anyone have a good resource explaining how to set up a project with dependencies that are both binary and need to be compiled? I know a little bit about Makefiles now (I love them, and use them across all my languages for various automation tooling), but I'm never sure when to "expect a .so on the system" versus bringing the dependency along with my archive and compiling it locally, statically or dynamically, etc. There's a lot behind it all and I don't know where to start, so I've been winging it!
I suspect most of your questions will be answered when the book is done, probably in the "Next Steps" appendix.
Meanwhile, I'd personally suggest implementing only the performance-critical parts as C functions, and gluing most of your program logic to them using whatever language you feel comfortable with. If you want to "bring C into 2013", learn Go.
Talloc borrows the block hierarchy idea from halloc and extends it with destructors and some fluff. So if you need just the hierarchy, take a look at http://www.swapped.cc/halloc
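To make the block-hierarchy idea concrete, here's a toy sketch of it in plain C. This is not the real halloc or talloc implementation, and the names `h_alloc`/`h_free` are made up for illustration; the real libraries keep more bookkeeping and support destructors. The core trick is the same, though: each allocation carries a hidden header linking it to a parent, and freeing a block recursively frees its children.

```c
/* Toy sketch of hierarchical allocation (the halloc/talloc idea).
 * h_alloc/h_free are hypothetical names, not the real APIs. */
#include <stdlib.h>

typedef struct block {
    struct block *parent;
    struct block *child;    /* first child */
    struct block *sibling;  /* next child of the same parent */
    /* caller's payload follows this header */
} block;

/* Allocate `size` bytes, attached to `parent_ptr` (or NULL for a root). */
static void *h_alloc(void *parent_ptr, size_t size) {
    block *b = calloc(1, sizeof(block) + size);
    if (!b) return NULL;
    if (parent_ptr) {
        block *p = (block *)parent_ptr - 1;   /* back up to the header */
        b->parent = p;
        b->sibling = p->child;
        p->child = b;
    }
    return b + 1;   /* hand the caller the payload, not the header */
}

/* Free a block and, recursively, everything allocated under it. */
static void h_free(void *ptr) {
    if (!ptr) return;
    block *b = (block *)ptr - 1;
    while (b->child) {                /* free children depth-first */
        block *c = b->child;
        b->child = c->sibling;
        c->parent = NULL;             /* already unlinked from us */
        h_free(c + 1);
    }
    if (b->parent) {                  /* unlink from the parent's list */
        block **pp = &b->parent->child;
        while (*pp && *pp != b) pp = &(*pp)->sibling;
        if (*pp) *pp = b->sibling;
    }
    free(b);
}
```

With this, `h_free(ctx)` on a root context releases every allocation made under it, which is the property that makes talloc-style APIs pleasant for request- or connection-scoped lifetimes.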
I'm not going anywhere. Double-free is a known pitfall. People watch for it and a lot of standard libraries have safeguards against it.
On the other hand, tfree() hangs if a prior call to talloc_set_parent() is passed certain arguments. Just more ammunition for shooting oneself in the foot. (p, p) might be a degenerate case, but real code will create cyclic references, and in that case blaming the programmer is the least productive way to handle it.
I've been looking into things that might help manage memory in some larger applications. I've been working with GObject recently, which has some benefits, but using it is a bit of a full-on lifestyle choice, and I'm not sure how suitable it is for some of the smaller embedded environments I work in.
This might be quite a good middle ground. Need to go and have a look at the impact on performance and memory usage.
I think C programmers would be better off moving to a small subset of C++ with destructors. This library tracks the link between a struct and its child objects for each instance of the struct (both the application and this library know about the link between a user and its username field). In C++, the knowledge "free the pointer pointed to by field X" is kept only once, in the destructor. If you have lots of objects, that will add up.
Even if you don't want to move to C++, I don't think this is easier than writing a pair of functions, 'allocAnX' and 'freeAnX', and being disciplined about always using them.
The 'reference counting' mentioned in the introduction may be a reason to use this instead, but I could not find it anywhere else in the documentation. Did I overlook it?
You're right that talloc has a memory overhead relative to smart pointers or a free_foo() function; talloc'ing up an int is very inefficient. Note that you can add a talloc-integrated "destructor" at zero marginal cost, so e.g. an array of a million pointers to individually-allocated int's could let talloc manage the vector and free the int's in the destructor. (That is, you can de-talloc part of a talloc'ed memory hierarchy.)
Additionally, malloc also has a runtime and memory cost; like talloc's, this cost is high enough to discourage arrays of pointers to individually allocated int's and low enough that it doesn't matter for arrays of int's. The one time I played around with talloc (for a network daemon), I didn't really run into any allocations where I'd hesitate to use talloc but not to use malloc. (I forgot the numbers; a quick Google suggests that talloc adds 96 bytes to malloc's 16 bytes of overhead.)
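The "manage the vector, free the elements in a destructor" pattern from above can be sketched without libtalloc. This is an illustration of the idea only: in real talloc you would attach the callback with talloc_set_destructor() on the talloc'ed array, whereas here a hypothetical `IntVec` wrapper plays the role of the single managed object, and its elements are plain malloc'ed ints.

```c
/* Sketch of a destructor-owning container: one managed object (the
 * vector) whose teardown frees many individually malloc'ed elements.
 * IntVec and its functions are made-up names for illustration. */
#include <stdlib.h>

typedef struct {
    int **items;
    size_t n;
} IntVec;

/* Plays the role of a talloc destructor: runs when the vector dies. */
static void intvec_destructor(IntVec *v) {
    for (size_t i = 0; i < v->n; i++)
        free(v->items[i]);          /* each int was plain malloc'ed */
    free(v->items);
}

IntVec *intvec_alloc(size_t n) {
    IntVec *v = malloc(sizeof *v);
    if (!v) return NULL;
    v->n = n;
    v->items = calloc(n, sizeof *v->items);
    if (!v->items) { free(v); return NULL; }
    for (size_t i = 0; i < n; i++) {
        v->items[i] = malloc(sizeof(int));   /* no per-element overhead
                                                beyond malloc's own */
        if (v->items[i]) *v->items[i] = (int)i;
    }
    return v;
}

void intvec_free(IntVec *v) {
    if (!v) return;
    intvec_destructor(v);
    free(v);
}
```

The point is that only the vector pays the (talloc-style) bookkeeping cost; the million ints pay only malloc's.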
That said, just being disciplined about memory usage seems to work decently well, too. (For expert programmers with good tools, usually; it's a hard problem.)
> I wonder whether it was possible to make this more transparent for C programmers.
Actually, it is almost too transparent for C programmers:
>> From the programmer's point of view, the talloc context is completely equivalent to a pointer that would be returned by the memory routines from the C standard library.
You'd never be able to make it a pure drop-in replacement, because it's not just about allocation and freeing. The interesting part is specifying the hierarchies so that freeing a higher object frees the lower ones too; existing code using malloc/free wouldn't be doing that.
It looks like they model memory as a tree, but in complex structures it's a graph. I couldn't find how they handle ring topologies. I guess the reference-stealing stuff doesn't help there.
IIRC, talloc basically doesn't handle rings. Note, however, that it's fairly easy to create a "reference to the entire ring" that cleans up some node/the entire ring when it's deallocated, or to explicitly deallocate some node of the ring.
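The "explicitly deallocate some node of the ring" workaround can be sketched in a few lines of plain C. The names are made up; the point is just that a cycle which no refcounting or tree-shaped ownership scheme can collect is trivial to free if you hand any one node to a function that walks the whole cycle.

```c
/* Sketch of explicit ring cleanup: a circular singly-linked list,
 * freed as a unit from any node. Names are hypothetical. */
#include <stdlib.h>

typedef struct node {
    int value;
    struct node *next;   /* the last node points back to the first */
} node;

/* Build a ring of n nodes (returns NULL for n == 0). */
node *ring_build(int n) {
    node *first = NULL, *prev = NULL;
    for (int i = 0; i < n; i++) {
        node *nd = malloc(sizeof *nd);
        if (!nd) { if (prev) prev->next = first; ring_free_maybe: ; }
        nd->value = i;
        nd->next = NULL;
        if (prev) prev->next = nd; else first = nd;
        prev = nd;
    }
    if (prev) prev->next = first;   /* close the cycle */
    return first;
}

/* Free the entire ring, given any node in it. */
void ring_free(node *any) {
    if (!any) return;
    node *cur = any->next;
    while (cur != any) {            /* walk until back at the start */
        node *next = cur->next;
        free(cur);
        cur = next;
    }
    free(any);
}
```

This is essentially the "reference to the entire ring" idea: one owner (here, whoever calls ring_free) stands outside the cycle, so the cycle itself never needs to be collected automatically.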
For what it's worth, non-tree reference graphs are a lot rarer than you'd think. (Note, for instance, that Perl doesn't clean them up automatically except in a few specific cases; this is annoying, but Perl is clearly usable!)
http://www.void.at/exploits/hoagie_samba_packetchaining.c
https://lists.samba.org/archive/samba-technical/2013-October...