
GNU C Library 2.30 - jrepinc
https://sourceware.org/ml/libc-announce/2019/msg00001.html
======
pascal_cuoq
> * Memory allocation functions malloc, calloc, realloc, reallocarray, valloc,
> pvalloc, memalign, and posix_memalign fail now with total object size larger
> than PTRDIFF_MAX. This is to avoid potential undefined behavior with pointer
> subtraction within the allocated object, where results might overflow the
> ptrdiff_t type.

I did not think they would take this decision so soon, but it is, in my
opinion, the right decision to take. There will be complaints from users of
memory-heavy programs running on 32-bit platforms though.

For context, this blog post shows how things break when allocation functions
are allowed to create blocks of more than PTRDIFF_MAX:
[https://trust-in-soft.com/objects-larger-than-ptrdiff_max-bytes/](https://trust-in-soft.com/objects-larger-than-ptrdiff_max-bytes/)

~~~
ajross
> There will be complaints from users of memory-heavy programs running on
> 32-bit platforms though

In all of recorded history, has a malloc() call for more than 2GB ever
actually succeeded anywhere? Most OSes on such platforms never supported more
than that amount of addressable memory in a user process at all.

This is fine. Honestly it seems like mostly pedantry on modern systems, but
it's clearly correct.

~~~
pascal_cuoq
> In all of recorded history, has a malloc() call for more than 2GB ever
> actually succeeded anywhere?

Yes, on OS X 10.5, and on 32-bit Linux with Glibc until two days ago.

The article I linked, written before Glibc 2.30 was released, is from a period
when every Unix had been allowing “malloc(0x80000001);” in 32-bit processes
until recently; only OS X had had the courage to make that allocation fail.
Sorry if the article doesn't make it clear enough that this is the context it
is written in, but in its defense, you only needed to try it (and still need
today to try it if you didn't upgrade Glibc) to see that it succeeds. Or do
you think that the Glibc developers wrote a Changelog entry to explain that
they changed something that didn't actually change?

Linux's default limit on 32-bit has been 3GiB for a while, I think:
[https://stackoverflow.com/a/5080778/139746](https://stackoverflow.com/a/5080778/139746)

Windows's limit is 2GiB by default, but this is only a default and 32-bit
processes can be allowed access to more memory, up to IIRC nearly all of the
theoretical maximum 4GiB for 32-bit processes running on 64-bit Windows.

~~~
ajross
The (sarcastic) point was about the fact that no real world code actually
_relied_ on a malloc() of half the address space.

I'm sure it "worked" in some sense, though I'd be really surprised if you
could make that happen with a default-linked C program on any distro that ever
shipped. The holes just aren't big enough. You'd need to link the app with
special care, and potentially write your own dynamic loader to keep the region
you wanted free. And if you do that... you might as well just mmap() the
thing.

The point was that doing this with the system heap on a 32 bit system was
never a serious thing. There are apps that would do management of memory
spaces that large, but they didn't do it with malloc.

------
fluffything
> * The twalk_r function has been added. It is similar to the existing twalk
> function, but it passes an additional caller-supplied argument to the
> callback function.

I thought this was standard practice for designing C APIs taking callbacks.

> * The Linux-specific <sys/sysctl.h> header and the sysctl function have been
> deprecated and will be removed from a future version of glibc. Applications
> should directly access /proc instead. For obtaining random bits, the
> getentropy function can be used.

That's gonna break the world, a lot of code includes that header and uses the
sysctl function.

~~~
WillDaSilva
Well on the bright side it's perfectly reasonable for the function to exist in
a perpetual state of deprecation. Let's hope they don't do anything rash.

~~~
ronsor
If they do, Linus will probably scream at them for breaking compatibility.
(Even if this isn't the kernel)

~~~
jabl
This is a glibc wrapper for the sysctl system call, which has been deprecated
since forever in the kernel, is compiled in only if an option is specified
(major distros don't enable it), and is likely to be removed completely at
some point. Currently trying to use it, even if enabled, generates a warning
in the kernel log.

[http://man7.org/linux/man-pages/man2/sysctl.2.html](http://man7.org/linux/man-pages/man2/sysctl.2.html)

------
yrro
Whoa, a gettid wrapper? What changed the maintainers' minds on making that
available?

------
zoobab
Static linking works now? Or do I have to use musl to get that working?

~~~
pragmaticlurker
Static linking also works with glibc, AFAIK (I'm using it)

~~~
jabl
IIRC NSS (/etc/nsswitch.conf etc.) needs dynamic linking for anything beyond
the basic files backend. But, again IIRC, musl has never supported NSS anyway
so that's kind of a moot point.

~~~
jcelerier
> IIRC NSS (/etc/nsswitch.conf etc.) needs dynamic linking for anything beyond
> the basic files backend. But, again IIRC, musl has never supported NSS
> anyway so that's kind of a moot point.

I frankly have never ever ever seen anyone actually configure NSS outside of
the defaults.

~~~
georgyo
You have never been in an organization that has used ldap or other user
backends then.

However, even the defaults on Debian and CentOS are affected here: the
dynamic user/host lookups that systemd provides also won't get picked up when
something doesn't read nsswitch.conf
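
For example, such a setup might carry something like this in /etc/nsswitch.conf (module names illustrative; every backend beyond `files` is loaded as a libnss_*.so at runtime, which is exactly where static linking gets awkward):

```
passwd:  files systemd ldap
group:   files systemd ldap
hosts:   files resolve [!UNAVAIL=return] dns
```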

------
e12e
Anyone able to expand on:

"The dynamic linker accepts the --preload argument to preload shared objects,
in addition to the LD_PRELOAD environment variable."?

Does one ever invoke the dynamic linker directly? Why? How?

~~~
iso-8859-1

        $ /lib64/ld-linux-x86-64.so.2 /bin/true --version
        true (GNU coreutils) 8.28
        [...]
    

If you run it without arguments it will tell you usage.

~~~
e12e
Ah, yes of course. I actually do this often to look for missing runtime
dependencies. I hadn't thought about preload in that context - or about the
dynamic linker as a way to run executables "by hand".

------
mort96
A lot of this sounds like great work. However, I assume I'll have to prepare
for more software breaking? When 2.28 rolled around, Electron and a bunch of
GNU software (which relied on glibc-specific stuff that changed) broke.

~~~
vortico
Software compiled against glibc links to versioned symbols, which are
backwards compatible in ABI and behavior. I'm unsure of the reason you
experienced breaking software when upgrading your glibc version.

~~~
mort96
The GNU software that broke just didn't compile (with no upstream fix
available for a long time, which I found incredible; I had to go find Arch
Linux's patch and apply it whenever I wanted to compile the GNU build tools).

The electron thing was apparently an LLVM linker thing according to your
sibling comment.

EDIT: the m4 patch in question:
[https://git.archlinux.org/svntogit/packages.git/tree/trunk/m4-1.4.18-glibc-change-work-around.patch?h=packages/m4](https://git.archlinux.org/svntogit/packages.git/tree/trunk/m4-1.4.18-glibc-change-work-around.patch?h=packages/m4)
\- apparently they still use it.

~~~
shakna
> FIXME: Do not rely on glibc internals.

Seems that's less of a glibc breaking compatibility, and more developers
relying on something outside of the guaranteed API.

~~~
mort96
I mean, it's GNU M4. It's at the core of GNU's build system. It's GNU
developers depending on glibc internals. I'd be with you if it were just some
random project, but it's pretty bad for a glibc update to break the GNU
toolchain.

------
metalforever
How is the support for the 68k in this release?

