Discussion:
Test-lock hang (not 100% reproducible) on GNU/Linux
Pádraig Brady
2016-12-23 17:25:49 UTC
https://lists.gnu.org/archive/html/bug-gnulib/2015-07/msg00032.html
I would view this as a false positive. The test uses some 'volatile'
variables to communicate among threads, and 'valgrind --tool=helgrind'
does not know about it. There is no mention of 'volatile' in the
helgrind documentation http://valgrind.org/docs/manual/hg-manual.html,
and there are similar reports at
https://sourceforge.net/p/valgrind/mailman/valgrind-users/thread/5038A955.6060108%40acm.org/#msg29721076
I've seen this test take a minute or so on a 40-core system.
The running time is a different problem. I observe a long running time
with "#define EXPLICIT_YIELD 0". But the default is "#define EXPLICIT_YIELD 1".
When I use the attached patch to avoid the use of 'volatile' and replace it
with a lock, as explained in
http://stackoverflow.com/questions/10091112/using-a-global-variable-to-control-a-while-loop-in-a-separate-thread,
$ time ./test-lock
Starting test_lock ... OK
Starting test_rwlock ... OK
Starting test_recursive_lock ... OK
Starting test_once ... OK
real 0m1.770s
user 0m1.240s
sys 0m10.268s
Can someone test how this looks on a 40-core system?
Wow, that's much better on a 40-core system:

Before your patch:
=================
$ time ./test-lock
Starting test_lock ... OK
Starting test_rwlock ... OK
Starting test_recursive_lock ... OK
Starting test_once ... OK

real 1m32.547s
user 1m32.455s
sys 13m21.532s

After your patch:
=================
$ time ./test-lock
Starting test_lock ... OK
Starting test_rwlock ... OK
Starting test_recursive_lock ... OK
Starting test_once ... OK

real 0m3.364s
user 0m3.087s
sys 0m25.477s

thanks!
Pádraig
Bruno Haible
2016-12-24 17:52:07 UTC
Hi Pádraig,
Post by Pádraig Brady
=================
$ time ./test-lock
Starting test_lock ... OK
Starting test_rwlock ... OK
Starting test_recursive_lock ... OK
Starting test_once ... OK
real 1m32.547s
user 1m32.455s
sys 13m21.532s
=================
$ time ./test-lock
Starting test_lock ... OK
Starting test_rwlock ... OK
Starting test_recursive_lock ... OK
Starting test_once ... OK
real 0m3.364s
user 0m3.087s
sys 0m25.477s
Wow, a 30x speed increase by using a lock instead of 'volatile'!

Thanks for the testing. I cleaned up the patch to reduce
code duplication and pushed it.

Still, I wonder about the cause of this speed difference.
It must be the read from the 'volatile' variable that is problematic,
because the program writes to the 'volatile' variable only 6 times in total.

What happens when a program reads from a 'volatile' variable
at address xy in a multi-processor system? It must do a broadcast
to all other CPUs "please flush your internal write caches", wait
for these flushes to be completed, and then do a read at address xy.
But the same procedure must also happen when taking a lock at
address xy. So, where does the speed difference come from?
The 'volatile' handling must be implemented in a terrible way:
either GCC generates inefficient instructions, or these instructions
are executed in a horrible way by the hardware.

What is the hardware of your 40-core machine (just for reference)?

Bruno
Pádraig Brady
2016-12-24 18:40:59 UTC
Post by Bruno Haible
Hi Pádraig,
Wow, a 30x speed increase by using a lock instead of 'volatile'!
Thanks for the testing. I cleaned up the patch to do less
code duplication and pushed it.
Still, I wonder about the cause of this speed difference.
It must be the read from the 'volatile' variable that is problematic,
because the program writes to 'volatile' variable only 6 times in total.
What happens when a program reads from a 'volatile' variable
at address xy in a multi-processor system? It must do a broadcast
to all other CPUs "please flush your internal write caches", wait
for these flushes to be completed, and then do a read at address xy.
But the same procedure must also happen when taking a lock at
address xy. So, where does the speed difference come from?
The 'volatile' handling must be implemented in a terrible way;
either GCC generates inefficient instructions? or these instructions
are executed in a horrible way by the hardware?
What is the hardware of your 40-core machine (just for reference)?
Might be NUMA related?
CPU is Intel(R) Xeon(R) CPU E5-2660 v2 @ 2.20GHz
Attached is the output from:
lstopo --no-legend -v -p --of png > test-lock.png

thanks again,
Pádraig
Bruno Haible
2016-12-24 21:24:37 UTC
Post by Pádraig Brady
Post by Bruno Haible
What happens when a program reads from a 'volatile' variable
at address xy in a multi-processor system?
Looking at the assembler code produced by GCC: GCC does not emit
any barrier or similar instructions for reads or writes to 'volatile'
variables. So, the loop in test_lock actually waits until the other
CPU _happens_ to flush its cache.
Post by Pádraig Brady
Might be NUMA related?
Indeed. Probably the CPUs in a NUMA system flush their caches to
main memory less frequently.

Whereas when we use a lock, the thread library or kernel executes
the appropriate cache flush instructions.

Bruno
Pavel Raiskup
2017-01-02 14:18:28 UTC
Post by Bruno Haible
Wow, a 30x speed increase by using a lock instead of 'volatile'!
Thanks for the testing. I cleaned up the patch to do less
code duplication and pushed it.
Thanks, that's a nice speedup! And sorry for the delay.

I still see infinite hang on ppc64le (sometimes), as discussed in [1]. It
looks like starvation of writers in test_rwlock().

Could we set PTHREAD_RWLOCK_PREFER_WRITER_NP (in test-lock.c) to avoid
those issues? One thing I'm afraid of is that writers could finish too
early. Could we artificially slow them down?

[1] https://lists.fedoraproject.org/archives/list/***@lists.fedoraproject.org/thread/PQD576JZLERFY6ROI3GF7UYXKZIRI33G/

Pavel
Bruno Haible
2017-01-02 15:50:28 UTC
Hi Pavel,
Post by Pavel Raiskup
One thing I'm afraid of is that writers could finish too
early. Could we artificially slow them down?
In test_rwlock the test does this:

  /* Wait for the threads to terminate.  */
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (threads[i], NULL);
  set_atomic_int_value (&rwlock_checker_done, 1);
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (checkerthreads[i], NULL);

It waits until all 10 mutator threads have terminated, then sets a
lock-protected variable rwlock_checker_done to 1, which signals to the
10 checker threads that they can terminate at the next occasion, and
then waits for them to terminate.

Are you saying that the kernel will schedule the 10 checker threads
with higher priority than the 10 mutator threads, although I have *not*
specified anything about priorities? That would be a kernel bug, IMO.

Especially since the problem occurs only on one architecture.
Post by Pavel Raiskup
Could we set PTHREAD_RWLOCK_PREFER_WRITER_NP (in test-lock.c) to avoid
those issues?
I disagree. The test is a minimal test of the kernel's multithreading
support. If, among 10 mutator threads and 10 checker threads all started
with the same priority, there is such a severe bias that the mutator threads
never get to run, you have a kernel bug. I should not need a non-portable
threading function in order to get 20 threads to run reasonably.

Imagine what scenarios you would then get with an application server and
400 threads.

Bruno
Pavel Raiskup
2017-01-02 16:37:25 UTC
Post by Bruno Haible
Hi Pavel,
Post by Pavel Raiskup
One thing I'm afraid of is that writers could finish too
early. Could we artificially slow them down?
  /* Wait for the threads to terminate.  */
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (threads[i], NULL);
  set_atomic_int_value (&rwlock_checker_done, 1);
  for (i = 0; i < THREAD_COUNT; i++)
    gl_thread_join (checkerthreads[i], NULL);
It waits until all 10 mutator threads are terminated, then sets a
lock-protected variable rwlock_checker_done to 1, which signals to the
10 checker threads that they can terminate at the next occasion, and
then waits for them to terminate.
Are you saying that the kernel will schedule the 10 checker threads
with higher priority than the 10 mutator threads, although I have *not*
specified anything about priorities? That would be a kernel bug, IMO.
That's what I'm not sure about. As discussed in [1], POSIX says (for
pthread_rwlock_wrlock()):

Implementations may favor writers over readers to avoid writer starvation.

But that's far from a 'shall favor' wording. And when I had a look at my
pthread_rwlockattr_setkind_np(3) man page, it says:

PTHREAD_RWLOCK_PREFER_READER_NP
This is the default. A thread may hold multiple read locks;
that is, read locks are recursive. According to The Single Unix
Specification, the behavior is unspecified when a reader tries
to place a lock, and there is no write lock but writers are
waiting. Giving preference to the reader, as is set by
PTHREAD_RWLOCK_PREFER_READER_NP, implies that the reader will
receive the requested lock, even if a writer is waiting. As
long as there are readers, the writer will be starved.
Post by Bruno Haible
Especially since the problem occurs only on one architecture.
I've been able to reproduce this on i686 in the meantime too, sorry -- I just
reported what I observed :(. See [1].
Post by Bruno Haible
Post by Pavel Raiskup
Could we set PTHREAD_RWLOCK_PREFER_WRITER_NP (in test-lock.c) to avoid
those issues?
I disagree. The test is a minimal test of the kernel's multithreading
support. If among 10 mutator threads and 10 checker threads, all started
with the same priority, it has such a severe bias that the mutator threads
never get to run, you have a kernel bug. I should not need a non-portable
threading function in order to get 20 threads to run reasonably.
Imagine what scenarios you would then get with an application server and
400 threads.
It might be a bug in libpthread, too, but based on the POSIX specs and manual
pages, I am not sure whether this can actually be considered a bug.

[1] https://lists.fedoraproject.org/archives/list/***@lists.fedoraproject.org/thread/PQD576JZLERFY6ROI3GF7UYXKZIRI33G/

Pavel
Bernhard Voelker
2017-01-02 19:02:03 UTC
Post by Pavel Raiskup
Post by Bruno Haible
Especially since the problem occurs only on one architecture.
I've been able to reproduce this on i686 in the meantime too, sorry -- I just
reported what I observed :(. See [1].
... or is it related to the Koji environment? I've seen some of the
gnulib tests in the coreutils-testsuite failing on several non-x86 archs
on the openSUSE build service in the past (especially on newer ones like aarch64).
But at least in the past year, the tests on all of i586, x86_64, ppc,
ppc64, ppc64le, aarch64 and armv7l have been quite stable.

https://build.opensuse.org/package/live_build_log/Base:System/coreutils-testsuite/openSUSE_Factory_PowerPC/ppc64le

Have a nice day,
Berny
Pavel Raiskup
2017-01-03 15:30:32 UTC
Hello Berny,
Post by Bernhard Voelker
Post by Pavel Raiskup
Post by Bruno Haible
Especially since the problem occurs only on one architecture.
I've been able to reproduce this on i686 in the meantime too, sorry -- I just
reported what I observed :(. See [1].
... or is it related to the Koji environment?
Maybe. I was able to reproduce this on an x86_64 VM, running the build (and
tests) in a mock i386 _chroot_ (the Koji system "cross-compiles" packages in
an i386 chroot).

So finally I was able to attach strace and gdb to the process (on an 8-core
machine), and gdb just confirmed that:

- the readers wait for the main process (busy loop)
- the main process waits for all writers to finish (thread join)
- the writers wait indefinitely for the rwlock to be released by all readers
Post by Bernhard Voelker
I've seen some of the gnulib tests in the coreutils-testsuite failing on
several non-x86 archs on the openSUSE build service in the past
(especially on newer ones like aarch64). But at least in the past year, the
tests on all of i586, x86_64, ppc, ppc64, ppc64le, aarch64 and armv7l
have been quite stable.
Seems to be a different issue.

I am able to reproduce this only very non-deterministically, under weird conditions.
But to make it a bit more deterministic -- please try the attached patch
which:

- prolongs the critical section in the reader thread
- to be a bit fairer, the number of concurrent threads is decreased
to 3 readers (and 3 writers), so the thread queue is not too long

After at most several iterations, there's only:

...
Checker 0x7fc9586b6700 after check unlock
Checker 0x7fc9586b6700 before check rdlock
Checker 0x7fc957eb5700 after check unlock
Checker 0x7fc957eb5700 before check rdlock
....

At least on my box ... is it the same for you? If yes, is there some
mistake in the patch? Because otherwise that would just prove that we are
testing behavior which is not guaranteed to happen; IOW, we can't
guarantee that the critical sections _don't_ always take the same (long
enough) time period, so that there is always one reader holding the lock.

I am afraid the explicit yield doesn't help because (probably?)
PTHREAD_RWLOCK_PREFER_READER_NP is the default? Dunno.

Pavel
Bruno Haible
2017-01-03 23:43:17 UTC
Post by Pavel Raiskup
Implementations may favor writers over readers to avoid writer starvation.
But that's too far from 'shall favor' spelling.
You must be looking at an old version of POSIX [1]. POSIX:2008 specifies [2]:

[TPS] If ... the threads involved in the lock are executing with the scheduling
policies SCHED_FIFO or SCHED_RR, the calling thread shall not acquire the
lock if a writer holds the lock or if writers of higher or equal priority
are blocked on the lock; otherwise, the calling thread shall acquire the
lock.

That states pretty clearly that the goal is that if both threads have equal
priority, the "writer" thread gets the rwlock.
Post by Pavel Raiskup
It might be a bug in libpthread, too, but based on the POSIX specs and manual
pages, I am not sure whether this can actually be considered a bug.
At least a quality of implementation issue, IMO.

Bruno

[1] http://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_rwlock_rdlock.html
[2] http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_rwlock_rdlock.html
Pavel Raiskup
2017-01-04 08:04:16 UTC
Post by Bruno Haible
Post by Pavel Raiskup
Implementations may favor writers over readers to avoid writer starvation.
But that's too far from 'shall favor' spelling.
[TPS] If ... the threads involved in the lock are executing with the scheduling
policies SCHED_FIFO or SCHED_RR, the calling thread shall not acquire the
lock if a writer holds the lock or if writers of higher or equal priority
are blocked on the lock; otherwise, the calling thread shall acquire the
lock.
That states pretty clearly that the goal is that if both threads have equal
priority, the "writer" thread gets the rwlock.
Ah, I was looking at an old specification. Those SCHED_FIFO (etc.) seem to
be real-time options, disabled by default and thus also in test-lock.c, but
[2] also says:

The calling thread acquires the read lock if a writer does not hold the
lock and there are no writers blocked on the lock.

This sounds fair. Thanks for pointing that out!

Remaining questions:

Can we assume all systems supporting pthreads conform to this
spec? That was a pretty big (and pretty recent) change of specification.

If we really want to test _this_ behavior (writers preferred over readers,
i.e. no rdlock when at least one wrlock is acquired), shouldn't we apply
something like the patch from [3]? Then that test would be deterministic for
everybody; now it is really a matter of luck.

Can we define (document) what the real issue is that we try to test with
test_rwlock? I mean: we should define what a "fair" scheduler means, and fail
if that's not true ... not hang. Now the definition of "fair" is "the test
ends in a finite time".

Can there be a timeout, at least?

Simply, after spending some time on this issue (and I'm not the first
one), I'd like to see some fix in gnulib so nobody else will face similar
issues in the future. Any other suggestion is welcome; I can possibly
try to have a look at the implementation.
Post by Bruno Haible
Post by Pavel Raiskup
It might be bug in libpthread, too, but based on the POSIX specs and manual
pages, I am not sure whether this might be actually considered a bug.
At least a quality of implementation issue, IMO.
Agreed. I'll try asking the glibc guys, or the kernel guys (Fedora/RHEL).

[3] https://lists.gnu.org/archive/html/bug-gnulib/2017-01/msg00024.html

Pavel
Post by Bruno Haible
Bruno
[1] http://pubs.opengroup.org/onlinepubs/007908799/xsh/pthread_rwlock_rdlock.html
[2] http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_rwlock_rdlock.html
Bruno Haible
2017-01-04 10:54:27 UTC
Hi Pavel,
Post by Pavel Raiskup
Can we assume all systems supporting pthreads conform to this
spec? That was a pretty big (and pretty recent) change of specification.
The change in the specification [4] mentions that the issue arose with glibc,
and it points to two tests from the Linux test project [5][6]. Can you run
these tests on your Koji system? It would be interesting to see if they
fail or hang as well.
Post by Pavel Raiskup
If we really want to test _this_ behavior (writers preferred over readers,
i.e. no rdlock when at least one wrlock is acquired), shouldn't we apply
something like the patch from [3]? Then that test would be deterministic for
everybody; now it is really a matter of luck.
Can we define (document) what the real issue is that we try to test with test_rwlock?
The test_rwlock function in test-lock.c is meant to test the gl_rwlock_t API
from glthread/lock.h (which, in our case, is just a transparent layer over the
POSIX API) in a textbook situation. It is not meant to test against a particular
bug.

If you can think of another simple textbook way to use rwlocks - without hacks
and without using *_NP functions - then you're welcome to change or replace
the test_rwlock function. But if you cannot, then the Austin group should throw
the rwlocks out of POSIX. The sentence from [4]
"applications should probably be encouraged to use mutexes unless rwlocks
are needed for performance reasons"
also goes in this direction.
Post by Pavel Raiskup
I'll try ask glibc guys, or kernel guys (Fedora/RHEL).
Thanks! This is the only promising avenue, and you can do it because you
can reproduce the issue on your Koji system (whereas for me, test_rwlock always
completes after a couple of seconds).
Post by Pavel Raiskup
shouldn't we apply something like the patch from [3]?
Well, the patch in [3] would make the test hang on all platforms, because the
writers would never get a chance to take the lock. This is pointless.
Post by Pavel Raiskup
Simply, after spending some time on this issue (and I'm not the first
one), I'd like to see some fix in gnulib so nobody else in future will
face similar issues.
The pessimistic suggestion would be to change the gnulib documentation to
state that rwlocks are never reliable, because of the way they are implemented
in glibc, and should therefore never be used. If we agree on this, then I'm
willing to put a #if 0 around test_rwlock.

Bruno
Post by Pavel Raiskup
[3] https://lists.gnu.org/archive/html/bug-gnulib/2017-01/msg00024.html
[4] http://austingroupbugs.net/view.php?id=722&nbn=5
[5] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-1.c
[6] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
Pádraig Brady
2017-01-04 11:23:38 UTC
Post by Bruno Haible
Hi Pavel,
Post by Pavel Raiskup
Can we assume all systems supporting pthreads conform to this
spec? That was a pretty big (and pretty recent) change of specification.
The change in the specification [4] mentions that the issue arose with glibc,
and it points to two tests from the Linux test project [5][6]. Can you run
these tests on your Koji system? It would be interesting to see if they
fail or hang as well.
Now that test-lock.c is relatively fast on NUMA/multicore systems,
it seems like it would be useful to first call alarm(30) or something
to protect against infinite hangs?

cheers,
Pádraig
Bruno Haible
2017-01-04 14:17:01 UTC
Post by Pádraig Brady
Now that test-lock.c is relatively fast on NUMA/multicore systems,
it seems like it would be useful to first call alarm(30) or something
to protect against infinite hangs?
If we could not pinpoint the origin of the problem, I agree, an alarm(30)
would be the right thing to prevent an infinite hang.

But by now, we know

1) It's a glibc bug: The test [6] fails even after it has set the
policies that POSIX expects for the "writers get the rwlock in preference
to readers" guarantee.

2) Without this guarantee, a reader function that repeatedly spends
I milliseconds in a section protected by the rwlock,
O milliseconds without the rwlock being held,
in a system with N reader threads in parallel
will lead to
- a successful termination if N * I / (I + O) < 1.0
- an infinite hang if N * I / (I + O) > 1.0
(There is actually no discontinuity at 1.0; one needs probability
calculus for a more detailed analysis.)
So, in order to make test_rwlock hang-free, I would need to introduce
a sleep() without the rwlock being held, and the duration of this sleep
would be at least (N - 1) * I.

Now, asking an application writer to add sleep()s in his code, with
a duration that depends both on the number of threads and on the time
spent in specific portions of the code, is outrageous.

So, as it stands, POSIX rwlock without a "writers get preference" guarantee
is unusable.

I propose to do what we usually do in gnulib, to work around bugs and unusable
APIs:
- Write a configure test for the guarantee, based on [6].
- Modify the 'lock' module to use its own implementation of rwlock.
- Add a unit test to verify the guarantee (so that we can also detect
if the same problem occurs in pth or Solaris), again based on [6].

Patch in preparation...

Bruno

[6] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
Pavel Raiskup
2017-01-04 14:48:53 UTC
Post by Bruno Haible
Post by Pádraig Brady
Now that test-lock.c is relatively fast on numa/multicore systems,
it seems like it would be useful to first alarm(30) or something
to protect against infinite hangs?
If we could not pinpoint the origin of the problem, I agree, an alarm(30)
would be the right thing to prevent an infinite hang.
But by now, we know
1) It's a glibc bug: The test [6] fails even after it has set the
policies that POSIX expects for the "writers get the rwlock in preference
to readers" guarantee.
2) Without this guarantee, a reader function that repeatedly spends
I milliseconds in a section protected by the rwlock,
O milliseconds without the rwlock being held,
in a system with N reader threads in parallel
will lead to
- a successful termination if N * I / (I + O) < 1.0
- an infinite hang if N * I / (I + O) > 1.0
(There is actually no discontinuity at 1.0; one needs probability
calculus for a more detailed analysis.)
So, in order to make test_rwlock hang-free, I would need to introduce
a sleep() without the rwlock being held, and the duration of this sleep
would be at least (N - 1) * I.
Now, asking an application writer to add sleep()s in his code, with
a duration that depends both on the number of threads and on the time
spent in specific portions of the code, is outrageous.
So, as it stands, POSIX rwlock without a "writers get preference" guarantee
is unusable.
Even if we play with the probabilities a bit longer, I'm still afraid this
just moves the problem somewhere else ... because if writers had preference,
and they were able to hold the rwlock all the time, readers would starve.

I agree that gl_pthread_rwlock* should match the specification, but at
least in the actual algorithm in test_rwlock() we should make sure that
some readers are actually doing something _during_ the writers' typhoon...
(this is not a hang of test_rwlock() anymore, but certainly we want to test
something...).

Pavel
Post by Bruno Haible
I propose to do what we usually do in gnulib, to work around bugs and unusable
APIs:
- Write a configure test for the guarantee, based on [6].
- Modify the 'lock' module to use its own implementation of rwlock.
- Add a unit test to verify the guarantee (so that we can also detect
if the same problem occurs in pth or Solaris), again based on [6].
Patch in preparation...
Bruno
[6] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
Bruno Haible
2017-01-04 11:58:37 UTC
Post by Bruno Haible
and it points to two tests from the Linux test project [5][6]. Can you run
these tests on your Koji system?
For me, these two tests fail on a glibc-2.23 system:

$ wget https://raw.githubusercontent.com/linux-test-project/ltp/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-1.c
$ wget https://raw.githubusercontent.com/linux-test-project/ltp/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
$ wget https://raw.githubusercontent.com/linux-test-project/ltp/master/testcases/open_posix_testsuite/include/posixtest.h
$ gcc -O -Wall 2-1.c -lpthread
$ ./a.out
main: has priority: 3
main: attempt read lock
main: acquired read lock
main: create wr_thread, with priority: 2
wr_thread: attempt write lock
main: create rd_thread, with priority: 1
rd_thread: attempt read lock
rd_thread: acquired read lock
rd_thread: unlock read lock
Test FAILED: rd_thread did not block on read lock, when a reader owns the lock, and a higher priority writer is waiting for the lock
$ gcc -O -Wall 2-2.c -lpthread
$ ./a.out
main: attempt read lock
main: acquired read lock
main: create wr_thread, with priority: 2
wr_thread: attempt write lock
main: create rd_thread, with priority: 2
rd_thread: attempt read lock
rd_thread: acquired read lock
rd_thread: unlock read lock
Test FAILED: rd_thread did not block on read lock, when a reader owns the lock, and an equal priority writer is waiting for the lock

In an 'ltrace' invocation, I see calls to pthread_rwlock_rdlock.
In an 'strace' invocation, I only see calls to futex, rt_sigaction,
rt_sigprocmask, which can be assumed to be correctly implemented.
So, the blame is clearly on glibc.

Bruno
Pavel Raiskup
2017-01-04 12:19:36 UTC
Hi Bruno,
Post by Bruno Haible
Hi Pavel,
Post by Pavel Raiskup
Can we assume all systems supporting pthreads conform to this
spec? That was a pretty big (and pretty recent) change of specification.
The change in the specification [4] mentions that the issue arose with glibc,
and it points to two tests from the Linux test project [5][6]. Can you run
these tests on your Koji system? It would be interesting to see if they
fail or hang as well.
Thanks for the links!
Post by Bruno Haible
Post by Pavel Raiskup
If we really want to test _this_ behavior (writers preferred over readers,
i.e. no rdlock when at least one wrlock is acquired), shouldn't we apply
something like the patch from [3]? Then that test would be deterministic for
everybody; now it is really a matter of luck.
Can we define (document) what the real issue is that we try to test with test_rwlock?
The test_rwlock function in test-lock.c is meant to test the gl_rwlock_t API
from glthread/lock.h (which, in our case, is just a transparent layer over the
POSIX API) in a textbook situation. It is not meant to test against a particular
bug.
If you can think of another simple textbook way to use rwlocks - without hacks
and without using *_NP functions - then you're welcome to change or replace
the test_rwlock function. But if you cannot, then the Austin group should throw
the rwlocks out of POSIX. The sentence from [4]
"applications should probably be encouraged to use mutexes unless rwlocks
are needed for performance reasons"
also goes into this direction.
Post by Pavel Raiskup
I'll try ask glibc guys, or kernel guys (Fedora/RHEL).
Thanks! This is the only promising avenue, and you can do it because you
can reproduce the issue on your Koji system (whereas for me, test_rwlock always
completes after a couple of seconds).
Done here https://bugzilla.redhat.com/show_bug.cgi?id=1410052

But what I've done (and others too, at least in Fedora) is that I
explicitly disabled the 'test-lock' test for Koji builds (temporarily, in
rhbz#1406031, until this gets resolved/worked-around somewhere else).
Post by Bruno Haible
Post by Pavel Raiskup
shouldn't we apply something like the patch from [3]?
Well, the patch in [3] would make the test hang on all platforms, because the
writers would never get a chance to take the lock. This is pointless.
I don't see how the patch changes the problem; IMO it just makes this
test case more deterministic. There's no "infinite rdlock" in the
test case, right? Only the critical section is a bit longer for the readers.
And there's always _at least_ one reader in the critical section ... But what
happens (you claim on all platforms!) is that a reader gets into the critical
section again every time, even when there are writers waiting on the lock.

Based on the specs this should not happen, but it happens everywhere. And the
result is that, on some platforms, the critical section is probably long enough
only because handling the rwlocks takes some time.
Post by Bruno Haible
Post by Pavel Raiskup
Simply, after spending some time on this issue (and I'm not the first
one), I'd like to see some fix in gnulib so nobody else in future will
face similar issues.
The pessimistic suggestion would be to change the gnulib documentation
to state that rwlocks are never reliable, because of the way they are
implemented in glibc, and should therefore never be used. If we agree on
this, then I'm willing to put a #if 0 around test_rwlock.
I don't want to claim rwlocks are not reliable. IMO rwlocks do what we
ask them to do... one writer OR multiple readers.

The question is what the default policy should be ... who should be more
privileged by default (writers/readers). The specs recently changed from
"unspecified" to "privileged writers" by default. The *_np() functions don't
seem to be backed by POSIX.

But consider a case where the writers are in the critical section a bit
longer; then the _readers_ won't ever get into the critical section
(according to the recent specs). So it is really just a matter of policy
(set explicitly by applications in general) that we haven't specified in the
textbook example so far.

For me -- if test_rwlock() is just a textbook example -- I am not against
moving it to the docs.

Pavel
Post by Bruno Haible
Bruno
Post by Pavel Raiskup
[3] https://lists.gnu.org/archive/html/bug-gnulib/2017-01/msg00024.html
[4] http://austingroupbugs.net/view.php?id=722&nbn=5
[5] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-1.c
[6] https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
Pavel Raiskup
2017-01-04 12:27:13 UTC
Post by Pavel Raiskup
I don't want to claim rwlocks are not reliable. IMO rwlocks do what we
ask them to do... One writer OR multiple readers.
The question is what the default policy should be ... who should be more
privileged by default (writers or readers). The specs recently changed the
default from "unspecified" to "writers privileged". The *_np() functions don't
seem to be backed by POSIX.
As gnulib is a portability library, it would probably be nice if gnulib
automatically set the appropriate policy according to the actual specifications
(even if we had to set the policy via non-portable calls).

Pavel
Bruno Haible
2017-01-04 14:32:02 UTC
Post by Pavel Raiskup
As gnulib is a portability library, it would probably be nice if gnulib
automatically set the appropriate policy according to the actual specifications
(even if we had to set the policy via non-portable calls).
The 2-2.c test shows that it still fails, even if the right policy is set.

Bruno
Bruno Haible
2017-01-05 12:04:43 UTC
Post by Pavel Raiskup
I still see an infinite hang on ppc64le (sometimes), as discussed in [1]. It
looks like starvation of writers in test_rwlock().
Could we set PTHREAD_RWLOCK_PREFER_WRITER_NP (in test-lock.c) to avoid
those issues?
Here's what I'm pushing.

You were right with your intuition that some _NP API in glibc would help,
but I wanted to first understand the big picture (when writer starvation
can occur, what POSIX says about it, what glibc actually implements,
what PTHREAD_RWLOCK_PREFER_WRITER_NP actually does) and where to apply
the changes (in the test-lock.c source, in the 'lock' module, as a
pthread_rwlock_init override, etc.).


2017-01-05 Bruno Haible <***@clisp.org>

lock: Provide guarantee to avoid writer starvation for rwlocks.
The rationale is: 1) Read-preferring read-write locks are prone to
writer starvation if the number of reader threads multiplied by the
percentage of time they have the lock held is too high. 2) Write-
preferring read-write locks are the only reliable way to avoid this.
3) There have been reports of 'test-lock' hanging on glibc systems
http://lists.gnu.org/archive/html/bug-gnulib/2017-01/msg00009.html,
and glibc indeed implements read-preferring rwlocks by default, see
http://man7.org/linux/man-pages/man3/pthread_rwlockattr_setkind_np.3.html
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701 .
* m4/pthread_rwlock_rdlock.m4: New file.
* m4/lock.m4 (gl_LOCK): Invoke gl_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER.
* lib/glthread/lock.h [USE_POSIX_THREADS]: Test
HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER. Use a different implementation
of rwlock initialization on glibc systems without
HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER. Use a different implementation
of rwlocks altogether on non-glibc systems without
HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER.
[USE_PTH_THREADS]: Use a different implementation of rwlocks altogether.
* lib/glthread/lock.c [USE_POSIX_THREADS]
(glthread_rwlock_init_for_glibc): New function.
[USE_POSIX_THREADS] (glthread_rwlock_rdlock_multithreaded): Update
comment.
[USE_PTH_THREADS]: New implementation of rwlocks.
[USE_WINDOWS_THREADS] (glthread_rwlock_rdlock_func): Prefer writers over
readers.
* modules/lock (Files): Add m4/pthread_rwlock_rdlock.m4.
(Depends-on): Add 'extensions'.
* tests/test-rwlock1.c: New file.
* modules/lock-tests (Files): Add it.
(Depends-on): Add usleep.
(Makefile.am): Add test-rwlock1 to the tests.

diff --git a/m4/pthread_rwlock_rdlock.m4 b/m4/pthread_rwlock_rdlock.m4
new file mode 100644
index 0000000..da69865
--- /dev/null
+++ b/m4/pthread_rwlock_rdlock.m4
@@ -0,0 +1,163 @@
+# pthread_rwlock_rdlock.m4 serial 1
+dnl Copyright (C) 2017 Free Software Foundation, Inc.
+dnl This file is free software; the Free Software Foundation
+dnl gives unlimited permission to copy and/or distribute it,
+dnl with or without modifications, as long as this notice is preserved.
+
+dnl From Bruno Haible.
+dnl Inspired by
+dnl https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
+dnl by Intel Corporation.
+
+dnl Test whether in a situation where
+dnl - an rwlock is taken by a reader and has a writer waiting,
+dnl - an additional reader requests the lock,
+dnl - the waiting writer and the requesting reader threads have the same
+dnl priority,
+dnl the requesting reader thread gets blocked, so that at some point the
+dnl waiting writer can acquire the lock.
+dnl Without such a guarantee, when there are N readers and each of the readers
+dnl spends more than 1/Nth of the time with the lock held, there is a high
+dnl probability that the waiting writer will not get the lock in a given finite
+dnl time, a phenomenon called "writer starvation".
+dnl Without such a guarantee, applications have a hard time avoiding writer
+dnl starvation.
+dnl
+dnl POSIX:2008 makes this requirement only for implementations that support TPS
+dnl (Thread Priority Scheduling) and only for the scheduling policies SCHED_FIFO
+dnl and SCHED_RR, see
+dnl http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_rwlock_rdlock.html
+dnl but this test verifies the guarantee regardless of TPS and regardless of
+dnl scheduling policy.
+AC_DEFUN([gl_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER],
+[
+ AC_REQUIRE([gl_THREADLIB_EARLY])
+ AC_CACHE_CHECK([whether pthread_rwlock_rdlock prefers a writer to a reader],
+ [gl_cv_pthread_rwlock_rdlock_prefer_writer],
+ [save_LIBS="$LIBS"
+ LIBS="$LIBS $LIBMULTITHREAD"
+ AC_RUN_IFELSE(
+ [AC_LANG_SOURCE([[
+#include <errno.h>
+#include <pthread.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#define SUCCEED() exit (0)
+#define FAILURE() exit (1)
+#define UNEXPECTED(n) (exit (10 + (n)))
+
+/* The main thread creates the waiting writer and the requesting reader threads
+ in the default way; this guarantees that they have the same priority.
+ We can reuse the main thread as first reader thread. */
+
+static pthread_rwlock_t lock;
+static pthread_t reader1;
+static pthread_t writer;
+static pthread_t reader2;
+static pthread_t timer;
+/* Used to pass control from writer to reader2 and from reader2 to timer,
+ as in a relay race.
+ Passing control from one running thread to another running thread
+ is most likely faster than to create the second thread. */
+static pthread_mutex_t baton;
+
+static void *
+timer_func (void *ignored)
+{
+ /* Step 13 (can be before or after step 12):
+ The timer thread takes the baton, then waits a moment to make sure
+ it can tell whether the second reader thread is blocked at step 12. */
+ if (pthread_mutex_lock (&baton))
+ UNEXPECTED (13);
+ usleep (100000);
+ /* By the time we get here, it's clear that the second reader thread is
+ blocked at step 12. This is the desired behaviour. */
+ SUCCEED ();
+}
+
+static void *
+reader2_func (void *ignored)
+{
+ int err;
+
+ /* Step 8 (can be before or after step 7):
+ The second reader thread takes the baton, then waits a moment to make sure
+ the writer thread has reached step 7. */
+ if (pthread_mutex_lock (&baton))
+ UNEXPECTED (8);
+ usleep (100000);
+ /* Step 9: The second reader thread requests the lock. */
+ err = pthread_rwlock_tryrdlock (&lock);
+ if (err == 0)
+ FAILURE ();
+ else if (err != EBUSY)
+ UNEXPECTED (9);
+ /* Step 10: Launch a timer, to test whether the next call blocks. */
+ if (pthread_create (&timer, NULL, timer_func, NULL))
+ UNEXPECTED (10);
+ /* Step 11: Release the baton. */
+ if (pthread_mutex_unlock (&baton))
+ UNEXPECTED (11);
+ /* Step 12: The second reader thread requests the lock. */
+ err = pthread_rwlock_rdlock (&lock);
+ if (err == 0)
+ FAILURE ();
+ else
+ UNEXPECTED (12);
+}
+
+static void *
+writer_func (void *ignored)
+{
+ /* Step 4: Take the baton, so that the second reader thread does not go ahead
+ too early. */
+ if (pthread_mutex_lock (&baton))
+ UNEXPECTED (4);
+ /* Step 5: Create the second reader thread. */
+ if (pthread_create (&reader2, NULL, reader2_func, NULL))
+ UNEXPECTED (5);
+ /* Step 6: Release the baton. */
+ if (pthread_mutex_unlock (&baton))
+ UNEXPECTED (6);
+ /* Step 7: The writer thread requests the lock. */
+ if (pthread_rwlock_wrlock (&lock))
+ UNEXPECTED (7);
+ return NULL;
+}
+
+int
+main ()
+{
+ reader1 = pthread_self ();
+
+ /* Step 1: The main thread initializes the lock and the baton. */
+ if (pthread_rwlock_init (&lock, NULL))
+ UNEXPECTED (1);
+ if (pthread_mutex_init (&baton, NULL))
+ UNEXPECTED (1);
+ /* Step 2: The main thread acquires the lock as a reader. */
+ if (pthread_rwlock_rdlock (&lock))
+ UNEXPECTED (2);
+ /* Step 3: Create the writer thread. */
+ if (pthread_create (&writer, NULL, writer_func, NULL))
+ UNEXPECTED (3);
+ /* Job done. Go to sleep. */
+ for (;;)
+ {
+ sleep (1);
+ }
+}
+]])],
+ [gl_cv_pthread_rwlock_rdlock_prefer_writer=yes],
+ [gl_cv_pthread_rwlock_rdlock_prefer_writer=no],
+ [gl_cv_pthread_rwlock_rdlock_prefer_writer="guessing yes"])
+ LIBS="$save_LIBS"
+ ])
+ case "$gl_cv_pthread_rwlock_rdlock_prefer_writer" in
+ *yes)
+ AC_DEFINE([HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER], [1],
+ [Define if the 'pthread_rwlock_rdlock' function prefers a writer to a reader.])
+ ;;
+ esac
+])
diff --git a/m4/lock.m4 b/m4/lock.m4
index 31aea18..cb04a67 100644
--- a/m4/lock.m4
+++ b/m4/lock.m4
@@ -1,4 +1,4 @@
-# lock.m4 serial 13 (gettext-0.18.2)
+# lock.m4 serial 14
dnl Copyright (C) 2005-2017 Free Software Foundation, Inc.
dnl This file is free software; the Free Software Foundation
dnl gives unlimited permission to copy and/or distribute it,
@@ -12,11 +12,16 @@ AC_DEFUN([gl_LOCK],
if test "$gl_threads_api" = posix; then
# OSF/1 4.0 and Mac OS X 10.1 lack the pthread_rwlock_t type and the
# pthread_rwlock_* functions.
+ has_rwlock=false
AC_CHECK_TYPE([pthread_rwlock_t],
- [AC_DEFINE([HAVE_PTHREAD_RWLOCK], [1],
+ [has_rwlock=true
+ AC_DEFINE([HAVE_PTHREAD_RWLOCK], [1],
[Define if the POSIX multithreading library has read/write locks.])],
[],
[#include <pthread.h>])
+ if $has_rwlock; then
+ gl_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER
+ fi
# glibc defines PTHREAD_MUTEX_RECURSIVE as enum, not as a macro.
AC_COMPILE_IFELSE([
AC_LANG_PROGRAM(
diff --git a/lib/glthread/lock.h b/lib/glthread/lock.h
index 3ea98d6..69f6e31 100644
--- a/lib/glthread/lock.h
+++ b/lib/glthread/lock.h
@@ -176,7 +176,7 @@ typedef pthread_mutex_t gl_lock_t;

/* ------------------------- gl_rwlock_t datatype ------------------------- */

-# if HAVE_PTHREAD_RWLOCK
+# if HAVE_PTHREAD_RWLOCK && (HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER || (__GNU_LIBRARY__ > 1))

# ifdef PTHREAD_RWLOCK_INITIALIZER

@@ -185,10 +185,18 @@ typedef pthread_rwlock_t gl_rwlock_t;
STORAGECLASS pthread_rwlock_t NAME;
# define gl_rwlock_define_initialized(STORAGECLASS, NAME) \
STORAGECLASS pthread_rwlock_t NAME = gl_rwlock_initializer;
-# define gl_rwlock_initializer \
- PTHREAD_RWLOCK_INITIALIZER
-# define glthread_rwlock_init(LOCK) \
- (pthread_in_use () ? pthread_rwlock_init (LOCK, NULL) : 0)
+# if HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER
+# define gl_rwlock_initializer \
+ PTHREAD_RWLOCK_INITIALIZER
+# define glthread_rwlock_init(LOCK) \
+ (pthread_in_use () ? pthread_rwlock_init (LOCK, NULL) : 0)
+# else /* glibc with bug https://sourceware.org/bugzilla/show_bug.cgi?id=13701 */
+# define gl_rwlock_initializer \
+ PTHREAD_RWLOCK_WRITER_NONRECURSIVE_INITIALIZER_NP
+# define glthread_rwlock_init(LOCK) \
+ (pthread_in_use () ? glthread_rwlock_init_for_glibc (LOCK) : 0)
+extern int glthread_rwlock_init_for_glibc (pthread_rwlock_t *lock);
+# endif
# define glthread_rwlock_rdlock(LOCK) \
(pthread_in_use () ? pthread_rwlock_rdlock (LOCK) : 0)
# define glthread_rwlock_wrlock(LOCK) \
@@ -427,6 +435,9 @@ typedef pth_mutex_t gl_lock_t;

/* ------------------------- gl_rwlock_t datatype ------------------------- */

+/* Pth pth_rwlock_acquire always prefers readers. No autoconf test so far. */
+# if HAVE_PTH_RWLOCK_ACQUIRE_PREFER_WRITER
+
typedef pth_rwlock_t gl_rwlock_t;
# define gl_rwlock_define(STORAGECLASS, NAME) \
STORAGECLASS pth_rwlock_t NAME;
@@ -445,6 +456,42 @@ typedef pth_rwlock_t gl_rwlock_t;
# define glthread_rwlock_destroy(LOCK) \
((void)(LOCK), 0)

+# else
+
+typedef struct
+ {
+ int initialized;
+ pth_mutex_t lock; /* protects the remaining fields */
+ pth_cond_t waiting_readers; /* waiting readers */
+ pth_cond_t waiting_writers; /* waiting writers */
+ unsigned int waiting_writers_count; /* number of waiting writers */
+ int runcount; /* number of readers running, or -1 when a writer runs */
+ }
+ gl_rwlock_t;
+# define gl_rwlock_define(STORAGECLASS, NAME) \
+ STORAGECLASS gl_rwlock_t NAME;
+# define gl_rwlock_define_initialized(STORAGECLASS, NAME) \
+ STORAGECLASS gl_rwlock_t NAME = gl_rwlock_initializer;
+# define gl_rwlock_initializer \
+ { 0 }
+# define glthread_rwlock_init(LOCK) \
+ (pth_in_use () ? glthread_rwlock_init_multithreaded (LOCK) : 0)
+# define glthread_rwlock_rdlock(LOCK) \
+ (pth_in_use () ? glthread_rwlock_rdlock_multithreaded (LOCK) : 0)
+# define glthread_rwlock_wrlock(LOCK) \
+ (pth_in_use () ? glthread_rwlock_wrlock_multithreaded (LOCK) : 0)
+# define glthread_rwlock_unlock(LOCK) \
+ (pth_in_use () ? glthread_rwlock_unlock_multithreaded (LOCK) : 0)
+# define glthread_rwlock_destroy(LOCK) \
+ (pth_in_use () ? glthread_rwlock_destroy_multithreaded (LOCK) : 0)
+extern int glthread_rwlock_init_multithreaded (gl_rwlock_t *lock);
+extern int glthread_rwlock_rdlock_multithreaded (gl_rwlock_t *lock);
+extern int glthread_rwlock_wrlock_multithreaded (gl_rwlock_t *lock);
+extern int glthread_rwlock_unlock_multithreaded (gl_rwlock_t *lock);
+extern int glthread_rwlock_destroy_multithreaded (gl_rwlock_t *lock);
+
+# endif
+
/* --------------------- gl_recursive_lock_t datatype --------------------- */

/* In Pth, mutexes are recursive by default. */
diff --git a/lib/glthread/lock.c b/lib/glthread/lock.c
index 6760bbd..061562b 100644
--- a/lib/glthread/lock.c
+++ b/lib/glthread/lock.c
@@ -30,9 +30,38 @@

/* ------------------------- gl_rwlock_t datatype ------------------------- */

-# if HAVE_PTHREAD_RWLOCK
+# if HAVE_PTHREAD_RWLOCK && (HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER || (__GNU_LIBRARY__ > 1))

-# if !defined PTHREAD_RWLOCK_INITIALIZER
+# ifdef PTHREAD_RWLOCK_INITIALIZER
+
+# if !HAVE_PTHREAD_RWLOCK_RDLOCK_PREFER_WRITER
+ /* glibc with bug https://sourceware.org/bugzilla/show_bug.cgi?id=13701 */
+
+int
+glthread_rwlock_init_for_glibc (pthread_rwlock_t *lock)
+{
+ pthread_rwlockattr_t attributes;
+ int err;
+
+ err = pthread_rwlockattr_init (&attributes);
+ if (err != 0)
+ return err;
+ /* Note: PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP is the only value that
+ causes the writer to be preferred. PTHREAD_RWLOCK_PREFER_WRITER_NP does not
+ do this; see
+ http://man7.org/linux/man-pages/man3/pthread_rwlockattr_setkind_np.3.html */
+ err = pthread_rwlockattr_setkind_np (&attributes,
+ PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP);
+ if (err == 0)
+ err = pthread_rwlock_init (lock, &attributes);
+ /* pthread_rwlockattr_destroy always returns 0. It cannot influence the
+ return value. */
+ pthread_rwlockattr_destroy (&attributes);
+ return err;
+}
+
+# endif
+# else

int
glthread_rwlock_init_multithreaded (gl_rwlock_t *lock)
@@ -152,11 +181,9 @@ glthread_rwlock_rdlock_multithreaded (gl_rwlock_t *lock)
if (err != 0)
return err;
/* Test whether only readers are currently running, and whether the runcount
- field will not overflow. */
- /* POSIX says: "It is implementation-defined whether the calling thread
- acquires the lock when a writer does not hold the lock and there are
- writers blocked on the lock." Let's say, no: give the writers a higher
- priority. */
+ field will not overflow, and whether no writer is waiting. The latter
+ condition is because POSIX recommends that "write locks shall take
+ precedence over read locks", to avoid "writer starvation". */
while (!(lock->runcount + 1 > 0 && lock->waiting_writers_count == 0))
{
/* This thread has to wait for a while. Enqueue it among the
@@ -481,6 +508,141 @@ glthread_once_singlethreaded (pthread_once_t *once_control)

/* ------------------------- gl_rwlock_t datatype ------------------------- */

+# if !HAVE_PTH_RWLOCK_ACQUIRE_PREFER_WRITER
+
+int
+glthread_rwlock_init_multithreaded (gl_rwlock_t *lock)
+{
+ if (!pth_mutex_init (&lock->lock))
+ return errno;
+ if (!pth_cond_init (&lock->waiting_readers))
+ return errno;
+ if (!pth_cond_init (&lock->waiting_writers))
+ return errno;
+ lock->waiting_writers_count = 0;
+ lock->runcount = 0;
+ lock->initialized = 1;
+ return 0;
+}
+
+int
+glthread_rwlock_rdlock_multithreaded (gl_rwlock_t *lock)
+{
+ if (!lock->initialized)
+ glthread_rwlock_init_multithreaded (lock);
+ if (!pth_mutex_acquire (&lock->lock, 0, NULL))
+ return errno;
+ /* Test whether only readers are currently running, and whether the runcount
+ field will not overflow, and whether no writer is waiting. The latter
+ condition is because POSIX recommends that "write locks shall take
+ precedence over read locks", to avoid "writer starvation". */
+ while (!(lock->runcount + 1 > 0 && lock->waiting_writers_count == 0))
+ {
+ /* This thread has to wait for a while. Enqueue it among the
+ waiting_readers. */
+ if (!pth_cond_await (&lock->waiting_readers, &lock->lock, NULL))
+ {
+ int err = errno;
+ pth_mutex_release (&lock->lock);
+ return err;
+ }
+ }
+ lock->runcount++;
+ return (!pth_mutex_release (&lock->lock) ? errno : 0);
+}
+
+int
+glthread_rwlock_wrlock_multithreaded (gl_rwlock_t *lock)
+{
+ if (!lock->initialized)
+ glthread_rwlock_init_multithreaded (lock);
+ if (!pth_mutex_acquire (&lock->lock, 0, NULL))
+ return errno;
+ /* Test whether no readers or writers are currently running. */
+ while (!(lock->runcount == 0))
+ {
+ /* This thread has to wait for a while. Enqueue it among the
+ waiting_writers. */
+ lock->waiting_writers_count++;
+ if (!pth_cond_await (&lock->waiting_writers, &lock->lock, NULL))
+ {
+ int err = errno;
+ lock->waiting_writers_count--;
+ pth_mutex_release (&lock->lock);
+ return err;
+ }
+ lock->waiting_writers_count--;
+ }
+ lock->runcount--; /* runcount becomes -1 */
+ return (!pth_mutex_release (&lock->lock) ? errno : 0);
+}
+
+int
+glthread_rwlock_unlock_multithreaded (gl_rwlock_t *lock)
+{
+ int err;
+
+ if (!lock->initialized)
+ return EINVAL;
+ if (!pth_mutex_acquire (&lock->lock, 0, NULL))
+ return errno;
+ if (lock->runcount < 0)
+ {
+ /* Drop a writer lock. */
+ if (!(lock->runcount == -1))
+ {
+ pth_mutex_release (&lock->lock);
+ return EINVAL;
+ }
+ lock->runcount = 0;
+ }
+ else
+ {
+ /* Drop a reader lock. */
+ if (!(lock->runcount > 0))
+ {
+ pth_mutex_release (&lock->lock);
+ return EINVAL;
+ }
+ lock->runcount--;
+ }
+ if (lock->runcount == 0)
+ {
+ /* POSIX recommends that "write locks shall take precedence over read
+ locks", to avoid "writer starvation". */
+ if (lock->waiting_writers_count > 0)
+ {
+ /* Wake up one of the waiting writers. */
+ if (!pth_cond_notify (&lock->waiting_writers, FALSE))
+ {
+ int err = errno;
+ pth_mutex_release (&lock->lock);
+ return err;
+ }
+ }
+ else
+ {
+ /* Wake up all waiting readers. */
+ if (!pth_cond_notify (&lock->waiting_readers, TRUE))
+ {
+ int err = errno;
+ pth_mutex_release (&lock->lock);
+ return err;
+ }
+ }
+ }
+ return (!pth_mutex_release (&lock->lock) ? errno : 0);
+}
+
+int
+glthread_rwlock_destroy_multithreaded (gl_rwlock_t *lock)
+{
+ lock->initialized = 0;
+ return 0;
+}
+
+# endif
+
/* --------------------- gl_recursive_lock_t datatype --------------------- */

/* -------------------------- gl_once_t datatype -------------------------- */
@@ -796,8 +958,10 @@ glthread_rwlock_rdlock_func (gl_rwlock_t *lock)
}
EnterCriticalSection (&lock->lock);
/* Test whether only readers are currently running, and whether the runcount
- field will not overflow. */
- if (!(lock->runcount + 1 > 0))
+ field will not overflow, and whether no writer is waiting. The latter
+ condition is because POSIX recommends that "write locks shall take
+ precedence over read locks", to avoid "writer starvation". */
+ if (!(lock->runcount + 1 > 0 && lock->waiting_writers.count == 0))
{
/* This thread has to wait for a while. Enqueue it among the
waiting_readers. */
diff --git a/modules/lock b/modules/lock
index 1231399..dafeeeb 100644
--- a/modules/lock
+++ b/modules/lock
@@ -5,8 +5,10 @@ Files:
lib/glthread/lock.h
lib/glthread/lock.c
m4/lock.m4
+m4/pthread_rwlock_rdlock.m4

Depends-on:
+extensions
threadlib

configure.ac:
diff --git a/tests/test-rwlock1.c b/tests/test-rwlock1.c
new file mode 100644
index 0000000..66dfa7b
--- /dev/null
+++ b/tests/test-rwlock1.c
@@ -0,0 +1,157 @@
+/* Test of glthread_rwlock_rdlock function.
+ Copyright (C) 2017 Free Software Foundation, Inc.
+
+ This program is free software: you can redistribute it and/or modify
+ it under the terms of the GNU General Public License as published by
+ the Free Software Foundation; either version 3 of the License, or
+ (at your option) any later version.
+
+ This program is distributed in the hope that it will be useful,
+ but WITHOUT ANY WARRANTY; without even the implied warranty of
+ MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
+ GNU General Public License for more details.
+
+ You should have received a copy of the GNU General Public License
+ along with this program. If not, see <http://www.gnu.org/licenses/>. */
+
+/* Written by Bruno Haible <***@clisp.org>, 2005.
+ Inspired by
+ https://github.com/linux-test-project/ltp/blob/master/testcases/open_posix_testsuite/conformance/interfaces/pthread_rwlock_rdlock/2-2.c
+ by Intel Corporation. */
+
+#include <config.h>
+
+#include "glthread/lock.h"
+
+#include <errno.h>
+#include <stdio.h>
+#include <stdlib.h>
+#include <unistd.h>
+
+#include "glthread/thread.h"
+#include "glthread/yield.h"
+
+/* Verify that in a situation where
+ - an rwlock is taken by a reader and has a writer waiting,
+ - an additional reader requests the lock,
+ - the waiting writer and the requesting reader threads have the same
+ priority,
+ the requesting reader thread gets blocked, so that at some point the
+ waiting writer can acquire the lock.
+ Without such a guarantee, when there are N readers and each of the readers
+ spends more than 1/Nth of the time with the lock held, there is a high
+ probability that the waiting writer will not get the lock in a given finite
+ time, a phenomenon called "writer starvation".
+ Without such a guarantee, applications have a hard time avoiding writer
+ starvation.
+
+ POSIX:2008 makes this requirement only for implementations that support TPS
+ (Thread Priority Scheduling) and only for the scheduling policies SCHED_FIFO
+ and SCHED_RR, see
+ http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_rwlock_rdlock.html
+ but this test verifies the guarantee regardless of TPS and regardless of
+ scheduling policy. */
+
+#define SUCCEED() exit (0)
+#define FAILURE() exit (1)
+#define UNEXPECTED(n) (fprintf (stderr, "Unexpected outcome %d\n", n), abort ())
+
+/* The main thread creates the waiting writer and the requesting reader threads
+ in the default way; this guarantees that they have the same priority.
+ We can reuse the main thread as first reader thread. */
+
+static gl_rwlock_t lock;
+static gl_thread_t reader1;
+static gl_thread_t writer;
+static gl_thread_t reader2;
+static gl_thread_t timer;
+/* Used to pass control from writer to reader2 and from reader2 to timer,
+ as in a relay race.
+ Passing control from one running thread to another running thread
+ is most likely faster than to create the second thread. */
+static gl_lock_t baton;
+
+static void *
+timer_func (void *ignored)
+{
+ /* Step 13 (can be before or after step 12):
+ The timer thread takes the baton, then waits a moment to make sure
+ it can tell whether the second reader thread is blocked at step 12. */
+ if (glthread_lock_lock (&baton))
+ UNEXPECTED (13);
+ usleep (100000);
+ /* By the time we get here, it's clear that the second reader thread is
+ blocked at step 12. This is the desired behaviour. */
+ SUCCEED ();
+}
+
+static void *
+reader2_func (void *ignored)
+{
+ int err;
+
+ /* Step 8 (can be before or after step 7):
+ The second reader thread takes the baton, then waits a moment to make sure
+ the writer thread has reached step 7. */
+ if (glthread_lock_lock (&baton))
+ UNEXPECTED (8);
+ usleep (100000);
+ /* Step 9 omitted. */
+ /* Step 10: Launch a timer, to test whether the next call blocks. */
+ if (glthread_create (&timer, timer_func, NULL))
+ UNEXPECTED (10);
+ /* Step 11: Release the baton. */
+ if (glthread_lock_unlock (&baton))
+ UNEXPECTED (11);
+ /* Step 12: The second reader thread requests the lock. */
+ err = glthread_rwlock_rdlock (&lock);
+ if (err == 0)
+ FAILURE ();
+ else
+ UNEXPECTED (12);
+}
+
+static void *
+writer_func (void *ignored)
+{
+ /* Step 4: Take the baton, so that the second reader thread does not go ahead
+ too early. */
+ if (glthread_lock_lock (&baton))
+ UNEXPECTED (4);
+ /* Step 5: Create the second reader thread. */
+ if (glthread_create (&reader2, reader2_func, NULL))
+ UNEXPECTED (5);
+ /* Step 6: Release the baton. */
+ if (glthread_lock_unlock (&baton))
+ UNEXPECTED (6);
+ /* Step 7: The writer thread requests the lock. */
+ if (glthread_rwlock_wrlock (&lock))
+ UNEXPECTED (7);
+ return NULL;
+}
+
+int
+main ()
+{
+ reader1 = gl_thread_self ();
+
+ /* Step 1: The main thread initializes the lock and the baton. */
+ if (glthread_rwlock_init (&lock))
+ UNEXPECTED (1);
+ if (glthread_lock_init (&baton))
+ UNEXPECTED (1);
+ /* Step 2: The main thread acquires the lock as a reader. */
+ if (glthread_rwlock_rdlock (&lock))
+ UNEXPECTED (2);
+ /* Step 3: Create the writer thread. */
+ if (glthread_create (&writer, writer_func, NULL))
+ UNEXPECTED (3);
+ /* Job done. Go to sleep. */
+ for (;;)
+ {
+ /* In cooperative threads implementations (Pth), give other threads
+ a chance to run. */
+ gl_thread_yield ();
+ sleep (1);
+ }
+}
diff --git a/modules/lock-tests b/modules/lock-tests
index d0e5010..b42740c 100644
--- a/modules/lock-tests
+++ b/modules/lock-tests
@@ -1,13 +1,16 @@
Files:
+tests/test-rwlock1.c
tests/test-lock.c

Depends-on:
thread
+usleep
yield

configure.ac:

Makefile.am:
-TESTS += test-lock
-check_PROGRAMS += test-lock
+TESTS += test-rwlock1 test-lock
+check_PROGRAMS += test-rwlock1 test-lock
+test_rwlock1_LDADD = $(LDADD) @LIBMULTITHREAD@
test_lock_LDADD = $(LDADD) @LIBMULTITHREAD@ @YIELD_LIB@
Pavel Raiskup
2017-01-05 14:11:10 UTC
Thanks. A minor report: gl_thread_join() does not handle the joined
thread's status properly.

This leads to a situation where the Koji build system first tries to gently
terminate the build (after two days) ... which (sometimes?) leads to a
successful 'test-lock' run in the end, and then the build succeeds a bit later.

It would be nice to FAIL properly.

Pavel
Bruno Haible
2017-01-05 18:51:00 UTC
Post by Pavel Raiskup
Thanks. A minor report: gl_thread_join() does not handle the joined
thread's status properly.
This leads to a situation where the Koji build system first tries to gently
terminate the build (after two days) ... which (sometimes?) leads to a
successful 'test-lock' run in the end, and then the build succeeds a bit later.
I cannot explain why this happens or how to fix it. If pthread_join
reported an error code, glthread_join would do so as well, gl_thread_join
would abort the process, and 'make check' would fail:

# define glthread_join(THREAD, RETVALP) \
(pthread_in_use () ? pthread_join (THREAD, RETVALP) : 0)

#define gl_thread_join(THREAD, RETVAL) \
do \
{ \
if (glthread_join (THREAD, RETVAL)) \
abort (); \
} \
while (0)

Bruno
Pavel Raiskup
2017-01-06 16:49:04 UTC
Post by Bruno Haible
Post by Pavel Raiskup
Thanks. A minor report: gl_thread_join() does not handle the joined
thread's status properly.
This leads to a situation where the Koji build system first tries to gently
terminate the build (after two days) ... which (sometimes?) leads to a
successful 'test-lock' run in the end, and then the build succeeds a bit later.
I cannot explain why this happens and how to fix it. If pthread_join would
report an error code, glthread_join would as well, and gl_thread_join would
# define glthread_join(THREAD, RETVALP) \
(pthread_in_use () ? pthread_join (THREAD, RETVALP) : 0)
#define gl_thread_join(THREAD, RETVAL) \
do \
{ \
if (glthread_join (THREAD, RETVAL)) \
abort (); \
} \
while (0)
It looks like RETVALP shouldn't be NULL. Dunno whether the success is
a matter of luck or whether it is a guaranteed thing.

Pavel
Torvald Riegel
2017-01-05 17:11:40 UTC
IMO, users of reader-writer locks should treat them as a
mutual-exclusion mechanism, that is, a mechanism that just ensures that
two critical sections will not execute concurrently (except if both are
readers, of course). It is also important to understand what this does
not mean: for example, it implies no prioritization among threads
attempting to acquire a reader-writer lock. Claiming that preferring
writers is required for reader-writer locks to be usable is wrong. If a
program needs a particular sort of fairness or starvation-freedom, there
are several ways to ensure that, and it does not have to be the same
mechanism as the mutual-exclusion mechanism.

Please also do note that you seem to be getting along fine with
exclusive mutexes that are not guaranteed to be fair or starvation-free.

If glibc is making a certain decision about semantics, it might be
worthwhile to consider that decision (see
http://austingroupbugs.net/view.php?id=1111
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701).

IMHO, gnulib synchronization primitives should simply move towards the
semantics chosen by C++11 and more recent revisions (if you dislike C++
for some reason, just use the C++ semantics applied to C11; C is lagging
behind in terms of the preciseness of its specification).

Of course you can build your own, but that requires expertise and time.
I found two bugs just by briefly scanning through the current
lib/glthread/lock.c. As a hint, I'll just mention double-checked
locking and order of destruction. And skipping destruction is of course
not correct in the general case either (unless I'm misreading
lib/glthread/cond.c, for example).
Bruno Haible
2017-01-05 19:15:42 UTC
Post by Torvald Riegel
I found two bugs just by briefly scanning through the current
lib/glthread/lock.c. As a hint, I'll just mention double-checked
locking and order of destruction. And skipping destruction is of course
not correct in the general case either (unless I'm misreading
lib/glthread/cond.c, for example).
Can you give line numbers, please? lib/glthread/lock.c and lib/glthread/cond.c
are large files. If you have found bugs there, *of course* I want to have
them fixed.

Bruno
Torvald Riegel
2017-01-05 20:24:02 UTC
Post by Bruno Haible
Post by Torvald Riegel
I found two bugs just by briefly scanning through the current
lib/glthread/lock.c. As a hint, I'll just mention double-checked
locking and order of destruction. And skipping destruction is of course
not correct in the general case either (unless I'm misreading
lib/glthread/cond.c, for example).
Can you give line numbers, please? lib/glthread/lock.c and lib/glthread/cond.c
are large files. If you have found bugs there, *of course* I want to have
them fixed.
I can understand both points. However, I'd suggest that you first try
to find them (or that whoever maintains these files does); this is often
a good exercise, and helps make oneself sensitive to concurrency
problems. If you don't want to do that first, I can give details.
Bruno Haible
2017-01-05 21:09:38 UTC
Post by Torvald Riegel
Post by Bruno Haible
Can you give line numbers, please? lib/glthread/lock.c and lib/glthread/cond.c
are large files. If you have found bugs there, *of course* I want to have
them fixed.
I can understand both points. However, I'd suggest that you first try
to find them (or that whoever maintains these files does); this is often
a good exercise, and helps make oneself sensitive to concurrency
problems. If you don't want to do that first, I can give details.
We don't need to play "teacher and student" games here. I hold a 'Dr.',
like you. I have experience in other areas than you. If I wasn't
sensitive for concurrency problems, I wouldn't have introduced the
'lock' module in gnulib.

Can you please do me the favour and tell the line numbers of the two
observations that you spotted? Thanks.

Bruno
Torvald Riegel
2017-01-06 11:51:56 UTC
Post by Bruno Haible
Post by Torvald Riegel
Post by Bruno Haible
Can you give line numbers, please? lib/glthread/lock.c and lib/glthread/cond.c
are large files. If you have found bugs there, *of course* I want to have
them fixed.
I can understand both points. However, I'd suggest that you first try
to find them (or that whoever maintains these files does); this is often
a good exercise, and helps make oneself sensitive for concurrency
problems. If you don't want to do that first, I can give details.
We don't need to play "teacher and student" games here. I hold a 'Dr.',
like you. I have experience in other areas than you. If I wasn't
sensitive for concurrency problems, I wouldn't have introduced the
'lock' module in gnulib.
Can you please do me the favour and tell the line numbers of the two
observations that you spotted? Thanks.
As you wish...

The double-checked locking in glthread_once_multithreaded is broken. It
has data races. Remember the conversation we had about atomics? Look
at glibc's pthread_once to see what needs to be done.

A similar problem exists regarding the (use of the) initialized fields
in the custom locks you implemented. This is not a correct
pthread_once, and it cannot reliably help you shield users from bad
consequences due to not calling ..._init().

I also think that glthread_rwlock_destroy_multithreaded should destroy
the condvars first, then the mutex. That may not be strictly required
depending on how one interprets the wording around references to mutexes
held by condvars.

And you should not skip calling pthread_cond_destroy() and other
destruction functions. Doing so is not standards-conforming, in
particular if you can ever reuse the memory for something else. Not
calling pthread_cond_destroy() will be broken when using current glibc,
for example.
Bruno Haible
2017-01-06 18:06:21 UTC
Post by Torvald Riegel
The double-checked locking in glthread_once_multithreaded is broken. It
has data races. Remember the conversation we had about atomics? Look
at glibc's pthread_once to see what needs to be done.
A similar problem exists regarding the (use of the) initialized fields
in the custom locks you implemented. This is not a correct
pthread_once, and it cannot reliably help you shield users from bad
consequences due to not calling ..._init().
Good points, thanks a lot for these hints! Find attached the proposed fix.
It's gnulib's first use of <stdatomic.h>...
Post by Torvald Riegel
I also think that glthread_rwlock_destroy_multithreaded should destroy
the condvars first, then the mutex. That may not be strictly required
depending on how one interprets the wording around references to mutexes
held by condvars.
I don't think the order matters, since the condvars must be empty at this
point. It's surely undefined behaviour to destroy an rwlock that still has
readers or writers pending (although POSIX [1] does not say so).
Anyway, patch attached.
Post by Torvald Riegel
And you should not skip calling pthread_cond_destroy() and other
destruction functions. Doing so is not standards-conforming, in
particular if you can ever reuse the memory for something else. Not
calling pthread_cond_destroy() will be broken when using current glibc,
for example.
Yeah, I was a bit lazy in the error-handling case. Patch attached.

Bruno

[1] http://pubs.opengroup.org/onlinepubs/9699919799/functions/pthread_rwlock_destroy.html
Bruno Haible
2017-01-05 19:30:24 UTC
Post by Torvald Riegel
IMHO, gnulib synchronization primitives should simply move towards the
semantics chosen by C++11 and more recent (if you dislike C++ for some
reason, just use the C++ semantics applied to C11; C is lagging behind
in terms of precision of the specification).
Of course you can build your own, but that requires expertise and time.
gnulib is there to help adoption of POSIX and newer C standards. For
instance, with gnulib you could use <stdint.h> since 2004, long before
all platforms had it. (Even recently, in glibc 2.24, there was an issue
with SIZE_MAX in <stdint.h> on s390 architecture.) gnulib does its best
to shield its users from such portability problems.

Regarding atomics, I agree it's good to have a standard in this area.
But I fear gnulib can do less than it does for <stdint.h>, because the
implementation is highly CPU and OS dependent. Just take a look at
GCC's libatomic...

gnulib code will, once all recent enough systems support it, be able
to use <stdatomic.h>. But this will take years. Maybe 5 years, more
likely something like 10 years.

Bruno
Torvald Riegel
2017-01-05 20:18:59 UTC
Post by Bruno Haible
Post by Torvald Riegel
IMHO, gnulib synchronization primitives should simply move towards the
semantics chosen by C++11 and more recent (if you dislike C++ for some
reason, just use the C++ semantics applied to C11; C is lagging behind
in terms of precision of the specification).
Of course you can build your own, but that requires expertise and time.
gnulib is there to help adoption of POSIX and newer C standards. For
instance, with gnulib you could use <stdint.h> since 2004, long before
all platforms had it. (Even recently, in glibc 2.24, there was an issue
with SIZE_MAX in <stdint.h> on s390 architecture.) gnulib does its best
to shield its users from such portability problems.
Regarding atomics, I agree it's good to have a standard in this area.
But I fear gnulib can do less than it does for <stdint.h>, because the
implementation is highly CPU and OS dependent. Just take a look at
GCC's libatomic...
C11 and C++11 also provide basic threading and synchronization
primitives, for example mutexes and condvars. C++14 also has
reader-writer locks. If you compare the semantics of these against
POSIX, you'll see that they differ and that C/C++ often make less
guarantees to programs, in particular regarding odd corner cases, which
benefits a good design overall IMO.
Bruno Haible
2017-01-05 20:13:07 UTC
Post by Torvald Riegel
IMO, users of reader-writer locks should treat them as a
mutual-exclusion mechanism. That is, a mechanism that just ensures that
two critical sections will not execute concurrently (except if both are
readers, of course), i.e. at the same time. It is also important to
understand what this does not mean, for example any prioritization of
threads attempting to acquire a reader-writer lock. Claiming that
preferring writers is required for reader-writer locks to be usable is
wrong. If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
Please also do note that you seem to be getting along fine with
exclusive mutexes that are not guaranteed to be fair or starvation-free.
If glibc is making a certain decision about semantics, it might be
worthwhile to consider (see http://austingroupbugs.net/view.php?id=1111
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701).
Some background about the thinking and methodology that drives gnulib:

* Gnulib tries to make POSIX (and other) APIs usable in an easy way.
We work around not only outright bugs, but also usability problems
that get in the way, such as:
* gnulib overrides the printf() function if this function crashes on
random data (the crash would be standards compliant, some people say,
but the data can be read from external files and we don't want our
programs to crash).
* gnulib overrides the fmal() function if it produces mathematically
incorrect values.
* gnulib overrides the iconv_open() function so that it will never report
that it cannot convert from "ISO-8859-1" to "UTF-8".
* gnulib overrides the re_search() function so that it understands
the most important parts of GNU regex syntax.

* Gnulib favours reliability over efficiency. For example:
* "cp --help > /dev/full" fails with an error message. Thanks to gnulib.
* Its date parser prints warnings when there are ambiguities.
* It makes it easy to check all memory allocations (module 'xalloc'),
all size_t computations (module 'xsize'), and to not overlook failures
of POSIX function such as freopen, getcwd, readlink, setenv, strtol,
vasprintf etc.

If it costs an allocation of a couple of memory words, or a couple of
extra instruction, to achieve these goals, we just don't care.

You as a libc implementor are under pressure of delivering something
optimized for speed, because glibc will [probably] be compared against
competitors on the basis of some benchmarks. (Well, at least Ulrich Drepper
was most proud of the speed of his NPTL.)

Whereas each of the gnulib maintainers is maintaining some programs.
And as application programmers the worst case for us is not that our
program got 20% slower, but that it crashes or - worse - hangs.
Post by Torvald Riegel
Claiming that preferring writers is required for reader-writer locks
to be usable is wrong.
With the background above, you will understand why I claim that a gnulib
'lock' module is not reliable when its own unit test hangs on some
platforms and not on others. We want a lock facility that can EASILY
achieve
- no writer starvation, and
- no reader starvation.
Post by Torvald Riegel
If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
"If you want API A to be reliable, you need also to use API B and API C."
OK, we learned the hard way that
- in order to use stdout reliably, we need the 'closeout' module.
- in order to copy files reliably, we need the 'acl' module.
- and so on.

So, what is the fairness-ensuring mechanism that will make use of locks
starvation-free? Can it be *easily* and *portably* put in place?

As far as I understood, for rwlocks, one can
* avoid writer starvation through "prefer writers" in pthread_rwlock_rdlock,
AND, at the same time,
* avoid reader starvation through "prefer readers during unlock from a writer"
in pthread_rwlock_unlock [1].

Bruno

[1] http://www.doc.ic.ac.uk/~jnm/concurrency/online/monitors/sld026.htm
Torvald Riegel
2017-01-05 20:47:24 UTC
Post by Bruno Haible
Post by Torvald Riegel
IMO, users of reader-writer locks should treat them as a
mutual-exclusion mechanism. That is, a mechanism that just ensures that
two critical sections will not execute concurrently (except if both are
readers, of course), i.e. at the same time. It is also important to
understand what this does not mean, for example any prioritization of
threads attempting to acquire a reader-writer lock. Claiming that
preferring writers is required for reader-writer locks to be usable is
wrong. If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
Please also do note that you seem to be getting along fine with
exclusive mutexes that are not guaranteed to be fair or starvation-free.
If glibc is making a certain decision about semantics, it might be
worthwhile to consider (see http://austingroupbugs.net/view.php?id=1111
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701).
* Gnulib tries to make POSIX (and other) APIs usable in an easy way.
We work around not only outright bugs, but also usability problems
* gnulib overrides the printf() function if this function crashes on
random data (the crash would be standards compliant, some people say,
but the data can be read from external files and we don't want our
programs to crash).
* gnulib overrides the fmal() function if it produces mathematically
incorrect values.
* gnulib overrides the iconv_open() function so that it will never report
that it cannot convert from "ISO-8859-1" to "UTF-8".
* gnulib overrides the re_search() function so that it understands
the most important parts of GNU regex syntax.
* "cp --help > /dev/full" fails with an error message. Thanks to gnulib.
* Its date parser prints warnings when there are ambiguities.
* It makes it easy to check all memory allocations (module 'xalloc'),
all size_t computations (module 'xsize'), and to not overlook failures
of POSIX function such as freopen, getcwd, readlink, setenv, strtol,
vasprintf etc.
If it costs an allocation of a couple of memory words, or a couple of
extra instruction, to achieve these goals, we just don't care.
You as a libc implementor are under pressure of delivering something
optimized for speed, because glibc will [probably] be compared against
competitors on the basis of some benchmarks. (Well, at least Ulrich Drepper
was most proud of the speed of his NPTL.)
This is not about benchmark competitions. Efficiency matters, at the
very least regarding energy usage. Data centers use a lot of energy,
and when things can't be computed fast enough, people buy more machines.
Post by Bruno Haible
Whereas each of the gnulib maintainers is maintaining some programs.
And as application programmers the worst case for us is not that our
program got 20% slower, but that it crashes or - worse - hangs.
That's the optimistic perspective. But if you slow down
synchronization, you will likely decrease exploitable parallelism in
your program; this means the slowdown isn't linear anymore, but bounds
the total achievable speedup.
Post by Bruno Haible
Post by Torvald Riegel
Claiming that preferring writers is required for reader-writer locks
to be usable is wrong.
With the background above, you will understand why I claim that a gnulib
'lock' module is not reliable when its own unit test hangs on some
platforms and not on others. We want a lock facility that can EASILY
achieve
- no writer starvation, and
- no reader starvation.
Preferring writers conflicts with wanting to have no reader starvation.

Also, "easy to use" does not mean enabling misuse of a tool, or trying
to make a tool be usable outside it's intended purpose. In such cases,
it's often easier to use the tool as intended.

This isn't changed by the fact that misuse of the tool can be
demonstrated (ie, the unit test).

I'd claim that it is not that hard to understand when a program needs
fairness. Simplified, whenever you have a bounded amount of parallel
work, you don't care what gets executed first.
Post by Bruno Haible
Post by Torvald Riegel
If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
"If you want API A to be reliable, you need also to use API B and API C."
OK, we learned the hard way that
- in order to use stdout reliably, we need the 'closeout' module.
- in order to copy files reliably, we need the 'acl' module.
- and so on.
So, what is the fairness-ensuring mechanism that will make use of locks
starvation-free? Can it be *easily* and *portably* put in place?
You control the incoming work. The rwlock itself is not the best
mechanism for that. It gives you just mutual exclusion, and that is
okay.

Look at your unit test, for example. Typically, the transactions would
be the result of some input operation, not spawned as fast as possible.
Programs have a choice to not read input, typically. If you really have
potentially more work to do than you can compute fast enough, you have
much bigger problems than no fairness.
If you want to spawn random work, then you can easily throttle that, for
example with a mutex/condvar combination.

If gnulib really wants to make concurrency simpler, then it should look
at higher-level abstractions first. Especially parallelism. But maybe
it should then just wait for what C++ specifies, or contribute to that
(look at ISO C++ Study Group 1's work, in particular).
Paul Eggert
2017-01-05 21:19:38 UTC
Post by Torvald Riegel
If gnulib really wants to make concurrency simpler, then it should look
at higher-level abstractions first. Especially parallelism. But maybe
it should then just wait for what C++ specifies, or contribute to that
I'm afraid this overestimates the amount of development resources we
have. Gnulib is contributed to only fitfully, and it mostly consists of
reasonably simple portability hacks. It has to be that way, otherwise
package developers won't understand and trust it. It's not a suitable
place to do concurrency research.

Concurrency with shared memory is a hard problem, and the C++ (and C)
folks have messed it up for decades. Although they may get something
reasonable eventually, in the meantime Gnulib will probably prefer to
limp along with something on the safer side of the axis.
Torvald Riegel
2017-01-06 12:22:38 UTC
Post by Paul Eggert
Post by Torvald Riegel
If gnulib really wants to make concurrency simpler, then it should look
at higher-level abstractions first. Especially parallelism. But maybe
it should then just wait for what C++ specifies, or contribute to that
I'm afraid this overestimates the amount of development resources we
have. Gnulib is contributed to only fitfully, and it mostly consists of
reasonably simple portability hacks. It has to be that way, otherwise
package developers won't understand and trust it. It's not a suitable
place to do concurrency research.
Concurrency with shared memory is a hard problem, and the C++ (and C)
folks have messed it up for decades. Although they may get something
reasonable eventually, in the meantime Gnulib will probably prefer to
limp along with something on the safer side of the axis.
I'm not quite sure what you are trying to say.

First, are you familiar with what C++ has put out recently (eg, parallel
algorithms in the C++17 draft)? Are you familiar with the memory model
introduced in C11 and C++11?
Second, C and C++ are supposed to be close to the hardware. Do you
dislike that? If not, what would you have done differently? (Besides
trying to get something like the memory model specified much earlier?)

I'd agree regarding the concurrency research, but I'd see that as reason
to not try to deviate from the semantics and the overall design of the
synchronization primitives that POSIX / C provide to you -- in
particular if that becomes nontrivial. Is that what you wanted to say?
Paul Eggert
2017-01-06 20:39:15 UTC
Post by Torvald Riegel
First, are you familiar with what C++ has put out recently (eg, parallel
algorithms in the C++17 draft)? Are you familiar with the memory model
introduced in C11 and C++11?
I know a bit about the 2011 models, as well as the JMM. I do not know
the C++17 draft. I am by no means an expert, just a sometimes-interested
(and often-appalled) observer.
Post by Torvald Riegel
what would you have done differently? (Besides
trying to get something like the memory model specified much earlier?)
Isn't hindsight wonderful? :-)
Post by Torvald Riegel
I'd agree regarding the concurrency research, but I'd see that as reason
to not try to deviate from the semantics and the overall design of the
synchronization primitives that POSIX / C provide to you -- in
particular if that becomes nontrivial. Is that what you wanted to say?
I was trying to be even more conservative than that. Gnulib should not
use C++17, and should be chary even of using C11 and C++11 features.
(Maybe the latter in 10 years, as Bruno suggested.)

Bruno Haible
2017-01-05 23:43:12 UTC
Post by Torvald Riegel
whenever you have a bounded amount of parallel
work, you don't care what gets executed first.
...
You control the incoming work. ...
Look at your unit test, for example. Typically, the transactions would
be the result of some input operation, not spawned as fast as possible.
Programs have a choice to not read input, typically. If you really have
potentially more work to do than you can compute fast enough, you have
much bigger problems than no fairness.
If you want to spawn random work, then you can easily throttle that ...
While I agree that the vast majority of computation tasks can be controlled
in size, some cannot be controlled so easily: think of the just-in-time compilation threads
or garbage-collection threads in a Java VM, for example.

The rwlock writer starvation problem is not solved by throttling to a fixed
percentage of CPU time: If every reader thread spends 10% of its time with the
rwlock held, it will work fine with 4 CPUs, but it will hang with 24 CPUs.
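[Editorial note: a rough calculation, my illustration rather than something from the thread, shows why the hang scales with the number of CPUs. If each of n reader threads independently holds the rwlock a fraction p = 0.1 of its time, the chance that a writer without preference finds the lock momentarily reader-free is about]

```latex
P(\text{lock reader-free}) \approx (1 - p)^n = 0.9^n,
\qquad 0.9^{4} \approx 0.66,
\qquad 0.9^{24} \approx 0.08
```

[so with 24 readers the writer sees a free lock only about 8% of the time, and since new readers can join while others still hold it, the writer may never observe a reader-free window at all.]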

For comparison:

While it is true that the vast majority of integer overflows in a C program
can be detected by code inspection and value checks before the operation, I'm
very glad that there are techniques (gcc -ftrapv and multi-precision arithmetic),
each for different use-cases, that solve the problem altogether.

While it is true that the vast majority of memory allocations can be tracked
manually and free()d [C] / delete'd [C++], I'm very glad that there are
techniques (garbage collection, in languages such as Lisp) that solve the
problem altogether.

That's what I'd like to have here as well. Even if it costs performance.
Post by Torvald Riegel
This isn't changed by the fact that misuse of the tool can be
demonstrated (ie, the unit test).
I disagree. The suboptimal unit test clearly shows the starvation problem is
not solved altogether, when one uses plain POSIX APIs on glibc.

Bruno
Torvald Riegel
2017-01-06 12:08:53 UTC
Post by Bruno Haible
Post by Torvald Riegel
whenever you have a bounded amount of parallel
work, you don't care what gets executed first.
...
You control the incoming work. ...
Look at your unit test, for example. Typically, the transactions would
be the result of some input operation, not spawned as fast as possible.
Programs have a choice to not read input, typically. If you really have
potentially more work to do than you can compute fast enough, you have
much bigger problems than no fairness.
If you want to spawn random work, then you can easily throttle that ...
While I agree that the vast majority of computation tasks can be controlled
in size, some cannot be controlled so easily: think of the just-in-time compilation threads
or garbage-collection threads in a Java VM, for example.
I think you need to be more precise in your examples, or I can't see what
you have in mind. I can easily think of JIT scenarios / implementations
that don't need fair rwlocks. Same for GC; for example, if it's a
stop-the-world GC, then the overall GC you can do is bounded (ie,
there's only so much that's not used anymore).
Post by Bruno Haible
The rwlock writer starvation problem is not solved by throttling to a fixed
percentage of CPU time: If every reader thread spends 10% of its time with the
rwlock held, it will work fine with 4 CPUs, but it will hang with 24 CPUs.
Well, obviously, you need to throttle in such a way that all work can be
performed eventually before new work arrives. For example, don't accept
new work for a while if old work hasn't been done yet. Making the whole
system slower is not necessarily going to change anything about
imbalances.
Post by Bruno Haible
While it is true that the vast majority of integer overflows in a C program
can be detected by code inspection and value checks before the operation, I'm
very glad that there are techniques (gcc -ftrapv and multi-precision arithmetic),
each for different use-cases, that solve the problem altogether.
While it is true that the vast majority of memory allocations can be tracked
manually and free()d [C] / delete'd [C++], I'm very glad that there are
techniques (garbage collection, in languages such as Lisp) that solve the
problem altogether.
I don't think these are analogies to the rwlock case. But either way,
you're not claiming that C integer semantics are bad or useless, for
example. You can of course want to use *a different tool*, such as
multi-precision arithmetic.
Post by Bruno Haible
That's what I'd like to have here as well. Even if it costs performance.
ISTM you first need to make up your mind what you actually want. In
precise terms, for example regarding forward progress. The examples you
bring up, such as GC, suggest that you want abstractions at a much
higher level than explicit threading and locks, as provided by POSIX and
C11. However, you previously also said that you want to make POSIX and
ISO C easier to use, so there's a gap there.
Post by Bruno Haible
Post by Torvald Riegel
This isn't changed by the fact that misuse of the tool can be
demonstrated (ie, the unit test).
I disagree. The suboptimal unit test clearly shows the starvation problem is
not solved altogether, when one uses plain POSIX APIs on glibc.
But that's the point. You test for a feature that the tool does not
intend to provide to you. To exaggerate, that's like testing that
x86_64 int overflows after 2^32.
Bruno Haible
2017-01-06 14:25:44 UTC
Post by Torvald Riegel
Post by Bruno Haible
The rwlock writer starvation problem is not solved by throttling to a fixed
percentage of CPU time: If every reader thread spends 10% of its time with the
rwlock held, it will work fine with 4 CPUs, but it will hang with 24 CPUs.
Well, obviously, you need to throttle in such a way that all work can be
performed eventually before new work arrives. For example, don't accept
new work for a while if old work hasn't been done yet.
You suggest implementing throttling, but in order to make it work reliably,
you have to implement a system of dynamic control (of multiple parameters)
around it. Such systems are *very hard* to build. Just two examples:
- It took a long long time until the OOM killer in Linux could reliably
prevent fork bombs from taking down a system.
- The Mac OS X 10.5 memory management example from my other mail.
I don't want to get into this area, unless absolutely absolutely necessary.
Post by Torvald Riegel
The examples you
bring up, such as GC, suggest that you want abstractions at a much
higher level than explicit threading and locks, as provided by POSIX and
C11. However, you previously also said that you want to make POSIX and
ISO C easier to use, so there's a gap there.
You're probably right here. I am dissatisfied with how hard it is to make
multithreaded software work in a verifiably reliable way. I don't know what
the answer is, i.e. what abstractions will provide this reliability.
Post by Torvald Riegel
Post by Bruno Haible
The suboptimal unit test clearly shows the starvation problem is
not solved altogether, when one uses plain POSIX APIs on glibc.
But that's the point. You test for a feature that the tool does not
intend to provide to you. To exaggerate, that's like testing that
x86_64 int overflows after 2^32.
Well, the current POSIX ([TPS] clause) and Solaris and Mac OS X all provide at
least half of what I want: namely, writer-preference and therefore avoidance
of writer starvation.

Bruno
Torvald Riegel
2017-01-06 14:46:54 UTC
Post by Bruno Haible
Post by Torvald Riegel
Post by Bruno Haible
The rwlock writer starvation problem is not solved by throttling to a fixed
percentage of CPU time: If every reader thread spends 10% of its time with the
rwlock held, it will work fine with 4 CPUs, but it will hang with 24 CPUs.
Well, obviously, you need to throttle in such a way that all work can be
performed eventually before new work arrives. For example, don't accept
new work for a while if old work hasn't been done yet.
You suggest implementing throttling, but in order to make it work reliably,
you have to implement a system of dynamic control (of multiple parameters)
- It took a long long time until the OOM killer in Linux could reliably
prevent fork bombs from taking down a system.
- The Mac OS X 10.5 memory management example from my other mail.
I don't want to get into this area, unless absolutely absolutely necessary.
All sequential execution "throttles" in the sense of ensuring that all
work is performed before new work is allowed to complete. Fork/join
parallelism does that when joining.
Post by Bruno Haible
Post by Torvald Riegel
The examples you
bring up, such as GC, suggest that you want abstractions at a much
higher level than explicit threading and locks, as provided by POSIX and
C11. However, you previously also said that you want to make POSIX and
ISO C easier to use, so there's a gap there.
You're probably right here. I am dissatisfied with how hard it is to make
multithreaded software work in a verifiably reliable way. I don't know what
the answer is, i.e. what abstractions will provide this reliability.
Focus on simple forms of parallelism first. When you have to deal with
actual concurrency (eg, a web server that has to serve http requests
concurrently), transform it into a simple parallel problem, if possible
(eg, put all requests into a bounded work queue, from which parallel
tasks pull work).
Bruno Haible
2017-01-06 13:17:56 UTC
Post by Torvald Riegel
Post by Bruno Haible
So, what is the fairness-ensuring mechanism that will make use of locks
starvation-free? Can it be *easily* and *portably* put in place?
Textbooks from university teachers state that alternating between reader
preference and writer preference will solve both writer starvation and
reader starvation:
- [1] page 12
- [2] slides 62..63

These publications predate the filing of patent [3], so this mechanism
is OK to use.

I think this fits my requirements of being easy to implement and robust
in all circumstances.
Post by Torvald Riegel
You control the incoming work.
...
you can easily throttle that
When throttling is used as an approach to avoid disaster, someone (sys admin
or user) has to monitor dynamic parameters. This is time-consuming and cannot
be done for programs that run on remote build machines (Kamil Dudka's point).
Therefore, if it can be avoided, I prefer to avoid it.

I'm using throttling already as a workaround in the following situations:

* I have a laptop that, when it has high load for 5 minutes, reboots.
(The Linux OS notices that the temperature goes too high and prefers to
shut down rather than damage the hardware.)
Workaround: Constantly monitor the load, stop processes.

* When I 'rm -rf' more than ca. 500 GB at once on a disk of my NAS, it
just reboots. It's Linux with ext2fs and some ftruncate() bug.
Workaround: Delete directories in chunks of max. 100 GB at once.

* When I create and use databases (Apache Derby or DB2) on a particular SSD
disk, the machine reboots. The SSD is buggy hardware - proved by the fact
that it works on a different SSD in the same machine.
Workaround: Throttling is impossible here - I don't control the program that
creates the databases.

* When I move a 3 GB file from an internal disk (on Mac OS X 10.5) to an SMB
file system, the machine becomes unusable for ca. 15 minutes, because of
the memory management algorithm in the OS.
Workaround: Stop the 'mv' command after 1 GB and restart it soon afterwards.

* When I download files for more than 2 minutes, it saturates my DSL connection
bandwidth, and some programs lose their internet connection because they
cannot access DNS any more.
Workaround: Use wget with the --limit option.

You see, throttling becomes necessary when engineers have either not
constructed robust solutions or not tested them for some supposedly "extreme"
parameters.

Bruno

[1] https://www.doc.ic.ac.uk/research/technicalreports/1999/DTR99-3.pdf
[2] http://www0.cs.ucl.ac.uk/staff/W.Emmerich/lectures/Z01-99-00/z01_day4.pdf
[3] http://www.google.ch/patents/US20040255086
Torvald Riegel
2017-01-06 14:40:32 UTC
Post by Bruno Haible
Post by Torvald Riegel
Post by Bruno Haible
So, what is the fairness-ensuring mechanism that will make use of locks
starvation-free? Can it be *easily* and *portably* put in place?
Textbooks from university teachers state that alternating between reader
preference and writer preference will solve both writer starvation and
reader starvation:
- [1] page 12
- [2] slides 62..63
Turning on writer preference doesn't help if recursive rdlocks are
allowed, which POSIX does.
Post by Bruno Haible
Post by Torvald Riegel
You control the incoming work.
...
you can easily throttle that
When throttling is used as an approach to avoid disaster, someone (sys admin
or user) has to monitor dynamic parameters. This is time-consuming and cannot
be done for programs that run on remote build machines (Kamil Dudka's point).
It seems it's still not clear what I mean by throttling. Maybe the term
is confusing for you. This isn't at all about preventing disasters or
any of that.

There are parallel/concurrent compute problems that need fairness or
no-starvation guarantees, and there are problems that do not need that.
For example, all kinds of fork/join parallelism do not because
sequential execution is an allowed execution. That's the most common
case of parallelism, I'd say.
Anything where you have a dependency structure building up automatically
doesn't need fairness typically, because in the end, you'll be blocked
on the single task that didn't get executed so far. Such problems
"throttle" automatically.

If your problem should not happen to have that characteristic, which is
not the common case I'd say, then it's easy to turn it into
something that has that characteristic: introduce a dependency that
represents the kind of fairness you require. If your fairness
requirement is that a writer must complete after at most 1000
readers (but no other writer) have completed, make that a dependency in
your program.
Doing that has the additional benefit that you'll see whether this is
actually possible for the workload you have at hand. For example, if
you allow recursive readers, you'll see that this is not possible unless
you track whether a reader has acquired a specific rdlock as a reader
already, or the user provides this information to you. You can see that
by the fact that your fairness dependency would introduce a deadlock
when combined with the other dependencies.
Post by Bruno Haible
Therefore, if it can be avoided, I prefer to avoid it.
* I have a laptop that, when it has high load for 5 minutes, reboots.
(The Linux OS notices that the temperature goes too high and prefers to
shut down rather than damage the hardware.)
Workaround: Constantly monitor the load, stop processes.
* When I 'rm -rf' more than ca. 500 GB at once on a disk of my NAS, it
just reboots. It's Linux with ext2fs and some ftruncate() bug.
Workaround: Delete directories in chunks of max. 100 GB at once.
* When I create and use databases (Apache Derby or DB2) on a particular SSD
disk, the machine reboots. The SSD is buggy hardware - proved by the fact
that it works on a different SSD in the same machine.
Workaround: Throttling is impossible here - I don't control the program that
creates the databases.
* When I move a 3 GB file from an internal disk (on Mac OS X 10.5) to an SMB
file system, the machine becomes unusable for ca. 15 minutes, because of
the memory management algorithm in the OS.
Workaround: Stop the 'mv' command after 1 GB and restart it soon afterwards.
* When I download files for more than 2 minutes, it saturates my DSL connection
bandwidth, and some programs lose their internet connection because they
cannot access DNS any more.
Workaround: Use wget with the --limit option.
You see, throttling becomes necessary when engineers have either not
constructed robust solutions or not tested them for some supposedly "extreme"
parameters.
I hope this isn't meant to be a serious argument. Maybe it arises
from confusion over my use of the term throttling.
Kamil Dudka
2017-01-05 21:20:05 UTC
Permalink
Raw Message
Post by Bruno Haible
Post by Torvald Riegel
IMO, users of reader-writer locks should treat them as a
mutual-exclusion mechanism. That is, a mechanism that just ensures that
two critical sections will not execute concurrently (except if both are
readers, of course), i.e., at the same time. It is also important to
understand what this does not mean, for example any prioritization of
threads attempting to acquire a reader-writer lock. Claiming that
preferring writers is required for reader-writer locks to be usable is
wrong. If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
Please also do note that you seem to be getting along fine with
exclusive mutexes that are not guaranteed to be fair or starvation-free.
If glibc is making a certain decision about semantics, it might be
worthwhile to consider (see http://austingroupbugs.net/view.php?id=1111
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701).
* Gnulib tries to make POSIX (and other) APIs usable in an easy way.
We work around not only outright bugs, but also usability problems:
* gnulib overrides the printf() function if this function crashes on
random data (the crash would be standards compliant, some people say,
but the data can be read from external files and we don't want our
programs to crash).
* gnulib overrides the fmal() function if it produces mathematically
incorrect values.
* gnulib overrides the iconv_open() function so that it will never report
that it cannot convert from "ISO-8859-1" to "UTF-8".
* gnulib overrides the re_search() function so that it understands
the most important parts of GNU regex syntax.
* "cp --help > /dev/full" fails with an error message. Thanks to gnulib.
* Its date parser prints warnings when there are ambiguities.
* It makes it easy to check all memory allocations (module 'xalloc'),
all size_t computations (module 'xsize'), and to not overlook failures
of POSIX functions such as freopen, getcwd, readlink, setenv, strtol,
vasprintf etc.
If it costs an allocation of a couple of memory words, or a couple of
extra instructions, to achieve these goals, we just don't care.
Unfortunately, it is not the only cost. Please take into consideration also
the manpower spent on debugging random build hangs in each single GNU project
that happened to pull in gnulib's lock module. When you run tests in parallel
and have no direct access to build machines, it is not even obvious which of
the tests hangs. Additionally, I guess that most of the projects whose builds
hung for no apparent reason do not care about rwlocks at all.
Post by Bruno Haible
You as a libc implementor are under pressure of delivering something
optimized for speed, because glibc will [probably] be compared against
competitors on the basis of some benchmarks. (Well, at least Ulrich Drepper
was most proud of the speed of his NPTL.)
Whereas each of the gnulib maintainers is maintaining some programs.
And as application programmers the worst case for us is not that our
program got 20% slower, but that it crashes or - worse - hangs.
Post by Torvald Riegel
Claiming that preferring writers is required for reader-writer locks
to be usable is wrong.
With the background above, you will understand why I claim that a gnulib
'lock' module is not reliable when its own unit test hangs on some
platforms and not on others.
Exactly. Even worse when it hangs only occasionally on remote builders.
Post by Bruno Haible
We want a lock facility that can EASILY achieve
- no writer starvation, and
- no reader starvation.
Which gnulib-based projects have the above requirements (besides the test)?
Post by Bruno Haible
Post by Torvald Riegel
If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
"If you want API A to be reliable, you need also to use API B and API C."
Reliable with regards to what? Are the fairness guarantees supposedly
provided by gnulib's rwlock implementation documented anywhere?

Kamil
Post by Bruno Haible
OK, we learned the hard way that
- in order to use stdout reliably, we need the 'closeout' module.
- in order to copy files reliably, we need the 'acl' module.
- and so on.
So, what is the fairness-ensuring mechanism that will make use of locks
starvation-free? Can it be *easily* and *portably* put in place?
As far as I understood, for rwlocks, one can
* avoid writer starvation through "prefer writers" in
pthread_rwlock_rdlock, AND, at the same time,
* avoid reader starvation through "prefer readers during unlock from a
writer" in pthread_rwlock_unlock [1].
Bruno
[1] http://www.doc.ic.ac.uk/~jnm/concurrency/online/monitors/sld026.htm
Torvald Riegel
2017-01-06 12:16:08 UTC
Permalink
Raw Message
Post by Kamil Dudka
Post by Bruno Haible
Post by Torvald Riegel
IMO, users of reader-writer locks should treat them as a
mutual-exclusion mechanism. That is, a mechanism that just ensures that
two critical sections will not execute concurrently (except if both are
readers, of course), i.e., at the same time. It is also important to
understand what this does not mean, for example any prioritization of
threads attempting to acquire a reader-writer lock. Claiming that
preferring writers is required for reader-writer locks to be usable is
wrong. If a program needs a particular sort of fairness or
starvation-freedom, there are several ways to ensure that, and this does
not require to be the same mechanism as the mutual exclusion mechanism.
Please also do note that you seem to be getting along fine with
exclusive mutexes that are not guaranteed to be fair or starvation-free.
If glibc is making a certain decision about semantics, it might be
worthwhile to consider (see http://austingroupbugs.net/view.php?id=1111
and https://sourceware.org/bugzilla/show_bug.cgi?id=13701).
* Gnulib tries to make POSIX (and other) APIs usable in an easy way.
We work around not only outright bugs, but also usability problems:
* gnulib overrides the printf() function if this function crashes on
random data (the crash would be standards compliant, some people say,
but the data can be read from external files and we don't want our
programs to crash).
* gnulib overrides the fmal() function if it produces mathematically
incorrect values.
* gnulib overrides the iconv_open() function so that it will never report
that it cannot convert from "ISO-8859-1" to "UTF-8".
* gnulib overrides the re_search() function so that it understands
the most important parts of GNU regex syntax.
* "cp --help > /dev/full" fails with an error message. Thanks to gnulib.
* Its date parser prints warnings when there are ambiguities.
* It makes it easy to check all memory allocations (module 'xalloc'),
all size_t computations (module 'xsize'), and to not overlook failures
of POSIX functions such as freopen, getcwd, readlink, setenv, strtol,
vasprintf etc.
If it costs an allocation of a couple of memory words, or a couple of
extra instructions, to achieve these goals, we just don't care.
Unfortunately, it is not the only cost. Please take into consideration also
the manpower spent on debugging random build hangs in each single GNU project
that happened to pull in gnulib's lock module. When you run tests in parallel
and have no direct access to build machines, it is not even obvious which of
the tests hangs. Additionally, I guess that most of the projects whose builds
hung for no apparent reason do not care about rwlocks at all.
What are the rwlock users, actually? A web search for gl_rwlock_t seems
to only turn up lock.h, but no users (unlike gl_lock_t, for example).
Post by Kamil Dudka
Post by Bruno Haible
We want a lock facility that can EASILY achieve
- no writer starvation, and
- no reader starvation.
Which gnulib-based projects have the above requirements (besides the test)?
I'd second that (as a follow-up to my previous question).
Bruno Haible
2017-01-06 14:08:59 UTC
Permalink
Raw Message
Post by Torvald Riegel
What are the rwlock users, actually? A web search for gl_rwlock_t seems
to only turn up lock.h, but no users (unlike gl_lock_t, for example).
You're right, there are no users so far. But there may be in the future:

Gnulib occasionally grabs some piece of code from glibc and makes it available
on non-glibc systems.

Many of the gnulib modules need to be multithread-safe. (For example, module
'fstrcmp' needs to be, because GNU gettext invokes it from within an OpenMP-
parallelized region.)

glibc uses an internal API <libc-lock.h> for its locking. It defines normal
locks, rwlocks, recursive locks, and once-only control. Since this is what
glibc code needs, in general, this is what the gnulib module 'lock' provides.

And the test-lock.c code tests it, because it is good practice, in gnulib,
to provide unit tests.

Bruno
Torvald Riegel
2017-01-06 14:52:30 UTC
Permalink
Raw Message
Post by Bruno Haible
Post by Torvald Riegel
What are the rwlock users, actually? A web search for gl_rwlock_t seems
to only turn up lock.h, but no users (unlike gl_lock_t, for example).
So we're spending all this time here on something that has no users, and
it's wasting the time of developers in other projects because the tests
are questionable?
Just remove it and revive it when there are actual users, please. And
then just use the C++ reader-writer lock semantics.
Post by Bruno Haible
Gnulib occasionally grabs some piece of code from glibc and makes it available
on non-glibc systems.
Many of the gnulib modules need to be multithread-safe. (For example, module
'fstrcmp' needs to be, because GNU gettext invokes it from within an OpenMP-
parallelized region.)
glibc uses an internal API <libc-lock.h> for its locking. It defines normal
locks, rwlocks, recursive locks, and once-only control. Since this is what
glibc code needs, in general, this is what the gnulib module 'lock' provides.
This is internal to glibc. How one derives a need to expose any of this
externally is unclear to me.
Post by Bruno Haible
And the test-lock.c code tests it, because it is good practice, in gnulib,
to provide unit tests.
It tests a feature that glibc does not provide.