Talk:Non-blocking algorithm


Untitled


Article merged: see the old talk page of the merged-in article, talk:Lock-free and wait-free algorithms

Merge request


It has been suggested by someone else that both "Non-blocking algorithm" and "Lock-free and wait-free algorithms" be merged into this document. That said, I agree that non-blocking algorithm could be merged in/redirected (as algorithms to implement non-blocking synchronization are what that article discusses, albeit very briefly). Similarly, as "Lock-free and wait-free algorithms" are also examples of non-blocking synchronization algorithms, they could also be incorporated; however, since the lock-free and wait-free conditions are developed concepts of their own, they probably should only be linked/referenced by this page. —The preceding unsigned comment was added by 216.16.237.2 (talk • contribs).

"Non-blocking synchronization" should mean only what it says: that a synchronization primitive does not block. There are many ways to achieve this that do not involve lock-free techniques (another is by using callbacks), and merging the articles would impede fixing the current overemphasis on lock-free techniques in the article. DavidHopwood 21:35, 17 June 2006 (UTC)Reply

Could you give me a reference to such a use of the term associated with such a technique? I wasn't aware of the dual use, and would just like to see it confirmed. It would then seem appropriate to move the current page contents to Non-blocking algorithm and reclaim this page-name for the more general idea. Agreed? --Chris Purcell 23:23, 17 June 2006 (UTC)Reply

Merge request reopened


Requesting a merge with non-blocking synchronization, because:

  • non-blocking sync is the more general topic
  • obstruction-freedom is also a non-blocking property and isn't covered in this article
  • wait-free and lock-free are two different concepts, but both are non-blocking, hence it would only make sense to a) merge with non-blocking, or b) generate two articles, one on wait-free algorithms and one on lock-free algorithms (and of course one for obstruction-free algorithms)

Rattusdatorum 15:00, 16 January 2007 (UTC)Reply

In support of the merge, I would add that the Lock-free_and_wait-free_algorithms page is currently a subset of the Non-blocking synchronization page (apart from the links sections, which can be merged). If "non-blocking synchronization" is not the agreed term, the page should be renamed to Non-blocking algorithm, or simply Non-blocking (progress guarantee), but the merge should still be performed. --Chris Purcell 16:29, 16 January 2007 (UTC)Reply

Lock free inter-task communication


I disagree very strongly.

Lock free and wait free algorithms are totally asynchronous - there is no synchronisation at all - they are the very antithesis of any form of synchronisation, blocking or otherwise.

Lock free and wait free algorithms were very heavily used in Sinclair_QDOS (1983) (which I am told was the operating system that Linus Torvalds cut his teeth on - is this true?) and subsequent systems (SMS and variants, Stella). The complete lack of synchronisation mechanisms and their associated problems in QDOS gave rise to the operating system's original name: Domestos, the trade mark for a bleach / toilet cleaner that claimed to destroy 99.9% of all known bugs.

TonyTebby 08:43, 29 March 2007 (UTC)Reply

So is what you're saying that this page should be rewritten completely, to actually discuss non-blocking synchronization? Because I'd approve of that. It could include a non-blocking algorithms section, linking to the full discussion on that page. --Chris Purcell 15:33, 29 March 2007 (UTC)Reply
I can see that I am going to regret ever having stuck my oar in! But here goes. I got here from the "Lock-free and wait-free algorithms" page and not the non-blocking synchronisation page. The problem is that non-blocking synchronisation, as defined in the first paragraph, seems to indicate, I would think correctly, that non-blocking synchronisation is about ways of implementing semaphore-like operations without indefinite waits, i.e. pre-1965 resource locking. While this may be wait-free in the sense that the thread is not blocked, it is not wait-free in the sense described on the "Lock-free and wait-free algorithms" page: "'Wait-free' refers to the fact that a thread can complete any operation in a finite number of steps, regardless of the actions of other threads". With non-blocking semaphores, although there is no wait, the operation may not be completed. Furthermore, synchronisation of two threads implies that one or other of the threads may have to be delayed to synchronise with the other. This means that synchronisation cannot be wait-free by definition.
The rest of the "Non-blocking synchronization" page does not, however, seem to have much to do with non-blocking semaphores or synchronisation at all. The problem is possibly just the title - if the title were changed to "Non-blocking resource sharing", would the problem go away? Merging the two pages would still be a heavy job! TonyTebby 16:31, 30 March 2007 (UTC)Reply

This is where I get confused: "non-blocking synchronization" has, in my academic experience, always been a synonym for a non-blocking algorithm. This makes sense in a way: callbacks, for instance, are only non-blocking if implemented with a non-blocking queue or suchlike. If some research were cited using "non-blocking synchronization" in a more general sense, that would make it clearer to me this isn't people getting the wrong end of the misnamed stick. If not, the page should be removed in favour of the better-defined "non-blocking algorithms". --Chris Purcell 05:28, 5 April 2007 (UTC)Reply

Bumping topic


This merge discussion has fallen off the radar without being resolved, so I am replacing the date in

{{Mergefrom|Lock-free and wait-free algorithms|date=January 2007}}
{{Mergeto|Non-blocking_synchronization|date=January 2007}}

with May 2008

"No lock-free implementation"


Someone quoth: "Unfortunately, even with special atomic primitives, some common data structures -- such as doubly-linked lists -- have no known lock-free implementation."

Herb Sutter is phrasing things badly. All algorithms have a lock-free implementation, proven decades ago using universal constructions. More recently, software transactional memory has provided reasonably efficient implementations, too. What he must presumably mean is "no direct implementation exists". —Chris Purcell , 13:08, 21 December 2008 (UTC)Reply
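
For readers wondering what such a universal construction looks like, below is a minimal C++ sketch of the copy, modify and compare-and-swap style (my own illustration, not Sutter's or Herlihy's code; the LockFreeCell name is made up, and shared_ptr's atomic free functions are typically not themselves lock-free, so a real construction would use raw pointers with hazard pointers or epoch-based reclamation):

    #include <atomic>
    #include <memory>

    // Sketch only: wrap any sequential structure State as a shared object by
    // copying the current state, applying the update to the copy, and trying
    // to publish the copy with compare-and-swap.  A failed CAS means another
    // thread made progress, so the system as a whole never stalls.
    template <typename State>
    class LockFreeCell {
    public:
        explicit LockFreeCell(State initial)
            : current_(std::make_shared<const State>(std::move(initial))) {}

        template <typename Update>
        void modify(Update update) {
            auto expected = std::atomic_load(&current_);
            for (;;) {
                auto next = std::make_shared<const State>(update(*expected));
                if (std::atomic_compare_exchange_weak(&current_, &expected, next))
                    return;            // our copy is now the published state
                // expected has been refreshed by the failed CAS; retry
            }
        }

        std::shared_ptr<const State> read() const {
            return std::atomic_load(&current_);
        }

    private:
        std::shared_ptr<const State> current_;
    };

Plugging a doubly-linked list in as State gives a grossly inefficient but formally lock-free implementation, which is presumably the kind of thing the "no known lock-free implementation" phrasing glosses over.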

Obstruction freedom and occ


From the article: "Obstruction-freedom is also called optimistic concurrency control."

Is this correct? From the rest of the text, it seems those are two different things.

Dekker's algorithm is lock free?


The article says that Dekker's algorithm is lock-free. I think this statement is very vague, and, depending on the meaning, incorrect. Dekker's algorithm is a method for implementing critical sections. A program that uses it is, by definition, not lock-free (the program uses critical sections => a thread that crashes in the critical section will bring the whole system to a halt). If you agree, I would remove the mention of Dekker's algorithm. --Gigiultraplus (talk) 21:58, 11 April 2010 (UTC)Reply

I agree that Dekker's algorithm is not lock free, and it is not wait free. If one of the threads crashes while executing the critical section, all other threads will deadlock. There are also further inconsistencies between the Non-blocking synchronization page (it says that Dekker is lock free only) and the Dekker's algorithm page (it says it is lock and wait free). Dorozcog (talk) 21:04, 3 August 2010 (UTC)Reply

I agree. I've removed the link. However, I can't see any errors on the algorithm page, which says deadlock-free and starvation-free (necessary but not sufficient for lock-free and wait-free respectively). --Chris Purcell (talk) 14:55, 4 August 2010 (UTC)Reply
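
To make the argument above concrete, here is a Dekker-style two-thread lock sketched in C++ (my own illustration, not text from either article). If the thread inside the critical section crashes or is suspended, its flag stays set and the other thread spins in lock() forever, so there is no system-wide progress guarantee:

    #include <atomic>

    std::atomic<bool> wants_to_enter[2] = {false, false};  // one flag per thread
    std::atomic<int> turn{0};

    void lock(int me) {
        int other = 1 - me;
        wants_to_enter[me].store(true);
        while (wants_to_enter[other].load()) {        // busy-wait on the peer
            if (turn.load() != me) {
                wants_to_enter[me].store(false);      // back off, let the peer go
                while (turn.load() != me) { /* spin until it is our turn */ }
                wants_to_enter[me].store(true);
            }
        }
        // critical section; a crash or suspension here stalls the other thread
    }

    void unlock(int me) {
        turn.store(1 - me);
        wants_to_enter[me].store(false);
    }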

FastFlow


Is FastFlow a programming paradigm or an operation that can be implemented in lock-free fashion? —Preceding unsigned comment added by 75.85.180.100 (talk) 08:36, 1 August 2010 (UTC)Reply

Yes, FastFlow has been implemented in a lock-free fashion, according to:

Would it be useful to mention FastFlow in this article, possibly in the "Implementation" section? --DavidCary (talk) 19:31, 20 September 2021 (UTC)Reply

Requested move

The following is a closed discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was Move Parsecboy (talk) 14:45, 25 August 2010 (UTC)Reply

Non-blocking synchronization → Non-blocking algorithm — As discussed above, this page appears to be incorrectly named: changing "synchronization" to "algorithm" would disambiguate, and allow the more general term to be reclaimed. --Chris Purcell (talk) 15:05, 4 August 2010 (UTC)Reply

The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Lock-free algorithms avoid priority inversion?


I don't see how the claim, "Lock-free algorithms...avoid priority inversion", is supported by citations. It's certainly not true in general, as the system-wide progress guarantee can still be satisfied if the only thread making progress is the lowest-priority one. Is there work showing all lock-free algorithms can be modified to be priority-aware and prevent priority inversion?

(I intend to, at some future point, remove all reference to priority inversion if nobody clarifies this, by the way.)

--Chris Purcell (talk) 16:06, 13 January 2012 (UTC)Reply

I removed the text. --Chris Purcell (talk) 13:09, 18 April 2012 (UTC)Reply

I have a system with two tasks, and sometimes "the only thread making progress is the lowest-priority one". It may sound like priority inversion. However, the actual problem commonly called "the priority inversion problem" never occurs when I have only two tasks. The priority inversion article specifically says
"one way to avoid priority inversion is to avoid blocking, for example by using Non-blocking synchronization".
To me, that sounds like all lock-free algorithms already prevent priority inversion without any "priority-awareness" modification.
So am I confusing "the priority inversion problem" with some other kind of problem, or does maybe the "priority inversion" article need more clarification? --DavidCary (talk) 02:03, 13 September 2015 (UTC)Reply

Sounds like the priority inversion article should be linking to wait-free algorithms. --Chris Purcell (talk) 13:59, 25 September 2015 (UTC)Reply

What does 'weak' mean here


What does 'weak' mean here: "some data structures are weak enough to be implemented"? Its use needs to be defined for the non-expert. -- Dougher (talk) 01:05, 5 May 2012 (UTC)Reply


External links

1) Article "Simple, Fast, and Practical Non-Blocking and Blocking Concurrent Queue Algorithms" by Maged M. Michael and Michael L. Scott

3) Survey "Some Notes on Lock-Free and Wait-Free Algorithms" by Ross Bencina

4) java.util.concurrent.atomic – supports lock-free and thread-safe programming on single variables

5) System.Threading.Interlocked - Provides atomic operations for variables that are shared by multiple threads (.NET Framework)

6) The Jail-Ust Container Library

7) Practical lock-free data structures

8) Thesis "Efficient and Practical Non-Blocking Data Structures" (1414 KB) by Per Håkan Sundell

9) WARPing - Wait-free techniques for Real-time Processing

10) Non-blocking Synchronization: Algorithms and Performance Evaluation. (1926 KB) by Yi Zhang

11) "Design and verification of lock-free parallel algorithms" by Hui Gao

12) "Asynchronous Data Sharing in Multiprocessor Real-Time Systems Using Process Consensus" by Jing Chen and Alan Burns

13) Atomic Ptr Plus Project - collection of various lock-free synchronization primitives

14) AppCore: A Portable High-Performance Thread Synchronization Library - An Effective Marriage between Lock-Free and Lock-Based Algorithms

15) WaitFreeSynchronization and LockFreeSynchronization at the Portland Pattern Repository

16) Multiplatform library with atomic operations

17) A simple C++ lock-free LIFO implementation

18) libcds - C++ library of lock-free containers and safe memory reclamation schema

19) Concurrency Kit - A C library for non-blocking system design and implementation


Moved from the article page. It has been tagged as a link farm for over 6 months. Per WP:EL, each link needs to be individually justified as appropriate. (I have removed some that clearly fail WP:ELNO.)

Which of the above meet the external link criteria and should be returned, and why? -- TRPoD aka The Red Pen of Doom 20:21, 28 October 2012 (UTC)Reply

I vote none of them. Also, many of the internal "See also" links appear extraneous to me. --Chris Purcell (talk) 10:30, 5 November 2012 (UTC)Reply

Really non-blocking?


"With few exceptions, non-blocking algorithms use atomic read-modify-write primitives that the hardware must provide"

How do atomic operations not block all other cores from the same resource?

Dear reader,
Non-blocking algorithms are "non-blocking" in the same way a normal "read" instruction is considered "non-blocking".
In a multiprocessor system, often two CPUs simultaneously try to read data from different locations in the same RAM chip.
Since a typical RAM chip only has one set of address pins, it can only retrieve data from one location at a time.
In order for the system to work as expected, a properly constructed memory arbiter must somehow choose one of the CPUs to drive the address pins of that RAM chip, and somehow delay the other CPU and keep its desired address disconnected from the address pins of the RAM until the next memory cycle.
The read instruction running on the delayed CPU always appears to the software to have completed successfully.
Atomic instructions work the same way, except that a properly designed arbiter sometimes needs to delay the other CPUs several memory cycles in order for multi-memory-cycle atomic instructions to complete and actually appear atomic to those other CPUs -- but those delayed CPUs will resume and finish whatever instruction they are working on within "several memory cycles".
(By "properly designed", I'm excluding bugs like the Cyrix coma bug).
In this context, delaying a CPU such that it takes a little longer to execute an instruction, but that instruction always completes successfully within "several memory cycles", is not considered a "block".
In this context, a "block" is when some other CPU does something that could keep this CPU from doing anything useful for an unlimited amount of time, perhaps more than a hundred memory cycles.
How can we clarify this article to make this easier to understand? Does it need a distinction between "short delay" and "indefinite block"? --DavidCary (talk) 18:42, 11 March 2014 (UTC)Reply
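
As a concrete illustration (my own C++ sketch, assuming nothing beyond standard atomics), a lock-free increment built on compare-and-swap only ever suffers the "short delay" of a retry; a thread suspended between its load and its CAS holds nothing the other threads need, so they keep completing their own increments:

    #include <atomic>

    // Contention costs at most a retry; suspension of one thread never
    // prevents the others from completing their increments.
    void increment(std::atomic<long>& counter) {
        long observed = counter.load();
        while (!counter.compare_exchange_weak(observed, observed + 1)) {
            // CAS failed: another thread updated the counter first.
            // 'observed' now holds the fresh value; just try again.
        }
    }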

opinions about non-blocking algorithms


Someone once wrote, in a magazine that appears to be a reliable source, several articles criticizing lock-free code for giving "a false sense of security".

My understanding of Wikipedia's WP:YESPOV policy is that each Wikipedia article should mention all of the significant views that have been published by reliable sources on a topic, even the ones I don't agree with.

In an attempt to follow policy, I added a single sentence with a very brief and watered-down mention of that criticism, using articles in that magazine as a reference. (I used WP:BUNDLING to cram several articles into a single reference -- not sure if that was appropriate or not).

When that sentence (and the reference) was deleted, I more-or-less reverted, and the sentence (and the reference) was deleted again.[1][2]

Rather than deleting statements about the topic that people sincerely believe but are non-neutral (or even wrong), I feel it improves a Wikipedia article on that topic to mention those statements, and then give reliable sources that show that those statements are not facts but merely opinions -- or perhaps even sources that show that experts now consider those statements to be wrong.

This is the "discussion" part of the WP:BRD process. --DavidCary (talk) 20:04, 11 March 2014 (UTC)Reply

I can't speak for the original editor who reverted you, but I found your addition to be out-of-place and in insufficient context. Which part of the WP article do you feel contradicts the statement you are adding? That may be a better candidate for your NPOV improvement. --Chris Purcell (talk) 19:50, 22 April 2014 (UTC)Reply

Huh? I don't see anything in the Wikipedia article "Non-blocking algorithm" that contradicts the sentence I added ("... it is difficult to write lock-free code that is correct ..."), and I am mystified at how anyone would come to the conclusion that I thought there was such a contradiction.

I hope that someday soon it will be easy to write lock-free code. After that happens, I hope that this Wikipedia article will be updated, with references that support the idea that it is now easy to write lock-free code -- statements and references directly contradicting the sentence I added.

I honestly thought that sentence had something to do with implementation, so I stuck it in the "Implementation" section.[3] If you think it is out-of-place there, please move it to a more appropriate place. --DavidCary (talk) 02:47, 6 June 2014 (UTC)Reply

Looks good to me. --Chris Purcell (talk) 16:29, 1 July 2014 (UTC)Reply

Does lock-free really mean "without locks"?


I noticed that the Motivation section is entirely unsourced, and it seems that it uses a much stricter definition of lock-free than the lead paragraph: "A non-blocking algorithm is lock-free if there is guaranteed system-wide progress" only seems to imply a guaranteed absence of deadlock (or livelock), rather than of locks (the synchronization structure). QVVERTYVS (hm?) 20:52, 25 March 2014 (UTC)Reply

If a system has locks, it cannot guarantee progress if the lock-holder is suspended indefinitely, unlike a non-blocking algorithm. I'll drop a citation in, thanks for the nudge. I'll try and improve the definition while I'm at it. --Chris Purcell (talk) 19:41, 22 April 2014 (UTC)Reply

In fact, I won't drop a citation in, because I thought you were talking about the lead paragraph. I have no idea what sources the motivation draws on. I added "regardless of scheduling" to the definition, which I hope makes the locking thing clearer. --Chris Purcell (talk) 01:57, 24 April 2014 (UTC)Reply
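
A small contrast may help future editors here (my own sketch, assuming a trivial shared counter): the locked version below never deadlocks, yet suspending the holder of m indefinitely stalls every later caller at the lock, so system-wide progress is not guaranteed regardless of scheduling; the atomic version has no holder to suspend:

    #include <atomic>
    #include <mutex>

    std::mutex m;
    long locked_counter = 0;                  // protected by m
    std::atomic<long> lock_free_counter{0};

    // Blocking: every caller waits at m.lock() while the holder is suspended.
    void bump_locked()    { std::lock_guard<std::mutex> g(m); ++locked_counter; }

    // Lock-free: any thread that gets to run completes its fetch_add,
    // regardless of what the scheduler does to the other threads.
    void bump_lock_free() { lock_free_counter.fetch_add(1); }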

"Interactions with preemptive scheduler" or "deadlock"?


Chris Purcell initially added this text:

Literature up to the turn of the century used "non-blocking" synonymously with lock-free: an algorithm that was free from deadlock and livelock. However, since 2003 [1], the term has been weakened to deny only deadlock.

but later changed this text to the present:

Literature up to the turn of the century used "non-blocking" synonymously with lock-free: an algorithm with guaranteed system-wide progress. However, since 2003 [1], the term has been weakened to only prevent progress-blocking interactions with a preemptive scheduler.

In both cases the text explains further:

In modern usage, therefore, an algorithm is non-blocking if the suspension of one or more threads will not stop the potential progress of the remaining threads.

Why was this changed? Would they both, in this context, be formally correct descriptions of "non-blocking" (and thus implicitly of "blocking")? If they are semantically the same (or slightly different), could it be an idea to merge these two, to include both deadlock and pathological interactions with the preemptive scheduler?

It is possible to deadlock also with a non-preemptive scheduler, e.g. cooperative scheduling. This is often used with CSP-type schedulers, where channel communication is often synchronous. This could mean that if such a system deadlocks (= cannot proceed), then "blocking on a channel" is not the same usage of the word "blocking" as in this article's context, since all other processes that are ready to run may then in fact be scheduled? "Blocking on a channel" (synchronous, or on a full buffer if buffered) is standard procedure in a multi-process system. Any needed asynchronism is added by building on this primitive.

So the "blocking" that "non-blocking algorithm" is meant to avoid is not the same as the "blocking on a channel"? I know that Wikipedia sets up an article about Non-blocking I/O (redirected to Asynchronous I/O) - but I/O and communication is just another side of communicating over shared variables of any sort, which I believe is discussed here? There's no need to even think about non-blocking algorithms if the ultimate purpose is not to communicate?

I have added a blog note (with a figure) trying to set this in context. But am I right? (Disclaimer: I wrote the blog note, and I don't have any ads in my site - so no money involved)

Øyvind Teig (talk) 07:19, 10 September 2014 (UTC)--Øyvind Teig (talk) 06:41, 19 September 2014 (UTC)Reply

@Øyvind Teig: I agree, this needs work. It's probably incorrect in its current form. How about this?
Literature up to the turn of the century used "non-blocking" synonymously with lock-free: an algorithm with guaranteed system-wide progress, which therefore cannot suffer deadlock or livelock. However, in 2003 [1], the term was weakened to allow livelock, and now means the same as obstruction-free: an algorithm guaranteeing that at any point, if all threads bar one are suspended, the remaining thread will be able to make progress. In particular, that means the system cannot deadlock.
To the rest of your question, yes, "non-blocking algorithm" is different from "non-blocking I/O".
--Chris Purcell (talk) 15:51, 30 October 2014 (UTC)Reply
Update: I found the following quote which I intend to quote verbatim (and cite) if nobody objects:
An algorithm is called nonblocking if failure or suspension of any thread cannot cause failure or suspension of another thread. Java Concurrency in Practice, p.41
I will probably move the pre-2003 definition out of the introduction, too.
--Chris Purcell (talk) 15:22, 29 December 2014 (UTC)Reply
Now done. --Chris Purcell (talk) 23:07, 7 January 2015 (UTC)Reply

Single reader/writer ring buffer w/o atomic ops has size constraints?


In what way does a single reader/writer lock-free ring buffer without atomic read-modify-write primitives require a size which evenly divides the overflow of an unsigned int as stated in the "Implementation" section? Take this example of a wait-free implementation that works with any size and without atomic read-modify-write primitives: [1]. It is untested but assumed to work (or close to it). IzzetteIO (talk) 05:43, 26 May 2017 (UTC) Isabell CowanReply
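
For reference, here is a single-producer/single-consumer ring sketched in C++ (my own illustration, not the linked example) that uses only atomic loads and stores with acquire/release ordering and wraps its indices with an explicit comparison, so it needs no read-modify-write primitive and no capacity that evenly divides the unsigned overflow; one slot is kept empty to distinguish full from empty:

    #include <atomic>
    #include <cstddef>

    template <typename T, std::size_t Capacity>
    class SpscRing {
    public:
        bool push(const T& item) {               // called only by the producer
            std::size_t head = head_.load(std::memory_order_relaxed);
            std::size_t next = head + 1 == Capacity ? 0 : head + 1;
            if (next == tail_.load(std::memory_order_acquire))
                return false;                    // buffer full
            buffer_[head] = item;
            head_.store(next, std::memory_order_release);
            return true;
        }

        bool pop(T& item) {                      // called only by the consumer
            std::size_t tail = tail_.load(std::memory_order_relaxed);
            if (tail == head_.load(std::memory_order_acquire))
                return false;                    // buffer empty
            item = buffer_[tail];
            std::size_t next = tail + 1 == Capacity ? 0 : tail + 1;
            tail_.store(next, std::memory_order_release);
            return true;
        }

    private:
        T buffer_[Capacity];
        std::atomic<std::size_t> head_{0};       // next slot to write
        std::atomic<std::size_t> tail_{0};       // next slot to read
    };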

References

non-blocking in persistent memory


Currently the "persistent memory" article has a section Persistent memory#The read-of-non-persistent-write problem that discusses a "problem ... for lock-free programs". Is this problem something that also needs to be discussed in this "non-blocking algorithm" article? --DavidCary (talk) 23:35, 20 September 2021 (UTC)Reply