What is the difference between mutex and critical section?
Please explain from the Linux and Windows perspectives.
I am programming in C#; would these two terms make a difference? Please post as much as you can, with examples and such.
Thanks
windows linux multithreading programming-languages
edited Jul 6 '12 at 9:52
asked Apr 29 '09 at 0:23
ultraman
9 Answers
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention. Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).
I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.
Here's the test code. I ran it with the mutex timed first and second and got similar results, so we aren't seeing any ordering effects.
#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    CRITICAL_SECTION critSec;
    InitializeCriticalSection(&critSec);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    LARGE_INTEGER start, end;

    // Force code into memory, so we don't see any effects of paging.
    EnterCriticalSection(&critSec);
    LeaveCriticalSection(&critSec);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        EnterCriticalSection(&critSec);
        LeaveCriticalSection(&critSec);
    }
    QueryPerformanceCounter(&end);
    int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    WaitForSingleObject(mutex, INFINITE);
    ReleaseMutex(mutex);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        WaitForSingleObject(mutex, INFINITE);
        ReleaseMutex(mutex);
    }
    QueryPerformanceCounter(&end);
    int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    printf("Mutex: %d ms  CritSec: %d ms\n", totalTime, totalTimeCS);

    DeleteCriticalSection(&critSec);
    CloseHandle(mutex);
    return 0;
}
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 and then looping while checking its return value has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... not a mutex. YMMV. I'd be interested in seeing how the mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
In practice there are many mutex implementations available in Windows. They differ mainly in their level of locking, their scope, their cost, and their performance under different levels of contention. See CLR Inside Out - Using concurrency for scalability for a chart of the costs of different mutex implementations.
Available synchronization primitives:
- Monitor
- Mutex
- Semaphore
- ReaderWriterLock
- ReaderWriterLockSlim
- Interlocked
The lock(object) statement is implemented using a Monitor - see MSDN for reference.
In recent years much research has been done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes finish their work so that it can finally finish its own work. As a consequence, a process can finish its work even when other processes that tried to perform some work hang. If locks were used, those processes would not release their locks and would prevent other processes from continuing.
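To give a flavor of the lock-free style (without the "helping" technique, which is considerably more involved), here is a minimal C++ sketch of a lock-free stack push: a thread never blocks, it just retries a compare-and-swap until it wins. The class and names are made up for this sketch; pop is omitted because safe memory reclamation (the ABA problem) is a separate topic.
#include <atomic>
#include <utility>

template <typename T>
class LockFreeStack {
    struct Node { T value; Node* next; };
    std::atomic<Node*> head{nullptr};
public:
    void push(T value) {
        Node* node = new Node{std::move(value), head.load(std::memory_order_relaxed)};
        // If another thread changed head in the meantime, the CAS fails and
        // updates node->next with the new head; we simply retry. No thread
        // ever waits on another thread's lock here.
        while (!head.compare_exchange_weak(node->next, node,
                                           std::memory_order_release,
                                           std::memory_order_relaxed)) {
        }
    }
};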
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
In addition to the other answers, the following details are specific to critical sections on Windows:
- in the absence of contention, acquiring a critical section is as simple as an InterlockedCompareExchange operation
- the critical section structure holds room for a mutex; it is initially unallocated
- if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section then degrades to that of the mutex
- if you anticipate high contention, you can initialize the critical section with a spin count (a usage sketch follows below)
- if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
- if the spin count expires, the mutex will be allocated
- when the owning thread releases the critical section, it checks whether the mutex is allocated; if it is, it sets the mutex to release a waiting thread
In Linux, I think they have a "spin lock" that serves a similar purpose to a critical section with a spin count.
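As a usage sketch of the spin-count point above (the spin count of 4000 is an arbitrary example value, not a recommendation):
#include <windows.h>

CRITICAL_SECTION g_cs;

void Init()
{
    // Spin up to 4000 times in user mode before falling back to the kernel wait.
    InitializeCriticalSectionAndSpinCount(&g_cs, 4000);
}

void TouchSharedState()
{
    EnterCriticalSection(&g_cs);   // busy-waits briefly under contention, then sleeps
    // ... update shared data ...
    LeaveCriticalSection(&g_cs);
}

void Shutdown()
{
    DeleteCriticalSection(&g_cs);
}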
Unfortunately a Windows critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
That is definitely not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
Critical sections and mutexes are not operating-system specific; they are concepts of multithreading/multiprocessing.
Critical Section
A piece of code that must only run by itself at any given time. For example, say there are 5 threads running simultaneously and a function called "critical_section_function" which updates an array; you don't want all 5 threads updating the array at once. So while one thread is running critical_section_function(), none of the other threads may run their critical_section_function.
Mutex
A mutex is a way of implementing the critical section code (think of it like a token: a thread must have possession of it to run the critical_section_function).
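To make the token analogy concrete, here is a minimal C++ sketch of the scenario described above (critical_section_function and the shared array are the hypothetical names from the example):
#include <mutex>
#include <thread>
#include <vector>

std::vector<int> shared_array(100);
std::mutex array_mutex;  // the "token": only the holder may touch shared_array

void critical_section_function(int thread_id)
{
    std::lock_guard<std::mutex> guard(array_mutex);  // acquire the token (blocks if another thread holds it)
    for (auto& x : shared_array)
        x += thread_id;                              // only one thread updates the array at a time
}                                                    // token released when guard goes out of scope

int main()
{
    std::vector<std::thread> threads;
    for (int i = 0; i < 5; ++i)
        threads.emplace_back(critical_section_function, i);
    for (auto& t : threads)
        t.join();
}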
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
A mutex is an object that a thread can acquire, preventing other threads from acquiring it. It is advisory, not mandatory; a thread can use the resource the mutex represents without acquiring it.
A critical section is a length of code that is guaranteed by the operating system to not be interrupted. In pseudo-code, it would be like:
StartCriticalSection();
DoSomethingImportant();
DoSomeOtherImportantThing();
EndCriticalSection();
Am I incorrect? I would appreciate it if down voters would comment with a reason.
– Zifre
Apr 29 '09 at 1:18
+1 because the down vote confuses me. :p I'd say this is more correct than the statements that hint to Mutex and Critical Section being two different mechanisms for multithreading. Critical section is any section of code which ought to be accessed only by one thread. Using mutexes is one way to implement critical sections.
– Mikko Rantanen
Apr 29 '09 at 1:22
I think the poster was talking about user mode synchronization primitives, like a win32 Critical section object, which just provides mutual exclusion. I don't know about Linux, but Windows kernel has critical regions which behave like you describe - non-interruptable.
– Michael
Apr 29 '09 at 1:22
I don't know why you got downvoted. There's the concept of a critical section, which you've described correctly, which is different from the Windows kernel object called a CriticalSection, which is a type of mutex. I believe the OP was asking about the latter definition.
– Adam Rosenfield
Apr 29 '09 at 1:22
At least I got confused by the language agnostic tag. But in any case this is what we get for Microsoft naming their implementation the same as their base class. Bad coding practice!
– Mikko Rantanen
Apr 29 '09 at 1:27
The Linux equivalent of a 'fast' Windows critical section would be a futex, which stands for fast user space mutex. The difference between a futex and a mutex is that with a futex, the kernel only becomes involved when arbitration is required, so you save the overhead of talking to the kernel each time the atomic counter is modified. That can save a significant amount of time negotiating locks in some applications.
A futex can also be shared amongst processes, using the means you would employ to share a mutex.
Unfortunately, futexes can be very tricky to implement (PDF). (2018 update: they aren't nearly as scary as they were in 2009.)
Beyond that, it's pretty much the same across both platforms. You're making atomic, token-driven updates to a shared structure in a manner that (hopefully) does not cause starvation. What remains is simply the method of accomplishing that.
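To illustrate the "kernel only when arbitration is required" behavior, here is a minimal sketch of a futex-based lock in the spirit of Drepper's "Futexes Are Tricky" paper. It is Linux-only, error handling is omitted, and the FutexLock name is made up for this sketch; treat it as an illustration rather than production code.
#include <atomic>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

// Thin wrapper over the futex syscall. Assumes std::atomic<int> has the same
// layout as a plain int, which holds on Linux with the usual toolchains.
static long futex(std::atomic<int>* addr, int op, int val) {
    return syscall(SYS_futex, reinterpret_cast<int*>(addr), op, val, nullptr, nullptr, 0);
}

class FutexLock {
    // 0 = unlocked, 1 = locked with no waiters, 2 = locked with possible waiters
    std::atomic<int> state{0};
public:
    void lock() {
        int expected = 0;
        // Fast path: an uncontended acquire is a single CAS in user space.
        if (state.compare_exchange_strong(expected, 1)) return;
        do {
            // Mark the lock as contended and sleep in the kernel while it stays 2.
            if (expected == 2 || state.compare_exchange_strong(expected, 2))
                futex(&state, FUTEX_WAIT, 2);
            expected = 0;
        } while (!state.compare_exchange_strong(expected, 2)); // acquire as "contended"
    }
    void unlock() {
        // Only enter the kernel if someone may be waiting.
        if (state.exchange(0) == 2)
            futex(&state, FUTEX_WAKE, 1);
    }
};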
In Windows, a critical section is local to your process. A mutex can be shared/accessed across processes. Basically, critical sections are much cheaper. Can't comment on Linux specifically, but on some systems they're just aliases for the same thing.
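As a sketch of the cross-process case on Windows (the mutex name "MyAppLock" is a made-up example; any two processes that pass the same name get handles to the same kernel object):
#include <windows.h>
#include <stdio.h>

int main()
{
    HANDLE mutex = CreateMutexW(NULL, FALSE, L"MyAppLock");
    if (mutex == NULL)
        return 1;
    if (GetLastError() == ERROR_ALREADY_EXISTS)
        printf("Another process created this mutex first; we share it.\n");

    WaitForSingleObject(mutex, INFINITE);   // mutual exclusion across processes
    // ... use the shared resource, e.g. a memory-mapped file ...
    ReleaseMutex(mutex);

    CloseHandle(mutex);
    return 0;
}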
Just to add my 2 cents: critical sections are defined as a structure, and operations on them are performed in the user-mode context.
ntdll!_RTL_CRITICAL_SECTION
+0x000 DebugInfo : Ptr32 _RTL_CRITICAL_SECTION_DEBUG
+0x004 LockCount : Int4B
+0x008 RecursionCount : Int4B
+0x00c OwningThread : Ptr32 Void
+0x010 LockSemaphore : Ptr32 Void
+0x014 SpinCount : Uint4B
Mutexes, on the other hand, are kernel objects (ExMutantObjectType) created in the Windows object directory. Mutex operations are mostly implemented in kernel mode. For instance, when creating a mutex, you end up calling nt!NtCreateMutant in the kernel.
What happens when a program that initializes and uses a mutex object crashes? Does the mutex object get automatically deallocated? No, I would say. Right?
– Ankur
Oct 26 '09 at 12:30
Kernel objects have a reference count. Closing a handle to an object decrements the reference count and when it reaches 0 the object is freed. When a process crashes, all of its handles are automatically closed, so a mutex that only that process has a handle to would be automatically deallocated.
– Michael
Nov 18 '09 at 17:19
Great answer from Michael. I've added a third test for the mutex class introduced in C++11. The result is somewhat interesting, and still supports his original endorsement of CRITICAL_SECTION objects for single processes.
#include <windows.h>
#include <mutex>
#include <cstdio>

int main()
{
    std::mutex m;
    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    CRITICAL_SECTION critSec;
    InitializeCriticalSection(&critSec);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    LARGE_INTEGER start, end;

    // Force code into memory, so we don't see any effects of paging.
    EnterCriticalSection(&critSec);
    LeaveCriticalSection(&critSec);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        EnterCriticalSection(&critSec);
        LeaveCriticalSection(&critSec);
    }
    QueryPerformanceCounter(&end);
    int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    WaitForSingleObject(mutex, INFINITE);
    ReleaseMutex(mutex);

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        WaitForSingleObject(mutex, INFINITE);
        ReleaseMutex(mutex);
    }
    QueryPerformanceCounter(&end);
    int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    m.lock();
    m.unlock();

    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        m.lock();
        m.unlock();
    }
    QueryPerformanceCounter(&end);
    int totalTimeM = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    printf("C++ Mutex: %d ms  Mutex: %d ms  CritSec: %d ms\n", totalTimeM, totalTime, totalTimeCS);

    DeleteCriticalSection(&critSec);
    CloseHandle(mutex);
    return 0;
}
My results were 217, 473, and 19 (note that my ratio of times for the last two is roughly comparable to Michael's, but my machine is at least four years younger than his, so you can see evidence of increased speed between 2009 and 2013, when the XPS-8700 came out). The new mutex class is twice as fast as the Windows mutex, but still less than a tenth the speed of the Windows CRITICAL_SECTION object. Note that I only tested the non-recursive mutex. CRITICAL_SECTION objects are recursive (one thread can enter them repeatedly, provided it leaves the same number of times).
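A small sketch of the recursion difference mentioned above, using std::recursive_mutex (the function names are made up for the example):
#include <mutex>

std::recursive_mutex rm;

void inner()
{
    // The same thread may lock a recursive_mutex again without deadlocking,
    // which mirrors the recursive behavior of a Windows CRITICAL_SECTION.
    std::lock_guard<std::recursive_mutex> guard(rm);
    // ... work on shared state ...
}

void outer()
{
    std::lock_guard<std::recursive_mutex> guard(rm);
    inner();   // lock count goes to 2, then unwinds as the guards are destroyed
}

int main()
{
    outer();
    // With a plain std::mutex, the nested lock in inner() would be undefined
    // behavior (typically a deadlock).
}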
add a comment |
Your Answer
StackExchange.ifUsing("editor", function () {
StackExchange.using("externalEditor", function () {
StackExchange.using("snippets", function () {
StackExchange.snippets.init();
});
});
}, "code-snippets");
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "1"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2fstackoverflow.com%2fquestions%2f800383%2fwhat-is-the-difference-between-mutex-and-critical-section%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
9 Answers
9
active
oldest
votes
9 Answers
9
active
oldest
votes
active
oldest
votes
active
oldest
votes
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention - Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).
I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.
Here's the test code, I ran this and got similar results if mutex is first or second, so we aren't seeing any other effects.
HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
CRITICAL_SECTION critSec;
InitializeCriticalSection(&critSec);
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
LARGE_INTEGER start, end;
// Force code into memory, so we don't see any effects of paging.
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
}
QueryPerformanceCounter(&end);
int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
// Force code into memory, so we don't see any effects of paging.
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
}
QueryPerformanceCounter(&end);
int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
printf("Mutex: %d CritSec: %dn", totalTime, totalTimeCS);
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
1
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
1
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
5
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
2
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
|
show 1 more comment
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention - Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).
I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.
Here's the test code, I ran this and got similar results if mutex is first or second, so we aren't seeing any other effects.
HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
CRITICAL_SECTION critSec;
InitializeCriticalSection(&critSec);
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
LARGE_INTEGER start, end;
// Force code into memory, so we don't see any effects of paging.
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
}
QueryPerformanceCounter(&end);
int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
// Force code into memory, so we don't see any effects of paging.
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
}
QueryPerformanceCounter(&end);
int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
printf("Mutex: %d CritSec: %dn", totalTime, totalTimeCS);
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
1
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
1
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
5
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
2
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
|
show 1 more comment
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention - Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).
I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.
Here's the test code, I ran this and got similar results if mutex is first or second, so we aren't seeing any other effects.
HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
CRITICAL_SECTION critSec;
InitializeCriticalSection(&critSec);
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
LARGE_INTEGER start, end;
// Force code into memory, so we don't see any effects of paging.
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
}
QueryPerformanceCounter(&end);
int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
// Force code into memory, so we don't see any effects of paging.
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
}
QueryPerformanceCounter(&end);
int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
printf("Mutex: %d CritSec: %dn", totalTime, totalTimeCS);
For Windows, critical sections are lighter-weight than mutexes.
Mutexes can be shared between processes, but always result in a system call to the kernel which has some overhead.
Critical sections can only be used within one process, but have the advantage that they only switch to kernel mode in the case of contention - Uncontended acquires, which should be the common case, are incredibly fast. In the case of contention, they enter the kernel to wait on some synchronization primitive (like an event or semaphore).
I wrote a quick sample app that compares the time between the two of them. On my system for 1,000,000 uncontended acquires and releases, a mutex takes over one second. A critical section takes ~50 ms for 1,000,000 acquires.
Here's the test code, I ran this and got similar results if mutex is first or second, so we aren't seeing any other effects.
HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
CRITICAL_SECTION critSec;
InitializeCriticalSection(&critSec);
LARGE_INTEGER freq;
QueryPerformanceFrequency(&freq);
LARGE_INTEGER start, end;
// Force code into memory, so we don't see any effects of paging.
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
EnterCriticalSection(&critSec);
LeaveCriticalSection(&critSec);
}
QueryPerformanceCounter(&end);
int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
// Force code into memory, so we don't see any effects of paging.
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
QueryPerformanceCounter(&start);
for (int i = 0; i < 1000000; i++)
{
WaitForSingleObject(mutex, INFINITE);
ReleaseMutex(mutex);
}
QueryPerformanceCounter(&end);
int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);
printf("Mutex: %d CritSec: %dn", totalTime, totalTimeCS);
edited Apr 29 '09 at 1:21
Zifre
19.2k875101
19.2k875101
answered Apr 29 '09 at 0:38
Michael
46.6k594128
46.6k594128
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
1
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
1
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
5
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
2
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
|
show 1 more comment
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
1
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
1
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
5
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
2
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
beats me - maybe you should post your code. I voted you up one if it makes you feel better
– 1800 INFORMATION
Apr 29 '09 at 1:04
1
1
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
Well done. Upvoted.
– ApplePieIsGood
Apr 29 '09 at 3:18
1
1
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
Not sure if this relates or not (since I haven't compiled and tried your code), but I've found that calling WaitForSingleObject with INFINITE results in poor performance. Passing it a timeout value of 1 then looping while checking it's return has made a huge difference in the performance of some of my code. This is mostly in the context of waiting for an external process handle, however... Not a mutex. YMMV. I'd be interested in seeing how mutex performs with that modification. The resulting time difference from this test seems bigger than should be expected.
– Troy Howard
Jul 23 '09 at 5:37
5
5
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
@TroyHoward aren't you basically just spin locking at that point?
– dss539
Feb 21 '13 at 14:54
2
2
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
@TroyHoward try forcing your CPU to run at 100% all the time and see if INFINITE works better. The power strategy can take as long as 40ms on my machine (Dell XPS-8700) to crawl back up to full speed after it decides to slow down, which it may not do if you sleep or wait for only a millisecond.
– Stevens Miller
Aug 15 '16 at 18:14
|
show 1 more comment
From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
In practice there are many mutex implementation availiable in windows. They mainly differ as consequence of their implementation by their level of locking, their scopes, their costs, and their performance under different levels of contention. See CLR Inside Out -
Using concurrency for scalability for an chart of the costs of different mutex implementations.
Availiable synchronization primitives.
- Monitor
- Mutex
- Semaphore
- ReaderWriterLock
- ReaderWriterLockSlim
- Interlocked
The lock(object)
statement is implemented using a Monitor
- see MSDN for reference.
In the last years much research is done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes to finish their work so that the process can finally finish its work. In consequence a process can finish its work even when other processes, that tried to perform some work, hang. Usinig locks, they would not release their locks and prevent other processes from continuing.
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
1
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
add a comment |
From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
In practice there are many mutex implementation availiable in windows. They mainly differ as consequence of their implementation by their level of locking, their scopes, their costs, and their performance under different levels of contention. See CLR Inside Out -
Using concurrency for scalability for an chart of the costs of different mutex implementations.
Availiable synchronization primitives.
- Monitor
- Mutex
- Semaphore
- ReaderWriterLock
- ReaderWriterLockSlim
- Interlocked
The lock(object)
statement is implemented using a Monitor
- see MSDN for reference.
In the last years much research is done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes to finish their work so that the process can finally finish its work. In consequence a process can finish its work even when other processes, that tried to perform some work, hang. Usinig locks, they would not release their locks and prevent other processes from continuing.
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
1
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
add a comment |
From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
In practice there are many mutex implementation availiable in windows. They mainly differ as consequence of their implementation by their level of locking, their scopes, their costs, and their performance under different levels of contention. See CLR Inside Out -
Using concurrency for scalability for an chart of the costs of different mutex implementations.
Availiable synchronization primitives.
- Monitor
- Mutex
- Semaphore
- ReaderWriterLock
- ReaderWriterLockSlim
- Interlocked
The lock(object)
statement is implemented using a Monitor
- see MSDN for reference.
In the last years much research is done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes to finish their work so that the process can finally finish its work. In consequence a process can finish its work even when other processes, that tried to perform some work, hang. Usinig locks, they would not release their locks and prevent other processes from continuing.
From a theoretical perspective, a critical section is a piece of code that must not be run by multiple threads at once because the code accesses shared resources.
A mutex is an algorithm (and sometimes the name of a data structure) that is used to protect critical sections.
Semaphores and Monitors are common implementations of a mutex.
In practice there are many mutex implementation availiable in windows. They mainly differ as consequence of their implementation by their level of locking, their scopes, their costs, and their performance under different levels of contention. See CLR Inside Out -
Using concurrency for scalability for an chart of the costs of different mutex implementations.
Availiable synchronization primitives.
- Monitor
- Mutex
- Semaphore
- ReaderWriterLock
- ReaderWriterLockSlim
- Interlocked
The lock(object)
statement is implemented using a Monitor
- see MSDN for reference.
In the last years much research is done on non-blocking synchronization. The goal is to implement algorithms in a lock-free or wait-free way. In such algorithms a process helps other processes to finish their work so that the process can finally finish its work. In consequence a process can finish its work even when other processes, that tried to perform some work, hang. Usinig locks, they would not release their locks and prevent other processes from continuing.
edited Jan 15 '13 at 17:41
answered Apr 29 '09 at 1:14
Daniel Brückner
50.5k1080130
50.5k1080130
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
1
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
add a comment |
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
1
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
Seeing the accepted answer, I was thinking maybe I remembered the concept of critical sections wrong, till I saw that Theoretical Perspective you wrote. :)
– Anirudh Ramanathan
Oct 11 '12 at 5:17
1
1
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
Practical lock free programming is like Shangri La, except it exists. Keir Fraser's paper (PDF) explores this rather interestingly (going back to 2004). And we're still struggling with it in 2012. We suck.
– Tim Post♦
Oct 11 '12 at 15:07
add a comment |
In addition to the other answers, the following details are specific to critical sections on windows:
- in the absence of contention, acquiring a critical section is as simple as an
InterlockedCompareExchange
operation - the critical section structure holds room for a mutex. It is initially unallocated
- if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section will degrade to that of the mutex
- if you anticipate high contention, you can allocate the critical section specifying a spin count.
- if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
- if the spin count expires, the mutex will be allocated
- when the owning thread releases the critical section, it is required to check if the mutex is allocated, if it is then it will set the mutex to release a waiting thread
In linux, I think that they have a "spin lock" that serves a similar purpose to the critical section with a spin count.
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
1
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
add a comment |
In addition to the other answers, the following details are specific to critical sections on windows:
- in the absence of contention, acquiring a critical section is as simple as an
InterlockedCompareExchange
operation - the critical section structure holds room for a mutex. It is initially unallocated
- if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section will degrade to that of the mutex
- if you anticipate high contention, you can allocate the critical section specifying a spin count.
- if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
- if the spin count expires, the mutex will be allocated
- when the owning thread releases the critical section, it is required to check if the mutex is allocated, if it is then it will set the mutex to release a waiting thread
In linux, I think that they have a "spin lock" that serves a similar purpose to the critical section with a spin count.
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
1
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
add a comment |
In addition to the other answers, the following details are specific to critical sections on windows:
- in the absence of contention, acquiring a critical section is as simple as an
InterlockedCompareExchange
operation - the critical section structure holds room for a mutex. It is initially unallocated
- if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section will degrade to that of the mutex
- if you anticipate high contention, you can allocate the critical section specifying a spin count.
- if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
- if the spin count expires, the mutex will be allocated
- when the owning thread releases the critical section, it is required to check if the mutex is allocated, if it is then it will set the mutex to release a waiting thread
In linux, I think that they have a "spin lock" that serves a similar purpose to the critical section with a spin count.
In addition to the other answers, the following details are specific to critical sections on windows:
- in the absence of contention, acquiring a critical section is as simple as an
InterlockedCompareExchange
operation - the critical section structure holds room for a mutex. It is initially unallocated
- if there is contention between threads for a critical section, the mutex will be allocated and used. The performance of the critical section will degrade to that of the mutex
- if you anticipate high contention, you can allocate the critical section specifying a spin count.
- if there is contention on a critical section with a spin count, the thread attempting to acquire the critical section will spin (busy-wait) for that many processor cycles. This can result in better performance than sleeping, as the number of cycles to perform a context switch to another thread can be much higher than the number of cycles taken by the owning thread to release the mutex
- if the spin count expires, the mutex will be allocated
- when the owning thread releases the critical section, it is required to check if the mutex is allocated, if it is then it will set the mutex to release a waiting thread
In linux, I think that they have a "spin lock" that serves a similar purpose to the critical section with a spin count.
answered Apr 29 '09 at 1:03
1800 INFORMATION
98k23137223
98k23137223
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
1
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
add a comment |
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
1
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
Unfortunately a Window critical section involves doing a CAS operation in kernel mode, which is massively more expensive than the actual interlocked operation. Also, Windows critical sections can have spin counts associated with them.
– Promit
Apr 29 '09 at 1:10
1
1
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
That is definitly not true. CAS can be done with cmpxchg in user mode.
– Michael
Apr 29 '09 at 1:12
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
I thought the default spin count was zero if you called InitializeCriticalSection - you have to call InitializeCriticalSectionAndSpinCount if you want a spin count applied. Do you have a reference for that?
– 1800 INFORMATION
Apr 29 '09 at 1:24
add a comment |
Critical Section and Mutex are not Operating system specific, their concepts of multithreading/multiprocessing.
Critical Section
Is a piece of code that must only run by it self at any given time (for example, there are 5 threads running simultaneously and a function called "critical_section_function" which updates a array... you don't want all 5 threads updating the array at once. So when the program is running critical_section_function(), none of the other threads must run their critical_section_function.
mutex*
Mutex is a way of implementing the critical section code (think of it like a token... the thread must have possession of it to run the critical_section_code)
1
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
add a comment |
Critical Section and Mutex are not Operating system specific, their concepts of multithreading/multiprocessing.
Critical Section
Is a piece of code that must only run by it self at any given time (for example, there are 5 threads running simultaneously and a function called "critical_section_function" which updates a array... you don't want all 5 threads updating the array at once. So when the program is running critical_section_function(), none of the other threads must run their critical_section_function.
mutex*
Mutex is a way of implementing the critical section code (think of it like a token... the thread must have possession of it to run the critical_section_code)
1
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
add a comment |
Critical Section and Mutex are not Operating system specific, their concepts of multithreading/multiprocessing.
Critical Section
Is a piece of code that must only run by it self at any given time (for example, there are 5 threads running simultaneously and a function called "critical_section_function" which updates a array... you don't want all 5 threads updating the array at once. So when the program is running critical_section_function(), none of the other threads must run their critical_section_function.
mutex*
Mutex is a way of implementing the critical section code (think of it like a token... the thread must have possession of it to run the critical_section_code)
Critical Section and Mutex are not Operating system specific, their concepts of multithreading/multiprocessing.
Critical Section
Is a piece of code that must only run by it self at any given time (for example, there are 5 threads running simultaneously and a function called "critical_section_function" which updates a array... you don't want all 5 threads updating the array at once. So when the program is running critical_section_function(), none of the other threads must run their critical_section_function.
mutex*
Mutex is a way of implementing the critical section code (think of it like a token... the thread must have possession of it to run the critical_section_code)
answered Apr 29 '09 at 0:31
The Unknown
8,239266489
8,239266489
1
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
add a comment |
1
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
1
1
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
Also, mutexes can be shared across processes.
– configurator
Apr 29 '09 at 1:07
add a comment |
A mutex is an object that a thread can acquire, preventing other threads from acquiring it. It is advisory, not mandatory; a thread can use the resource the mutex represents without acquiring it.
A critical section is a length of code that is guaranteed by the operating system to not be interupted. In pseudo-code, it would be like:
StartCriticalSection();
DoSomethingImportant();
DoSomeOtherImportantThing();
EndCriticalSection();
2
Am I incorrect? I would appreciate it if down voters would comment with a reason.
– Zifre
Apr 29 '09 at 1:18
+1 because the down vote confuses me. :p I'd say this is more correct than the statements that hint to Mutex and Critical Section being two different mechanisms for multithreading. Critical section is any section of code which ought to be accessed only by one thread. Using mutexes is one way to implement critical sections.
– Mikko Rantanen
Apr 29 '09 at 1:22
1
I think the poster was talking about user mode synchronization primitives, like a win32 Critical section object, which just provides mutual exclusion. I don't know about Linux, but Windows kernel has critical regions which behave like you describe - non-interruptable.
– Michael
Apr 29 '09 at 1:22
1
I don't know why you got downvoted. There's the concept of a critical section, which you've described correctly, which is different from the Windows kernel object called a CriticalSection, which is a type of mutex. I believe the OP was asking about the latter definition.
– Adam Rosenfield
Apr 29 '09 at 1:22
At least I got confused by the language agnostic tag. But in any case this is what we get for Microsoft naming their implementation the same as their base class. Bad coding practice!
– Mikko Rantanen
Apr 29 '09 at 1:27
|
show 1 more comment
A mutex is an object that a thread can acquire, preventing other threads from acquiring it. It is advisory, not mandatory; a thread can use the resource the mutex represents without acquiring it.
A critical section is a length of code that is guaranteed by the operating system to not be interupted. In pseudo-code, it would be like:
StartCriticalSection();
DoSomethingImportant();
DoSomeOtherImportantThing();
EndCriticalSection();
edited Aug 20 '16 at 14:01
Mark Sowul
answered Apr 29 '09 at 0:28
Zifre
2
Am I incorrect? I would appreciate it if down voters would comment with a reason.
– Zifre
Apr 29 '09 at 1:18
+1 because the down vote confuses me. :p I'd say this is more correct than the statements that hint at Mutex and Critical Section being two different mechanisms for multithreading. Critical section is any section of code which ought to be accessed only by one thread. Using mutexes is one way to implement critical sections.
– Mikko Rantanen
Apr 29 '09 at 1:22
1
I think the poster was talking about user mode synchronization primitives, like a win32 Critical section object, which just provides mutual exclusion. I don't know about Linux, but Windows kernel has critical regions which behave like you describe - non-interruptable.
– Michael
Apr 29 '09 at 1:22
1
I don't know why you got downvoted. There's the concept of a critical section, which you've described correctly, which is different from the Windows kernel object called a CriticalSection, which is a type of mutex. I believe the OP was asking about the latter definition.
– Adam Rosenfield
Apr 29 '09 at 1:22
At least I got confused by the language agnostic tag. But in any case this is what we get for Microsoft naming their implementation the same as their base class. Bad coding practice!
– Mikko Rantanen
Apr 29 '09 at 1:27
The Linux equivalent of the 'fast' Windows critical section would be a futex, which stands for fast user-space mutex. The difference between a futex and an ordinary mutex is that with a futex the kernel only becomes involved when arbitration is required, so you save the overhead of talking to the kernel each time the atomic counter is modified. That can save a significant amount of time negotiating locks in some applications.
A futex can also be shared amongst processes, using the means you would employ to share a mutex.
Unfortunately, futexes can be very tricky to implement correctly (PDF). (2018 update: they aren't nearly as scary as they were in 2009.)
Beyond that, it's pretty much the same across both platforms. You're making atomic, token-driven updates to a shared structure in a manner that (hopefully) does not cause starvation; what differs is simply the method of accomplishing that.
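To illustrate the "kernel only on contention" point, here is a deliberately simplified futex-based lock in the style of Ulrich Drepper's "Futexes Are Tricky" paper (presumably the PDF linked above). It is a sketch, not production code: the names are made up, and a real implementation avoids the unconditional wake on unlock, among other refinements.
#include <atomic>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

static std::atomic<int> lock_word{0};      // 0 = unlocked, 1 = locked

static long futex_call(std::atomic<int>* addr, int op, int val)
{
    // Thin wrapper around the raw futex system call (glibc provides no wrapper).
    return syscall(SYS_futex, addr, op, val, nullptr, nullptr, 0);
}

void futex_lock()
{
    int expected = 0;
    // Fast path: an uncontended acquire is a single atomic CAS in user space.
    while (!lock_word.compare_exchange_strong(expected, 1)) {
        // Contended: sleep in the kernel until the word is no longer 1.
        futex_call(&lock_word, FUTEX_WAIT, 1);
        expected = 0;
    }
}

void futex_unlock()
{
    lock_word.store(0);
    // Wake one waiter, if any (a refined version only calls this when needed).
    futex_call(&lock_word, FUTEX_WAKE, 1);
}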
edited Nov 21 '18 at 13:56
answered Apr 29 '09 at 1:38
Tim Post♦
In Windows, a critical section is local to your process. A mutex can be shared/accessed across processes. Basically, critical sections are much cheaper. Can't comment on Linux specifically, but on some systems they're just aliases for the same thing.
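To make the cross-process point concrete: a Windows mutex can be given a name at creation, and any process that creates or opens the same name gets a handle to the same kernel object, which a CRITICAL_SECTION cannot do. A minimal sketch (the name is made up for the example):
#include <windows.h>
#include <cstdio>

int main()
{
    // Two processes running this code with the same name share one lock.
    HANDLE h = CreateMutexA(NULL, FALSE, "Local\\MyAppLock");
    if (h == NULL) {
        std::printf("CreateMutex failed: %lu\n", GetLastError());
        return 1;
    }

    WaitForSingleObject(h, INFINITE);   // acquire, even across process boundaries
    // ... touch the resource shared between the processes ...
    ReleaseMutex(h);

    CloseHandle(h);
    return 0;
}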
answered Apr 29 '09 at 0:25
Promit
Just to add my 2 cents: critical sections are defined as a user-mode structure, shown below as WinDbg's dt command displays it, and operations on them are performed in user-mode context.
ntdll!_RTL_CRITICAL_SECTION
+0x000 DebugInfo : Ptr32 _RTL_CRITICAL_SECTION_DEBUG
+0x004 LockCount : Int4B
+0x008 RecursionCount : Int4B
+0x00c OwningThread : Ptr32 Void
+0x010 LockSemaphore : Ptr32 Void
+0x014 SpinCount : Uint4B
Whereas mutexes are kernel objects (ExMutantObjectType) created in the Windows object directory. Mutex operations are mostly implemented in kernel mode; for instance, creating a mutex ends up calling nt!NtCreateMutant in the kernel.
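As a side note on the SpinCount field above: it is the knob exposed by InitializeCriticalSectionAndSpinCount, which lets the caller spin in user mode for a while before falling back to a kernel wait on contention. A small sketch; the value 4000 is purely illustrative:
#include <windows.h>

CRITICAL_SECTION g_cs;

void init_lock()
{
    // Spin up to 4000 iterations in user mode before waiting in the kernel
    // (the spin count is ignored on single-processor systems).
    InitializeCriticalSectionAndSpinCount(&g_cs, 4000);
}

void with_lock()
{
    EnterCriticalSection(&g_cs);
    // ... short critical section ...
    LeaveCriticalSection(&g_cs);
}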
answered Apr 29 '09 at 1:34
Martin
What happens when a program that initializes and uses a Mutex object crashes? Does the Mutex object get automatically deallocated? No, I would say. Right?
– Ankur
Oct 26 '09 at 12:30
5
Kernel objects have a reference count. Closing a handle to an object decrements the reference count and when it reaches 0 the object is freed. When a process crashes, all of its handles are automatically closed, so a mutex that only that process has a handle to would be automatically deallocated.
– Michael
Nov 18 '09 at 17:19
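A related detail: if the owning thread or process dies while holding the mutex (as opposed to merely closing its handles), the next successful wait returns WAIT_ABANDONED instead of WAIT_OBJECT_0, so the new owner knows the protected state may be inconsistent. A minimal sketch, with a made-up mutex name:
#include <windows.h>
#include <cstdio>

int main()
{
    HANDLE h = CreateMutexA(NULL, FALSE, "Local\\DemoCrashMutex");
    if (h == NULL) return 1;

    DWORD rc = WaitForSingleObject(h, INFINITE);
    if (rc == WAIT_ABANDONED) {
        // The previous owner exited without calling ReleaseMutex. We now own
        // the mutex, but the data it protected may be in an inconsistent state.
        std::printf("mutex was abandoned - validate shared state\n");
    }
    // ... use the shared resource ...
    ReleaseMutex(h);
    CloseHandle(h);
    return 0;
}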
Great answer from Michael. I've added a third test for the std::mutex class introduced in C++11. The result is somewhat interesting, and still supports his original endorsement of CRITICAL_SECTION objects for single processes.
#include <windows.h>
#include <mutex>     // std::mutex (C++11)
#include <cstdio>

int main()
{
    std::mutex m;
    HANDLE mutex = CreateMutex(NULL, FALSE, NULL);
    CRITICAL_SECTION critSec;
    InitializeCriticalSection(&critSec);

    LARGE_INTEGER freq;
    QueryPerformanceFrequency(&freq);
    LARGE_INTEGER start, end;

    // Force code into memory, so we don't see any effects of paging.
    EnterCriticalSection(&critSec);
    LeaveCriticalSection(&critSec);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        EnterCriticalSection(&critSec);
        LeaveCriticalSection(&critSec);
    }
    QueryPerformanceCounter(&end);
    int totalTimeCS = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    WaitForSingleObject(mutex, INFINITE);
    ReleaseMutex(mutex);
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        WaitForSingleObject(mutex, INFINITE);
        ReleaseMutex(mutex);
    }
    QueryPerformanceCounter(&end);
    int totalTime = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    // Force code into memory, so we don't see any effects of paging.
    m.lock();
    m.unlock();
    QueryPerformanceCounter(&start);
    for (int i = 0; i < 1000000; i++)
    {
        m.lock();
        m.unlock();
    }
    QueryPerformanceCounter(&end);
    int totalTimeM = (int)((end.QuadPart - start.QuadPart) * 1000 / freq.QuadPart);

    printf("C++ Mutex: %d  Mutex: %d  CritSec: %d\n", totalTimeM, totalTime, totalTimeCS);

    DeleteCriticalSection(&critSec);
    CloseHandle(mutex);
    return 0;
}
My results were 217, 473, and 19 milliseconds, in the order printed: C++ std::mutex, Windows mutex, and CRITICAL_SECTION. Note that my ratio of times for the last two is roughly comparable to Michael's, but my machine is at least four years newer than his, so you can see evidence of increased speed between 2009 and 2013, when the XPS-8700 came out. The new mutex class is about twice as fast as the Windows mutex, but still less than a tenth the speed of the Windows CRITICAL_SECTION object. Note that I only tested the non-recursive std::mutex; CRITICAL_SECTION objects are recursive (one thread can enter one repeatedly, provided it leaves the same number of times), as illustrated in the sketch below.
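For completeness, C++11 also offers std::recursive_mutex, which matches the re-entrant behavior of a CRITICAL_SECTION. This short illustration is not part of the benchmark above:
#include <mutex>
#include <cstdio>

std::recursive_mutex rm;

void inner()
{
    // Second acquisition by the same thread: fine with a recursive mutex,
    // but it would deadlock a plain std::mutex.
    std::lock_guard<std::recursive_mutex> guard(rm);
    std::printf("inner holds the lock too\n");
}

void outer()
{
    std::lock_guard<std::recursive_mutex> guard(rm);   // first acquisition
    inner();
}

int main()
{
    outer();
    return 0;
}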
answered Aug 15 '16 at 18:31
Stevens Miller