How to avoid false sharing for an OpenMP loop using schedule clause
I have false sharing in a function:
__inline static
void calculateClusterCentroIDs(int numCoords, int numObjs, int numClusters,
                               float *dataSetMatrix, int *clusterAssignmentCurrent, float *clustersCentroID)
{
    int *clusterMemberCount = (int *) calloc(numClusters, sizeof(float));

    // sum all points
    // for every point
    #pragma omp parallel for schedule(static, 16)
    for (int i = 0; i < numObjs; ++i) {
        // which cluster is it in?
        int activeCluster = clusterAssignmentCurrent[i];

        // update count of members in that cluster
        #pragma omp atomic
        ++clusterMemberCount[activeCluster];

        // sum point coordinates for finding centroid
        #pragma omp parallel for schedule(dynamic, 16)
        for (int j = 0; j < numCoords; ++j)
            #pragma omp atomic
            clustersCentroID[activeCluster*numCoords + j] += dataSetMatrix[i*numCoords + j];
    }

    // now divide each coordinate sum by number of members to find mean/centroid
    // for each cluster
    #pragma omp parallel for schedule(dynamic, 16)
    for (int i = 0; i < numClusters; ++i) {
        if (clusterMemberCount[i] != 0)
            // for each dimension
            for (int j = 0; j < numCoords; ++j)
                #pragma omp atomic
                clustersCentroID[i*numCoords + j] /= clusterMemberCount[i]; /// XXXX will divide by zero here for any empty clusters!
    }

    free(clusterMemberCount);
}
I think the false sharing occurs on the commented line:
#pragma omp parallel for schedule(dynamic, 16)
for (int i = 0; i < numClusters; ++i) {
    if (clusterMemberCount[i] != 0)
        for (int j = 0; j < numCoords; ++j)
            #pragma omp atomic
            //HERE
            clustersCentroID[i*numCoords + j] /= clusterMemberCount[i];
}
I would like to avoid the false sharing with a #pragma clause. From what I have researched, a cache line is typically 64 bytes, so it holds 16 float values. I thought #pragma omp parallel for schedule(static, 16) would fix it (giving each thread a whole number of cache lines), but it did not work.
Is there any way to solve the false sharing using the schedule clause of a #pragma directive?
c openmp
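A note on the arithmetic in the question: on typical x86 hardware a cache line is 64 bytes, i.e. 16 floats, but the loop in question iterates over clusters, and each cluster row occupies numCoords floats, so a chunk of 16 iterations does not generally end on a cache-line boundary. Below is a minimal sketch of chunk arithmetic that does. The function names, the 64-byte figure, and the assumption that clustersCentroID is 64-byte aligned (e.g. allocated with posix_memalign) are illustrative and not from the question.

/* 64-byte cache line / 4-byte float: an assumption about the target CPU. */
#define FLOATS_PER_LINE 16

/* Illustrative helper: smallest chunk of clusters whose centroid rows span a
 * whole number of cache lines (chunk * numCoords is a multiple of 16 floats).
 * Never exceeds FLOATS_PER_LINE. */
static int cacheAlignedChunk(int numCoords)
{
    int chunk = 1;
    while ((chunk * numCoords) % FLOATS_PER_LINE != 0)
        ++chunk;
    return chunk;
}

/* Illustrative rewrite of the final averaging loop: with a static schedule,
 * a cache-line-aligned chunk and a 64-byte-aligned clustersCentroID, no two
 * threads ever write to the same cache line, and each element is written by
 * exactly one iteration, so the atomic is not needed. */
static void divideCentroids(int numClusters, int numCoords,
                            float *clustersCentroID, const int *clusterMemberCount)
{
    int chunk = cacheAlignedChunk(numCoords);

    #pragma omp parallel for schedule(static, chunk)
    for (int i = 0; i < numClusters; ++i) {
        if (clusterMemberCount[i] != 0)
            for (int j = 0; j < numCoords; ++j)
                clustersCentroID[i*numCoords + j] /= clusterMemberCount[i];
    }
}

Whether this helps in practice depends on the sizes involved; with only numClusters * numCoords cheap divisions, thread startup and scheduling overhead can easily outweigh any false-sharing cost.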
How are you determining if you have false sharing? And does it stop when you get rid of that atomic pragma?
– Shawn
Nov 20 at 20:05
I determine it because the execution time is slower than the sequential version. After analyzing the code, I think the problem is there.
– JuMoGar
Nov 20 at 20:08
So you haven't run your code through a profiler that generates stats for cache issues etc. to help tell for sure? Have you tried taking out that atomic bit (which I'm pretty sure isn't needed; no two threads should ever have i*numCoords + j evaluate to the same number)? Gotten rid of the schedule clause, or at least increased the chunk size to a more reasonable number for something so straightforward and changed it to static? Tested with enough data to be worth the overhead of threads?
– Shawn
Nov 20 at 20:26
Yes, I have tried with enough data. No, I have not used a profiler; I tried gprof but I do not understand its results. I have removed the atomic, but then the result is not correct (I thought the same as you, that the atomic was not necessary, but I was wrong).
– JuMoGar
Nov 20 at 20:57
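For reference, a sketch of what the suggestion in the comments amounts to for the final loop only (the function name is illustrative; this is not code from the thread): every index i*numCoords + j is written by exactly one iteration, so the atomic can go, and a plain static schedule hands each thread one contiguous block of clusters. The atomics in the first loop are a separate matter, since different points i can map to the same activeCluster, which matches the observation that removing them broke the result.

/* Illustrative version of the comment's suggestion: the atomic is unnecessary
 * here because each element of clustersCentroID is written by exactly one
 * iteration, and the default static schedule gives every thread a single
 * contiguous block of clusters. */
static void averageCentroids(int numClusters, int numCoords,
                             float *clustersCentroID, const int *clusterMemberCount)
{
    #pragma omp parallel for schedule(static)
    for (int i = 0; i < numClusters; ++i) {
        if (clusterMemberCount[i] != 0)
            for (int j = 0; j < numCoords; ++j)
                clustersCentroID[i*numCoords + j] /= clusterMemberCount[i];
    }
}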
asked Nov 20 at 17:52 – JuMoGar