MPI for loop in C

I need to loop over a two-dimensional array, applying an operation to every element, for some number of iterations. Here is my code:



for (for_iters = 0; for_iters < ITERS; for_iters++)
{
    diff = 0.0;

    /* five-point stencil update over the interior of A */
    for (i = 1; i < n; i++)
    {
        for (j = 1; j < n; j++)
        {
            tmp = A[i][j];   /* old value, presumably for a convergence check */
            A[i][j] = 0.3 * (A[i][j] + A[i][j-1] + A[i-1][j]
                           + A[i][j+1] + A[i+1][j]);
        }
    }
    iters++;
} /* for_iters */


The problem is to translate the above code to MPI. I think I could divide the elements of the array so that every process deals with a part of it, but I have no idea what to do with the outer loop if every line of the code is executed once per process. If I have 3 processes, would I create 3 outer loops?

c mpi

asked Nov 19 at 20:05 by afs2
edited Nov 19 at 23:21 by dmcgrandle

  • By outer loop do you mean the for (for_iters=0;for_iters<ITERS;for_iters++)? You can easily split the matrix computation by for instance making different processes perform computations on different matrix rows (and all associated columns). The actual first loop could be left as it is. You can also split the outer loop and have each process compute all matrix values.
    – atru
    Nov 19 at 20:15
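
A minimal sketch of the row split suggested in the comment above, assuming MPI_Init has already been called and that A, n, i and j are declared as in the question; the names rank, size, first_row and last_row are only illustrative:

int rank, size;
MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* this process's id         */
MPI_Comm_size(MPI_COMM_WORLD, &size);   /* total number of processes */

/* Interior rows are 1 .. n-1; hand each rank a contiguous block,
   giving the first (n-1) % size ranks one extra row. */
int rows      = n - 1;
int base      = rows / size;
int extra     = rows % size;
int first_row = 1 + rank * base + (rank < extra ? rank : extra);
int last_row  = first_row + base + (rank < extra ? 1 : 0);   /* exclusive */

/* Every rank runs the same loops, only over its own rows. */
for (i = first_row; i < last_row; i++)
    for (j = 1; j < n; j++)
        A[i][j] = 0.3 * (A[i][j] + A[i][j-1] + A[i-1][j] + A[i][j+1] + A[i+1][j]);

On its own this is not enough, because rows first_row-1 and last_row now belong to neighbouring processes; a sketch of that boundary exchange follows the last comment below.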












  • Come to think of it, it makes little sense to parallelize the iterative outermost loop - you need the previous solution in order to compute the current one. In that case the simple parallelization is the first option, i.e. splitting the second loop.
    – atru
    Nov 19 at 20:17










  • But the n-th iteration needs all the results of the (n-1)-th iteration; if I split the outer loop I think I won't get this result.
    – afs2
    Nov 19 at 20:19










  • That's the second comment; also, your current code has some typos. I suggest you first make a serial version, run it and confirm its validity, then move on to MPI. If I were you I would simply split it by rows - e.g. process 0 gets rows 0 to some Ni, and so on - with each process computing across all columns. I would split the rows evenly, handling any remainder with a modulo. There are online resources for this. You will also need to communicate the boundary rows at the end of each iteration (row i and column j are known on the current process, but rows i-1 and i+1 are not if i lies on the boundary of the process's block).
    – atru
    Nov 19 at 20:23
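
A rough skeleton of the iteration loop with the boundary-row exchange described in the comment above. It is only a sketch under several assumptions: A holds doubles; each rank keeps its block in a local array local_A with local_rows interior rows plus a ghost row above (local_A[0]) and below (local_A[local_rows+1]); each row has n+1 entries (columns 0 to n); and up/down are the ranks above and below, set to MPI_PROC_NULL on the first and last rank. All of these names are illustrative, not from the original code.

for (for_iters = 0; for_iters < ITERS; for_iters++)
{
    /* Exchange ghost rows with the neighbours before updating:
       send my first interior row up, receive the row just below my block
       from the rank underneath into my bottom ghost row ...              */
    MPI_Sendrecv(local_A[1],              n + 1, MPI_DOUBLE, up,   0,
                 local_A[local_rows + 1], n + 1, MPI_DOUBLE, down, 0,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);
    /* ... and symmetrically: send my last interior row down, receive
       the row just above my block into my top ghost row.                 */
    MPI_Sendrecv(local_A[local_rows],     n + 1, MPI_DOUBLE, down, 1,
                 local_A[0],              n + 1, MPI_DOUBLE, up,   1,
                 MPI_COMM_WORLD, MPI_STATUS_IGNORE);

    /* Same stencil as the serial code, restricted to this rank's rows. */
    for (i = 1; i <= local_rows; i++)
        for (j = 1; j < n; j++)
            local_A[i][j] = 0.3 * (local_A[i][j] + local_A[i][j-1] +
                                   local_A[i-1][j] + local_A[i][j+1] +
                                   local_A[i+1][j]);
}

Note that the serial loop updates A in place, so some neighbours are read after they have already been updated within the same sweep; with a once-per-iteration ghost exchange the parallel version will not reproduce those intermediate values exactly unless the update is made out-of-place (for example by writing into a second array). After the last iteration the blocks can be collected on one rank with, for example, MPI_Gatherv.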