Without accessing the code, how can a reviewer confirm whether simulation results are correct?
I have been reviewing articles for various top-ranked journals and conferences for the last few years. After all this experience, I can tell you there is no way to confirm the correctness of simulation results. Therefore, I usually comment on the design, the procedure, and the mathematical and analytical analysis.
In the results section I can ask why this or that is so, but how can I judge whether the simulation was really performed, or whether the graphs are simply fabricated?
publications research-process peer-review review-articles
This question came to my mind because on a few occasions during the review process I observed a reviewer ask for new results that, in my opinion, would require a lot of coding and effort to implement, yet the author responded within 7–10 days with the new results and an improved article.
– MBK
7 hours ago
Why would this be different from an author stating the result of an experiment? The experimental protocol should be given, but there would be no expectation that the reviewer would perform the experiment again to verify that it came out as the authors said. In either case, the author could be mistaken or dishonest, but at least in the experiment case that wouldn't be something the reviewer would know.
– David Thornley
4 hours ago
@MBK: Regarding the case you mention in comments, it is surely very possible that the authors weren’t starting from scratch in implementing the reviewers’ suggestions, but had already independently considered those suggestions or something related, and so had a significant amount of the necessary code already written?
– PLL
21 mins ago
edited 55 mins ago by aeismail♦
asked 7 hours ago by MBK
4 Answers
Do you have reason to doubt their claims? Do they seem in some way unreasonable? If it isn't standard in your field to release code, I don't think a reviewer should necessarily demand it, regardless of your feelings about making code public.
The authors should describe their methodology in enough detail for someone else to replicate it; in doing so, they stake their reputations on the claim that anyone who duplicates their approach will find the same results. Fabricating results is a very serious accusation. There are statistical approaches to test whether data are likely to be fabricated, but their efficacy depends on the sophistication of the fabrication, and that question is better suited to CrossValidated.
If their work is meaningful in the field, at some point someone will implement their approach again. There is necessarily a bit of trust in science that people do what they say they've done.
answered 7 hours ago by Bryan Krause
"Do you have reason to doubt their claims?" Surely that's the job of any researcher?
– user2768
7 hours ago
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say "we tested X by doing Y" and you know Y is not the correct way to test X, that's a different kind of doubt than them saying "we tested X by doing Y" and you wondering whether they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone is making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
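The "statistical approaches" this answer alludes to vary by field. One simple illustration (a hedged sketch, not a fraud detector) is a terminal-digit uniformity check: the last digits of genuinely noisy measurements are often close to uniform, so a strongly non-uniform distribution of last digits can be a weak red flag worth a closer look. The function name and thresholds here are illustrative assumptions, not a standard tool.

```python
import random
from collections import Counter

def terminal_digit_chisq(values):
    """Chi-squared statistic for uniformity of the terminal (last) digits.

    A value far above the df=9, alpha=0.05 critical value (~16.92)
    suggests the last digits are not uniform -- a weak red flag, not proof.
    """
    digits = [int(str(abs(int(v)))[-1]) for v in values]
    n = len(digits)
    expected = n / 10
    counts = Counter(digits)
    return sum((counts.get(d, 0) - expected) ** 2 / expected for d in range(10))

random.seed(0)
noisy = [random.randrange(10_000) for _ in range(1_000)]  # plausible "measured" data
rounded = [v - v % 10 for v in noisy]                     # every last digit is 0

print(terminal_digit_chisq(noisy))    # typically modest for uniform digits
print(terminal_digit_chisq(rounded))  # enormous: flagged as non-uniform
```

Note this only catches crude problems; a careful fabricator produces uniform digits, which is exactly the "sophistication of the fabrication" caveat in the answer.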
You can't really judge whether the simulation was actually performed. That's why we've had things such as the Schön scandal: the reviewers of those manuscripts didn't detect the fraud either.
What you can do is apply a "smell test". Is the approach feasible? Are the results reasonable? Are there any glaring omissions? If you can't see any obvious problems with the simulation, that's good enough: the real peer review happens after publication.
answered 1 hour ago by Allure
"there is no way to confirm the correctness of simulation results."
Simulations should be repeatable; hence, correctness can be checked by re-running the simulation. Of course, the authors might not provide the necessary code, but then you can request the code as part of the review process.
answered 7 hours ago by user2768
It's not an easy task to implement someone's algorithm to check correctness. Sometimes the implementation may take 3–4 months.
– MBK
7 hours ago
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
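The repeatability this answer relies on can be made concrete: if the authors fix their random seeds and record the environment, a reviewer who does obtain the code can reproduce the exact numbers rather than re-implementing anything. A minimal sketch with a toy Monte Carlo simulation (the function and manifest fields are hypothetical illustrations, not a standard):

```python
import json
import platform
import random

SEED = 12345  # fixed seed: the whole run becomes exactly repeatable

def run_simulation(seed, n=100_000):
    """Toy stand-in for a paper's simulation: Monte Carlo estimate of pi."""
    rng = random.Random(seed)
    hits = sum(rng.random() ** 2 + rng.random() ** 2 <= 1.0 for _ in range(n))
    return 4 * hits / n

result = run_simulation(SEED)

# A reviewer re-running with the same seed must obtain the identical number.
assert run_simulation(SEED) == result

# Ship a manifest like this with the paper so others know what to reproduce.
print(json.dumps({"seed": SEED, "python": platform.python_version(),
                  "result": result}))
```

This addresses the "7–10 days" worry only partially: it verifies that the code produces the reported numbers, not (as David Thornley notes) that the code implements the algorithm described in the paper.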
"After all these experiences, I can tell you there is no way to confirm the correctness of simulation results."
That is not necessarily true. In some cases it is easy to discern that a graph cannot possibly be correct, or at the least has been badly misconstrued or misinterpreted. I had such a mistake caught in one of my early papers, and I have caught them in several papers I have reviewed.
It is not easy to prove that the simulations were actually performed. However, the Open Science framework is designed to make it easier to verify the results of both computational and experimental work.
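The impossible-graph check described above can sometimes be mechanized: many reported quantities have hard physical bounds (probabilities and delivery ratios must lie in [0, 1], efficiencies cannot exceed 100%), so points digitized from a figure can be screened against them. A hedged sketch with made-up numbers:

```python
def out_of_bounds(series, lo=0.0, hi=1.0):
    """Return the reported values that violate hard physical bounds."""
    return [v for v in series if not lo <= v <= hi]

# Hypothetical values read off a paper's "packet delivery ratio" curve.
reported = [0.82, 0.91, 0.97, 1.04, 0.99]
print(out_of_bounds(reported))  # -> [1.04]
```

A value like 1.04 does not prove fabrication, but it does prove that the graph as presented cannot be correct, which is exactly the kind of reviewer comment this answer recommends.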
Your Answer
StackExchange.ready(function() {
var channelOptions = {
tags: "".split(" "),
id: "415"
};
initTagRenderer("".split(" "), "".split(" "), channelOptions);
StackExchange.using("externalEditor", function() {
// Have to fire editor after snippets, if snippets enabled
if (StackExchange.settings.snippets.snippetsEnabled) {
StackExchange.using("snippets", function() {
createEditor();
});
}
else {
createEditor();
}
});
function createEditor() {
StackExchange.prepareEditor({
heartbeatType: 'answer',
autoActivateHeartbeat: false,
convertImagesToLinks: true,
noModals: true,
showLowRepImageUploadWarning: true,
reputationToPostImages: 10,
bindNavPrevention: true,
postfix: "",
imageUploader: {
brandingHtml: "Powered by u003ca class="icon-imgur-white" href="https://imgur.com/"u003eu003c/au003e",
contentPolicyHtml: "User contributions licensed under u003ca href="https://creativecommons.org/licenses/by-sa/3.0/"u003ecc by-sa 3.0 with attribution requiredu003c/au003e u003ca href="https://stackoverflow.com/legal/content-policy"u003e(content policy)u003c/au003e",
allowUrls: true
},
noCode: true, onDemand: true,
discardSelector: ".discard-answer"
,immediatelyShowMarkdownHelp:true
});
}
});
Sign up or log in
StackExchange.ready(function () {
StackExchange.helpers.onClickDraftSave('#login-link');
});
Sign up using Google
Sign up using Facebook
Sign up using Email and Password
Post as a guest
Required, but never shown
StackExchange.ready(
function () {
StackExchange.openid.initPostLogin('.new-post-login', 'https%3a%2f%2facademia.stackexchange.com%2fquestions%2f122840%2fwithout-accessing-the-code-how-can-a-reviewer-confirm-if-the-simulation-results%23new-answer', 'question_page');
}
);
Post as a guest
Required, but never shown
4 Answers
4
active
oldest
votes
4 Answers
4
active
oldest
votes
active
oldest
votes
active
oldest
votes
Do you have reason to doubt their claims? Do they seem in some way unreasonable? If it isn't standard in your field to release code I don't think a reviewer should necessarily demand it, regardless of your feelings about making code public.
The authors should describe their methodology sufficiently for someone else to replicate it; in that way, they are putting their reputations at risk that were someone to duplicate their approach they would find the same results. Fabricating results is a very serious accusation. There are some statistical approaches to test whether data are likely to be fabricated, but the efficacy of this approach depends on the sophistication of the fabrication, and that question is better suited to CrossValidated.
If their work is meaningful in the field, at some point someone will implement their approach again. There is necessarily a bit of trust in science that people do what they say they've done.
1
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
1
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
add a comment |
Do you have reason to doubt their claims? Do they seem in some way unreasonable? If it isn't standard in your field to release code I don't think a reviewer should necessarily demand it, regardless of your feelings about making code public.
The authors should describe their methodology sufficiently for someone else to replicate it; in that way, they are putting their reputations at risk that were someone to duplicate their approach they would find the same results. Fabricating results is a very serious accusation. There are some statistical approaches to test whether data are likely to be fabricated, but the efficacy of this approach depends on the sophistication of the fabrication, and that question is better suited to CrossValidated.
If their work is meaningful in the field, at some point someone will implement their approach again. There is necessarily a bit of trust in science that people do what they say they've done.
1
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
1
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
add a comment |
Do you have reason to doubt their claims? Do they seem in some way unreasonable? If it isn't standard in your field to release code I don't think a reviewer should necessarily demand it, regardless of your feelings about making code public.
The authors should describe their methodology sufficiently for someone else to replicate it; in that way, they are putting their reputations at risk that were someone to duplicate their approach they would find the same results. Fabricating results is a very serious accusation. There are some statistical approaches to test whether data are likely to be fabricated, but the efficacy of this approach depends on the sophistication of the fabrication, and that question is better suited to CrossValidated.
If their work is meaningful in the field, at some point someone will implement their approach again. There is necessarily a bit of trust in science that people do what they say they've done.
Do you have reason to doubt their claims? Do they seem in some way unreasonable? If it isn't standard in your field to release code I don't think a reviewer should necessarily demand it, regardless of your feelings about making code public.
The authors should describe their methodology sufficiently for someone else to replicate it; in that way, they are putting their reputations at risk that were someone to duplicate their approach they would find the same results. Fabricating results is a very serious accusation. There are some statistical approaches to test whether data are likely to be fabricated, but the efficacy of this approach depends on the sophistication of the fabrication, and that question is better suited to CrossValidated.
If their work is meaningful in the field, at some point someone will implement their approach again. There is necessarily a bit of trust in science that people do what they say they've done.
answered 7 hours ago
Bryan KrauseBryan Krause
11.9k13658
11.9k13658
1
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
1
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
add a comment |
1
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
1
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
1
1
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
Do you have reason to doubt their claims? Surely that's the job of any researcher?
– user2768
7 hours ago
1
1
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
@user2768 Of course I mean beyond basic skepticism. You should doubt their approach as presented: if they say we tested X by doing Y and you know Y is not the correct way to test X, that's a different kind of doubt than them saying we tested X by doing Y and you wondering if they actually just made it up instead of ever doing Y. The standard is to provide enough information for someone else to replicate; if someone was making a truly remarkable claim, there is more reason to demand to see their code than if they are showing an incremental improvement.
– Bryan Krause
6 hours ago
add a comment |
You can't really judge if the simulation was really performed. That's why we've had things such as the Schön scandal - the reviewers of those manuscripts didn't detect the fraud either.
What you can do is implement the "smell test". Is this approach feasible? Are the results reasonable? Were there any glaring omissions? If you can't see any obvious problems with the simulation, that's good enough: the real peer review happens after publication.
add a comment |
You can't really judge if the simulation was really performed. That's why we've had things such as the Schön scandal - the reviewers of those manuscripts didn't detect the fraud either.
What you can do is implement the "smell test". Is this approach feasible? Are the results reasonable? Were there any glaring omissions? If you can't see any obvious problems with the simulation, that's good enough: the real peer review happens after publication.
add a comment |
You can't really judge if the simulation was really performed. That's why we've had things such as the Schön scandal - the reviewers of those manuscripts didn't detect the fraud either.
What you can do is implement the "smell test". Is this approach feasible? Are the results reasonable? Were there any glaring omissions? If you can't see any obvious problems with the simulation, that's good enough: the real peer review happens after publication.
You can't really judge if the simulation was really performed. That's why we've had things such as the Schön scandal - the reviewers of those manuscripts didn't detect the fraud either.
What you can do is implement the "smell test". Is this approach feasible? Are the results reasonable? Were there any glaring omissions? If you can't see any obvious problems with the simulation, that's good enough: the real peer review happens after publication.
answered 1 hour ago
AllureAllure
27.4k1482134
27.4k1482134
add a comment |
add a comment |
there is no way to confirm the correctness of simulation results.
Simulations should be repeatable, hence, correctness can be checked by re-running the simulation. Of course, the authors might not provide the necessary code, but then you can request the code as a part of the review process.
1
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
2
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
3
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
2
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
1
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
|
show 6 more comments
there is no way to confirm the correctness of simulation results.
Simulations should be repeatable, hence, correctness can be checked by re-running the simulation. Of course, the authors might not provide the necessary code, but then you can request the code as a part of the review process.
1
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
2
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
3
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
2
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
1
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
|
show 6 more comments
there is no way to confirm the correctness of simulation results.
Simulations should be repeatable, hence, correctness can be checked by re-running the simulation. Of course, the authors might not provide the necessary code, but then you can request the code as a part of the review process.
there is no way to confirm the correctness of simulation results.
Simulations should be repeatable, hence, correctness can be checked by re-running the simulation. Of course, the authors might not provide the necessary code, but then you can request the code as a part of the review process.
answered 7 hours ago
user2768user2768
11.9k23052
11.9k23052
1
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
2
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
3
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
2
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
1
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
|
show 6 more comments
1
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
2
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
3
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
2
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
1
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
1
1
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
Its not an easy task to implement someones algorithm to check correctness. Sometime the implementation may take 3~4 months
– MBK
7 hours ago
2
2
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
@MBK I don't recommend implementing; I recommend repeating. If the authors won't let you repeat (by denying access to code), then I'd be inclined to reject, but I'd consult with the editor.
– user2768
7 hours ago
3
3
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
@user2768 So, if I were to run some simulations using software I didn't have a license to redistribute, I shouldn't be able to publish my results?
– David Thornley
4 hours ago
2
2
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
@MBK Just running the same code on the same data tells you very little without actually examining the code to make sure it implements the algorithm(s) of the paper. It tells you that the author(s) didn't outright lie about the results, and that's all.
– David Thornley
4 hours ago
1
1
@SylvainRibault In many areas of research it isn't possible, for various reasons, to share all the raw data involved. Should none of that research be published either? Should we save and distribute blood samples to anyone that wants to verify the results of a study of inflammatory biomarkers? What if the process of analysis destroys the sample? Trust is an integral part of academic research.
– Bryan Krause
4 hours ago
After all these experiences, I can tell you there is no way to confirm the correctness of simulation results.
That is not necessarily true. In some cases, it is easy to discern that a graph cannot possibly be correct, or at the least has been badly misconstrued or misinterpreted. I had such a mistake caught in one of my early papers, and I have caught them in several papers I have reviewed.
It is not easy to prove that the simulations were actually performed. However, the Open Science Framework is designed to make it easier to verify the results of both computational and experimental work.
answered 47 mins ago
– aeismail♦
@MBK: Regarding the case you mention in comments, it is surely very possible that the authors weren’t starting from scratch in implementing the reviewers’ suggestions, but had already independently considered those suggestions or something related, and so had a significant amount of the necessary code already written?
– PLL
21 mins ago