How to upload files in a dir to S3 when files are continuously being written to the dir
I am uploading all files from a directory to S3 using TransferManager, and the upload itself works. My problem is that files are continuously being written into the same directory. How should I invoke the upload method so that new files also reach S3? Do I have to call it on a fixed interval? Please suggest the best way to trigger it.
public void uploadDir(Path strFile, String strFileName) throws IOException {
    List<File> files = new ArrayList<>();
    // List the directory's entries. Iterating a Path directly walks its
    // name components, not the files inside the directory.
    try (DirectoryStream<Path> stream = Files.newDirectoryStream(strFile)) {
        for (Path path : stream) {
            files.add(path.toFile());
        }
    }
    TransferManager xferMgr = TransferManagerBuilder.standard().build();
    try {
        MultipleFileUpload xfer = xferMgr.uploadFileList(bucketName, strFileName, strFile.toFile(), files);
        xfer.waitForCompletion();
    } catch (AmazonServiceException e) {
        System.err.println(e.getErrorMessage());
        System.exit(1);
    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    }
}
java amazon-web-services amazon-s3
The best thing to do would be to listen for changes in that directory and upload the changed objects to S3 only when a change happens. See stackoverflow.com/questions/23452527/…
– Matthew Pope
Nov 25 '18 at 2:40
@MatthewPope Yes, I am doing that, but files keep coming, so how do we control the events?
– Anupam
Nov 25 '18 at 3:14
I’m not sure what you’re asking then. Can you try to clarify your question?
– Matthew Pope
Nov 25 '18 at 3:41
@Anupam If I understood correctly: files are continuously being written to your local directory, you have a service that keeps listening to the directory, but the problem is that the file listener picks up files before their creators have finished writing them. Is that accurate? Please modify the question if it is not.
– Red Boy
Nov 25 '18 at 16:47
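The directory-listening approach discussed in these comments can be sketched with `java.nio.file.WatchService` from the JDK. This is a minimal, hypothetical example (the class and method names are mine, not from the question); the actual S3 hand-off is left as a comment, since wiring in TransferManager depends on your configuration.

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

public class DirWatcher {

    // Collect the paths of files created in `dir` during one poll cycle.
    static List<Path> pollNewFiles(WatchService watcher, Path dir, long timeoutMs)
            throws InterruptedException {
        List<Path> created = new ArrayList<>();
        WatchKey key = watcher.poll(timeoutMs, TimeUnit.MILLISECONDS);
        if (key != null) {
            for (WatchEvent<?> event : key.pollEvents()) {
                if (event.kind() == StandardWatchEventKinds.ENTRY_CREATE) {
                    // event.context() is the created file's path relative to dir
                    created.add(dir.resolve((Path) event.context()));
                }
            }
            key.reset();  // re-arm the key so further events are delivered
        }
        return created;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        Path dir = Paths.get(args[0]);
        WatchService watcher = FileSystems.getDefault().newWatchService();
        dir.register(watcher, StandardWatchEventKinds.ENTRY_CREATE);
        while (true) {
            for (Path p : pollNewFiles(watcher, dir, 1000)) {
                System.out.println("new file, upload candidate: " + p);
                // here you would hand `p` to TransferManager (see the answer
                // below for when it is safe to do so)
            }
        }
    }
}
```

Note that a create event fires when the file appears, not when writing finishes, which is exactly the problem Red Boy describes; combine this with one of the answer's techniques before uploading.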
asked Nov 24 '18 at 18:10
Anupam
1 Answer
A couple of solutions; you could try either, based on your need.

Solution 1: For a scenario like yours, instead of a fixed time interval, use file age.

File age is the time since a file was last modified; it is a common concept used by file pollers, for both local and remote directories. For example, if your files take at most 20 seconds to finish writing, only pick up files that were last modified 20 or more seconds ago.
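A minimal sketch of the file-age idea, assuming a 20-second settle window; the class and helper names here are illustrative, not from the answer:

```java
import java.io.IOException;
import java.nio.file.*;
import java.util.ArrayList;
import java.util.List;

public class FileAgeFilter {

    // A file is considered "settled" once it has gone at least
    // minAgeMillis without being modified.
    static boolean isSettled(long lastModifiedMillis, long nowMillis, long minAgeMillis) {
        return nowMillis - lastModifiedMillis >= minAgeMillis;
    }

    // Return only the directory entries old enough to upload safely.
    static List<Path> settledFiles(Path dir, long minAgeMillis) throws IOException {
        List<Path> result = new ArrayList<>();
        long now = System.currentTimeMillis();
        try (DirectoryStream<Path> stream = Files.newDirectoryStream(dir)) {
            for (Path p : stream) {
                long modified = Files.getLastModifiedTime(p).toMillis();
                if (isSettled(modified, now, minAgeMillis)) {
                    result.add(p);
                }
            }
        }
        return result;
    }
}
```

The list returned by `settledFiles(dir, 20_000)` is then what you would pass to `uploadFileList`, instead of every file currently in the directory.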
Solution 2: Ask your clients (the programs generating the files) to write each file under a temporary extension, say .tmp, and rename it to the real extension once writing is complete; then modify your program to skip files with the .tmp extension. For example, write abc.jpg as abc.jpg.tmp while it is in progress, and rename it to abc.jpg when writing is finished.
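Both sides of the .tmp convention can be sketched as follows; this is a hypothetical illustration (names are mine), not the answerer's code:

```java
import java.io.IOException;
import java.nio.file.*;

public class TmpExtensionConvention {

    static final String TMP_SUFFIX = ".tmp";

    // Uploader side: skip files the writer has not finished yet.
    static boolean isComplete(Path file) {
        return !file.getFileName().toString().endsWith(TMP_SUFFIX);
    }

    // Writer side: write the content into name.tmp, then rename it to
    // name in one atomic step so the uploader never sees a partial file.
    static Path writeThenRename(Path target, byte[] content) throws IOException {
        Path tmp = target.resolveSibling(target.getFileName() + TMP_SUFFIX);
        Files.write(tmp, content);
        return Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}
```

The rename is atomic on the same filesystem, so the uploader's filter (`isComplete`) either sees no file or a fully written one, never a half-written abc.jpg.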
Hope this helps.
answered Nov 24 '18 at 18:25
Red Boy