Twitter scraper using tweepy

I wrote a Twitter scraper using tweepy so I can collect user information and tweets. Since the free API doesn't let me get the number of messages per tweet, I had to rely on BeautifulSoup to do so.



import tweepy
import requests
from bs4 import BeautifulSoup


class TweetAPI:
    def __init__(self, k1, k2, k3, k4):
        self.key = k1
        self.secret_key = k2
        self.token = k3
        self.secret_token = k4

        auth = tweepy.OAuthHandler(self.key, self.secret_key)
        auth.set_access_token(self.token, self.secret_token)

        self.api = tweepy.API(auth, wait_on_rate_limit=True)

    def tweet_getter(self, user_id, n):
        tweets = []
        try:
            for tweet in tweepy.Cursor(self.api.user_timeline, id=user_id).items(n):
                # The free API doesn't expose the reply count, so scrape it
                # from the tweet's public page.
                url = "https://twitter.com/{}/status/{}".format(user_id, tweet.id_str)
                page = requests.get(url)
                soup = BeautifulSoup(page.content, 'html.parser')
                message_count = int(soup.find('span', {"class": "ProfileTweet-actionCount"})
                                    .text.strip().split()[0])
                tweets.append([user_id, tweet.created_at, tweet.id_str,
                               tweet.favorite_count, tweet.retweet_count,
                               message_count, tweet.text])
            return tweets
        except Exception:
            print("Unable to get user @{} tweets".format(user_id))
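
For context, here is a hypothetical usage of the class; the four credential names are placeholders for values you would obtain from the Twitter developer dashboard, not anything from the post itself:

# CONSUMER_KEY etc. are placeholder names for your own credentials.
scraper = TweetAPI(CONSUMER_KEY, CONSUMER_SECRET, ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
rows = scraper.tweet_getter("some_user", 10)
# Each row: [user_id, created_at, id_str, favorite_count, retweet_count, message_count, text]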


I'm worried about two things:

  1. Does wait_on_rate_limit=True really prevent me from exceeding the API's rate limits?

  2. Should I add an artificial delay to the BeautifulSoup part to avoid being blocked from fetching page content from the Twitter website? (One possible approach is sketched below.)
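
On the first point: in tweepy 3.x, wait_on_rate_limit=True makes the client sleep until the rate-limit window resets instead of raising an error, and it can be paired with wait_on_rate_limit_notify=True to print a notice whenever it waits. On the second: a common precaution is to pause between page fetches with a little random jitter, so requests don't arrive on a fixed cadence. A minimal sketch of what that could look like (the 1-3 second bounds are an assumption to illustrate the idea, not documented Twitter thresholds):

import random
import time

import requests

def polite_get(url, min_delay=1.0, max_delay=3.0):
    # Randomized pause before each fetch; tune the bounds (assumed here,
    # not documented by Twitter) to how aggressively you're scraping.
    time.sleep(random.uniform(min_delay, max_delay))
    return requests.get(url, timeout=10)

In tweet_getter, page = requests.get(url) would then become page = polite_get(url).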











python python-3.x beautifulsoup twitter






asked 5 hours ago by Frank Pinto (edited 5 mins ago by Jamal)

Frank Pinto is a new contributor to this site. Take care in asking for clarification, commenting, and answering. Check out our Code of Conduct.












  • Why are you using the API but then sending a request to the user-facing page and parsing the HTML for the tweet? This could break if Twitter changes that page. You should already have the info you need coming back from the API (or if not, there should be an API endpoint to request it).
    – Bailey Parker
    1 min ago
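
Following up on this comment: the standard (free) v1.1 API doesn't return a reply count, which is presumably why the question falls back to HTML scraping; the paid premium/enterprise tiers do expose a reply_count field on the tweet payload. If such access were available, the scraping step could shrink to an attribute lookup. A sketch, with scrape_reply_count standing in for the BeautifulSoup logic from the question:

# Assumes an API tier whose tweet payload includes reply_count
# (premium/enterprise in v1.1); the standard tier does not provide it.
message_count = getattr(tweet, 'reply_count', None)
if message_count is None:
    message_count = scrape_reply_count(url)  # hypothetical helper wrapping the BeautifulSoup code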



















