How to set the upper bound on a scrapy spider ReturnsContract
I want to limit the number of items I find on each page.
I found this documentation that seems to fit what I need:

class scrapy.contracts.default.ReturnsContract

This contract (@returns) sets lower and upper bounds for the items and requests returned by the spider. The upper bound is optional:

@returns item(s)|request(s) [min [max]]

But I don't understand how to use this class. In my spider, I tried to add

ReturnsContract.__setattr__("max", 10)

but it didn't work. Am I missing something?
python scrapy web-crawler
asked Nov 19 at 17:45
Mrtnchps
1 Answer
The Spider Contracts are meant for testing purposes, not for controlling your data extraction logic.

Testing spiders can get particularly annoying, and while nothing prevents you from writing unit tests, the task gets cumbersome quickly. Scrapy offers an integrated way of testing your spiders by means of contracts.

This allows you to test each callback of your spider by hardcoding a sample URL and checking various constraints for how the callback processes the response. Each contract is prefixed with an @ and included in the docstring.
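To illustrate that convention (a sketch only; the spider callback and sample URL are hypothetical), the @returns contract from the question belongs in the callback's docstring, where `scrapy check` reads it. A normal crawl ignores it entirely:

```python
# Sketch: contracts live in the callback's docstring and are enforced by
# "scrapy check <spider>", not by a regular crawl. The URL is hypothetical.
def parse(self, response):
    """Parse a listing page.

    @url http://quotes.toscrape.com
    @returns items 1 10
    """
    # "@returns items 1 10" means: scrapy check fails unless this callback
    # yields between 1 and 10 items for the sample URL above.
    ...
```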
For your purpose, you can simply set an upper bound in your extraction logic, for example:
response.xpath('//my/xpath').extract()[:10]
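Expanded into a callback (a sketch; the XPath and field name are placeholders), the slice caps how many items are yielded, and it is safe even when a page has fewer matches:

```python
MAX_ITEMS = 10  # illustrative cap

def limited(results, limit=MAX_ITEMS):
    """Return at most `limit` results; slicing past the end never raises."""
    return results[:limit]

def parse(self, response):
    # response.xpath(...).extract() returns a plain list of strings,
    # so an ordinary slice is all that is needed to cap it.
    for title in limited(response.xpath('//my/xpath').extract()):
        yield {"title": title}
```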
I want to add an upper bound so my spider is "polite". I think this still scrapes the full website but only returns 10 results. It is still better than what I found, so I will probably use it.
– Mrtnchps
Nov 19 at 19:08
If you want to limit the number of items or the number of pages to crawl, take a look at the close spider extension: doc.scrapy.org/en/latest/topics/…. You can configure your spider to stop after it has scraped X items or requested X pages.
– Guillaume
Nov 19 at 19:24
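For completeness, a sketch of that approach: the CloseSpider extension is driven by settings (in settings.py or a spider's custom_settings dict); the threshold values below are illustrative:

```python
# settings.py fragment: the CloseSpider extension stops the crawl once a
# threshold is reached. The values here are illustrative.
CLOSESPIDER_ITEMCOUNT = 10    # stop after roughly 10 items have been scraped
CLOSESPIDER_PAGECOUNT = 100   # or after 100 responses have been downloaded
```

Note that requests already in flight when the threshold is hit may still complete, so the final item count can slightly exceed the limit.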
answered Nov 19 at 18:59
Guillaume