This is the starting point of my scraping process.
https://www.storiaimoveis.com.br/alugar/brasil
The AJAX call in start_urls below returns the data in JSON format for every page.
My POST request fails with error 404. Requests that require payloads have given me trouble in the past. I always solved the problem somehow, but now I'm trying to understand what I am doing wrong with them.
My question is: should I serialize the payload with json.dumps(payload) before sending it, or send it as a dictionary? These are the relevant parts of my code:
import json

import scrapy
from scrapy.spiders import CrawlSpider


class MySpider(CrawlSpider):
    name = 'myspider'

    start_urls = [
        'https://www.storiaimoveis.com.br/api/search?fields=%24%24meta.geo.postalCodeAddress.city%2C%24%24meta.geo.postalCodeAddress.neighborhood%2C%24%24meta.geo.postalCodeAddress.street%2C%24%24meta.location%2C%24%24meta.created%2Caddress.number%2Caddress.postalCode%2Caddress.neighborhood%2Caddress.state%2Cmedia%2ClivingArea%2CtotalArea%2Ctypes%2Coperation%2CsalePrice%2CrentPrice%2CnewDevelopment%2CadministrationFee%2CyearlyTax%2Caccount.logoUrl%2Caccount.name%2Caccount.id%2Caccount.creci%2Cgarage%2Cbedrooms%2Csuites%2Cbathrooms%2Cref&optimizeMedia=true&size=20&from=0&sessionId=5ff29d7e-88d0-54d5-2641-e203cafd6f4e'
    ]

    page = 1

    payload = {
        "locations": [{
            "geo": {
                "top_left": {"lat": 5.2717863, "lon": -73.982817},
                "bottom_right": {"lat": -34.0891, "lon": -28.650543},
            },
            "placeId": "ChIJzyjM68dZnAARYz4p8gYVWik",
            "keywords": "Brasil",
            "address": {"label": "Brasil", "country": "BR"},
        }],
        "operation": ["RENT"],
        "bathrooms": [],
        "bedrooms": [],
        "garage": [],
        "features": [],
    }

    headers = {
        'Accept': 'application/json',
        'Content-Type': 'application/json',
        'Referer': 'https://www.storiaimoveis.com.br/alugar/brasil',
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/75.0.3770.100 Safari/537.36',
    }

    def parse(self, response):
        for url in self.start_urls:
            yield scrapy.Request(url=url,
                                 method='POST',
                                 headers=self.headers,
                                 body=json.dumps(self.payload),
                                 callback=self.parse_items)

    def parse_items(self, response):
        from scrapy.shell import inspect_response
        inspect_response(response, self)
        print(response.text)
Yes, you need to call json.dumps(payload), because the request body needs to be str or unicode, as stated in the documentation: https://docs.scrapy.org/en/latest/topics/request-response.html#request-objects. In your case, though, the request fails because of these two missing headers: Content-Type and Referer.
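To see the difference concretely, here is a minimal, self-contained sketch (plain Python, no Scrapy required; the payload is a trimmed subset of the one in the question):

```python
import json

# Trimmed-down subset of the question's payload, for illustration only.
payload = {
    "locations": [{"keywords": "Brasil",
                   "address": {"label": "Brasil", "country": "BR"}}],
    "operation": ["RENT"],
}

# A dict is not a valid request body; it must be serialized to a string first.
body = json.dumps(payload)

print(type(body).__name__)  # str
print(body)
```

Serializing this way also means the body round-trips cleanly: json.loads(body) == payload.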
What I usually do in order to find the right request headers is to replay the request with curl or Postman, adjusting the headers until the request succeeds. In this case, Content-Type and Referer seem to be enough for an HTTP 200 response status.
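If you prefer to stay in Python rather than switch to curl or Postman, you can assemble the same request with the standard library's urllib and inspect it before sending. This is a hedged sketch, not the question's spider: the URL is the API endpoint from the question, the payload is a trimmed placeholder, and nothing is actually sent until you call urlopen(req) yourself.

```python
import json
import urllib.request

# Trimmed placeholder payload, for illustration only.
payload = {"operation": ["RENT"]}

req = urllib.request.Request(
    url='https://www.storiaimoveis.com.br/api/search',
    data=json.dumps(payload).encode('utf-8'),
    headers={
        'Accept': 'application/json',
        'Content-Type': 'application/json',
        'Referer': 'https://www.storiaimoveis.com.br/alugar/brasil',
    },
    method='POST',
)

# Inspect what would actually go over the wire before sending:
print(req.get_method())                # POST
print(req.get_header('Content-type'))  # application/json
print(req.get_header('Referer'))
```

Note that urllib normalizes header names with str.capitalize(), which is why the lookup key is 'Content-type'. Once the header set produces an HTTP 200 here, mirror it in the spider's headers attribute.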