I'm trying to scrape a site with paginated links, so I did this:

```python
import scrapy

class SummymartSpider(scrapy.Spider):
    name = 'dummymart'
    allowed_domains = ['www.dummrmart.com/product']
    start_urls = ['https://www.dummymart.net/product/auto-parts--118?page%s' % page for page in range(1,20)]
```

It works! With a single URL it works fine, but when I try this:

```python
import scrapy

class DummymartSpider(scrapy.Spider):
    name = 'dummymart'
    allowed_domains = ['www.dummymart.com/product']
    start_urls = ['https://www.dummymart.net/product/auto-parts--118?page%s',
                  'https://www.dummymart.net/product/accessories-tools--112?id=1316264860?page%s' % page for page in range(1,20)]
```

it doesn't work. How can I achieve the same logic for multiple URLs? Thanks.
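A minimal sketch of one way to do this, assuming the URL patterns from the question: the second snippet fails because `% page` binds only to the last string, and mixing plain elements with a comprehension in one list literal is a syntax error. Keeping the `%s` patterns in a separate list and crossing them with the page range in a nested comprehension builds all the start URLs:

```python
# Base URL patterns taken from the question (the '?page%s' format is kept verbatim;
# the real site may expect '?page=%s' instead).
base_urls = [
    'https://www.dummymart.net/product/auto-parts--118?page%s',
    'https://www.dummymart.net/product/accessories-tools--112?id=1316264860?page%s',
]

# Cross every pattern with every page number: 2 patterns x 19 pages = 38 URLs.
start_urls = [url % page for url in base_urls for page in range(1, 20)]
```

The resulting list can be assigned to `start_urls` inside the spider class exactly as in the single-URL version; Scrapy will then schedule a request for each generated URL.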