I'm writing a script that fetches the HTML of nearly 20,000 pages and parses each one to extract part of its content. Using asyncio and aiohttp, I managed to fetch the content of those 20,000 pages into a dataframe with asynchronous requests, but the script still waits until every page has been fetched before it starts parsing them.

```python
import asyncio

import aiohttp


async def get_request(session, url, params=None):
    async with session.get(url, headers=HEADERS, params=params) as response:
        return await response.text()


async def get_html_from_url(urls):
    tasks = []
    async with aiohttp.ClientSession() as session:
        for url in urls:
            tasks.append(get_request(session, url))
        html_page_response = await asyncio.gather(*tasks)
    return html_page_response


html_pages_list = asyncio_loop.run_until_complete(get_html_from_url(urls))
```

Once I have the content of each page, I managed to parallelize the parsing with a multiprocessing Pool:

```python
import re
from multiprocessing import Pool

from bs4 import BeautifulSoup


def get_whatiwant_from_html(html_content):
    parsed_html = BeautifulSoup(html_content, "html.parser")
    clean = parsed_html.find("div", class_="class").get_text()
    # Some re.subs
    clean = re.sub("", "", clean)
    clean = re.sub("", "", clean)
    clean = re.sub("", "", clean)
    return clean


pool = Pool(4)
what_i_want = pool.map(get_whatiwant_from_html, html_content_list)
```

The following code mixes fetching and parsing asynchronously, but I would like to integrate multiprocessing into it:

```python
async def process(url, session):
    html = await get_request(session, url)
    # get_whatiwant_from_html is a plain function, so it is called, not awaited
    return get_whatiwant_from_html(html)


async def dispatch(urls):
    async with aiohttp.ClientSession() as session:
        coros = (process(url, session) for url in urls)
        return await asyncio.gather(*coros)


result = asyncio.get_event_loop().run_until_complete(dispatch(urls))
```

Is there an obvious way to do this? I thought about creating four processes, each running the async calls, but the implementation looks a bit complex, and I'm wondering whether there is another way. I'm very new to asyncio and aiohttp, so if you have any suggestions for reading material to understand them better, I'd be glad to hear them.
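For reference, here is a minimal sketch of one way to combine the two, assuming `get_whatiwant_from_html` is defined at module level as in the question and that `urls` and a `HEADERS` dict exist: each parse is handed to a `ProcessPoolExecutor` through `loop.run_in_executor`, so the event loop keeps fetching while worker processes parse pages as they arrive. This is not the only approach, just the one that needs the least restructuring of the existing code.

```python
import asyncio
from concurrent.futures import ProcessPoolExecutor

import aiohttp

HEADERS = {}  # assumption: stands in for the headers dict used in the question
# get_whatiwant_from_html and urls are assumed to be defined as in the question


async def get_request(session, url, params=None):
    async with session.get(url, headers=HEADERS, params=params) as response:
        return await response.text()


async def fetch_and_parse(loop, executor, session, url):
    html = await get_request(session, url)
    # run_in_executor returns an awaitable: the loop keeps fetching other
    # pages while this CPU-bound parse runs in a worker process.
    return await loop.run_in_executor(executor, get_whatiwant_from_html, html)


async def dispatch(loop, urls):
    with ProcessPoolExecutor(max_workers=4) as executor:
        async with aiohttp.ClientSession() as session:
            tasks = [fetch_and_parse(loop, executor, session, url) for url in urls]
            return await asyncio.gather(*tasks)


loop = asyncio.get_event_loop()
what_i_want = loop.run_until_complete(dispatch(loop, urls))
```

Compared with spawning four processes that each run their own event loop over a chunk of `urls`, this keeps all the networking in a single loop and only moves the CPU-bound BeautifulSoup work out of it; the parse function just has to be picklable, i.e. defined at module top level, which it already is.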