
Python 并發(fā) executor.map() 和 submit()

Python 并發(fā) executor.map() 和 submit()

墨色風(fēng)雨 2023-06-13 10:57:29
I'm learning how to use concurrent.futures with executor.map() and executor.submit(). I have a list of 20 URLs and want to send 20 requests concurrently. The problem is that .submit() returns results in a different order than the given list. I've read that map() can do what I need, but I don't know how to write the code with it. The code below works fine for me. Question: is there a map() equivalent of the code below, or any sorting approach that makes the submit() results come out in the same order as the given list?

import concurrent.futures
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://europe.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the url and contents
def load_url(url, timeout):
    with urllib.request.urlopen(url, timeout=timeout) as conn:
        return conn.read()

# We can use a with statement to ensure threads are cleaned up promptly
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its URL
    future_to_url = {executor.submit(load_url, url, 60): url for url in URLS}
    for future in concurrent.futures.as_completed(future_to_url):
        url = future_to_url[future]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
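For reference, the core difference can be seen without any network at all. The toy sketch below uses sleeps standing in for requests: as_completed() yields futures in completion order, while map() yields results in input order.

```python
from concurrent.futures import ThreadPoolExecutor, as_completed
import time

def work(delay):
    # Stand-in for a network request: just sleep, then return the delay.
    time.sleep(delay)
    return delay

delays = [0.3, 0.1, 0.2]

with ThreadPoolExecutor(max_workers=3) as ex:
    futures = [ex.submit(work, d) for d in delays]
    # as_completed() yields futures in *completion* order (fastest first)
    completed = [f.result() for f in as_completed(futures)]
    # map() yields results in *input* order, no matter what finishes first
    mapped = list(ex.map(work, delays))

print(completed)  # [0.1, 0.2, 0.3]
print(mapped)     # [0.3, 0.1, 0.2]
```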

2 Answers

一只萌萌小番薯


Here is a map() version of your existing code. Note that the callback now takes a tuple as its argument. I added a try/except inside the callback, so reading the results won't raise. The results are ordered according to the input list.


from concurrent.futures import ThreadPoolExecutor
import urllib.request

URLS = ['http://www.foxnews.com/',
        'http://www.cnn.com/',
        'http://www.wsj.com/',
        'http://www.bbc.co.uk/',
        'http://some-made-up-domain.com/']

# Retrieve a single page and report the url and contents
def load_url(tt):  # tt is a (url, timeout) tuple
    url, timeout = tt
    try:
        with urllib.request.urlopen(url, timeout=timeout) as conn:
            return (url, conn.read())
    except Exception as ex:
        print("Error:", url, ex)
        return (url, "")  # on error, return an empty string

with ThreadPoolExecutor(max_workers=5) as executor:
    # pass url and timeout to the callback as a single tuple
    results = executor.map(load_url, [(u, 60) for u in URLS])

# leaving the with block already waits for all workers to finish,
# so an explicit executor.shutdown(wait=True) is not needed
print("Results:")
for r in results:  # ordered results; exceptions not caught in the callback would be raised here
    print('   %r page is %d bytes' % (r[0], len(r[1])))

Output:

Error: http://www.wsj.com/ HTTP Error 404: Not Found
Results:
   'http://www.foxnews.com/' page is 320028 bytes
   'http://www.cnn.com/' page is 1144916 bytes
   'http://www.wsj.com/' page is 0 bytes
   'http://www.bbc.co.uk/' page is 279418 bytes
   'http://some-made-up-domain.com/' page is 64668 bytes
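Incidentally, executor.map() also accepts multiple iterables, one per positional argument of the callable, just like the builtin map, so the (url, timeout) tuple wrapper isn't strictly necessary. A minimal sketch with a dummy load_url standing in for the real request (the .example URLs are placeholders):

```python
from concurrent.futures import ThreadPoolExecutor
from itertools import repeat

def load_url(url, timeout):
    # Dummy stand-in for urllib; a real version would fetch the page.
    return (url, len(url))

URLS = ['http://a.example/', 'http://bb.example/', 'http://ccc.example/']

with ThreadPoolExecutor(max_workers=5) as executor:
    # One iterable per parameter: URLS feeds `url`, repeat(60) feeds `timeout`.
    results = list(executor.map(load_url, URLS, repeat(60)))

# Results still come back in the same order as the input list.
print([u for u, _ in results])
```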


Answered 2023-06-13
幕布斯6054654


Without using map(), you can build the future_to_url dict with enumerate(), storing not just each URL but also its index in the list. Then, as concurrent.futures.as_completed(future_to_url) yields futures, collect them into a second dict keyed by that index. Finally, iterate over the indices up to the dict's length to read the results in the same order as the corresponding items in the original list:


# load_url and URLS are as defined in the question
with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
    # Start the load operations and mark each future with its index and URL
    future_to_url = {
        executor.submit(load_url, url, 60): (i, url) for i, url in enumerate(URLS)
    }
    futures = {}
    for future in concurrent.futures.as_completed(future_to_url):
        i, url = future_to_url[future]
        futures[i] = url, future
    for i in range(len(futures)):
        url, future = futures[i]
        try:
            data = future.result()
        except Exception as exc:
            print('%r generated an exception: %s' % (url, exc))
        else:
            print('%r page is %d bytes' % (url, len(data)))
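A simpler alternative to the index bookkeeping, if input-order results are all you need: keep the futures in a plain list, which preserves submission order, and call result() on each in turn; result() blocks until that future finishes, but all the work still runs concurrently. Sketched with a dummy load function and placeholder URLs:

```python
from concurrent.futures import ThreadPoolExecutor

def load(url):
    # Dummy stand-in for a real request.
    return (url, len(url))

URLS = ['http://a.example/', 'http://bb.example/', 'http://ccc.example/']

with ThreadPoolExecutor(max_workers=5) as executor:
    futures = [executor.submit(load, u) for u in URLS]  # list keeps submission order
    ordered = [f.result() for f in futures]             # blocks per future, input order

print([u for u, _ in ordered])
```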


Answered 2023-06-13