Downloading large files with requests in Python

Requests is a really nice library. I'd like to use it to download big files (>1GB). The problem is it's not possible to keep the whole file in memory; I need to read it in chunks. And this is a problem with the following code:

    import requests

    def DownloadFile(url):
        local_filename = url.split('/')[-1]
        r = requests.get(url)
        f = open(local_filename, 'wb')
        for chunk in r.iter_content(chunk_size=512 * 1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
        f.close()
        return

For some reason it doesn't work this way: it still loads the whole response into memory before saving it to a file.

UPDATE: If you need a small client (Python 2.x/3.x) that can download big files from FTP, you can find it here. It supports multithreading and reconnects (it does monitor connections), and it also tunes socket parameters for the download task.
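As the answers below show, the culprit is the missing stream=True: by default requests.get() downloads the entire response body into memory before returning, so iter_content() only iterates over bytes that are already in RAM. A minimal sketch of the difference (the URL is a placeholder for illustration):

    import requests

    url = 'http://example.com/big.iso'  # hypothetical URL, for illustration

    # Default: the whole body is fetched eagerly and buffered in memory.
    r = requests.get(url)

    # stream=True defers the body; bytes are only pulled off the socket
    # as you consume r.iter_content() or r.raw, so memory stays bounded.
    r = requests.get(url, stream=True)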
3 Answers

慕的地10843
It's much easier if you use Response.raw together with shutil.copyfileobj():
    import requests
    import shutil

    def download_file(url):
        local_filename = url.split('/')[-1]
        # stream=True keeps the body on the socket until it is read
        with requests.get(url, stream=True) as r:
            with open(local_filename, 'wb') as f:
                # stream the raw socket straight to disk
                shutil.copyfileobj(r.raw, f)
        return local_filename

This streams the file to disk without excessive memory use, and the code is simple.
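One caveat: Response.raw is the undecoded byte stream, so if the server sends the file with Content-Encoding: gzip or deflate, the bytes land on disk still compressed. A sketch of a workaround, assuming you want the decoded payload; decode_content is the flag urllib3 exposes on the underlying raw response:

    import requests
    import shutil

    def download_file_decoded(url):  # hypothetical variant, for illustration
        local_filename = url.split('/')[-1]
        with requests.get(url, stream=True) as r:
            # ask urllib3 to decode gzip/deflate as the stream is read
            r.raw.decode_content = True
            with open(local_filename, 'wb') as f:
                shutil.copyfileobj(r.raw, f)
        return local_filename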

不負(fù)相思意
Stream the response and write it out chunk by chunk, letting a with block close the file; this keeps Python's memory usage bounded regardless of the size of the downloaded file:
    def DownloadFile(url):
        local_filename = url.split('/')[-1]
        # stream=True is essential; without it requests buffers the
        # entire body in memory before iter_content() is ever called
        r = requests.get(url, stream=True)
        with open(local_filename, 'wb') as f:
            for chunk in r.iter_content(chunk_size=1024):
                if chunk:  # filter out keep-alive new chunks
                    f.write(chunk)
        return
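A quick usage sketch (the URL is a placeholder, and the progress line assumes the server sends a Content-Length header, which not every server does):

    import requests

    url = 'http://example.com/big.iso'  # hypothetical URL
    local_filename = url.split('/')[-1]

    r = requests.get(url, stream=True)
    total = int(r.headers.get('content-length', 0))
    done = 0
    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024 * 1024):
            if chunk:
                f.write(chunk)
                done += len(chunk)
                if total:
                    print('\rdownloaded %d%%' % (100 * done // total), end='')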
If you want to force the data out to disk as it arrives, add f.flush() and os.fsync():

    import os

    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:  # filter out keep-alive new chunks
                f.write(chunk)
                f.flush()             # push Python's userspace buffer to the OS
                os.fsync(f.fileno())  # ask the OS to commit it to the disk
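Note that flushing and fsyncing every 1 KB chunk is expensive, since each os.fsync() call blocks until the disk acknowledges the write. A sketch of a lighter variant (a suggestion, not part of the answer above) syncs once after the loop:

    import os

    with open(local_filename, 'wb') as f:
        for chunk in r.iter_content(chunk_size=1024):
            if chunk:
                f.write(chunk)
        f.flush()
        os.fsync(f.fileno())  # single sync once the download completes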