I am asking whether there is a better Python query than mine that would give a better processing time. I am iterating a REST API request for each row of a CSV file and exporting the results to a new CSV file. When I run 10 rows it takes about 11 seconds, but I need to do 50,000 rows, so I guess it will take about 14 hours (50,000 seconds ≈ 833 minutes). Is there any way to reduce the processing time? (Any query improvements?) Thanks!

Note: this API determines whether a person's address is up to date, taking the person's address, first name, last name, etc. as input.

Python query:

import requests
import json
import pandas as pd
import numpy as np
import csv

# Input CSV
df = pd.read_csv(r"C:\users\testu\documents\travis_50000.csv", delimiter=',', na_values="nan")

# Writing first, last name columns
splitted = df['prop_yr_owner_name'].str.split()
df['last_name'] = splitted.str[0]
df['first_name'] = splitted.str[1]
print(df["first_name"].iloc[0])

# Output CSV
with open(r"C:\users\testu\documents\travis_output.csv", 'w', newline='') as fp:
    # Writing header
    fieldnames = ["AddressExtras", "AddressLine1", "AddressLine2", "BaseMelissaAddressKey", "City",
                  "CityAbbreviation", "MelissaAddressKey", "MoveEffectiveDate", "MoveTypeCode",
                  "PostalCode", "State", "StateName", "NameFirst", "NameFull", "NameLast",
                  "NameMiddle", "NamePrefix", "NameSuffix"]
    writer = csv.DictWriter(fp, fieldnames=fieldnames)
    writer.writeheader()

# Iterating requests for each row
for row in df.itertuples():
    url = 'https://smartmover.melissadata.net/v3/WEB/SmartMover/doSmartMover'
    payload = {'t': '1353',
               'id': '4t8hsfh8fj3jf',
               'jobid': '1',
               'act': 'NCOA, CCOA',
               'cols': 'TotalRecords,AddressExtras,AddressLine1,AddressLine2,,BaseMelissaAddressKey,City,CityAbbreviation,MelissaAddressKey,MoveEffectiveDate,MoveTypeCode,PostalCode,RecordID,Results,State,StateName, NameFirst, NameFull, NameLast, NameMiddle, NamePrefix, NameSuffix',
               'opt': 'ProcessingType: Standard',
               'List': 'test',
               'first': row.first_name,
               'last': row.last_name,
               'a1': row.prop_year_addr_line1,
               'a2': row.prop_year_addr_line2,
               'city': row.prop_addr_city,
               'state': row.prop_addr_state,
               'postal': row.prop_addr_zip,
               'ctry': 'USA'}
    response = requests.get(url, params=payload, headers={'Content-Type': 'application/json'})
    r = response.json()
    print(r)
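For reference, the per-row cost in a loop like this is usually dominated by network round-trips rather than by pandas, so the two generic changes that tend to help are reusing a single requests.Session (HTTP keep-alive avoids a new connection setup per row) and sending several requests concurrently. The following is only a minimal, untested sketch of that idea: it reuses the SmartMover URL, payload fields and column names from the code above, the helper names build_payload and lookup are just for illustration, and max_workers=10 is an arbitrary assumption — the safe level of concurrency depends on the API's rate limits, which are not stated in the question.

import concurrent.futures

import pandas as pd
import requests

URL = 'https://smartmover.melissadata.net/v3/WEB/SmartMover/doSmartMover'

# A single shared Session keeps the HTTPS connection alive across requests,
# removing the per-request connection setup a bare requests.get() loop pays.
session = requests.Session()

def build_payload(row):
    # Same request fields as in the question; the attribute names on `row`
    # come from the question's DataFrame and are assumed to match the CSV.
    return {'t': '1353',
            'id': '4t8hsfh8fj3jf',
            'jobid': '1',
            'act': 'NCOA, CCOA',
            'cols': 'TotalRecords,AddressExtras,AddressLine1,AddressLine2,BaseMelissaAddressKey,City,CityAbbreviation,MelissaAddressKey,MoveEffectiveDate,MoveTypeCode,PostalCode,RecordID,Results,State,StateName,NameFirst,NameFull,NameLast,NameMiddle,NamePrefix,NameSuffix',
            'opt': 'ProcessingType: Standard',
            'List': 'test',
            'first': row.first_name,
            'last': row.last_name,
            'a1': row.prop_year_addr_line1,
            'a2': row.prop_year_addr_line2,
            'city': row.prop_addr_city,
            'state': row.prop_addr_state,
            'postal': row.prop_addr_zip,
            'ctry': 'USA'}

def lookup(row):
    # One API call per row, reusing the shared session.
    response = session.get(URL, params=build_payload(row),
                           headers={'Content-Type': 'application/json'})
    return response.json()

df = pd.read_csv(r"C:\users\testu\documents\travis_50000.csv", na_values="nan")
splitted = df['prop_yr_owner_name'].str.split()
df['last_name'] = splitted.str[0]
df['first_name'] = splitted.str[1]

# 10 workers is a starting point only; tune against the API's limits.
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lookup, df.itertuples()))

for r in results:
    print(r)

If the SmartMover service accepts several records per call, submitting records in batches would cut the request count far more than client-side concurrency, but that depends on API features not shown in the question. Writing the JSON results back into the output CSV is left as in the original code.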