Below is the code I need help with. I have to run it over 1,300,000 rows, which means it takes up to 40 minutes to insert ~300,000 rows. Is bulk insert the route to speed this up? Or is it slow because I iterate over the rows with the for data in reader: part?

    import csv
    import os

    # cnxn is an existing pyodbc connection (which works)

    # Opens the prepped csv file
    with open(os.path.join(newpath, outfile), 'r') as f:
        # hooks csv reader to file
        reader = csv.reader(f)
        # pulls out the columns (which match the SQL table)
        columns = next(reader)
        # trims any extra spaces
        columns = [x.strip(' ') for x in columns]
        # starts SQL statement
        query = 'insert into SpikeData123({0}) values ({1})'
        # puts column names in SQL query 'query'
        query = query.format(','.join(columns), ','.join('?' * len(columns)))
        print('Query is: %s' % query)
        # starts cursor from cnxn (which works)
        cursor = cnxn.cursor()
        # uploads everything by row
        for data in reader:
            cursor.execute(query, data)
            cursor.commit()

I am deliberately picking the column headers dynamically (in the interest of creating the most pythonic code I can). SpikeData123 is the table name.
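For reference, the usual advice for this pattern is to stop executing and committing one row at a time and instead hand pyodbc whole batches via executemany, with fast_executemany enabled (added in pyodbc 4.0.19). Below is a minimal sketch under those assumptions; the connection string, the csv_path name, and the 10,000-row chunk size are placeholders, not from the question:

    import csv

    import pyodbc

    # hypothetical connection string -- swap in your own driver/server/database
    cnxn = pyodbc.connect(
        'DRIVER={ODBC Driver 17 for SQL Server};'
        'SERVER=myserver;DATABASE=mydb;Trusted_Connection=yes;'
    )

    csv_path = 'prepped.csv'  # stands in for os.path.join(newpath, outfile)

    with open(csv_path, 'r') as f:
        reader = csv.reader(f)
        columns = [x.strip(' ') for x in next(reader)]
        query = 'insert into SpikeData123({0}) values ({1})'.format(
            ','.join(columns), ','.join('?' * len(columns)))

        cursor = cnxn.cursor()
        # one prepared statement, many parameter sets, sent as a single batch
        # (fast_executemany needs pyodbc 4.0.19+ and a driver that supports it)
        cursor.fast_executemany = True

        batch = []
        for data in reader:
            batch.append(data)
            if len(batch) == 10000:  # arbitrary chunk size, tune as needed
                cursor.executemany(query, batch)
                batch = []
        if batch:  # flush the final partial chunk
            cursor.executemany(query, batch)

    cnxn.commit()  # a single commit instead of one per row

With per-row execute plus per-row commit, every row pays a network round trip and a transaction; batching amortizes both, which is typically where most of the 40 minutes goes.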
How to speed up bulk inserts to MS SQL Server from CSV using pyodbc
慕尼黑的夜晚無繁華
2019-11-14 14:24:05