1 Answer

Here, .__iter__ is implemented as a generator function (not a generator iterator); each call to it returns a brand-new generator iterator.

So every time Tokenizer.advance is called, a fresh generator iterator is created by .__iter__ and consumed from the beginning. Instead, the Tokenizer object should create the iterator once, store it during initialization, and reuse it for all subsequent calls.
For example:
import re

class Tokenizer:
    def __init__(self, input_file):
        self.in_file = input_file
        self.tokens = []
        self.current_token = None
        self.next_token = None
        self.line = 1

    def split_tokens(self):
        '''Create a list with all the tokens of the input file.'''
        # Use a raw string so the backslash escapes reach the regex engine intact.
        self.tokens = re.findall(r"\w+|[{}()\[\].;,+\-*/&|<>=~\n]", self.in_file)
        # Create the generator iterator once; advance() reuses it from now on.
        self.iterator = self.__iter__()

    def __iter__(self):
        for token in self.tokens:
            if token != '\n':
                yield token
            else:
                self.line += 1

    def advance(self):
        self.current_token = self.next_token
        self.next_token = next(self.iterator)
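The essence of the fix can be reduced to a self-contained sketch (the token_gen helper here is hypothetical, not part of the original code): a generator function returns a new iterator on every call, so the iterator must be created once and kept if you want successive next() calls to make progress.

```python
def token_gen(text):
    # Generator function: each call returns a *new* generator iterator.
    for tok in text.split():
        yield tok

# Wrong: a fresh iterator per call always restarts at the first token.
assert next(token_gen("a b c")) == "a"
assert next(token_gen("a b c")) == "a"  # still "a", never advances

# Right: create the iterator once, then keep advancing the same one.
it = token_gen("a b c")
assert next(it) == "a"
assert next(it) == "b"
assert next(it) == "c"
```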
Another minimal example that may illustrate this:
def fib():
    a = 0
    b = 1
    while True:
        yield b
        a, b = b, a + b

# 1, 1, 2, ...
fibs = fib()
next(fibs)
next(fibs)
next(fibs)

# 1, 1, 1, ...
next(fib())
next(fib())
next(fib())
By the way, I don't see a reason to mix the .__iter__ magic method with a separate .advance method; it is likely to cause confusion.
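One way to avoid that mix is to make the object its own iterator by implementing __next__ and having __iter__ return self, so plain next() and for-loops replace a custom advance(). A minimal sketch (SimpleTokenizer is a hypothetical variant, not the original author's class):

```python
import re

class SimpleTokenizer:
    def __init__(self, text):
        # Build the token stream once; the object owns a single iterator.
        self.tokens = iter(re.findall(r"\w+|[{}()\[\].;,+\-*/&|<>=~]", text))

    def __iter__(self):
        # The object is its own iterator.
        return self

    def __next__(self):
        # Delegates to the stored iterator; raises StopIteration when exhausted.
        return next(self.tokens)

tok = SimpleTokenizer("x = y + 1")
assert next(tok) == "x"
assert list(tok) == ["=", "y", "+", "1"]  # consumes the rest in order
```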