I'm using the Tokenizer class from tensorflow.keras.preprocessing.text:

from tensorflow.keras.preprocessing.text import Tokenizer

s = ["The quick brown fox jumped over the lazy dog."]
t = Tokenizer()
t.fit_on_texts(s)
print(t.word_index)

Output:

{'the': 1, 'quick': 2, 'brown': 3, 'fox': 4, 'jumped': 5, 'over': 6, 'lazy': 7, 'dog': 8}

Tokenizer excludes punctuation by default. How can I tokenize the punctuation as well? (the "." in this example)
1 Answer

12345678_0001
One possibility is to separate the punctuation from the words with spaces. I do this with a preprocessing function, pad_punctuation. After that, I create the Tokenizer with filters='' so that nothing is filtered out.
import re
import string
from tensorflow.keras.preprocessing.text import Tokenizer
def pad_punctuation(s):
    return re.sub(f"([{string.punctuation}])", r" \1 ", s)
S = ["The quick brown fox jumped over the lazy dog."]
S = [pad_punctuation(s) for s in S]
t = Tokenizer(filters='')
t.fit_on_texts(S)
print(t.word_index)
Result:
{'the': 1, 'quick': 2, 'brown': 3, 'fox': 4, 'jumped': 5, 'over': 6, 'lazy': 7, 'dog': 8, '.': 9}
The pad_punctuation function works for all punctuation characters.
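To see what pad_punctuation does on its own, here is a standalone sketch using only the standard library, with a hypothetical input sentence; splitting the padded string on whitespace shows each punctuation mark as a separate token (note that this also splits contractions like "Isn't" at the apostrophe):

```python
import re
import string

def pad_punctuation(s):
    # Insert a space on each side of every punctuation character,
    # so the Tokenizer's whitespace splitting keeps them as tokens.
    return re.sub(f"([{string.punctuation}])", r" \1 ", s)

padded = pad_punctuation("Hello, world! Isn't this neat?")
print(padded.split())
# The comma, exclamation mark, apostrophe, and question mark
# each become their own token.
```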