2 Answers

You have to parse the log manually to collect the relevant data:
import json
import re

import pandas as pd

# Assumes each log line looks like "<token> <token> <json>";
# the third group captures everything after the second space.
pattern = re.compile(r'.+? .+? (.+)')

logs = []
with open('data.txt') as fp:
    for line in fp:
        match = pattern.match(line)
        if match:
            try:
                data = json.loads(match.group(1))
                logs.append(data)
            except json.JSONDecodeError:
                # skip lines whose trailing part is not valid JSON
                pass

df = pd.DataFrame(logs)
To do this in real time, you would have to watch the file for changes.
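A minimal sketch of one way to do that, assuming the log file is only ever appended to (the file name data.txt and the one-second polling interval are illustrative choices, not part of the original answer):

import json
import re
import time

pattern = re.compile(r'.+? .+? (.+)')

def follow(path, poll_seconds=1.0):
    """Yield new lines appended to the file, tail -f style."""
    with open(path) as fp:
        fp.seek(0, 2)  # start at the current end of the file
        while True:
            line = fp.readline()
            if not line:
                time.sleep(poll_seconds)  # wait for new data to arrive
                continue
            yield line

for line in follow('data.txt'):
    match = pattern.match(line)
    if match:
        try:
            print(json.loads(match.group(1)))  # handle the new record here
        except json.JSONDecodeError:
            pass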

Here is another approach, using json_normalize:
import json
import re

import pandas as pd

# Grab the JSON object embedded in each log line.
pattern = re.compile(r'{.*}')

rows = []
with open('a.txt') as f:
    for line in f:
        for match in re.finditer(pattern, line):
            data = json.loads(match.group())
            dfx = pd.json_normalize(data)
            rows.append(dfx)

df = pd.concat(rows)
print(df)
                    message        level logType                          timeStamp
0      Killing processes...  Information    User  2020-10-19T10:07:49.1386035+02:00
0  Opening applications...  Information    User  2020-10-19T10:07:49.4092373+02:00
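The advantage of json_normalize over the plain DataFrame constructor shows up when a record contains nested objects: nested keys are flattened into dotted column names. A small standalone sketch (the nested context field is made up purely for illustration, it is not in the original log):

import pandas as pd

record = {
    "message": "Killing processes...",
    "level": "Information",
    # hypothetical nested field, only for illustration
    "context": {"user": "alice", "pid": 4242},
}

df = pd.json_normalize(record)
print(df.columns.tolist())
# ['message', 'level', 'context.user', 'context.pid']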