I am trying to apply a very simple Pandas UDF in pyspark (Spark 2.4.5), but it is not working for me. Example:

pyspark --master local[4] --conf "spark.pyspark.python=/opt/anaconda/envs/bd9/bin/python3" --conf "spark.pyspark.driver.python=/opt/anaconda/envs/bd9/bin/python3"

>>> my_df = spark.createDataFrame(
...     [
...         (1, 0),
...         (2, 1),
...         (3, 1),
...     ],
...     ["uid", "partition_id"]
... )
>>> from pyspark.sql.types import StructType, StructField, StringType
>>> schema = StructType([StructField("uid", StringType())])
>>> from pyspark.sql.functions import pandas_udf, PandasUDFType
>>> import pandas
>>> @pandas_udf(schema, PandasUDFType.GROUPED_MAP)
... def apply_model(sample_df):
...     print(sample_df)
...     return pandas.DataFrame({"uid": sample_df["uid"]})
...
>>> result = my_df.groupBy("partition_id").apply(apply_model)
>>> result.show()
   uid  partition_id
0    1             0
[Stage 13:==================================================> (92 + 4) / 100]
   uid  partition_id
0    2             1
1    3             1
+---+
|uid|
+---+
|   |
|   |
|   |
+---+

Somehow uid is not reflected in the result. Can you tell me what I am missing here?
1 Answer

婷婷同學(xué)_
Contributed 1,844 experience points · earned 8+ upvotes
Sorry, my mistake: I wrote the wrong type in the schema. It should be LongType() instead of StringType(), because the uid values are integers. When the declared result schema does not match the dtypes of the pandas DataFrame the UDF returns, the conversion silently produces nulls instead of raising an error.
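To illustrate why the declared type matters, here is a minimal pure-pandas sketch (no Spark required) of what the GROUPED_MAP function receives for one group; the `sample_df` contents below mirror the `partition_id == 1` group from the question, and the point is that the returned `uid` column has dtype int64, which corresponds to Spark's LongType, not StringType:

```python
import pandas as pd

def apply_model(sample_df):
    # Same body as the UDF in the question: return just the uid column.
    # Its dtype is int64, so the Spark result schema must declare
    # LongType() -- declaring StringType() silently yields nulls.
    return pd.DataFrame({"uid": sample_df["uid"]})

# Hypothetical input: the rows Spark would pass for partition_id == 1.
sample_df = pd.DataFrame({"uid": [2, 3], "partition_id": [1, 1]})

result = apply_model(sample_df)
print(result["uid"].dtype)     # int64
print(result["uid"].tolist())  # [2, 3]
```

With the schema corrected to `StructType([StructField("uid", LongType())])`, `result.show()` displays the uid values as expected.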