Your dataset isn't clean. 985 lines, when split on "\t", yield only a single field:
>>> from operator import add
>>> lines = sc.textFile("classified_tweets.txt")
>>> parts = lines.map(lambda l: l.split("\t"))
>>> parts.map(lambda l: (len(l), 1)).reduceByKey(add).collect()
[(2, 149195), (1, 985)]
>>> parts.filter(lambda l: len(l) == 1).take(5)
[['"show me the money!" at what point do you start trying to monetize your #startup? tweet us with #startuplife.'], ['a good pitch can mean money in the bank for your #startup. see how body language plays a key role: (via: ajalumnify)'], ['100+ apps in five years? @2359media did it using microsoft #azure: #azureapps'], ['does buying better coffee make you a better leader? little things can make a big difference: (via: @jmbrandonbb)'], ['.@msftventures graduates pitched\xa0#homeautomation #startups to #vcs! check out how they celebrated: ']]
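The same diagnostic can be sketched in plain Python without Spark (the sample lines below are illustrative stand-ins for the dataset): count how many lines split into two fields (tweet, label) versus one (no tab separator).

```python
from collections import Counter

# Lines with a tab split into 2 fields; malformed lines split into 1.
sample = [
    "rt @jiffyclub: wi...\tpython",
    "rt @arnicas: ipyt...\tpython",
    "a good pitch can mean money in the bank for your #startup.",  # no tab
]
field_counts = Counter(len(line.split("\t")) for line in sample)
print(field_counts)  # Counter({2: 2, 1: 1})
```

This mirrors the `reduceByKey(add)` count above: any key other than 2 flags rows that will break downstream parsing.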
So filter out the malformed rows before building the DataFrame (note it is `createDataFrame`, with a capital F):

>>> training = parts.filter(lambda l: len(l) == 2).map(lambda p: (p[0], p[1].strip()))
>>> training_df = sqlContext.createDataFrame(training, ["tweet", "classification"])
>>> df = training_df.withColumn("dummy", dummy_function_udf(training_df['tweet']))
>>> df.show(5)
+--------------------+--------------+---------+
|               tweet|classification|    dummy|
+--------------------+--------------+---------+
|rt @jiffyclub: wi...|        python|dummyData|
|rt @arnicas: ipyt...|        python|dummyData|
|rt @treycausey: i...|        python|dummyData|
|what's my best op...|        python|dummyData|
|rt @raymondh: #py...|        python|dummyData|
+--------------------+--------------+---------+
only showing top 5 rows
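The cleaning step itself is just a length filter plus a strip on the label; here is a plain-Python sketch of the same logic (sample data is illustrative, not from the real file):

```python
# Keep only lines that split into exactly two fields, then strip the label.
raw_lines = [
    "rt @jiffyclub: wi...\tpython",
    "rt @arnicas: ipyt...\tpython ",  # trailing whitespace in the label
    "does buying better coffee make you a better leader?",  # no tab: dropped
]
parts = [line.split("\t") for line in raw_lines]
training = [(p[0], p[1].strip()) for p in parts if len(p) == 2]
print(training)
# [('rt @jiffyclub: wi...', 'python'), ('rt @arnicas: ipyt...', 'python')]
```

The `.strip()` matters: without it, labels like `"python "` and `"python"` would be treated as distinct classes.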