This may not be the best solution, but it is a workable one. Split the sentence on a delimiter (one or more spaces or commas in my example), then explode and join to build the n-grams, and finally aggregate the n-gram arrays with collect_set (if you need unique n-grams) or collect_list:
with src as (
    select source_data.sentence, words.pos, words.word
    from (
        -- Replace this subquery (source_data) with your table
        select stack(2,
                     'This is my sentence',
                     'This is another sentence'
               ) as sentence
    ) source_data
    -- split and explode words
    lateral view posexplode(split(sentence, '[ ,]+')) words as pos, word
)
select s1.sentence,
       collect_set(concat_ws(' ', s1.word, s2.word)) as ngrams
  from src s1
       inner join src s2
          on s1.sentence = s2.sentence
         and s1.pos + 1 = s2.pos
 group by s1.sentence;
Result:
OK
This is another sentence    ["This is","is another","another sentence"]
This is my sentence         ["This is","is my","my sentence"]
Time taken: 67.832 seconds, Fetched: 2 row(s)
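To make the join logic easier to follow, here is a minimal pure-Python sketch of the same idea (an illustration only, not Hive code): split each sentence on spaces or commas, pair each word with its successor (the equivalent of the s1.pos + 1 = s2.pos self-join), and collect the bigrams.

```python
import re

def bigrams(sentence):
    # Equivalent of split(sentence, '[ ,]+') in the Hive query
    words = re.split(r'[ ,]+', sentence)
    # Equivalent of the self-join on s1.pos + 1 = s2.pos:
    # zip each word with the one that follows it
    return [' '.join(pair) for pair in zip(words, words[1:])]

for s in ['This is my sentence', 'This is another sentence']:
    print(s, bigrams(s))
# This is my sentence ['This is', 'is my', 'my sentence']
# This is another sentence ['This is', 'is another', 'another sentence']
```

Joining a third copy of src on pos + 2 (or chaining another zip in the sketch) extends the same pattern to trigrams.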