library(tm)   # Corpus(), stopwords(), and the remove*/strip* cleaners
library(lda)  # lexicalize() and lda.collapsed.gibbs.sampler()

text <- Corpus(VectorSource(d$text))
newtext <- lapply(text, tolower)
sw <- c(stopwords("english"), "ahram", "online", "egypt", "egypts", "egyptian")
newtext <- lapply(newtext, function(x) removePunctuation(x))
newtext <- lapply(newtext, function(x) removeWords(x, sw))
newtext <- lapply(newtext, function(x) removeNumbers(x))
newtext <- lapply(newtext, function(x) stripWhitespace(x))
d$processed <- unlist(newtext)
corpus <- lexicalize(d$processed)
k <- 40
result <- lda.collapsed.gibbs.sampler(corpus$documents, k, corpus$vocab,
                                      500, .02, .05,
                                      compute.log.likelihood = TRUE, trace = 2L)
Unfortunately, when I train the LDA model, everything looks great except that the most frequently occurring word is "". I tried to fix this by removing it from the vocabulary as given below and re-estimating the model just as above:
newtext <- lapply(newtext, function(x) removeWords(x, ""))
However, it is still there, as you can see:
str_split(newtext[[1]], " ")[[1]]
 [1] ""              "body"      "mohamed"   "hassan"
 [5] "cook"          "found"     "turkish"   "search"
 [9] "rescue"        "teams"     "rescued"   "hospital"
[13] "rescue"        "teams"     "continued" "search"
[17] "missing"       "body"      "cook"      "crew"
[21] "wereegyptians" "sudanese"  "syrians"   "hassan"
[25] "cook"          "cargo"     "ship"      "sea"
[29] "bright"        "crashed"   "thursday"  "port"
[33] "antalya"       "southern"  "turkey"    "vessel"
[37] "collided"      "rocks"     "port"      "thursday"
[41] "night"         "result"    "heavy"     "winds"
[45] "waves"         "crew"      ""
Any suggestions on how to get rid of this? Adding "" to my stopword list does not help either.
Solution

I work with text a lot, but not with tm, so here are two ways to get rid of the "" you have. The extra "" entries are probably caused by double spaces between sentences. You can treat this condition either before or after you split the text into words: replace every run of two or more spaces with a single space before strsplit, or drop the empty strings afterwards (you have to unlist after strsplit).

x <- "I like to ride my bicycle.  Do you like to ride too?"  # note the double space

# TREAT BEFORE (OPTION 1): collapse runs of spaces, then split
a <- gsub(" +", " ", x)
strsplit(a, " ")

# TREAT AFTER (OPTION 2): split, then drop the empty strings
y <- unlist(strsplit(x, " "))
y[!y %in% ""]
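For what it's worth, splitting the untreated string makes the problem visible; the "" lands exactly where the double space was:

strsplit(x, " ")[[1]]
#  [1] "I"        "like"     "to"       "ride"     "my"       "bicycle."
#  [7] ""         "Do"       "you"      "like"     "to"       "ride"     "too?"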
You could also try:
newtext <- lapply(newtext, function(x) gsub(" +", " ", x))
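To tie that into the pipeline from the question, here is a minimal sketch (the clean() helper and the trimws() call are my additions, not part of the original code): even after collapsing runs of spaces, a leading or trailing space would still produce an empty token when lexicalize() splits each document on " ".

# Sketch: squeeze internal runs of spaces and trim both ends of each
# document before handing it to lexicalize(), so no "" token survives.
clean <- function(x) {
  x <- gsub(" +", " ", x)  # collapse runs of spaces to a single space
  trimws(x)                # drop leading/trailing whitespace
}
d$processed <- unlist(lapply(newtext, clean))
corpus <- lexicalize(d$processed)
stopifnot(!"" %in% corpus$vocab)  # sanity check: no empty vocabulary entry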
Again, I don't use tm, so this may not help, but this post hadn't seen any action, so I thought I'd share the possibilities.