The newspaper framework is a Python crawler framework mainly used to extract and analyze news content. More precisely, newspaper is a Python library, developed by a third party.
newspaper's main features are:
Relatively concise API
Fast
Multi-threading support
Multi-language support
GitHub link: https://github.com/codelucas/newspaper
Installation: pip3 install newspaper3k
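After installing, a quick way to confirm the library works and to see which languages it supports is newspaper.languages(), which prints the supported language codes (including zh for Chinese, which the examples below use):
import newspaper

# print the table of language codes newspaper supports
newspaper.languages()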
We will use huanqiu.com (Global Times Online) as an example, as shown below:
import newspaper
hq_paper = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
By default, newspaper caches all previously extracted articles and filters out any article it has already seen; use the memoize_articles parameter to opt out of this behaviour.
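As an illustration of this caching behaviour, here is a minimal sketch (same example source as above; article counts will vary with what the site currently publishes):
import newspaper

# With the default memoize_articles=True, a second build on the same source
# only returns articles that were not seen by the first build.
first = newspaper.build("https://tech.huanqiu.com/", language="zh")
second = newspaper.build("https://tech.huanqiu.com/", language="zh")
print(len(first.articles), len(second.articles))  # the second count is usually much smaller

# With memoize_articles=False, every build re-discovers the full article list.
fresh = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
print(len(fresh.articles))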
2.2 Getting article URLs
>>> import newspaper
>>> hq_paper = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
>>> for article in hq_paper.articles:
...     print(article.url)
http://world.huanqiu.com/gallery/9CaKrnQhXvy
http://mil.huanqiu.com/gallery/7RFBDCOiXNC
http://world.huanqiu.com/gallery/9CaKrnQhXvz
http://world.huanqiu.com/gallery/9CaKrnQhXvw
...
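Before iterating, you can check how many article links were discovered on the source; size() is the Source helper for this (equivalent to taking len() of the articles list):
>>> hq_paper.size()          # number of articles discovered on the source
>>> len(hq_paper.articles)   # the same count, via the plain list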
2.3 Getting categories
>>> import newspaper
>>> hq_paper = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
>>> for category in hq_paper.category_urls():
...     print(category)
http://www.huanqiu.com
http://tech.huanqiu.com
http://smart.huanqiu.com
https://tech.huanqiu.com/
2.4 Getting the brand and description
>>> import newspaper
>>> hq_paper = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
>>> print(hq_paper.brand)
>>> print(hq_paper.description)
huanqiu
环球网科技,不一样的IT视角!以“成为全球科技界的一面镜子”为出发点,向关注国际科技类资讯的网民,提供国际科技资讯的传播与服务。
2.5 Downloading and parsing
We pick one of the articles as an example, as shown below:
>>> import newspaper
>>> hq_paper = newspaper.build("https://tech.huanqiu.com/", language="zh", memoize_articles=False)
>>> article = hq_paper.articles[4]
# download the article HTML
>>> article.download()
# parse the downloaded HTML
>>> article.parse()
# article title
>>> print("title=", article.title)
# publication date
>>> print("publish_date=", article.publish_date)
# authors
>>> print("author=", article.authors)
# URL of the article's top image
>>> print("top_image=", article.top_image)
# video links in the article
>>> print("movies=", article.movies)
# summary (only populated after calling nlp(); without it, summary stays empty)
>>> article.nlp()
>>> print("summary=", article.summary)
# full article text
>>> print("text=", article.text)
title= “美丽山”的美丽传奇
publish_date= 2019-11-15 00:00:00
...
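Because nlp() was called above, the extracted keywords are also available; a short continuation of the same session (nlp() depends on the extra NLTK data that the newspaper3k documentation asks you to download):
>>> print("keywords=", article.keywords)  # keyword list produced by nlp()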
2.6 Using the Article class
import newspaper
from newspaper import Article

def newspaper_url(url):
    # build a Source and hand every discovered article URL to newspaper_info()
    web_paper = newspaper.build(url, language="zh", memoize_articles=False)
    for article in web_paper.articles:
        newspaper_info(article.url)

def newspaper_info(url):
    # work with a single article directly through the Article class
    article = Article(url, language="zh")
    article.download()
    article.parse()
    print("title=", article.title)
    print("author=", article.authors)
    print("publish_date=", article.publish_date)
    print("top_image=", article.top_image)
    print("movies=", article.movies)
    print("text=", article.text)
    article.nlp()  # summary is only populated after nlp()
    print("summary=", article.summary)

if __name__ == "__main__":
    newspaper_url("https://tech.huanqiu.com/")
3 Multitasking
When we need to fetch news from multiple sources, we can use a multitasking approach, as shown below:
import newspaper
from newspaper import news_pool
hq_paper = newspaper.build('https://www.huanqiu.com', language="zh")
sh_paper = newspaper.build('http://news.sohu.com', language="zh")
sn_paper = newspaper.build('https://news.sina.com.cn', language="zh")
papers = [hq_paper, sh_paper, sn_paper]
# total threads: 3 sources x 2 threads per source = 6
news_pool.set(papers, threads_per_source=2)
news_pool.join()
print(hq_paper.articles[0].html)
Since a lot of content is fetched, the code above may take a while to run, so be patient.
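Note that the thread pool above only downloads each article's raw HTML; the structured fields (title, text, and so on) still require parse(). A short follow-on sketch using the same papers list:
# after news_pool.join(), .html is filled in for the downloaded articles, but they are not parsed yet
for paper in papers:
    for article in paper.articles[:5]:  # just the first few per source, for illustration
        if article.html:                # skip any downloads that failed
            article.parse()
            print(paper.brand, "-", article.title)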
Summary
This article introduced the Python crawler framework newspaper, to give you a basic understanding of it and enough to get started. newspaper still has some bugs, so weigh the trade-offs and use it with caution in real work.
Sample code: https://github.com/JustDoPython/python-100-day/tree/master/day-074
Reference: https://newspaper.readthedocs.io/en/latest/user_guide/quickstart.html#performing-nlp-on-an-article
Series articles
Day 73: Introduction to the itchat WeChat bot
Day 72: Using the PySpider framework
Day 71: A hands-on Python Scrapy project
From 0 to Python: a roundup of days 0-70