Extracting Google search results

Regular expressions are a bad idea for parsing HTML. They're cryptic to read, and they rely on the HTML being well-formed.

Try BeautifulSoup for Python instead. Here's an example script that returns the URLs from the first 10 pages of a site:domain.com Google query.

import sys      # Used to add the BeautifulSoup folder to the import path
import urllib2  # Used to read the html document

if __name__ == "__main__":
    ### import Beautiful Soup
    ### Here, I have the BeautifulSoup folder at the level of this Python script
    ### So I need to tell Python where to look.
    sys.path.append("./BeautifulSoup")
    from BeautifulSoup import BeautifulSoup

    ### Create opener with Google-friendly user agent
    opener = urllib2.build_opener()
    opener.addheaders = [('User-agent', 'Mozilla/5.0')]

    ### Open page & generate soup
    ### the "start" variable will be used to iterate through 10 pages.
    for start in range(0, 10):
        url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
        page = opener.open(url)
        soup = BeautifulSoup(page)

        ### Parse and find
        ### Looks like google contains URLs in <cite> tags.
        ### So for each cite tag on each page (10), print its contents (url)
        for cite in soup.findAll('cite'):
            print cite.text

Output:

stackoverflow.com/
stackoverflow.com/questions
stackoverflow.com/unanswered
stackoverflow.com/users
meta.stackoverflow.com/
blog.stackoverflow.com/
chat.meta.stackoverflow.com/
...
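(Side note: the script above is Python 2, using urllib2 and the old BeautifulSoup 3 import. On Python 3 with the bs4 package, a roughly equivalent sketch could look like this; treat it as an illustration only, since Google's result markup changes over time and the <cite> assumption may no longer hold.)

import urllib.request          # Python 3 replacement for urllib2
from bs4 import BeautifulSoup  # pip install beautifulsoup4

### Same Google-friendly user agent as above
opener = urllib.request.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]

for start in range(0, 10):
    url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
    page = opener.open(url)
    soup = BeautifulSoup(page, 'html.parser')

    ### Same assumption as above: the display URL lives in <cite> tags.
    for cite in soup.find_all('cite'):
        print(cite.get_text())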

Of course, you could append each result to a list so you can parse it for subdomains. I only got into Python and scraping a few days ago myself, but this should get you started.
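Here's a rough sketch of what that could look like, sticking with the same Python 2 / BeautifulSoup setup as the original script (the subdomains set and the split-on-"/" logic are just illustrative, and they assume each <cite> entry starts with the host name):

import sys
import urllib2

sys.path.append("./BeautifulSoup")   # same local BeautifulSoup folder as above
from BeautifulSoup import BeautifulSoup

opener = urllib2.build_opener()
opener.addheaders = [('User-agent', 'Mozilla/5.0')]

### Collect each <cite> text into a list instead of printing it.
results = []
for start in range(0, 10):
    url = "http://www.google.com/search?q=site:stackoverflow.com&start=" + str(start * 10)
    soup = BeautifulSoup(opener.open(url))
    for cite in soup.findAll('cite'):
        results.append(cite.text)

### Each entry looks like "meta.stackoverflow.com/...", so everything
### before the first "/" is the host, which gives you the subdomain.
subdomains = set()
for result in results:
    subdomains.add(result.split('/')[0])

for host in sorted(subdomains):
    print host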


