How to submit a query to an .aspx page in Python

As an overview, you will need to perform four main tasks:

  • submit requests to the web site,
  • retrieve the responses from the site,
  • parse these responses,
  • have some logic to iterate over the tasks above, with parameters associated with the navigation (to the "next" page in the results list); a rough sketch of this loop is shown right after this list.
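To make the division of work concrete, here is a minimal sketch of that driver loop. The helpers fetch_page and find_next_link are illustrative placeholders, not part of any library, and the "Next" link pattern is a guess for illustration only:

import re
import urllib2
import urlparse

def fetch_page(url):
    # tasks 1 and 2: submit the request, retrieve the response
    return urllib2.urlopen(url).read()

def find_next_link(base, html):
    # task 4: locate a "next page" link, if any; the pattern below
    # is an assumption, not the site's verified markup
    m = re.search(r'<a[^>]*href="([^"]+)"[^>]*>\s*Next', html)
    return urlparse.urljoin(base, m.group(1)) if m else None

url = 'http://legistar.council.nyc.gov/Legislation.aspx'
while url:
    html = fetch_page(url)
    # task 3: parse html here (see the parsing snippets further down)
    url = find_next_link(url, html)   # None ends the loop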

The HTTP request and response handling is done with methods and classes from urllib and urllib2 in Python's standard library. The parsing of the HTML pages can be done with the standard library's HTMLParser, or with other modules such as Beautiful Soup.
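For instance, a minimal parsing sketch with the standard library's HTMLParser (that is the Python 2 module name; it is html.parser in Python 3), which simply collects the href of every link it sees:

from HTMLParser import HTMLParser

class LinkCollector(HTMLParser):
    """Collects the href attribute of every <a> tag encountered."""
    def __init__(self):
        HTMLParser.__init__(self)   # old-style class in Python 2: no super()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href')

parser = LinkCollector()
parser.feed('<a href="page2.aspx">next</a>')
print(parser.links)    # ['page2.aspx']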

The following snippet demonstrates the requesting and receiving of a search at the site indicated in the question. That site is ASP-driven, so we need to ensure that we send several form fields, some of them with "horrible" values, since these are used by the ASP logic to maintain state and to authenticate the request to some extent. The requests have to be sent with the HTTP POST method, as that is what this ASP application expects. The main difficulty lies in identifying the form fields and the associated values that the ASP page expects (getting pages with Python is the easy part).
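One practical way to discover those fields is to GET the blank search form first and lift the hidden inputs out of it. A rough sketch, assuming the hidden inputs happen to match the simplified pattern below (real ASP.NET markup may order attributes differently and can need a proper parser):

import re
import urllib2

# fetch the blank search page and pull out the ASP hidden fields
page = urllib2.urlopen('http://legistar.council.nyc.gov/Legislation.aspx').read()
hidden = dict(re.findall(
    r'<input type="hidden" name="([^"]+)"[^>]*value="([^"]*)"', page))
for name in ('__VSTATE', '__VIEWSTATE', '__EVENTVALIDATION'):
    print(name + ': ' + hidden.get(name, '')[:40])   # freshly obtained values, truncated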

This code works, or more precisely, it was working, until I removed most of the VSTATE value and possibly introduced a typo or two by adding comments.

import urllib
import urllib2

uri = 'http://legistar.council.nyc.gov/Legislation.aspx'

# the http headers are useful to simulate a particular browser (some sites
# deny access to non-browsers: bots, etc.); the content type also needs
# to be passed for a form POST.
headers = {
    'HTTP_USER_AGENT': 'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.13) Gecko/2009073022 Firefox/3.0.13',
    'HTTP_ACCEPT': 'text/html,application/xhtml+xml,application/xml; q=0.9,*/*; q=0.8',
    'Content-Type': 'application/x-www-form-urlencoded'
}

# we group the form fields and their values in a list (any iterable,
# actually) of name-value tuples.  This helps with clarity and also
# makes it easy to encode them later.
formFields = (
   # the viewstate is actually 800+ characters in length!  I truncated it
   # for this sample code.  It can be lifted from the first page obtained
   # from the site.  It may be ok to hardcode this value, or it may have
   # to be refreshed each time / each day, by essentially running an
   # extra page request and parse, for this specific value.
   (r'__VSTATE', r'7TzretNIlrZiKb7EOB3AQE ... ...2qd6g5xD8CGXm5EftXtNPt+H8B'),

   # following are more of these ASP form fields
   (r'__VIEWSTATE', r''),
   (r'__EVENTVALIDATION', r'/wEWDwL+raDpAgKnpt8nAs3q+pQOAs3q/pQOAs3qgpUOAs3qhpUOAoPE36ANAve684YCAoOs79EIAoOs89EIAoOs99EIAoOs39EIAoOs49EIAoOs09EIAoSs99EI6IQ74SEV9n4XbtWm1rEbB6Ic3/M='),
   (r'ctl00_RadscriptManager1_HiddenField', ''),
   (r'ctl00_tabTop_ClientState', ''),
   (r'ctl00_ContentPlaceHolder1_menuMain_ClientState', ''),
   (r'ctl00_ContentPlaceHolder1_gridMain_ClientState', ''),

   # but then we come to fields of interest: the search criteria,
   # the collections to search from, etc.
   # Check boxes
   (r'ctl00$ContentPlaceHolder1$chkOptions', 'on'),   # file number
   (r'ctl00$ContentPlaceHolder1$chkOptions', 'on'),   # legislative text
   (r'ctl00$ContentPlaceHolder1$chkOptions', 'on'),   # attachment
   # etc. (not all listed)
   (r'ctl00$ContentPlaceHolder1$txtSearch', 'york'),              # search text
   (r'ctl00$ContentPlaceHolder1$lstYears', 'All Years'),          # years to include
   (r'ctl00$ContentPlaceHolder1$lstTypeBasic', 'All Types'),      # types to include
   (r'ctl00$ContentPlaceHolder1$btnSearch', 'Search Legislation') # the Search button itself
)

# these have to be url-encoded before being sent as the POST body
encodedFields = urllib.urlencode(formFields)

req = urllib2.Request(uri, encodedFields, headers)
f = urllib2.urlopen(req)    # that's the actual call to the http site

# *** here would normally be the in-memory parsing of the contents of f,
#     but instead I store them to a file; this is useful during design,
#     allowing to have a sample of what is to be parsed in a text
#     editor, for analysis.
try:
    fout = open('tmp.htm', 'w')
    fout.writelines(f.readlines())
    fout.close()
except IOError:
    print('Could not open output file\n')

That is just to get the initial page. As said above, one then needs to parse the page, i.e. find the parts of interest, gather them as appropriate, and store them to file/database/wherever. This job can be done in very many ways: using HTML parsers, XSLT-type technologies (after parsing the HTML into XML), or, for crude jobs, simple regular expressions. Also, one item that is typically extracted is the "next" info, i.e. a link of sorts that can be used in a new request to the server to get subsequent pages.
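For example, a crude regular-expression pass over the tmp.htm saved by the code above could pull out the result rows and the next-page link. The class name 'rgRow' and the 'Next Page' title below are illustrative guesses, not the site's verified markup:

import re

html = open('tmp.htm').read()

# grab the rows of the result grid (pattern is an assumption)
rows = re.findall(r'<tr[^>]*class="rgRow"[^>]*>(.*?)</tr>', html, re.S)
print('%d result rows found' % len(rows))

# grab the link used to reach the next page of results, if present
m = re.search(r'<a[^>]*title="Next Page"[^>]*href="([^"]+)"', html)
if m:
    print('next page link: ' + m.group(1))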

This should give you a rough flavor of what "long hand" HTML scraping is about. There are many other approaches, such as dedicated utilities, scripts in Mozilla's (Firefox) GreaseMonkey plug-in, XSLT ...


