- Preface
I. Crawlers
- 1. ip138
- 2. bing
II. Subdomain brute-forcing with a dictionary
III. Python crawler workflow
- 1. Write the request headers and the target URL
- 2. Send the request
- 3. Fetch the data
- 4. Analyze the page source and extract tag contents
IV. Crawler takeaways
- 1. Fetch the data and build the soup
- 2. Get all text content from the document
- 3. Find the links of all <a> tags in the document
Purpose: subdomain enumeration is the process of finding the subdomains of one or more domains, and it is a key part of the information-gathering (reconnaissance) phase.
Approach: web crawling plus dictionary brute-forcing.
I. Crawlers 1. ip138
import requests
from bs4 import BeautifulSoup

def search_2(domain):
    # Scrape ip138's resolution-history page for subdomains of `domain`.
    res_list = []
    headers = {
        'Accept': '*/*',
        'Accept-Language': 'en-US,en;q=0.8',
        'Cache-Control': 'max-age=0',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36',
        'Connection': 'keep-alive',
        'Referer': 'http://www.baidu.com/'
    }
    results = requests.get('https://site.ip138.com/' + domain + '/domain.htm', headers=headers)
    soup = BeautifulSoup(results.content, 'html.parser')
    job_bt = soup.findAll('p')
    for i in job_bt:
        try:
            link = i.a.get('href')   # each subdomain sits in <p><a href="/sub.example.com/">
            linkk = link[1:-1]       # strip the leading and trailing '/'
            res_list.append(linkk)
            print(linkk)
        except AttributeError:       # some <p> tags contain no <a>; skip them
            pass
    print(res_list[:-1])             # drop the last entry, which is not a subdomain

if __name__ == '__main__':
    search_2("jd.com")
Output:
2. bing
import requests
from bs4 import BeautifulSoup

def search_1(site):
    # Enumerate subdomains of `site` by paging through Bing's "site:" search results.
    Subdomain = []
    headers = {
        'Accept': '*/*',
        'Accept-Language': 'en-US,en;q=0.8',
        'Cache-Control': 'max-age=0',
        'User-Agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/48.0.2564.116 Safari/537.36',
        'Connection': 'keep-alive',
        'Referer': 'http://www.baidu.com/'
    }
    for page in range(1, 16):  # walk the first 15 result pages, 10 results per page
        url = "https://cn.bing.com/search?q=site%3A" + site + "&go=Search&qs=ds&first=" + str(
            (page - 1) * 10) + "&FORM=PERE"
        # conn = requests.session()
        # conn.get('http://cn.bing.com', headers=headers)
        # html = conn.get(url, stream=True, headers=headers)
        html = requests.get(url, stream=True, headers=headers)
        soup = BeautifulSoup(html.content, 'html.parser')
        # print(soup)
        job_bt = soup.findAll('h2')  # each result title is an <h2><a href="...">
        for i in job_bt:
            if i.a is None:          # skip <h2> tags that carry no link
                continue
            link = i.a.get('href')
            print(link)
            if link not in Subdomain:  # de-duplicate across pages
                Subdomain.append(link)
    print(Subdomain)

if __name__ == '__main__':
    search_1("jd.com")
Output:
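Note that Bing returns full result URLs, so Subdomain here actually collects links rather than bare host names. If only the subdomain itself is wanted, the host part can be split off with the standard library; a minimal sketch (the example URL is hypothetical):

from urllib.parse import urlparse

link = "https://www.jd.com/some/page.html"  # hypothetical link as returned by Bing
print(urlparse(link).netloc)                # -> www.jd.com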
II. Subdomain brute-forcing with a dictionary
import socket
import time

def brute(url):
    # Try to resolve <word>.<url> for every word in dic.txt;
    # a successful DNS lookup means the subdomain exists.
    for word in open('dic.txt'):
        word = word.replace('\n', "")
        zym_url = word + "." + url
        try:
            ip = socket.gethostbyname(zym_url)  # raises socket.gaierror if the name does not resolve
            print(zym_url + "-->" + ip)
            time.sleep(0.1)                     # throttle the queries
        except Exception:
            # print(zym_url + "--error")
            time.sleep(0.1)

if __name__ == '__main__':
    brute("jd.com")
Output:
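The wordlist dic.txt is assumed to hold one candidate subdomain label per line; a minimal example file might look like this:

www
mail
blog
dev
admin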
III. Python crawler workflow 1. Write the request headers and the target URL
headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240"
}
url = "https://site.ip138.com/"
2. Send the request
GET: res = requests.get(url + domain, headers=headers)
POST: res = requests.post(url + domain, headers=headers, data=data)
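As a minimal runnable sketch of the GET case (domain is hard-coded here purely for illustration; section I appends '/domain.htm' to this path):

import requests

headers = {
    'User-Agent': "Mozilla/5.0 (Windows NT 10.0) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36 Edge/12.10240"
}
url = "https://site.ip138.com/"
domain = "jd.com"  # hypothetical target

res = requests.get(url + domain, headers=headers)
print(res.status_code)  # 200 on success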
3. Fetch the data
soup = BeautifulSoup(res.content, 'html.parser')  # parse the response body with the HTML parser
At this point, print(soup) gives:
4. Analyze the page source and extract tag contents
1. By inspecting the page source, determine that the content to extract lives in the <p> tags:
job_bt = soup.findAll('p')
print(job_bt) now gives:
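findAll returns a list-like ResultSet of Tag objects; a toy demonstration on hypothetical markup shaped like ip138's:

from bs4 import BeautifulSoup

html = '<p><a href="/www.jd.com/">www.jd.com</a></p><p>no link here</p>'  # hypothetical markup
soup = BeautifulSoup(html, 'html.parser')
print(soup.findAll('p'))
# [<p><a href="/www.jd.com/">www.jd.com</a></p>, <p>no link here</p>]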
2. Then extract the value of the href attribute inside each <a> tag:
for i in job_bt:
    try:
        link = i.a.get('href')
        linkk = link[1:-1]   # strip the leading and trailing '/'
        res_list.append(linkk)
        print(linkk)
    except AttributeError:
        pass
Result:
3. Finally, slice off the trailing, non-subdomain entry:
res_list[:-1]
Result:
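Both truncations are ordinary Python slicing; a toy illustration with hypothetical values:

link = "/www.jd.com/"                                   # hypothetical href value
print(link[1:-1])                                       # www.jd.com -- surrounding slashes removed

res_list = ["www.jd.com", "mail.jd.com", "/sitelink/"]  # hypothetical scrape results
print(res_list[:-1])                                    # the final non-subdomain entry is dropped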
IV. Crawler takeaways 1. Fetch the data and build the soup
soup = BeautifulSoup(res.content, 'html.parser')  # parse the response body with the HTML parser
2. Get all text content from the document
print(soup.get_text())
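For example, on a small hypothetical fragment:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<p>Hello <a href="/w/">world</a></p>', 'html.parser')
print(soup.get_text())  # Hello world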
3. Find the links of all <a> tags in the document
for link in soup.find_all('a'):
    print(link.get('href'))
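Again, a self-contained run on hypothetical markup:

from bs4 import BeautifulSoup

soup = BeautifulSoup('<a href="/a/">A</a><a href="/b/">B</a>', 'html.parser')
for link in soup.find_all('a'):
    print(link.get('href'))  # prints /a/ then /b/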