Scraping Dynamic Web Pages with a Python Crawler (Complete Code Included; Corrections Welcome)
System environment:
OS: Windows 8.1 Professional 64-bit; Python: Anaconda, Python 2.7; Python modules: requests, random, json

Background:
For a static page, we can simply pass the URL shown in the browser's address bar to a GET request and easily obtain the page's data. Often, though, passing that URL to a GET request returns no data, right-clicking to view the page source shows no data either, and clicking through to page 2, page 3, and so on does not change the URL in the address bar at all. These are dynamic pages, for example:
Self-regulatory Measures - National Equities Exchange and Quotations (NEEQ): www.neeq.com.cn

Solution:
The key to scraping a dynamic page is to first analyze how the page requests its data and handles pagination, and only then write the code. Using the page above as an example, the rest of this article shows how to scrape a dynamic page's data with Python.
1. Analyze the logic of the page's data requests and pagination:
As shown above, after opening the page, press F12 to open Chrome's developer tools and click "Network" -> XHR (sometimes JS). Then click "2" in the pagination bar to jump to the second page; a new request appears in the panel on the left of the developer tools (the bottom, highlighted row in the lower-left screenshot). Clicking it displays the request's header information on the right. From the Headers tab we learn: Request URL is the URL the page actually requests; Request Method tells us this is a POST request; and Request Headers lists the headers the request needs to set. Since this is a POST request, we need to see what data it submits, so we keep scrolling down in the Headers panel on the right.
The Form Data section in the screenshot above shows that the POST request submits two key fields: disclosureType and page. That completes the analysis of how this dynamic page requests its data and paginates; next we implement the scraper in code.
2. Coding:
```python
# -*- coding: utf-8 -*-
"""
Created on Tue May 01 18:52:49 2018
@author: gmn
"""
import requests
import random
import json

# =============================================================================
# Settings to counter the site's anti-crawler measures
# =============================================================================
# User-Agent list (easy to find online), used to disguise the browser UA
USER_AGENTS = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Opera/9.80 (Windows NT 5.1; U; zh-cn) Presto/2.9.168 Version/11.50",
    "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
    "Mozilla/5.0 (Windows NT 5.2) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12",
]
# Proxy IP list, used to rotate the request's source address
IP_AGENTS = [
    "http://58.240.53.196:8080",
    "http://219.135.99.185:8088",
    "http://117.127.0.198:8080",
    "http://58.240.53.194:8080",
]
# Pick a random HTTP proxy
proxies = {"http": random.choice(IP_AGENTS)}
# =============================================================================
# The settings above counter the site's anti-crawler measures and are
# unrelated to the specific page being scraped.
# =============================================================================

# =============================================================================
# The settings below follow from the analysis in step 1: set the Cookie, url,
# headers and POST parameters according to the lower-right part of the
# step-1 screenshot.
# =============================================================================
# Set the cookie
Cookie = "Hm_lvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525162758; BIGipServerNEEQ_8000-NEW=83952564.16415.0000; JSESSIONID=E50D2B8270D728502754D4330CB0E275; Hm_lpvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525165761"
# Set the url of the dynamic request
url = 'http://www.neeq.com.cn/disclosureInfoController/infoResult.do?callback=jQuery18307528463705200819_1525173495230'
# Set the headers for the requests call
headers = {'User-agent': random.choice(USER_AGENTS),  # User-Agent, disguises the browser UA
           'Cookie': Cookie,
           'Connection': 'keep-alive',
           'Accept': 'text/javascript, application/javascript, application/ecmascript, application/x-ecmascript, */*; q=0.01',
           'Accept-Encoding': 'gzip, deflate',
           'Accept-Language': 'zh-CN,zh;q=0.9',
           'Host': 'www.neeq.com.cn',
           'Referer': 'http://www.neeq.com.cn/disclosure/supervise.html'}
# Set the page index
pageIndex = 0
# Set the POST parameters
data = {'page': pageIndex, 'disclosureType': 8}
# Send the POST request with requests
req = requests.post(url, data=data, headers=headers, proxies=proxies)
print(req.content)
# Printing req.content shows that the POST request returns JSON data,
# and that the data arrives as a string.
# Get the string containing the JSON data:
#str_data = req.content
## Extract the JSON part of the string:
#str_json = str_data[8:-2]
#print(str_json)
## Convert the JSON data to a dict:
#json_Info = json.loads(str_json)
```

The output is as follows:
We can see that the returned req.content is JSON data, but it is preceded by "jQuery18307528463705200819_1525173495230([" and followed by "])", so we need to strip those two parts and keep only the JSON in the middle. Notice that "jQuery18307528463705200819_1525173495230" is exactly the value of the url's "callback" parameter, so to get rid of the long run of digits after "jQuery" we can set "callback" to just "jQuery" (any other value would work too). The url therefore becomes:

http://www.neeq.com.cn/disclosureInfoController/infoResult.do?callback=jQuery

Running the code again gives:
We also find that req.content is a string (under Python 2.7 it is a str; under Python 3 it would be bytes), so we can use:

```python
# Extract the JSON part of the string
str_json = str_data[8:-2]
```

to keep just the JSON in the middle: 8 characters for the "jQuery([" prefix and 2 for the "])" suffix. The code now looks like this:
```python
# -*- coding: utf-8 -*-
"""
Created on Tue May 01 18:52:49 2018
@author: gmn
"""
import requests
import random
import json

# =============================================================================
# Settings to counter the site's anti-crawler measures
# =============================================================================
# User-Agent list (easy to find online), used to disguise the browser UA
USER_AGENTS = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Opera/9.80 (Windows NT 5.1; U; zh-cn) Presto/2.9.168 Version/11.50",
    "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
    "Mozilla/5.0 (Windows NT 5.2) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12",
]
# Proxy IP list, used to rotate the request's source address
IP_AGENTS = [
    "http://58.240.53.196:8080",
    "http://219.135.99.185:8088",
    "http://117.127.0.198:8080",
    "http://58.240.53.194:8080",
]
# Pick a random HTTP proxy
proxies = {"http": random.choice(IP_AGENTS)}
# =============================================================================
# The settings above counter the site's anti-crawler measures and are
# unrelated to the specific page being scraped.
# =============================================================================

# =============================================================================
# The settings below follow from the analysis in step 1: set the Cookie, url,
# headers and POST parameters according to the lower-right part of the
# step-1 screenshot.
# =============================================================================
# Set the cookie
Cookie = "Hm_lvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525162758; BIGipServerNEEQ_8000-NEW=83952564.16415.0000; JSESSIONID=E50D2B8270D728502754D4330CB0E275; Hm_lpvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525165761"
# Set the url of the dynamic request ("callback" simplified to "jQuery")
url = 'http://www.neeq.com.cn/disclosureInfoController/infoResult.do?callback=jQuery'
# Set the headers for the requests call
headers = {'User-agent': random.choice(USER_AGENTS),  # User-Agent, disguises the browser UA
           'Cookie': Cookie,
           'Connection': 'keep-alive',
           'Accept': 'text/javascript, application/javascript, application/ecmascript, application/x-ecmascript, */*; q=0.01',
           'Accept-Encoding': 'gzip, deflate',
           'Accept-Language': 'zh-CN,zh;q=0.9',
           'Host': 'www.neeq.com.cn',
           'Referer': 'http://www.neeq.com.cn/disclosure/supervise.html'}
# Set the page index
pageIndex = 0
# Set the POST parameters
data = {'page': pageIndex, 'disclosureType': 8}
# Send the POST request with requests
req = requests.post(url, data=data, headers=headers, proxies=proxies)
#print(req.content)
# Get the string containing the JSON data
str_data = req.content
# Extract the JSON part of the string
str_json = str_data[8:-2]
print(str_json)
# Convert the JSON data to a dict:
#json_Info = json.loads(str_json)
```

The output is as follows:
We can copy the printed str_json string into an online JSON viewer to analyze the structure of the data; the result is as follows:
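If you would rather stay offline, json.dumps with an indent gives a similar structured view. The sample string below is a made-up stand-in for the real str_json, and its key names are illustrative only, not the site's confirmed schema:

```python
import json

# Made-up stand-in for the real str_json printed above;
# the key names here are illustrative only.
str_json = '{"listInfo": {"content": [{"disclosureTitle": "Example", "publishDate": "2018-05-01"}]}}'
parsed = json.loads(str_json)
# Pretty-print the structure with two-space indentation
print(json.dumps(parsed, indent=2, ensure_ascii=False))
```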
From the screenshot on the right we can see how the JSON data is structured. Next, we convert str_json into a dict:

```python
# Convert the JSON data to a dict
json_Info = json.loads(str_json)
```

After that, the page's data can be extracted with ordinary dict operations.
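For instance, once json_Info is a dict, records can be pulled out with plain indexing and loops. The key names below (listInfo, content, disclosureTitle, publishDate) are illustrative guesses, not the site's confirmed schema; substitute the keys you actually saw in the JSON viewer:

```python
# Hypothetical structure standing in for the parsed NEEQ response;
# replace the keys with the ones observed in the JSON viewer.
json_Info = {
    "listInfo": {
        "content": [
            {"disclosureTitle": "Measure A", "publishDate": "2018-04-28"},
            {"disclosureTitle": "Measure B", "publishDate": "2018-04-27"},
        ]
    }
}

# Walk the list of records and print one line per disclosure
for item in json_Info["listInfo"]["content"]:
    print(item["publishDate"], item["disclosureTitle"])
```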
The complete code is as follows:
```python
# -*- coding: utf-8 -*-
"""
Created on Tue May 01 18:52:49 2018
@author: gmn
"""
import requests
import random
import json

# =============================================================================
# Settings to counter the site's anti-crawler measures
# =============================================================================
# User-Agent list (easy to find online), used to disguise the browser UA
USER_AGENTS = [
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E)",
    "Opera/9.80 (Windows NT 5.1; U; zh-cn) Presto/2.9.168 Version/11.50",
    "Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
    "Mozilla/5.0 (Windows NT 5.2) AppleWebKit/534.30 (KHTML, like Gecko) Chrome/12.0.742.122 Safari/534.30",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/2.0 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.71 Safari/537.1 LBBROWSER",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; WOW64; Trident/5.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; .NET4.0E; LBBROWSER)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SV1; QQDownload 732; .NET4.0C; .NET4.0E; 360SE)",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.84 Safari/535.11 SE 2.X MetaSr 1.0",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.2)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 5.0; Windows NT)",
    "Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3",
    "Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12",
]
# Proxy IP list, used to rotate the request's source address
IP_AGENTS = [
    "http://58.240.53.196:8080",
    "http://219.135.99.185:8088",
    "http://117.127.0.198:8080",
    "http://58.240.53.194:8080",
]
# Pick a random HTTP proxy
proxies = {"http": random.choice(IP_AGENTS)}
# =============================================================================
# The settings above counter the site's anti-crawler measures and are
# unrelated to the specific page being scraped.
# =============================================================================

# =============================================================================
# The settings below follow from the analysis in step 1: set the Cookie, url,
# headers and POST parameters according to the lower-right part of the
# step-1 screenshot.
# =============================================================================
# Set the cookie
Cookie = "Hm_lvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525162758; BIGipServerNEEQ_8000-NEW=83952564.16415.0000; JSESSIONID=E50D2B8270D728502754D4330CB0E275; Hm_lpvt_b58fe8237d8d72ce286e1dbd2fc8308c=1525165761"
# Set the url of the dynamic request ("callback" simplified to "jQuery")
url = 'http://www.neeq.com.cn/disclosureInfoController/infoResult.do?callback=jQuery'
# Set the headers for the requests call
headers = {'User-agent': random.choice(USER_AGENTS),  # User-Agent, disguises the browser UA
           'Cookie': Cookie,
           'Connection': 'keep-alive',
           'Accept': 'text/javascript, application/javascript, application/ecmascript, application/x-ecmascript, */*; q=0.01',
           'Accept-Encoding': 'gzip, deflate',
           'Accept-Language': 'zh-CN,zh;q=0.9',
           'Host': 'www.neeq.com.cn',
           'Referer': 'http://www.neeq.com.cn/disclosure/supervise.html'}
# Set the page index
pageIndex = 0
# Set the POST parameters
data = {'page': pageIndex, 'disclosureType': 8}
# Send the POST request with requests
req = requests.post(url, data=data, headers=headers, proxies=proxies)
#print(req.content)
# Get the string containing the JSON data
str_data = req.content
# Extract the JSON part of the string
str_json = str_data[8:-2]
#print(str_json)
# Convert the JSON data to a dict
json_Info = json.loads(str_json)
```

Notes:
Sometimes, even after following the steps above, it is still hard to pin down the URL that actually serves the data. In that case, consider scraping the dynamic page with selenium plus a browser driver (e.g. chromedriver), although this approach is considerably slower.
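For completeness, a minimal selenium sketch along those lines might look like the following. It assumes selenium is installed and a chromedriver matching your Chrome version is on PATH; the import is deferred into the function so the file can still be read and loaded without them:

```python
def fetch_dynamic_page(url):
    """Open a page in a real Chrome instance and return its rendered HTML.

    Requires selenium and a matching chromedriver on PATH; the import is
    deferred so this sketch can be parsed without them installed.
    """
    from selenium import webdriver

    driver = webdriver.Chrome()
    try:
        driver.get(url)
        # page_source holds the HTML *after* JavaScript has run,
        # so the dynamically loaded table rows are present in it
        return driver.page_source
    finally:
        driver.quit()


# Example call (launches a browser, so it is not executed here):
# html = fetch_dynamic_page("http://www.neeq.com.cn/disclosure/supervise.html")
```

The rendered HTML can then be parsed with any ordinary HTML parser, since from selenium's point of view the page is no longer "dynamic".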
Summary
The above is the complete walkthrough of scraping a dynamic web page with a Python crawler; I hope it helps you solve the problems you run into.