weibo.cn (HTML5) Weibo Crawler: Scraping Post Content and Followed Users
Because the desktop version of Weibo loads content through infinite scrolling, the traditional approach of fetching the HTML and parsing it no longer works. The mobile version (weibo.cn), however, shows a fixed number of posts per page, has no scroll-to-refresh behavior, and uses much simpler URLs: by modifying the URL and repeating the cookie-based login, you get paging for free. This post implements three functions: one that scrapes the blogger's profile information, one that scrapes the users they follow, and one that scrapes their original posts.
First, log in at https://weibo.cn. The login page may hang and never respond; I ran into the same problem and worked around it by logging in through a third-party app bound to my Weibo account. The whole point of logging in is to obtain a cookie, which is then used to simulate a logged-in session and scrape content.
How to get the cookie:
Step 1: log in to the mobile version of Weibo.
Step 2: open the browser's DevTools ("Inspect") -> right-click and reload the page -> Network -> Request Headers -> Cookie.
Note: the cookie stays valid for no more than 24 hours, so refresh it before every crawler run.
(Figure: getcookie.jpg, locating the Cookie value in DevTools)
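Since the cookie goes stale within a day, it can save debugging time to verify it before crawling. Here is a minimal sketch of my own (not from the original post); it assumes that weibo.cn answers a valid cookie with a 200 directly and sends expired sessions through a redirect to the login page:

import requests

cookie = {"Cookie": "yourCookie"}  # paste the Cookie header value from DevTools

def cookie_is_valid():
    # Assumption: an expired cookie triggers a redirect to the passport
    # login page, while a valid one returns the home timeline with 200.
    resp = requests.get("https://weibo.cn", cookies=cookie,
                        verify=False, allow_redirects=False)
    return resp.status_code == 200

if not cookie_is_valid():
    raise SystemExit("Cookie expired, grab a fresh one from DevTools")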
URL analysis:
Take the account @追風少年劉全有 as an example.
On mobile Weibo, a blogger's home page URL is https://weibo.cn/u/2150511032; https://weibo.cn/2150511032 works as well. The digit string is the user_id that uniquely identifies the blogger.
The page listing the users he follows is https://weibo.cn/2150511032/follow?page=1, i.e. the home URL with '/follow?page=x' appended; assigning different values to page turns the pages.
(Figure: follow.png, the follow-list page of the example account)
The first page of his post list is https://weibo.cn/2150511032/profile?page=1, i.e. the home URL with '/profile?page=x' appended.
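To make the three URL patterns concrete, a small illustration (the user_id is the example account above):

user_id = "2150511032"
home_url = f"https://weibo.cn/u/{user_id}"                   # blogger's home page
follow_url = f"https://weibo.cn/{user_id}/follow?page=1"     # followed users, page 1
profile_url = f"https://weibo.cn/{user_id}/profile?page=1"   # post list, page 1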
Without further ado, here's the code:
from bs4 import BeautifulSoup
import requests
from lxml import etree
import re
import pymysql.cursors
cookie = {"Cookie": "yourCookie"}  # paste the Cookie value copied from DevTools
# Scrape the URLs of the users the blogger follows
def get_guanzhu_urls(url):
    url_g = url + "/follow?page="
    url_init = url_g + "1"
    html = requests.get(url_init, cookies=cookie, verify=False).content  # log in via the cookie
    selector = etree.HTML(html)
    pageNum = int(selector.xpath('//input[@name="mp"]')[0].attrib['value'])  # total number of follow pages
    for page in range(1, pageNum + 1):
        url_new = url_g + str(page)
        html = requests.get(url_new, cookies=cookie, verify=False).content
        soup = BeautifulSoup(html, 'html5lib')
        person = soup.find_all('td')
        in_val = []
        try:
            conn = pymysql.connect(host='localhost', user='root', passwd='xxx', db='weibo_spider', charset='utf8mb4')
            with conn.cursor() as cursor:
                # every followed user occupies two <td> cells; the second one carries the profile link
                for i in range(len(person)):
                    if (i + 1) % 2 == 0:
                        user = (person[i].a.string, person[i].a.get("href"), url)
                        in_val.append(user)
                sql = "insert into `gurls`(`weiboid`,`url`,`prewid`) VALUES(%s,%s,%s)"
                cursor.executemany(sql, in_val)  # batch-insert into the database
                conn.commit()
        finally:
            conn.close()
# Scrape the blogger's own profile: ID, follower count, following count, and follow-list URL
def get_host_user_info(url):
    html = requests.get(url, cookies=cookie, verify=False).content
    soup = BeautifulSoup(html, 'html5lib')
    weiboid = soup.find_all('span', class_='ctt')[0].get_text().split()[0]  # screen name sits in the first <span class="ctt">
    myurl = url
    info = soup.find('div', class_='tip2')
    infolist = info.find_all('a')
    guanzhu_url = infolist[0].get("href")
    pattern_num = re.compile(r'.*\[(.*)\]')  # counts are rendered as "关注[123]" / "粉丝[456]"
    guanzhu_num = re.findall(pattern_num, infolist[0].string)[0]
    fan_num = re.findall(pattern_num, infolist[1].string)[0]
    print(weiboid, myurl, guanzhu_url, guanzhu_num, fan_num)
    try:
        conn = pymysql.connect(host='localhost', user='root', passwd='xxx', db='weibo_spider', charset='utf8mb4')
        with conn.cursor() as cursor:
            sql = "insert into `wusers`(`weiboid`,`myurl`,`follower_num`,`guanzhu_num`,`guanzhu_url`) VALUES (%s,%s,%s,%s,%s)"
            cursor.execute(sql, (weiboid, myurl, fan_num, guanzhu_num, guanzhu_url))
            conn.commit()
    finally:
        conn.close()
# Scrape the blogger's posts (original posts only)
def get_weibo_contents(url):
    html = requests.get(url, cookies=cookie, verify=False).content
    selector = etree.HTML(html)
    pageNum = int(selector.xpath('//input[@name="mp"]')[0].attrib['value'])  # total number of post pages
    soup = BeautifulSoup(html, 'html5lib')
    weiboid = soup.find_all('span', class_='ctt')[0].get_text().split()[0]
    print(weiboid)
    print(pageNum)
    for page in range(1, pageNum + 1):
        url_new = url + "?filter=1&page=" + str(page)  # filter=1 restricts the list to original posts
        print(url_new)
        html = requests.get(url_new, cookies=cookie, verify=False).content
        soup = BeautifulSoup(html, 'html5lib')
        content = soup.find_all('span', class_="ctt")  # post text
        comment = soup.find_all('a', class_="cc")      # "评论[N]" links
        in_weibo = []
        try:
            conn = pymysql.connect(host='localhost', user='root', passwd='xxx', db='weibo_spider', charset='utf8mb4')
            with conn.cursor() as cursor:
                pattern_num = re.compile(r'.*\[(.*)\]')
                for (con, com) in zip(content, comment):
                    com_num = re.findall(pattern_num, com.string)[0]  # comment count
                    weibo = (weiboid, con.get_text(), com_num)
                    in_weibo.append(weibo)
                sql = "insert into `weibos`(`weiboid`,`weibo_content`,`comment`) VALUES(%s,%s,%s)"
                cursor.executemany(sql, in_weibo)  # batch-insert into the database
                conn.commit()
        finally:
            conn.close()
if __name__ == '__main__':
    url = "https://weibo.cn/1496852380"
    # uncomment whichever step you want to run:
    # get_host_user_info(url)
    # get_guanzhu_urls(url)
    # get_weibo_contents(url)
I store the scraped data in MySQL; any other storage backend would work just as well.
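The post doesn't show the table definitions, so here is a minimal one-off setup sketch that matches the INSERT statements above; the table and column names come from the code, but the column types and sizes are my assumptions:

import pymysql

# Assumes the weibo_spider database already exists.
conn = pymysql.connect(host='localhost', user='root', passwd='xxx',
                       db='weibo_spider', charset='utf8mb4')
with conn.cursor() as cursor:
    cursor.execute("""CREATE TABLE IF NOT EXISTS `wusers` (
        `weiboid` VARCHAR(64), `myurl` VARCHAR(255),
        `follower_num` VARCHAR(16), `guanzhu_num` VARCHAR(16),
        `guanzhu_url` VARCHAR(255))""")
    cursor.execute("""CREATE TABLE IF NOT EXISTS `gurls` (
        `weiboid` VARCHAR(64), `url` VARCHAR(255), `prewid` VARCHAR(255))""")
    cursor.execute("""CREATE TABLE IF NOT EXISTS `weibos` (
        `weiboid` VARCHAR(64), `weibo_content` TEXT, `comment` VARCHAR(16))""")
conn.commit()
conn.close()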