
What data can Python crawl?

I. Crawling the first-level links we need

channel_extract.py

The first-level links here are what I call the major category links:

from bs4 import BeautifulSoup
import requests

start_url = '/wu/'
host_url = '/'

# Pull every category (channel) link out of the navigation page:
def get_channel_urls(url):
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    links = soup.select('.fenlei > dt > a')
    # print(links)
    for link in links:
        page_url = host_url + link.get('href')
        print(page_url)

# get_channel_urls(start_url)

# The collected category links, kept as a multi-line string:
channel_urls = '''
    /jiaju/
    /rirongbaihuo/
    /shouji/
    /bangong/
    /nongyongpin/
    /jiadian/
    /ershoubijibendiannao/
    /ruanjiantushu/
    /yingyouyunfu/
    /diannao/
    /xianzhilipin/
    /fushixiaobaxuemao/
    /meironghuazhuang/
    /shuma/
    /laonianyongpin/
    /xuniwupin/
'''

Taking my crawl of 58同城 (58.com) as an example, this means crawling the links of every category in the second-hand market, which is what I call the major category links.

Find the common features of these links, output them with a function, and store them as a multi-line string.
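As a quick sanity check, the multi-line string channel_urls defined above can be turned back into a list of channel paths with split(); a small sketch:

from channel_extract import channel_urls

channels = channel_urls.split()   # split on whitespace: one channel path per entry
print(len(channels))              # 16 categories in the string above
print(channels[:3])               # ['/jiaju/', '/rirongbaihuo/', '/shouji/']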

II. Getting the detail-page links and the detail information we need

page_parsing.py

1. A few words about our database:

Let's look at the code first:

# import the libraries
from bs4 import BeautifulSoup
import requests
import pymongo   # the Python driver for MongoDB
import re
import time

# connect to MongoDB and set up the database
client = pymongo.MongoClient('localhost', 27017)
ceshi = client['ceshi']                    # create the ceshi database
ganji_url_list = ceshi['ganji_url_list']   # collection ("table") for the detail-page URLs
ganji_url_info = ceshi['ganji_url_info']   # collection for the parsed item details
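For a quick smoke test of the connection (purely illustrative; it only inserts, reads back, and deletes one throwaway document):

ganji_url_list.insert_one({'url': 'http://example.com/test/'})        # write a test document
print(ganji_url_list.find_one({'url': 'http://example.com/test/'}))   # read it back
ganji_url_list.delete_one({'url': 'http://example.com/test/'})        # clean it up again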

2. Check whether the page structure matches the structure we expect; sometimes, for example, we get a 404 page. A minimal sketch of this check follows.
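This sketch assumes, as the code further down does, that the '.pageBox' pagination element only appears on a valid listing page (the URL below is a placeholder, not the real site):

import requests
from bs4 import BeautifulSoup

list_view_url = 'http://example.com/jiaju/o1/'   # placeholder URL
wb_data = requests.get(list_view_url)
if wb_data.status_code == 404:
    print('page not found')
else:
    soup = BeautifulSoup(wb_data.text, 'lxml')
    if soup.select('.pageBox'):                  # marker found: the structure we expect
        print('structure matches, safe to parse')
    else:
        print('unexpected structure, e.g. a soft 404 page')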

3. Extract the links we want from the page, i.e. the link to each detail page.

The one technique worth mentioning here is:

item_link = link.get('href').split('?')[0]

What type is this link, and what exactly is this get method?

It turns out that the type is

<class 'bs4.element.Tag'>

If we want to fetch a single attribute on its own, we can do it like this; for example, to see what its class is:

print(soup.p['class'])
# ['title']

We can also use the get method and pass in the attribute name; the two are equivalent:

print(soup.p.get('class'))
# ['title']
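Putting these pieces together, here is a minimal, self-contained illustration; the HTML snippet and the href value are made up purely for demonstration:

from bs4 import BeautifulSoup

html = '<p class="title"><a href="/item/123.html?from=list">an item</a></p>'
soup = BeautifulSoup(html, 'lxml')

link = soup.select('p > a')[0]
print(type(link))                       # <class 'bs4.element.Tag'>
print(link['href'])                     # /item/123.html?from=list
print(link.get('href'))                 # the same value, fetched via get()
print(link.get('href').split('?')[0])   # /item/123.html -- query string stripped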

Here is the code:

# Crawl the detail-page links of all items:
def get_type_links(channel, num):
    list_view = '{0}o{1}/'.format(channel, str(num))
    # print(list_view)
    wb_data = requests.get(list_view)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    linkOn = soup.select('.pageBox')   # marker that tells us this is the kind of page we want
    # If the selector copied from the browser looks like
    # div.pageBox > ul > li:nth-child(1) > a > span, the :nth-child(1) part must be removed
    # print(linkOn)
    if linkOn:
        link = soup.select('.zz > .zz-til > a')
        link_2 = soup.select('.js-item > a')
        link = link + link_2
        # print(len(link))
        for linkc in link:
            linkc = linkc.get('href')
            ganji_url_list.insert_one({'url': linkc})
            print(linkc)
    else:
        pass
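A quick check of the pagination format used to build list_view (purely illustrative; the real channel strings in the original tutorial include the site host):

channel = '/jiaju/'
for num in range(1, 4):
    print('{0}o{1}/'.format(channel, str(num)))
# /jiaju/o1/
# /jiaju/o2/
# /jiaju/o3/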

4. Crawl the information we need from the detail page

Here is a piece of code:

# Parse one 趕集網 (Ganji.com) detail page:
def get_url_info_ganji(url):
    time.sleep(1)
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    try:
        title = soup.select('head > title')[0].text
        timec = soup.select('.pr-5')[0].text.strip()
        type = soup.select('.det-infor > li > span > a')[0].text
        price = soup.select('.det-infor > li > i')[0].text
        place = soup.select('.det-infor > li > a')[1:]
        placeb = []
        for placec in place:
            placeb.append(placec.text)
        tag = soup.select('.second-dt-bewrite > ul > li')[0].text
        tag = ''.join(tag.split())
        # print(timec.split())
        data = {
            'url': url,
            'title': title,
            'time': timec.split(),
            'type': type,
            'price': price,
            'place': placeb,
            'new': tag
        }
        ganji_url_info.insert_one(data)   # insert one document into the database
        print(data)
    except IndexError:
        pass
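The split()/join() calls above are only a whitespace-cleaning trick; a quick illustration with made-up strings:

raw_tag = '\n  9成新  \n  包郵  \n'      # made-up example of a raw tag string
print(''.join(raw_tag.split()))          # '9成新包郵' -- every bit of whitespace removed

raw_time = '  3月15日  發布  '           # made-up example of a raw time string
print(raw_time.strip().split())          # ['3月15日', '發布'] -- split into tokens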

III. How do we write the main function?

main.py

Look at the code:

# First import functions and data from the other files:
from multiprocessing import Pool
from page_parsing import get_type_links, get_url_info_ganji, ganji_url_list
from channel_extract import channel_urls

# Function that crawls all listing pages of one channel:
def get_all_links_from(channel):
    for i in range(1, 100):
        get_type_links(channel, i)

# Run this second, to crawl every detail page:
# if __name__ == '__main__':
#     pool = Pool()
#     pool.map(get_url_info_ganji, [url['url'] for url in ganji_url_list.find()])
#     pool.close()
#     pool.join()

# Run this first, to collect all the detail-page links:
if __name__ == '__main__':
    pool = Pool()
    pool.map(get_all_links_from, channel_urls.split())
    pool.close()
    pool.join()
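Once the link collection has finished, comment out the first-stage block and run the second stage; written out, it is just the uncommented form of the block above:

# Stage 2: parse every stored detail-page URL in parallel.
if __name__ == '__main__':
    pool = Pool()
    pool.map(get_url_info_ganji, [url['url'] for url in ganji_url_list.find()])
    pool.close()
    pool.join()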

IV. The counting program

count.py

It displays how many records have been crawled:

import time
from page_parsing import ganji_url_list, ganji_url_info

while True:
    # print(ganji_url_list.find().count())
    # time.sleep(5)
    print(ganji_url_info.find().count())
    time.sleep(5)
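One note: in recent pymongo releases (4.0 and later) Cursor.count() has been removed, so on a newer driver the equivalent count would be:

print(ganji_url_info.count_documents({}))   # replacement for find().count() in pymongo >= 4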