# I. Understanding Web Page Structure
# 1.1 HTML
HTML (Hyper Text Markup Language) is a markup language for building web pages; it is not a programming language.
1.1.1 Structure of HTML: every entity in an HTML document is wrapped in tags, which determine how it is displayed and what role it plays. A crawler locates the information it wants by these tags.
(1) head: holds the page's meta information, i.e. content intended for the browser;
(2) body: holds the page content shown to the user, such as videos, images, and text.
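As a minimal illustration of this split (the HTML string and its contents below are made up for this sketch), the `<title>` in the head is meta information for the browser, while the `<h1>` in the body is content for the reader; a crawler can pick out either by its tag:

```python
import re

# A made-up minimal page: <head> carries meta information, <body> carries content
page = """<!DOCTYPE html>
<html>
<head><title>Demo page</title></head>
<body><h1>Hello, reader</h1></body>
</html>"""

# A crawler locates information by its tags
title = re.findall(r'<title>(.*?)</title>', page)[0]  # meta info, for the browser
heading = re.findall(r'<h1>(.*?)</h1>', page)[0]      # content, for the user
print(title)    # Demo page
print(heading)  # Hello, reader
```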
# 1.2 Working with Web Pages in Python
1.2.1 Opening a page with Python and printing its contents
```python
from urllib.request import urlopen

# if the page contains Chinese, apply decode()
html = urlopen("https://morvanzhou.github.io/static/scraping/basic-structure.html").read().decode('utf-8')
print(html)
```
Output:
```html
<!DOCTYPE html>
<html lang="cn">
<head>
	<meta charset="UTF-8">
	<title>Scraping tutorial 1 | 莫烦Python</title>
	<link rel="icon" href="https://morvanzhou.github.io/static/img/description/tab_icon.png">
</head>
<body>
	<h1>爬虫测试1</h1>
	<p>
		这是一个在 <a href="https://morvanzhou.github.io/">莫烦Python</a>
		<a href="https://morvanzhou.github.io/tutorials/data-manipulation/scraping/">爬虫教程</a> 中的简单测试.
	</p>
</body>
</html>
```
1.2.2 Matching page content with Python regular expressions
```python
import re

res = re.findall(r'<p>(.*?)</p>', html, flags=re.DOTALL)
print('\nPage paragraph is ', res[0])
```
Output:
```
Page paragraph is  这是一个在 <a href="https://morvanzhou.github.io/">莫烦Python</a> <a href="https://morvanzhou.github.io/tutorials/data-manipulation/scraping/">爬虫教程</a> 中的简单测试.
```
```python
res = re.findall(r'href="(.*?)"', html)
print('\nAll links: ')
for r in res:
    print(r)
```
Output:
```
All links: 
https://morvanzhou.github.io/static/img/description/tab_icon.png
https://morvanzhou.github.io/
https://morvanzhou.github.io/tutorials/data-manipulation/scraping/
```
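Two details in the patterns above deserve a note: `.*?` is a non-greedy match, which stops at the first closing delimiter instead of the last, and `flags=re.DOTALL` lets `.` also match newlines, which matters because a `<p>` block can span several lines. A small illustration on made-up strings:

```python
import re

text = "<p>one</p><p>two</p>"
# Greedy: '.*' swallows everything up to the LAST </p>
print(re.findall(r'<p>(.*)</p>', text))   # ['one</p><p>two']
# Non-greedy: '.*?' stops at the first </p>
print(re.findall(r'<p>(.*?)</p>', text))  # ['one', 'two']

multiline = "<p>line 1\nline 2</p>"
# Without DOTALL, '.' does not match '\n', so nothing is found
print(re.findall(r'<p>(.*?)</p>', multiline))                   # []
# With DOTALL, the match crosses the line break
print(re.findall(r'<p>(.*?)</p>', multiline, flags=re.DOTALL))  # ['line 1\nline 2']
```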
# II. Parsing Web Pages with BeautifulSoup
# 2.1 Introduction to BeautifulSoup
- Documentation (Chinese): https://www.crummy.com/software/BeautifulSoup/bs4/doc/index.zh.html

Beautiful Soup is a Python library for extracting data from HTML and XML files; it takes the place of hand-written regular expressions for selecting tags and other information from HTML.
Steps for scraping a site with Beautiful Soup:
- Pick the target URL
- Open the URL with Python (urlopen, etc.)
- Read the page content (read(), etc.)
- Feed the content to Beautiful Soup
- Use Beautiful Soup to select tags and other information
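The five steps can be sketched end to end. To keep the sketch runnable offline, it uses an inline HTML string in place of the urlopen/read steps and the standard library's `html.parser` as a stand-in for Beautiful Soup; the class name `LinkCollector` and the example URLs are made up:

```python
from html.parser import HTMLParser

# Steps 1-3 (pick URL, urlopen, read) are replaced by an inline page here,
# so the sketch runs without network access; real code would use
# urlopen(url).read().decode('utf-8') as in section 1.2.1.
html = "<html><body><a href='https://example.com/a'>A</a><a href='https://example.com/b'>B</a></body></html>"

# Steps 4-5: feed the markup to a parser and select tag information;
# this stand-in collects every href the way soup.find_all('a') would
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href')

parser = LinkCollector()
parser.feed(html)
print(parser.links)  # ['https://example.com/a', 'https://example.com/b']
```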
# 2.2 Worked Example
```python
from bs4 import BeautifulSoup

soup = BeautifulSoup(html, features='lxml')
print(soup.h1)
print(soup.p)

all_href = soup.find_all('a')
print(all_href)
all_href = [item['href'] for item in all_href]
print(all_href)
```
Output:
```
<h1>爬虫测试1</h1>
<p>
		这是一个在 <a href="https://morvanzhou.github.io/">莫烦Python</a>
		<a href="https://morvanzhou.github.io/tutorials/data-manipulation/scraping/">爬虫教程</a> 中的简单测试.
	</p>
[<a href="https://morvanzhou.github.io/">莫烦Python</a>, <a href="https://morvanzhou.github.io/tutorials/data-manipulation/scraping/">爬虫教程</a>]
['https://morvanzhou.github.io/', 'https://morvanzhou.github.io/tutorials/data-manipulation/scraping/']
```
# III. Beautiful Soup and CSS
Cascading Style Sheets (CSS) is a language for describing the presentation of documents written in HTML or XML (a subset of SGML). Put simply, CSS decorates HTML.
# 3.1 CSS Classes
When CSS styles a page element, it gives the element a class name, and elements of the same kind can share the same name.
```python
from bs4 import BeautifulSoup
from urllib.request import urlopen

html = urlopen('https://morvanzhou.github.io/static/scraping/list.html').read().decode('utf-8')
print(html)
```
Output:
```html
<!DOCTYPE html>
<html lang="cn">
<head>
	<meta charset="UTF-8">
	<title>爬虫练习 列表 class | 莫烦 Python</title>
	<style>
	.jan { background-color: yellow; }
	.feb { font-size: 25px; }
	.month { color: red; }
	</style>
</head>
<body>
	<h1>列表 爬虫练习</h1>
	<p>这是一个在 莫烦 Python 的 爬虫教程
	里无敌简单的网页, 所有的 code 让你一目了然, 清晰无比.</p>
	<ul>
		<li class="month">一月</li>
		<ul class="jan">
			<li>一月一号</li> <li>一月二号</li> <li>一月三号</li>
		</ul>
		<li class="feb month">二月</li> <li class="month">三月</li> <li class="month">四月</li> <li class="month">五月</li>
	</ul>
</body>
</html>
```
Extract everything with class month:
```python
soup = BeautifulSoup(html, features='lxml')
month = soup.find_all('li', {'class': 'month'})
for m in month:
    print(m.get_text())
```
Output:
```
一月
二月
三月
四月
五月
```
Extract the month + day entries (the items inside the jan sub-list):
```python
jan = soup.find('ul', {"class": "jan"})
d_jan = jan.find_all('li')
for d in d_jan:
    print(d.get_text())
```
Output:
```
一月一号
一月二号
一月三号
```
# 3.2 Matching Pages with Regular Expressions
```python
from bs4 import BeautifulSoup
from urllib.request import urlopen
import re

html = urlopen("https://morvanzhou.github.io/static/scraping/table.html").read().decode('utf-8')
print(html)

# Find the links of all images
soup = BeautifulSoup(html, features='lxml')
img_links = soup.find_all("img", {"src": re.compile(r'.*?\.jpg')})
for link in img_links:
    print(link['src'])
# Output:
# https://morvanzhou.github.io/static/img/course_cover/tf.jpg
# https://morvanzhou.github.io/static/img/course_cover/rl.jpg
# https://morvanzhou.github.io/static/img/course_cover/scraping.jpg
```
```python
# Find all course links: those starting with https://morvan
course_links = soup.find_all('a', {'href': re.compile('https://morvan.*')})
for link in course_links:
    print(link['href'])
# Output:
# https://morvanzhou.github.io/
# https://morvanzhou.github.io/tutorials/data-manipulation/scraping/
# https://morvanzhou.github.io/tutorials/machine-learning/tensorflow/
# https://morvanzhou.github.io/tutorials/machine-learning/reinforcement-learning/
# https://morvanzhou.github.io/tutorials/data-manipulation/scraping/
```
# IV. Exercise: Crawling Baidu Baike with BeautifulSoup
```python
from bs4 import BeautifulSoup
from urllib.request import urlopen
import re
import random

base_url = "https://baike.baidu.com"
# History of visited pages, starting from the "网络爬虫" (web crawler) entry
his = ["/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711"]

for i in range(20):
    url = base_url + his[-1]
    html = urlopen(url).read().decode('utf-8')
    soup = BeautifulSoup(html, features='lxml')
    print(i, soup.find('h1').get_text(), '    url: https://baike.baidu.com' + his[-1])
    # print('    url', his[-1])

    # Collect links to other encyclopedia entries on this page
    sub_urls = soup.find_all("a", {"target": "_blank", "href": re.compile("^/item/(%.{2})+$")})
    if len(sub_urls) != 0:
        # Follow one randomly chosen sub-link
        his.append(random.sample(sub_urls, 1)[0]['href'])
    else:
        # Dead end: backtrack to the previous page
        his.pop()
```
Output:
```
0 网络爬虫     url: https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB/5162711
1 深度优先策略     url: https://baike.baidu.com/item/%E6%B7%B1%E5%BA%A6%E4%BC%98%E5%85%88%E7%AD%96%E7%95%A5
2 网络爬虫     url: https://baike.baidu.com/item/%E7%BD%91%E7%BB%9C%E7%88%AC%E8%99%AB
3 斯坦福大学     url: https://baike.baidu.com/item/%E6%96%AF%E5%9D%A6%E7%A6%8F
4 欧洲     url: https://baike.baidu.com/item/%E6%AC%A7%E6%B4%B2
5 义项     url: https://baike.baidu.com/item/%E4%B9%89%E9%A1%B9
6 保尔·柯察金     url: https://baike.baidu.com/item/%E4%BF%9D%E5%B0%94%C2%B7%E6%9F%AF%E5%AF%9F%E9%87%91
7 神父     url: https://baike.baidu.com/item/%E7%A5%9E%E7%88%B6
8 北京大学     url: https://baike.baidu.com/item/%E5%8C%97%E4%BA%AC%E5%A4%A7%E5%AD%A6
9 基础学科拔尖学生培养试验计划     url: https://baike.baidu.com/item/%E5%9F%BA%E7%A1%80%E5%AD%A6%E7%A7%91%E6%8B%94%E5%B0%96%E5%AD%A6%E7%94%9F%E5%9F%B9%E5%85%BB%E8%AF%95%E9%AA%8C%E8%AE%A1%E5%88%92
10 中华人民共和国教育部     url: https://baike.baidu.com/item/%E6%95%99%E8%82%B2%E9%83%A8
11 厦门大学     url: https://baike.baidu.com/item/%E5%8E%A6%E9%97%A8%E5%A4%A7%E5%AD%A6
12 吴宣恭     url: https://baike.baidu.com/item/%E5%90%B4%E5%AE%A3%E6%81%AD
13 经济学动态     url: https://baike.baidu.com/item/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8A%A8%E6%80%81
14 吴宣恭     url: https://baike.baidu.com/item/%E5%90%B4%E5%AE%A3%E6%81%AD
15 经济学动态     url: https://baike.baidu.com/item/%E7%BB%8F%E6%B5%8E%E5%AD%A6%E5%8A%A8%E6%80%81
16 吴宣恭     url: https://baike.baidu.com/item/%E5%90%B4%E5%AE%A3%E6%81%AD
17 五个一工程     url: https://baike.baidu.com/item/%E4%BA%94%E4%B8%AA%E4%B8%80%E5%B7%A5%E7%A8%8B
18 河北省委宣传部     url: https://baike.baidu.com/item/%E6%B2%B3%E5%8C%97%E7%9C%81%E5%A7%94%E5%AE%A3%E4%BC%A0%E9%83%A8
19 五个一工程     url: https://baike.baidu.com/item/%E4%BA%94%E4%B8%AA%E4%B8%80%E5%B7%A5%E7%A8%8B
```
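The history list `his` drives a random walk with backtracking: each page appends one random sub-link, and a page with no qualifying links is popped so the crawler retreats one step (which is why some titles repeat in the output above). The control flow can be checked offline with a hypothetical link graph; the graph and page names below are made up for this sketch:

```python
import random

# Hypothetical link graph: page -> pages it links to
graph = {
    "/item/A": ["/item/B", "/item/C"],
    "/item/B": ["/item/A"],
    "/item/C": [],          # dead end: no qualifying sub-links
}

random.seed(0)  # make the walk reproducible
his = ["/item/A"]
for i in range(5):
    sub_urls = graph[his[-1]]
    if len(sub_urls) != 0:
        his.append(random.sample(sub_urls, 1)[0])  # follow a random link
    else:
        his.pop()  # backtrack from the dead end
    print(i, his[-1])
```
Because pages are only popped after having been appended, the starting page always remains in `his`, so `his[-1]` is never out of range.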