Beginner Crawler, Part 2: A More Robust Little Crawler | 静觅

Here I am showing off again! Last time I walked you through a little practice crawler that downloads every image on www.mzitu.com. How did you get on with it?

If you ran it, you'll have noticed that it dies partway through: errors, dropped connections and the like. Hours of scraping wasted, and you have to start again from scratch. Heartbreaking!

That is the site's anti-crawler defences at work: an IP that hits the site too frequently gets thrown onto a blacklist for a while and released later. It barely affects normal browsing, but it's fatal for a crawler, because the crawler simply errors out and exits, and then we have to start all over again. (Worse still, this little crawler can't tell which pages it has already fetched and which it hasn't, so it genuinely restarts from the beginning. I'll cover how to fix that in the next post; in this one let's concentrate on beating the anti-crawler measures.)

For a proper definition of anti-crawler measures, I recommend this blog post: click here.

In general, the anti-crawler strategies we run into boil down to these:

  1. Limit how often a single IP may visit, and cut the connection once the limit is exceeded. (The fix is either to slow the crawler down, say with a time.sleep before each request, or to keep rotating proxy IPs; either way you slip past the check. A minimal throttling sketch follows after this list.)
  2. Count requests on the back end and block any single User-Agent that passes a threshold. (Remarkably effective, but the collateral damage is huge, so ordinary sites rarely use it; we'll cater for it anyway.)
  3. Cookie-based checks. (These are even easier to deal with, and most sites don't bother.)

Today we'll write a download module that handles points 1 and 2. Don't panic, it really is simple.
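Before we start, here is the most naive possible fix for point 1 on its own: a fixed pause before every request. It's only a minimal sketch for illustration (the helper name and the two-second delay are invented); the download module we build below goes the more thorough route of retries plus proxies.

import time
import requests

def polite_get(url):
    time.sleep(2)  ## wait two seconds before each request so we stay under the per-IP frequency limit
    return requests.get(url)

## resp = polite_get("http://www.mzitu.com")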

First, a heads-up: this time we'll use Python's re module to pull content out of the page. Its basic usage is simple, but do get familiar with it: here's a basic regular-expression tutorial (click here).
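If regular expressions are new to you, the two calls below are all we'll actually need in this post (the sample string is invented, just to mirror the shape of the proxy page we scrape later):

import re

sample = "<br/>1.2.3.4:80\n<br/>5.6.7.8:8080\n<br"
print(re.findall(r'r/>(.*?)<b', sample, re.S))  ## everything between 'r/>' and '<b'; re.S lets '.' match newlines -> ['1.2.3.4:80\n', '5.6.7.8:8080\n']
print(re.sub('\n', '', '1.2.3.4:80\n'))         ## replace every newline with nothing -> '1.2.3.4:80'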

As usual, we need the following modules:

requests

re (Python's regular-expression module)

random (a module for making random choices)

All of these came up in the last post. re and random ship with Python, so there's nothing to install ヾ§  ̄▽)ゞ2333333

As always, we start by importing the modules:

import requests
import re
import random

The plan: find a site that publishes free proxy IPs (a quick Baidu search turns up plenty), scrape the proxy IPs from it, and use them to fetch pages. While our own IP still works we use it directly; once it fails we switch to proxy IPs, and if the proxies fail six times in a row we drop them again. Let's get going ヽ(●-`Д′-)ノ

First, a basic function that requests a page and returns the response:

import requests
import re
import random
class download:
    def get(self, url):
        return requests.get(url)

Ha, easy, right?

That's only the bare minimum, though. As I said above, lots of sites refuse requests that don't come from a browser. How do they tell? By checking whether your request carries a normal User-Agent. What does one of those look like? Something like the browser strings listed a bit further down.

The User-Agent on a plain requests request looks roughly like python-requests/2.3.0 CPython/2.6.6 Windows/7. That is obviously not a normal browser User-Agent, so we have to forge one to fool the server and make it believe we're a real browser (requests has a headers parameter that lets us dress up as one; if you didn't know that, you clearly haven't read the official docs, tut tut o(一︿一+)o).

I also mentioned that some sites throttle requests per User-Agent, so let's hand the server a random one every time. A quick Baidu search for User-Agent strings turned up these:

"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
"Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
"Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
"Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
"Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"

Now let's rework the earlier code into this:

import requests
import re
import random
class download:
    def __init__(self):
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
    def get(self, url):
        UA = random.choice(self.user_agent_list) ## pick one string at random from self.user_agent_list (sharp readers will have noticed these are the part of a full User-Agent header that comes after the colon)
        headers = {'User-Agent': UA}  ## build it into a complete User-Agent header (UA is the string we just picked at random)
        response = requests.get(url, headers=headers) ## the server now thinks we're a real browser
        return response

Instantiate it yourself and check that the headers really do change between calls ε=ε=ε=(~ ̄▽ ̄)~
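A minimal check might look like the sketch below (the variable names are my own; response.request.headers holds the headers that were actually sent):

Xz = download()
for _ in range(3):
    response = Xz.get("http://www.mzitu.com")
    print(response.request.headers['User-Agent'])  ## should vary from call to call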

Good. There's still one point left to deal with: the IP-frequency limit.

First we need somewhere to get proxy IPs from. I found http://haoip.cc/tiqu.htm (I had meant to teach you how to maintain your own proxy pool, but that's a bit of a hassle; luckily this site, courtesy of its generous webmaster, lets me be lazy with a clear conscience ヾ(≧O≦)〃嗷~).

Let's scrape those IPs first. I was going to leave this as an exercise, but it needs a regular expression, and simple as that is, some of you may not be comfortable with it yet, so here it is:

        iplist = [] ## initialise a list to hold the IPs we scrape
        html = requests.get("http://haoip.cc/tiqu.htm") ## no explanation needed
        iplistn = re.findall(r'r/>(.*?)<b', html.text, re.S) ## grab everything that sits between r/> and <b in html.text; re.S lets the match span newlines; findall returns a list
        for ip in iplistn:
            i = re.sub('\n', '', ip) ## re.sub is re's substitution method; here it replaces every \n with nothing
            iplist.append(i.strip()) ## append it to the list we initialised above; i.strip() trims the surrounding whitespace (if that's news to you, your basics need work)
            print(i.strip())
        print(iplist)

Run it and print the results: each printed line is exactly what gets appended to our freshly initialised iplist.

Perfect! Now let's merge this code into what we wrote earlier and add a check for whether a proxy is being used:

import requests
import re
import random
class download:
    def __init__(self):
        self.iplist = []  ## initialise a list to hold the IPs we scrape
        html = requests.get("http://haoip.cc/tiqu.htm")  ## no explanation needed
        iplistn = re.findall(r'r/>(.*?)<b', html.text, re.S)  ## grab everything that sits between r/> and <b in html.text; re.S lets the match span newlines; findall returns a list
        for ip in iplistn:
            i = re.sub('\n', '', ip)  ## re.sub is re's substitution method; here it replaces every \n with nothing
            self.iplist.append(i.strip())  ## append it to the list we initialised above
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
    def get(self, url, proxy=None): ## give the function a proxy parameter that defaults to None
        UA = random.choice(self.user_agent_list) ## pick one string at random from self.user_agent_list
        headers = {'User-Agent': UA}  ## build it into a complete User-Agent header (UA is the string we just picked)
        if proxy == None: ## no proxy given: fetch the response without one (you do remember what a response is, right?)
            response = requests.get(url, headers=headers) ## the server now thinks we're a real browser
            return response ## return the response
        else: ## a proxy was given
            IP = ''.join(str(random.choice(self.iplist)).strip()) ## tidy the string pulled from self.iplist into the format we need (work out exactly what it does yourself, it's basic stuff)
            proxy = {'http': IP} ## build it into a proxy dict
            response = requests.get(url, headers=headers, proxies=proxy) ## fetch the response through the proxy
            return response
Xz = download() ## instantiate
print(Xz.get("http://www.mzitu.com").headers) ## print the response headers (note: requests needs the full URL, scheme included)

Feel free to test it yourselves.
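For instance, to force the proxy branch you can pass anything other than None as proxy; the method ignores the value and picks a random IP from self.iplist itself. Bear in mind that a free proxy may well time out or refuse the connection, and this version has no try/except yet, so it will simply raise in that case:

Xz = download()
resp = Xz.get("http://www.mzitu.com", proxy=True)  ## any non-None value sends us down the proxy branch
print(resp.status_code)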

Next we need to decide when a proxy is actually needed: after how many failures do we switch to a proxy, and after how many more do we give up on it. Let's change the code to this:

import requests
import re
import random
import time
class download:
    def __init__(self):
        self.iplist = []  ## initialise a list to hold the IPs we scrape
        html = requests.get("http://haoip.cc/tiqu.htm")  ## no explanation needed
        iplistn = re.findall(r'r/>(.*?)<b', html.text, re.S)  ## grab everything that sits between r/> and <b in html.text; re.S lets the match span newlines; findall returns a list
        for ip in iplistn:
            i = re.sub('\n', '', ip)  ## re.sub is re's substitution method; here it replaces every \n with nothing
            self.iplist.append(i.strip())  ## append it to the list we initialised above
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
    def get(self, url, timeout, proxy=None, num_retries=6): ## proxy defaults to None; num_retries is the retry budget
        UA = random.choice(self.user_agent_list) ## pick one string at random from self.user_agent_list
        headers = {'User-Agent': UA}  ## build it into a complete User-Agent header (UA is the string we just picked)
        if proxy == None: ## no proxy given: fetch the response without one (you do remember what a response is, right?)
            try:
                return requests.get(url, headers=headers, timeout=timeout) ## the server now thinks we're a real browser
            except: ## if the line above raises, run the code below
                if num_retries > 0: ## num_retries is the retry limit we set
                    time.sleep(10) ## wait ten seconds
                    print(u'Failed to fetch the page; retrying in 10s, attempts left:', num_retries)
                    return self.get(url, timeout, num_retries=num_retries - 1)  ## call ourselves with the count reduced by 1 (keyword argument, so the number doesn't land in the proxy parameter)
                else:
                    print(u'Switching to a proxy')
                    time.sleep(10)
                    IP = ''.join(str(random.choice(self.iplist)).strip()) ## explained below
                    proxy = {'http': IP}
                    return self.get(url, timeout, proxy) ## now proxy is no longer None
        else: ## a proxy was given
            IP = ''.join(str(random.choice(self.iplist)).strip()) ## tidy the string pulled from self.iplist into the format we need (work out exactly what it does yourself, it's basic stuff)
            proxy = {'http': IP} ## build it into a proxy dict
            return requests.get(url, headers=headers, proxies=proxy, timeout=timeout) ## fetch the response through the proxy
Xz = download() ## instantiate
print(Xz.get("http://www.mzitu.com", 3)) ## print the response (requests needs the full URL, scheme included)

Compared with the previous version, the code above adds a timeout (so a request can't hang forever) and num_retries=6 (after six failures without a proxy, we switch to one).
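If you want to watch the retry logic fire, point it at an address that is bound to time out (the address below is just a non-routable placeholder):

Xz = download()
## with a one-second timeout this should fail, print the retry message six times,
## and then fall back to a proxy from self.iplist
resp = Xz.get("http://10.255.255.1/", 1)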

Next, once the proxy has also failed six times we drop it again. Straight to the code:

import requests
import re
import random
import time
class download:
    def __init__(self):
        self.iplist = []  ## initialise a list to hold the IPs we scrape
        html = requests.get("http://haoip.cc/tiqu.htm")  ## no explanation needed
        iplistn = re.findall(r'r/>(.*?)<b', html.text, re.S)  ## grab everything that sits between r/> and <b in html.text; re.S lets the match span newlines; findall returns a list
        for ip in iplistn:
            i = re.sub('\n', '', ip)  ## re.sub is re's substitution method; here it replaces every \n with nothing
            self.iplist.append(i.strip())  ## append it to the list we initialised above
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
    def get(self, url, timeout, proxy=None, num_retries=6): ## proxy defaults to None; num_retries is the retry budget
        print(u'Fetching:', url)
        UA = random.choice(self.user_agent_list) ## pick one string at random from self.user_agent_list
        headers = {'User-Agent': UA}  ## build it into a complete User-Agent header (UA is the string we just picked)
        if proxy == None: ## no proxy given: fetch the response without one
            try:
                return requests.get(url, headers=headers, timeout=timeout) ## the server now thinks we're a real browser
            except: ## if the line above raises, run the code below
                if num_retries > 0: ## num_retries is the retry limit we set
                    time.sleep(10) ## wait ten seconds
                    print(u'Failed to fetch the page; retrying in 10s, attempts left:', num_retries)
                    return self.get(url, timeout, num_retries=num_retries - 1)  ## call ourselves with the count reduced by 1 (keyword argument, so the number doesn't land in the proxy parameter)
                else:
                    print(u'Switching to a proxy')
                    time.sleep(10)
                    IP = ''.join(str(random.choice(self.iplist)).strip()) ## explained below
                    proxy = {'http': IP}
                    return self.get(url, timeout, proxy) ## now proxy is no longer None
        else: ## a proxy was given
            try:
                IP = ''.join(str(random.choice(self.iplist)).strip()) ## tidy the string pulled from self.iplist into the format we need
                proxy = {'http': IP} ## build it into a proxy dict
                return requests.get(url, headers=headers, proxies=proxy, timeout=timeout) ## fetch the response through the proxy
            except:
                if num_retries > 0:
                    time.sleep(10)
                    IP = ''.join(str(random.choice(self.iplist)).strip())
                    proxy = {'http': IP}
                    print(u'Switching proxy; retrying in 10s, attempts left:', num_retries)
                    print(u'Current proxy:', proxy)
                    return self.get(url, timeout, proxy, num_retries - 1)
                else:
                    print(u'The proxies have stopped working too! Dropping the proxy')
                    return self.get(url, 3)

And that's a wrap: a reasonably robust download module. (A truly robust one would have more, of course: checking whether a URL is disallowed by robots.txt, telling server errors apart by status code, limiting crawl depth so you don't fall into a crawler trap, and so on.)

But I don't want to overload you, and the sites we usually deal with rarely set crawler traps anyway (and even when they do, the crawl is small enough that you'd notice it yourself).
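If you do want to go further, here's a rough sketch of what those extra checks could look like. Everything in it, the wrapper name and its structure, is my own illustration rather than part of the module above; only urllib.robotparser is a standard-library tool:

import urllib.robotparser

rp = urllib.robotparser.RobotFileParser()
rp.set_url("http://www.mzitu.com/robots.txt")
rp.read()

def careful_get(downloader, url, depth, max_depth=5):
    if depth > max_depth:            ## limit crawl depth so we can't fall into a crawler trap
        print(u'Too deep, skipping:', url)
        return None
    if not rp.can_fetch("*", url):   ## respect robots.txt
        print(u'Disallowed by robots.txt, skipping:', url)
        return None
    response = downloader.get(url, 3)
    if response.status_code >= 500:  ## treat 5xx as a server-side error and skip
        print(u'Server error, skipping:', url)
        return None
    return response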

Now let's plug this download module into the crawler from the previous post!

Usage is simple! ヾ(*′▽‘*) Put this .py file in the same folder as the previous post's crawler and create an empty file named __init__.py alongside it.

In the crawler, import the download module and have the class inherit from it; then replace every requests.get from the previous crawler with download.get, remembering to add the timeout argument. Enough talk, here's the code:

from bs4 import BeautifulSoup
import os
from Download import download
class mzitu(download):
    def all_url(self, url):
        html = download.get(self, url, 3) ## replaced here, and the timeout argument added
        all_a = BeautifulSoup(html.text, 'lxml').find('div', class_='all').find_all('a')
        for a in all_a:
            title = a.get_text()
            print(u'Saving:', title)
            path = str(title).replace("?", '_')
            self.mkdir(path)
            os.chdir("D:\mzitu\\"+path)
            href = a['href']
            self.html(href)
    def html(self, href):
        html = download.get(self, href, 3)
        max_span = BeautifulSoup(html.text, 'lxml').find_all('span')[10].get_text()
        for page in range(1, int(max_span) + 1):
            page_url = href + '/' + str(page)
            self.img(page_url)
    def img(self, page_url):
        img_html = download.get(self, page_url, 3) ## replaced here
        img_url = BeautifulSoup(img_html.text, 'lxml').find('div', class_='main-image').find('img')['src']
        self.save(img_url)
    def save(self, img_url):
        name = img_url[-9:-4]
        print(u'Saving:', img_url)
        img = download.get(self, img_url, 3) ## replaced here, and the timeout argument added
        f = open(name + '.jpg', 'ab')
        f.write(img.content)
        f.close()
    def mkdir(self, path): ## this function creates the folder
        path = path.strip()
        isExists = os.path.exists(os.path.join("D:\mzitu", path))
        if not isExists:
            print(u'Created a folder named', path)
            os.makedirs(os.path.join("D:\mzitu", path))
            return True
        else:
            print(u'A folder named', path, u'already exists!')
            return False
Mzitu = mzitu() ## instantiate
Mzitu.all_url('http://www.mzitu.com/all') ## pass the starting URL to all_url; think of it as the crawler's entry point

Done! Compare it with the crawler we wrote last time and you'll see exactly what we changed.

 

Update, 2016/11/4: while putting together a tutorial today I noticed a problem I had overlooked. The version above has the crawler class inherit from the download class, and written that way the subclass can't define its own __init__ without extra plumbing. So I've changed the structure (nothing else changes, don't worry). Code below:

First the download module (Download.py):

import requests
import re
import random
import time
class download():
    def __init__(self):
        self.iplist = []  ## initialise a list to hold the IPs we scrape
        html = requests.get("http://haoip.cc/tiqu.htm")  ## no explanation needed
        iplistn = re.findall(r'r/>(.*?)<b', html.text, re.S)  ## grab everything that sits between r/> and <b in html.text; re.S lets the match span newlines; findall returns a list
        for ip in iplistn:
            i = re.sub('\n', '', ip)  ## re.sub is re's substitution method; here it replaces every \n with nothing
            self.iplist.append(i.strip())  ## append it to the list we initialised above
        self.user_agent_list = [
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
            "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_0) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
            "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
            "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
            "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
        ]
    def get(self, url, timeout, proxy=None, num_retries=6): ## proxy defaults to None; num_retries is the retry budget
        UA = random.choice(self.user_agent_list) ## pick one string at random from self.user_agent_list
        headers = {'User-Agent': UA}  ## build it into a complete User-Agent header (UA is the string we just picked)
        if proxy == None: ## no proxy given: fetch the response without one
            try:
                return requests.get(url, headers=headers, timeout=timeout) ## the server now thinks we're a real browser
            except: ## if the line above raises, run the code below
                if num_retries > 0: ## num_retries is the retry limit we set
                    time.sleep(10) ## wait ten seconds
                    print(u'Failed to fetch the page; retrying in 10s, attempts left:', num_retries)
                    return self.get(url, timeout, num_retries=num_retries - 1)  ## call ourselves with the count reduced by 1 (keyword argument, so the number doesn't land in the proxy parameter)
                else:
                    print(u'Switching to a proxy')
                    time.sleep(10)
                    IP = ''.join(str(random.choice(self.iplist)).strip()) ## explained below
                    proxy = {'http': IP}
                    return self.get(url, timeout, proxy) ## now proxy is no longer None
        else: ## a proxy was given
            try:
                IP = ''.join(str(random.choice(self.iplist)).strip()) ## tidy the string pulled from self.iplist into the format we need
                proxy = {'http': IP} ## build it into a proxy dict
                return requests.get(url, headers=headers, proxies=proxy, timeout=timeout) ## fetch the response through the proxy
            except:
                if num_retries > 0:
                    time.sleep(10)
                    IP = ''.join(str(random.choice(self.iplist)).strip())
                    proxy = {'http': IP}
                    print(u'Switching proxy; retrying in 10s, attempts left:', num_retries)
                    print(u'Current proxy:', proxy)
                    return self.get(url, timeout, proxy, num_retries - 1)
                else:
                    print(u'The proxies have stopped working too! Dropping the proxy')
                    return self.get(url, 3)
request = download()

The only thing added to this module is the request = download() line at the end.

Second, the crawler itself (mzitu.py):

from bs4 import BeautifulSoup
import os
from Download import request ## the import changed a little
from pymongo import MongoClient
class mzitu():
    def all_url(self, url):
        html = request.get(url, 3) ## changed here (notice the self is gone?)
        all_a = BeautifulSoup(html.text, 'lxml').find('div', class_='all').find_all('a')
        for a in all_a:
            title = a.get_text()
            print(u'Saving:', title)
            path = str(title).replace("?", '_')
            self.mkdir(path)
            os.chdir("D:\mzitu\\"+path)
            href = a['href']
            self.html(href)
    def html(self, href):
        html = request.get(href, 3) ## changed here (notice the self is gone?)
        max_span = BeautifulSoup(html.text, 'lxml').find_all('span')[10].get_text()
        for page in range(1, int(max_span) + 1):
            page_url = href + '/' + str(page)
            self.img(page_url)
    def img(self, page_url):
        img_html = request.get(page_url, 3) ## changed here (notice the self is gone?)
        img_url = BeautifulSoup(img_html.text, 'lxml').find('div', class_='main-image').find('img')['src']
        self.save(img_url)
    def save(self, img_url):
        name = img_url[-9:-4]
        print(u'Saving:', img_url)
        img = request.get(img_url, 3) ## changed here (notice the self is gone?)
        f = open(name + '.jpg', 'ab')
        f.write(img.content)
        f.close()
    def mkdir(self, path):
        path = path.strip()
        isExists = os.path.exists(os.path.join("D:\mzitu", path))
        if not isExists:
            print(u'Created a folder named', path)
            os.makedirs(os.path.join("D:\mzitu", path))
            return True
        else:
            print(u'A folder named', path, u'already exists!')
            return False
Mzitu = mzitu() ## instantiate
Mzitu.all_url('http://www.mzitu.com/all') ## pass the starting URL to all_url; think of it as the crawler's entry point

I've marked every change clearly; take a careful look and compare.

When reposting, please credit: 静觅 » Beginner Crawler, Part 2: A More Robust Little Crawler
