Scraping Tencent WeGame's Privacy Policies

Author: zuijiapangzi  Categories: Python, WeGame  Published: 2021-11-07 15:20

Preface:

On November 1, 2021, the Personal Information Protection Law officially took effect, and major apps and websites began updating their privacy policies. When I opened WeGame to play a few rounds of League of Legends, I noticed that its game privacy terms had been updated as well. That reminded me of a video I had seen on Bilibili: the updated terms are quite alarming, and agreeing to them is mandatory. So I decided to scrape the privacy policies Tencent currently publishes and keep a backup. The result is linked below, although some pages apparently were not captured; I will look into that later. Finally, congratulations to EDG!!!

Link: https://www.aliyundrive.com/s/ywHQ5vF3emn

  • > Article 16: A personal information handler may not refuse to provide products or services on the grounds that an individual does not consent to the processing of their personal information or has withdrawn consent, except where processing the personal information is necessary for providing the products or services.

The scraping code is below. It reads seed URLs from url.txt, fetches each page, saves a plain-text and an HTML copy, and then follows the qq.com links found in the page body.

# -*- coding:UTF-8 -*-

import re
from urllib import request
from bs4 import BeautifulSoup
import time

# Globals: request User-Agent, visited URLs, and pages without a recognizable content div
user_agent = "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/95.0.4638.69 Safari/537.36"
all_url = []
other_url = []

# Read seed URLs from a file (one per line) and crawl each one
def gettab(file_url):
    print("-------  reading URLs from file  ------")
    with open('./' + file_url, 'r', encoding='utf-8') as f:
        url_pool = f.readlines()
    if url_pool:
        for url in url_pool:
            url = url.strip()
            if url and url not in all_url:
                all_url.append(url)
                getpage(url)
    else:
        print('no URLs found in:', file_url)

# Fetch one page, save its text and raw HTML, then follow the links in its body
def getpage(url):
    print("--  fetching page:", url)
    time.sleep(5)    # be polite: pause between requests
    req = request.Request(url, headers={'User-Agent': user_agent})
    try:
        reqpose = request.urlopen(req).read()
        soup = BeautifulSoup(reqpose, 'html.parser')
        div = soup.find('div', class_='content')    # policy pages keep their body in div.content
        if div is not None:
            text = div.text.replace('\n\n', '\n').replace('\n\n', '\n')
            author_tag = soup.find('p', class_='author')
            author = author_tag.text if author_tag else 'no author'
            title = soup.find('title').text.split('-')[0].replace(' ', '')
            spage(title, author, text, url, 'txt')        # plain-text copy
            spage(title, author, str(soup), url, 'html')  # raw HTML backup
            for a in div.find_all('a', attrs={'target': "_blank"}):
                testurl(a.get('href'))                    # queue linked policy pages
        else:
            other_url.append(url)    # no content div: keep the URL for manual review
    except Exception as e:
        print('error:', e, url)

# Write one page to disk as <title>.<ext>
def spage(title, author, page, url, ext):
    try:
        with open(title + '.' + ext, 'w', encoding='utf-8') as f:
            f.write(title + '\n' + 'Author: ' + author + '\n' + page)
        #print('saving:', title, 'url:', url)
    except Exception as e:
        print('error2:', e)

# Normalize a link and crawl it if it is a qq.com URL we have not visited yet
def testurl(url):
    if not url:
        return
    if url[0:4] == "http":      # already an absolute URL
        a = url
    elif url[0:2] == '//':      # protocol-relative URL: assume https
        a = 'https:' + url
    else:                       # skip relative links, anchors, javascript:, etc.
        return
    if a not in all_url:
        b = None
        try:
            b = re.match('(.*qq.com)', a)    # only follow qq.com domains
        except Exception as e:
            print('rc:', e)
        if b:
            all_url.append(a)
            getpage(a)

if __name__ == '__main__':
    gettab('url.txt')
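For reference, the script expects a seed file named url.txt in the same directory, one URL per line (blank lines are skipped). A minimal sketch is below; the two addresses are only placeholders for whichever qq.com policy pages you start from, not necessarily the ones I used:

# url.txt -- one seed URL per line (placeholder examples)
https://game.qq.com/privacy_guide.shtml
https://privacy.qq.com/

Each page is saved as <title>.txt and <title>.html in the working directory, and any qq.com links inside the page's content div are crawled recursively.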
