Creating a timed loop inside Discord Bot script to reload web page (web scraper bot)

I am currently designing a discord bot that scrapes a web page that constantly updates with patches related to the PBE server. I have the bot successfully running on Heroku. The problem I am running into is that I want to create an automated (timed loop) refresh that will reload the website I requested. As it stands, it only loads one instance of the website, and if that website ever changes/updates, none of my stuff will update since I am using the "old" request of the website.

Is there a way for me to bury the code inside a function so that I can create a timed loop, or do I just need to create one around my website request? What would that look like? Thanks!

from bs4 import BeautifulSoup
from urllib.request import urlopen
from discord.ext import commands
import discord

# what I want the commands to start with
bot = commands.Bot(command_prefix='!')

# instantiating discord client
token = "************************************"
client = discord.Client()

# begin the scraping of passed in web page
URL = "*********************************"
page = urlopen(URL)
soup = BeautifulSoup(page, 'html.parser')
pbe_titles = soup.find_all('h1', attrs={'class': 'news-title'})  # using soup to find all header tags with the news-title
                                                                 # class and storing them in pbe_titles
linksAndTitles = []
counter = 0

# finding tags that start with 'a' as in a href and appending those titles/links
for tag in pbe_titles:
    for anchor in tag.find_all('a'):
        linksAndTitles.append(tag.text.strip())
        linksAndTitles.append(anchor['href'])

# counts number of lines stored inside linksAndTitles list
for i in linksAndTitles:
    counter = counter + 1
print(counter)

# separates list by line so that it looks nice when printing
allPatches = '\n'.join(str(line) for line in linksAndTitles[:counter])
# stores the first two lines in list which is the current pbe patch title and link
currPatch = '\n'.join(str(line) for line in linksAndTitles[:2])


# command that allows user to type in exactly what patch they want to see information for based off date
@bot.command(name='patch')
async def pbe_patch(ctx, *, arg):
    if any(item.startswith(arg) for item in linksAndTitles):
        await ctx.send(arg + " exists!")
    else:
        await ctx.send('The date you entered: ' + '"' + arg + '"' + ' does not have a patch associated with it or that patch expired.')


# command that displays the current, most up to date, patch
@bot.command(name='current')
async def current_patch(ctx):
    response = currPatch
    await ctx.send(response)


bot.run(token)

I have played around with a

while True:

loop, but whenever I nest anything inside it, I can't reach the rest of my code.

discord has a special decorator, tasks, for running some code periodically

from discord.ext import tasks

@tasks.loop(seconds=5.0)
async def scrape(): 
    # ... your scraping code ...


# ... your commands ...


scrape.start()
bot.run(token)

It will repeat the function scrape every 5 seconds.
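If the first scrape should wait until the bot has finished logging in (for example, because the loop also sends messages), ext.tasks provides a before_loop hook; a small sketch:

@scrape.before_loop
async def before_scrape():
    # hold the loop back until the bot is connected and its cache is ready
    await bot.wait_until_ready()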


Documentation: Tasks
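The same effect is possible without the tasks extension. A bare while True with blocking calls never yields control back to discord.py's event loop, which is why nesting code inside it locks everything up; an async background task that awaits asyncio.sleep cooperates with the loop instead. A minimal sketch, assuming discord.py 1.x (scrape_loop is a made-up name, and token is a placeholder as above):

import asyncio
import discord
from discord.ext import commands

bot = commands.Bot(command_prefix='!')

async def scrape_loop():
    # wait for login so the first scrape doesn't run against a half-ready client
    await bot.wait_until_ready()
    while not bot.is_closed():
        # ... your scraping code ...
        await asyncio.sleep(5)  # cooperative sleep; time.sleep() would freeze the bot

bot.loop.create_task(scrape_loop())
bot.run(token)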


On Linux I would eventually use the standard service cron to run some script periodically. That script could scrape the data and save it in a file or database, and discord could read it from that file or database. But cron checks its tasks every 1 minute, so it can't run a task more often than that.
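For illustration, a sketch of that split, with assumed file names and paths: a standalone scraper saves the results as JSON, and cron runs it on a schedule.

# scraper.py - hypothetical standalone script (the paths here are assumptions);
# a crontab entry such as
#   * * * * * /usr/bin/python3 /home/user/scraper.py
# runs it every minute, the shortest interval cron supports
import json
from urllib.request import urlopen
from bs4 import BeautifulSoup

url = "http://books.toscrape.com/"
soup = BeautifulSoup(urlopen(url), 'html.parser')

# same title/link extraction as in the code below
data = [{'title': tag.text.strip(), 'link': url + anchor['href']}
        for tag in soup.find_all('h3')
        for anchor in tag.find_all('a')]

with open('/tmp/patches.json', 'w') as f:
    json.dump(data, f)  # the bot process reads this file instead of scraping itself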


EDIT:

Minimal working code.

I use the page http://books.toscrape.com, which was created for learning web scraping.

I changed a few elements. There is no need to create client when you use bot, because bot is a special kind of client.
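(In discord.py, commands.Bot is indeed a subclass of discord.Client, which a quick check confirms:)

import discord
from discord.ext import commands

print(issubclass(commands.Bot, discord.Client))  # True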

I keep each title and link as a dictionary

{
    'title': tag.text.strip(),
    'link': url + anchor['href'],
}

so that later it is easier to create text like

title: A Light in the ...
link: http://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html

import os
import discord
from discord.ext import commands, tasks
from bs4 import BeautifulSoup
from urllib.request import urlopen

# default value at start (before `scrape` will assign new value)
# because some function may try to use these variables before `scrape` will create them
links_and_titles = []   # PEP8: lower_case_names
counter = 0
items = []

bot = commands.Bot(command_prefix='!')

@tasks.loop(seconds=5)
async def scrape():
    global links_and_titles
    global counter
    global items

    url = "http://books.toscrape.com/"
    page = urlopen(url)
    soup = BeautifulSoup(page, 'html.parser')
    #pbe_titles = soup.find_all('h1', attrs={'class': 'news-title'})  
    pbe_titles = soup.find_all('h3')  

    # remove previous content
    links_and_titles = []

    for tag in pbe_titles:
        for anchor in tag.find_all('a'):
            links_and_titles.append({
                'title': tag.text.strip(),
                'link': url + anchor['href'],
            })

    counter = len(links_and_titles)
    print('counter:', counter)
    items = [f"title: {x['title']}\nlink: {x['link']}" for x in links_and_titles]

@bot.command(name='patch')
async def pbe_patch(ctx, *, arg=None):
    if arg is None:
        await ctx.send('Use: !patch date')
    elif any(item['title'].startswith(arg) for item in links_and_titles):        
        await ctx.send(arg + " exists!")
    else:
        await ctx.send(f'The date you entered: "{arg}" does not have a patch associated with it or that patch expired.')

@bot.command(name='current')
async def current_patch(ctx, *, number: int = 1):  # `: int` makes discord.py convert the argument
    if items:
        responses = items[:number]
        text = '\n----\n'.join(responses)
        await ctx.send(text)
    else:
        await ctx.send('no patches')

scrape.start()

token = os.getenv('DISCORD_TOKEN')
bot.run(token)

PEP 8 -- Style Guide for Python Code
