
Python REGEX remove string containing substring

I am writing a script that scrapes a newsletter for URLs. There are some irrelevant URLs in the newsletter (e.g. article links, mailto links, social links, etc.). I added some logic to remove those links, but for some reason not all of them are being removed. Here is my code:

from os import remove
from turtle import clear
from bs4 import BeautifulSoup
import requests
import re
import pandas as pd

termSheet = "https://fortune.com/newsletter/termsheet"
html = requests.get(termSheet)
htmlParser = BeautifulSoup(html.text, "html.parser")
termSheetLinks = []

for companyURL in htmlParser.select("table#templateBody p > a"):
    termSheetLinks.append(companyURL.get('href'))

for link in termSheetLinks:
    if "fortune.com" in link in termSheetLinks:
        termSheetLinks.remove(link)
    if "forbes.com" in link in termSheetLinks:
        termSheetLinks.remove(link)
    if "twitter.com" in link in termSheetLinks:
        termSheetLinks.remove(link)

print(termSheetLinks)

When I last ran it, this was my output, even though I am trying to remove every link containing "fortune.com":

['https://fortune.com/company/blackstone-group?utm_source=email&utm_medium=newsletter&utm_campaign=term-sheet&utm_content=2022080907am', 'https://fortune.com/company/tpg?utm_source=email&utm_medium=newsletter&utm_campaign=term-sheet&utm_content=2022080907am', 'https://casproviders.org/asd-guidelines/', 'https://fortune.com/company/carlyle-group?utm_source=email&utm_medium=newsletter&utm_campaign=term-sheet&utm_content=2022080907am', 'https://ir.carlyle.com/static-files/433abb19-8207-4632-b173-9606698642e5', 'mailto:termsheet@fortune.com', 'https://www.afresh.com/', 'https://www.geopagos.com/', 'https://montana-renewables.com/', 'https://descarteslabs.com/', 'https://www.dealer-pay.com/', 'https://www.sequeldm.com/', 'https://pueblo-mechanical.com/', 'https://dealcloud.com/future-proof-your-firm/', 'https://apartmentdata.com/', 'https://www.irobot.com/', 'https://www.martin-bencher.com/', 'https://cell-matters.com/', 'https://www.lever.co/', 'https://www.sigulerguff.com/']

Any help would be greatly appreciated!

In my opinion this does not need regex - instead of removing URLs from the list, append only those that do not contain your substrings, e.g. with a list comprehension:

[companyURL.get('href') for companyURL in htmlParser.select("table#templateBody p > a") if not any(x in companyURL.get('href') for x in ["fortune.com","forbes.com","twitter.com"])]

Example

from bs4 import BeautifulSoup
import requests

termSheet = "https://fortune.com/newsletter/termsheet"
html = requests.get(termSheet)
htmlParser = BeautifulSoup(html.text, "html.parser")

myList = ["fortune.com","forbes.com","twitter.com"]
[companyURL.get('href') for companyURL in htmlParser.select("table#templateBody p > a") 
     if not any(x in companyURL.get('href') for x in myList)]

Output

['https://casproviders.org/asd-guidelines/',
 'https://ir.carlyle.com/static-files/433abb19-8207-4632-b173-9606698642e5',
 'https://www.afresh.com/',
 'https://www.geopagos.com/',
 'https://montana-renewables.com/',
 'https://descarteslabs.com/',
 'https://www.dealer-pay.com/',
 'https://www.sequeldm.com/',
 'https://pueblo-mechanical.com/',
 'https://dealcloud.com/future-proof-your-firm/',
 'https://apartmentdata.com/',
 'https://www.irobot.com/',
 'https://www.martin-bencher.com/',
 'https://cell-matters.com/',
 'https://www.lever.co/',
 'https://www.sigulerguff.com/']
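
For completeness, since the title asks about regex: the same filter can be written with the re module by joining the substrings into one alternation pattern. This is only a sketch that reuses the htmlParser and myList from the example above (the variable name unwanted is made up); the plain substring check shown above is simpler and does the same job.

import re

# One pattern like fortune\.com|forbes\.com|twitter\.com; re.escape protects the dots
unwanted = re.compile("|".join(re.escape(x) for x in myList))

[companyURL.get('href') for companyURL in htmlParser.select("table#templateBody p > a")
     if not unwanted.search(companyURL.get('href'))]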

在 for 迭代器之后刪除鏈接不會跳過任何條目。

from bs4 import BeautifulSoup
import requests

termSheet = "https://fortune.com/newsletter/termsheet"
html = requests.get(termSheet)
htmlParser = BeautifulSoup(html.text, "html.parser")
termSheetLinks = []

for companyURL in htmlParser.select("table#templateBody p > a"):
    termSheetLinks.append(companyURL.get('href'))

lRemove = []
for link in termSheetLinks:
    if "fortune.com" in link:
        lRemove.append(link)
    if "forbes.com" in link:
        lRemove.append(link)
    if "twitter.com" in link:
        lRemove.append(link)
for l in lRemove:
    termSheetLinks.remove(l)

print(termSheetLinks)
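
As a side note on why the original loop misses some links: list.remove shifts the remaining elements one position to the left while the for loop keeps advancing its internal index, so the element immediately after each removed link is never examined. A minimal sketch with made-up URLs shows the effect:

links = ["https://fortune.com/a", "https://fortune.com/b", "https://example.com/c"]
for link in links:
    if "fortune.com" in link:
        links.remove(link)  # mutates the list while it is being iterated
print(links)  # ['https://fortune.com/b', 'https://example.com/c'] - the second fortune.com link survives

Collecting the links to remove first, as above, or building a new filtered list as in the other answer, avoids this.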
