How to extract parameters from URL?
url = 'https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/'
url2 = 'https://www.allrecipes.com/recipes/94/soups-stews-and-chili/'
new = url.split("/")[-4:]
new2 = url2.split("/")[-2:]
print(new)
print(new2)
Output:
['world-cuisine', 'asian', 'chinese', '']
['soups-stews-and-chili', '']
Some other examples of the URLs are:
'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/'
'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/'
How can we write a rule to follow the pagination of such URLs, e.g. 'https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2'?
Rule(LinkExtractor(allow=(r'recipes/?page=\d+',)), follow=True)
I am new to scrapy and regular expressions, so any help with this would be much appreciated.
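A side note on the attempted rule: `?` is a regex metacharacter, so in `r'recipes/?page=\d+'` it makes the preceding `/` optional instead of matching a literal `?`. A quick sketch with plain `re` showing the difference (the URL is one of the examples above):

```python
import re

url = "https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2"

# Unescaped '?' means "optional '/'", so this pattern requires the literal
# text 'recipespage=' or 'recipes/page=' -- neither occurs in the URL.
print(re.search(r"recipes/?page=\d+", url))

# Escaping the '?' (and allowing the category path in between) matches.
print(re.search(r"recipes/.*\?page=\d+", url))
```

Under that assumption, the corrected allow pattern for the LinkExtractor would be something like `r'recipes/.*\?page=\d+'`.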
You can combine the re module with str.split:
import re

urls = [
    "https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/",
    "https://www.allrecipes.com/recipes/94/soups-stews-and-chili/",
    "https://www.allrecipes.com/recipes/416/seafood/fish/salmon/",
    "https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/",
]

r = re.compile(r"(?:\d+/)(.*)/")
for url in urls:
    print(r.search(url).group(1).split("/"))
Prints:
['world-cuisine', 'asian', 'chinese']
['soups-stews-and-chili']
['seafood', 'fish', 'salmon']
['meat-and-poultry', 'pork']
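For what it's worth, the same pattern also copes with the paginated form of the URL from the question: the greedy `(.*)` backtracks to the last `/`, which sits just before the query string, so `?page=2` is excluded from the capture:

```python
import re

r = re.compile(r"(?:\d+/)(.*)/")
url = "https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2"

# The trailing '/' in the pattern anchors the capture at the last slash,
# leaving the '?page=2' query string out of group(1).
print(r.search(url).group(1).split("/"))
# ['world-cuisine', 'asian', 'chinese']
```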
I am not 100% sure I understood your question correctly, but I think the following code does what you want.
EDIT
Updated the code after the discussion in the comments.
urls = [
    'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/',
    'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/',
    'https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/',
    'https://www.allrecipes.com/recipes/94/soups-stews-and-chili/',
    'https://www.allrecipes.com/recipes/qqqq/94/soups-stews-and-chili/x/y/z/q'
]

for url in urls:
    for index, part in enumerate(url.split('/')):
        if part.isnumeric():
            start = index + 1
            break
    print(url.split('/')[start:-1])
Output:
['seafood', 'fish', 'salmon']
['meat-and-poultry', 'pork']
['world-cuisine', 'asian', 'chinese']
['soups-stews-and-chili']
['soups-stews-and-chili', 'x', 'y', 'z']
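One caveat with this approach: if a URL contains no numeric segment at all, `start` is never assigned (or keeps its value from the previous loop iteration). A defensive sketch using `next()` with a default (the second URL is a hypothetical example with no numeric part):

```python
urls = [
    "https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/",
    "https://www.allrecipes.com/recipes/no-number-here/",  # hypothetical URL
]

results = []
for url in urls:
    parts = url.split("/")
    # next() with a default avoids a NameError when no segment is numeric
    start = next((i + 1 for i, p in enumerate(parts) if p.isnumeric()), None)
    results.append(parts[start:-1] if start is not None else [])

print(results)
# [['meat-and-poultry', 'pork'], []]
```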
Old answer
urls = [
    'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/',
    'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/',
    'https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/',
    'https://www.allrecipes.com/recipes/94/soups-stews-and-chili/'
]

for url in urls:
    print(url.split("/")[5:-1])
Output:
['seafood', 'fish', 'salmon']
['meat-and-poultry', 'pork']
['world-cuisine', 'asian', 'chinese']
['soups-stews-and-chili']
Something like this. The idea is to find the numeric ("int") path element and take all the path elements to its right.
from collections import defaultdict
from typing import Dict, List

urls = ['https://www.allrecipes.com/recipes/416/seafood/fish/salmon/',
        'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/']

def is_int(param: str) -> bool:
    try:
        int(param)
        return True
    except ValueError:
        return False

data: Dict[str, List[str]] = defaultdict(list)
for url in urls:
    elements = url.split('/')
    elements.reverse()
    for element in elements:
        if len(element.strip()) < 1:
            continue
        if not is_int(element):
            data[url].append(element)
        else:
            break
print(data)
Output:
defaultdict(<class 'list'>, {'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/': ['salmon', 'fish', 'seafood'], 'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/': ['pork', 'meat-and-poultry']})
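Because the path is walked right-to-left, the categories come out reversed; if the original left-to-right order matters, it can be restored afterwards, e.g.:

```python
# the extracted data, as produced above (one entry shown for brevity)
data = {
    "https://www.allrecipes.com/recipes/416/seafood/fish/salmon/": ["salmon", "fish", "seafood"],
}

# reverse each list to recover the original path order
ordered = {url: parts[::-1] for url, parts in data.items()}
print(ordered)
# {'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/': ['seafood', 'fish', 'salmon']}
```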
When handling URLs, try to avoid (or at least delay) regex, and look first at urllib or similar, and/or split().
Just one URL first, with the full details:
from urllib.parse import urlparse
urlparse(urls[4])
ParseResult(scheme='https', netloc='www.allrecipes.com', path='/recipes/695/world-cuisine/asian/chinese/', params='', query='page=2', fragment='')
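As a sketch of going one step further with the same module: the query part of the ParseResult can be decomposed with `parse_qs` to pull out the page number directly:

```python
from urllib.parse import urlparse, parse_qs

url = "https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2"
parsed = urlparse(url)

# parse_qs maps each query parameter to a list of its values
query = parse_qs(parsed.query)
print(query)             # {'page': ['2']}
print(query["page"][0])  # '2'
```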
Then loop over the list, take just the path, and split():
# a list of urls
urls = ['https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/',
        'https://www.allrecipes.com/recipes/94/soups-stews-and-chili/',
        'https://www.allrecipes.com/recipes/416/seafood/fish/salmon/',
        'https://www.allrecipes.com/recipes/205/meat-and-poultry/pork/',
        'https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2']

for url in urls:
    # https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/
    l = urlparse(url).path.split('/')
    # ['', 'recipes', '695', 'world-cuisine', 'asian', 'chinese', '']
    print(l[3:])
    # ['world-cuisine', 'asian', 'chinese', '']
    print('/'.join(l[3:]), '\n')
    # world-cuisine/asian/chinese/
Output of all of the above:
['world-cuisine', 'asian', 'chinese', '']
world-cuisine/asian/chinese/
['soups-stews-and-chili', '']
soups-stews-and-chili/
['seafood', 'fish', 'salmon', '']
seafood/fish/salmon/
['meat-and-poultry', 'pork', '']
meat-and-poultry/pork/
['world-cuisine', 'asian', 'chinese', '']
world-cuisine/asian/chinese/
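If the trailing empty string from the split is unwanted, a list comprehension can drop the empty segments, e.g.:

```python
from urllib.parse import urlparse

url = "https://www.allrecipes.com/recipes/695/world-cuisine/asian/chinese/?page=2"

# filter out the empty strings produced by the leading/trailing slashes
segments = [p for p in urlparse(url).path.split("/") if p]
print(segments)      # ['recipes', '695', 'world-cuisine', 'asian', 'chinese']
print(segments[2:])  # ['world-cuisine', 'asian', 'chinese']
```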
Another example (this time not just the path):
for parts in urls:
    print(list(urlparse(parts)), '\n')
Output:
['https', 'www.allrecipes.com', '/recipes/695/world-cuisine/asian/chinese/', '', '', '']
['https', 'www.allrecipes.com', '/recipes/94/soups-stews-and-chili/', '', '', '']
['https', 'www.allrecipes.com', '/recipes/416/seafood/fish/salmon/', '', '', '']
['https', 'www.allrecipes.com', '/recipes/205/meat-and-poultry/pork/', '', '', '']
['https', 'www.allrecipes.com', '/recipes/695/world-cuisine/asian/chinese/', '', 'page=2', '']