
Iterate through a list with BeautifulSoup

I'm using BeautifulSoup4 to build a JSON formatted list that contains: 'title', 'company', 'location', 'date posted' and 'link' from a public Linkedin Job search, I have already this formatted the way I want it, however it's only listing one of the job listings from the page, and am looking to iterate through each job in the page, in this same format. 我正在使用BeautifulSoup4构建一个JSON格式的列表,其中包含:公开Linkedin职位搜索中的“标题”,“公司”,“位置”,“发布日期”和“链接”,我已经按照自己的方式设置了此格式它,但是它仅列出页面中的一份工作清单,并且希望以相同的格式遍历页面中的每个工作。

For example, I'm trying to achieve this:

[{'title': 'Job 1', 'company': 'company 1.', 'location': 'sunny side, California', 'date posted': '2 weeks ago', 'link': 'example1.com'}]

[{'title': 'Job 2', 'company': 'company 2.', 'location': 'runny side, California', 'date posted': '2 days ago', 'link': 'example2.com'}]

I've tried changing lines 48, 52, 56, 60, and 64 from contents.find to contents.findAll; however, that returns everything at once rather than in the per-job order I'm trying to achieve.

from bs4 import BeautifulSoup
import requests
from html.parser import HTMLParser
from io import StringIO


# Standard HTMLParser-based tag-stripping helper used by strip_tags below
# (assumed; its definition is not shown in the original post).
class MLStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.reset()
        self.strict = False
        self.convert_charrefs = True
        self.text = StringIO()

    def handle_data(self, d):
        self.text.write(d)

    def get_data(self):
        return self.text.getvalue()


def strip_tags(html):
    s = MLStripper()
    s.feed(html)
    return s.get_data()


def search_website(url):
    # Search HTML Page
    result = requests.get(url)
    content = result.content

    soup = BeautifulSoup(content, 'html.parser')

    # Job List
    jobs = []

    for contents in soup.find_all('body'):
        # Title
        title = contents.find('h3', attrs={'class': 'result-card__title job-result-card__title'})
        formatted_title = strip_tags(str(title))

        # Company
        company = contents.find('h4', attrs={'class': 'result-card__subtitle job-result-card__subtitle'})
        formatted_company = strip_tags(str(company))

        # Location
        location = contents.find('span', attrs={'class': 'job-result-card__location'})
        formatted_location = strip_tags(str(location))

        # Date Posted
        posted = contents.find('time', attrs={'class': 'job-result-card__listdate'})
        formatted_posted = strip_tags(str(posted))

        # Apply Link
        links = contents.find('a', attrs={'class': 'result-card__full-card-link'})
        formatted_link = links.get('href')

        # Add a new compiled job to our dict
        jobs.append({'title': formatted_title,
                     'company': formatted_company,
                     'location': formatted_location,
                     'date posted': formatted_posted,
                     'link': formatted_link
                     })

    # Return our jobs
    return jobs


link = ("https://www.linkedin.com/jobs/search/currentJobId=1396095018&distance=25&f_E=3%2C4&f_LF=f_AL&geoId=102250832&keywords=software%20engineer&location=Mountain%20View%2C%20California%2C%20United%20States")


print(search_website(link))

I expect the output to look like:

[{'title': 'x', 'company': 'x', 'location': 'x', 'date posted': 'x', 'link': 'x'}] + ..

The output when switched to findAll returns:

[{'title': 'x''x''x''x''x', 'company': 'x''x''x''x''x', 'location': 'x''x''x''x', 'date posted': 'x''x''x''x', 'link': 'x''x''x''x'}]
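
For context on that behaviour: contents.find returns only the first matching tag, while contents.findAll (find_all) returns every match on the page as a ResultSet (a list), so stripping the tags from the string form of that list runs all of the values together. A minimal, self-contained sketch of the difference, using made-up HTML rather than LinkedIn's markup:

from bs4 import BeautifulSoup

html = '<body><h3>Job 1</h3><h3>Job 2</h3></body>'
soup = BeautifulSoup(html, 'html.parser')
body = soup.find('body')

print(body.find('h3'))       # first match only: <h3>Job 1</h3>
print(body.find_all('h3'))   # every match in one list: [<h3>Job 1</h3>, <h3>Job 2</h3>]
print([h3.text for h3 in body.find_all('h3')])  # iterate the list to keep values separate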

It's a simplified version of your code, but it should get you there:

import requests
from bs4 import BeautifulSoup as bs

result = requests.get('https://www.linkedin.com/jobs/search/?distance=25&f_E=2%2C3&f_JT=F&f_LF=f_AL&geoId=102250832&keywords=software%20engineer&location=Mountain%20View%2C%20California%2C%20United%20States')

soup = bs(result.content, 'html.parser')

# Job List
jobs = []

for contents in soup.find_all('body'):
    # Title
    title = contents.find('h3', attrs={'class': 'result-card__title job-result-card__title'})

    # Company
    company = contents.find('h4', attrs={'class': 'result-card__subtitle job-result-card__subtitle'})

    # Location
    location = contents.find('span', attrs={'class': 'job-result-card__location'})

    # Date Posted
    posted = contents.find('time', attrs={'class': 'job-result-card__listdate'})

    # Apply Link
    link = contents.find('a', attrs={'class': 'result-card__full-card-link'})

    # Add a new compiled job to our dict
    jobs.append({'title': title.text,
                 'company': company.text,
                 'location': location.text,
                 'date posted': posted.text,
                 'link': link.get('href')
                 })

# Print each compiled job
for job in jobs:
    print(job)

Output:

{'title': 'Systems Software Engineer - Controls', 'company': 'Blue River Technology', 'location': 'Sunnyvale, California', 'date posted': '1 day ago', 'link': 'https://www.linkedin.com/jobs/view/systems-software-engineer-controls-at-blue-river-technology-1380882942?position=1&pageNum=0&trk=guest_job_search_job-result-card_result-card_full-click'}
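
If the page serves several listings but only one dictionary comes back, a follow-up sketch (assuming every result card on the guest search page exposes all five of the elements above with exactly the class names used in your code) is to collect each field with find_all and zip the parallel lists together so each job's fields stay paired in order:

from bs4 import BeautifulSoup
import requests

url = ('https://www.linkedin.com/jobs/search/?distance=25&f_E=2%2C3&f_JT=F&f_LF=f_AL'
       '&geoId=102250832&keywords=software%20engineer'
       '&location=Mountain%20View%2C%20California%2C%20United%20States')
soup = BeautifulSoup(requests.get(url).content, 'html.parser')

# Grab every occurrence of each field on the page, then pair them up positionally.
titles = soup.find_all('h3', class_='job-result-card__title')
companies = soup.find_all('h4', class_='result-card__subtitle')
locations = soup.find_all('span', class_='job-result-card__location')
dates = soup.find_all('time', class_='job-result-card__listdate')
links = soup.find_all('a', class_='result-card__full-card-link')

jobs = [{'title': t.get_text(strip=True),
         'company': c.get_text(strip=True),
         'location': loc.get_text(strip=True),
         'date posted': d.get_text(strip=True),
         'link': a.get('href')}
        for t, c, loc, d, a in zip(titles, companies, locations, dates, links)]

for job in jobs:
    print(job)

Note that zip silently drops unmatched items, so if any card is missing one of these elements (for example, a different date class on newly posted jobs), the fields can fall out of alignment; locating each card's container element first and calling find inside it is the more robust pattern.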
