
How to run snscrape command from python script?

I'm trying to download some tweets with snscrape. After installing it, I can run a command like the following to download a few tweets:

snscrape --jsonl --max-results 4 twitter-search "#SherlockHolmes since:2015-01-01 until:2015-01-15" > sherlock_tweets.json

Now I want to run this command from within a Python script. As I understand it, the way to do this is with the subprocess.run method. I use the following code to run the command from Python:

import subprocess

# Running this in a terminal works
cmd = '''snscrape --jsonl --max-results 4 twitter-search "#SherlockHolmes since:2015-01-01 until:2015-01-15" > sherlock_tweets.json'''
arglist = cmd.split(" ")

process = subprocess.run(arglist, shell=True)

Running this, however, gives the following error.

usage: snscrape [-h] [--version] [-v] [--dump-locals] [--retry N] [-n N] [-f FORMAT | --jsonl] [--with-entity] [--since DATETIME] [--progress]
                {telegram-channel,weibo-user,vkontakte-user,instagram-user,instagram-hashtag,instagram-location,twitter-thread,twitter-search,reddit-user,reddit-subreddit,reddit-search,facebook-group,twitter-user,twitter-hashtag,twitter-list-posts,facebook-user,facebook-community,twitter-profile}
                ...
snscrape: error: the following arguments are required: scraper

Why is the behaviour not the same in these two cases? How do I run the command from a Python script and get exactly the same behaviour as entering it in a terminal?
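For what it's worth, the difference comes from two things: cmd.split(" ") breaks the quoted search query into separate arguments, and on POSIX systems passing a list to subprocess.run together with shell=True makes the shell execute only the first element (snscrape) on its own, which is why snscrape reports that the required scraper argument is missing. The > redirection is also a shell feature, not part of the snscrape command itself. A minimal sketch of two possible fixes:

import shlex
import subprocess

cmd = '''snscrape --jsonl --max-results 4 twitter-search "#SherlockHolmes since:2015-01-01 until:2015-01-15" > sherlock_tweets.json'''

# Option 1: pass the whole string and let the shell handle the quoting
# and the > redirection (with shell=True the command should be a string).
subprocess.run(cmd, shell=True, check=True)

# Option 2: avoid the shell entirely. shlex.split keeps the quoted
# query as a single argument, and the redirection is done in Python.
arglist = shlex.split('snscrape --jsonl --max-results 4 twitter-search '
                      '"#SherlockHolmes since:2015-01-01 until:2015-01-15"')
with open("sherlock_tweets.json", "w") as f:
    subprocess.run(arglist, stdout=f, check=True)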

I don't know if you found the solution, but I ran this code and it worked for me:

import pandas as pd
import snscrape.modules.twitter as sntwitter

# Search parameters (fill in your own values)
date_beg = '2015-01-01'
date_end = '2015-01-15'
twitter_account = 'some_account'

# Collect one dict per tweet, then build the DataFrame in one go
rows = []
for tweet in sntwitter.TwitterSearchScraper(
        f'since:{date_beg} until:{date_end} from:{twitter_account}').get_items():
    rows.append({
        'Username': tweet.user.username,
        'Date': tweet.date,
        'Likes': tweet.likeCount,
        'Content': tweet.content,
    })

tweet_collection = pd.DataFrame(rows)
tweet_collection.to_csv('Path/file.csv')
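If you want the same JSON Lines output as the original CLI command rather than a CSV, pandas can write that too; a small sketch, assuming the tweet_collection DataFrame built above:

tweet_collection.to_json('sherlock_tweets.json', orient='records',
                         lines=True, date_format='iso')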

You can find more detail in the code on GitHub:

Twitter snscrape arguments
