
Storing a Python script's output in a variable in Linux and running a wget command

I made a Python script that retrieves a URL and prints it as output. To keep it simple, the content of the Python script is just:

print("https://www.testweb.com/file=ejfeafjaiaefjaof")

For example, if I execute the Python script in my terminal:

python retrieve_url.py

it outputs:

https://www.testweb.com/file=ejfeafjaiaefjaof

I would then like to use the wget command in Linux with the URL returned by my Python script, so that I could do something like:

wget https://www.testweb.com/file=ejfeafjaiaefjaof

Instead, is it possible to store the Python output URL in a variable and run everything in one go? I was thinking of creating an executable script, run.sh:

python retrieve_url.py
wget **some argument to store the python output**

and then just run ./run.sh.

You can save the output of your Python script in a shell variable like this:

test=$(python retrieve_url.py)

echo "$test"
https://www.testweb.com/file=ejfeafjaiaefjaof

Alternatively, wget can read the URL from standard input using the -i option with - (stdin) as the file:

python retrieve_url.py | wget -i-

Output:

--2021-01-14 16:15:17--  https://example.org/
Resolving example.org (example.org)... 93.184.216.34
Connecting to example.org (example.org)|93.184.216.34|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1256 (1,2K) [text/html]
Saving to: ‘index.html’

index.html                                        100%[============================================================================================================>]   1,23K  --.-KB/s    in 0s

2021-01-14 16:15:17 (20,0 MB/s) - ‘index.html’ saved [1256/1256]

FINISHED --2021-01-14 16:15:17--
Total wall clock time: 0,6s
Downloaded: 1 files, 1,2K in 0s (20,0 MB/s)
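
Another option, not shown above, is to pass the printed URL to wget as a command-line argument via xargs. This is only a sketch and assumes the script prints a single URL with no embedded whitespace:

# Alternative: let xargs turn the printed URL into a wget argument
python retrieve_url.py | xargs wget

Both forms do the same thing here; -i- is handier when the script may print several URLs, since wget treats stdin as a list of links and downloads each one.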
