
Web Scraping with bash

I am doing web scraping with bash. I have these URL suffixes saved in a file, URL.txt:

?daypartId=1&catId=1
?daypartId=1&catId=11
?daypartId=1&catId=2

I want to read these URL suffixes into an array in another script and append each one to the base URL https://www.mcdelivery.com.pk/pk/browse/menu.html, one by one.

You will need a way to read each line of the file:

while IFS= read -r line; do
        echo "$line"
done < "${file}"

Then, inside that read loop, append the $line you have read to the base URL. Be sure to quote the expansion, otherwise the & characters in the query string will be interpreted by the shell:

curl "http://example.com${line}"
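Putting the two pieces together, here is a minimal sketch that collects the full URLs into an array and then visits each one. For demonstration it writes the three suffixes from the question into URL.txt itself, and it prints the URLs instead of downloading them; swap in the curl call to actually fetch the pages:

```shell
#!/usr/bin/env bash
# Sample data: the three suffixes from the question (for demonstration;
# in practice URL.txt already exists alongside the script)
printf '%s\n' '?daypartId=1&catId=1' '?daypartId=1&catId=11' '?daypartId=1&catId=2' > URL.txt

base_url="https://www.mcdelivery.com.pk/pk/browse/menu.html"
urls=()

# Read each suffix and append it to the base URL
while IFS= read -r line; do
        urls+=("${base_url}${line}")
done < URL.txt

# Visit each full URL; the quotes keep the & in the query
# string from being parsed by the shell
for url in "${urls[@]}"; do
        printf '%s\n' "$url"    # replace with: curl -s "$url"
done
```

The array step is optional if you only need to fetch each page once, but it matches the question's requirement and lets you reuse the list of full URLs later in the script.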

