
Web Scraping with bash

I am doing web scraping with bash. I have these URLs, which are saved in a file URL.txt:

?daypartId=1&catId=1
?daypartId=1&catId=11
?daypartId=1&catId=2

I want to pass these URLs to an array in another file and append each of them, one by one, to the end of the base URL https://www.mcdelivery.com.pk/pk/browse/menu.html.

You will need a way to read each line:

while IFS= read -r line; do
        echo "$line"
done < "${file}"

Then, inside that file-reading loop, perform the append operation using the $line you have read:

curl "http://example.com${line}"

(The quotes matter: the query strings contain &, which an unquoted shell command would interpret as "run in the background".)
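Since the question asks for the lines in an array, here is a minimal self-contained sketch that combines the pieces above. It assumes bash 4+ (for mapfile) and uses the file name and base URL from the question; it creates a sample URL.txt itself so you can run it as-is, and prints each full URL instead of fetching it:

```shell
#!/usr/bin/env bash
# Create a sample URL.txt matching the question, so the sketch is runnable.
cat > URL.txt <<'EOF'
?daypartId=1&catId=1
?daypartId=1&catId=11
?daypartId=1&catId=2
EOF

base_url="https://www.mcdelivery.com.pk/pk/browse/menu.html"

# mapfile (bash 4+) reads every line of URL.txt into the array "urls".
mapfile -t urls < URL.txt

for query in "${urls[@]}"; do
    full="${base_url}${query}"
    echo "$full"          # replace echo with: curl -s "$full"
done
```

The quoting around "$full" is again essential because of the & in each query string.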

