
How to fetch images from a website URL and store all the images in a folder on my PC?

Hello, I want to fetch all the images from this URL on the website, http://www.thesmokingtire.com/wp-content/uploads/, and store them on my D drive, e.g. D:\.

How should I do this?

I tried something like the following, which I found by searching here, but it doesn't work. Please help me out.

<!DOCTYPE html>
<html>
<head>
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1.11.0/jquery.min.js"></script>
</head>
<body>
<script>
var dir = "http://www.thesmokingtire.com/wp-content/uploads/";
var fileextension = ".jpg";
$.ajax({
    // This retrieves the directory listing, but only if the folder is
    // configured as 'browsable' on the server. Note that a cross-origin
    // request like this is also blocked by the browser's same-origin
    // policy unless the server sends CORS headers.
    url: dir,
    success: function (data) {
        // List all anchors whose link text contains the ".jpg" extension
        $(data).find("a:contains(" + fileextension + ")").each(function () {
            // Keep only the file name portion of the link
            var filename = this.href.split("/").pop();
            $("body").append($("<img>").attr("src", dir + filename));
        });
    }
});
</script>
</body>
</html>

Yes, as already suggested in the comments section.

For downloading images from a site URL, we don't always have to use AJAX requests.

In this case, the wget command should be helpful.

wget -r http://sample.url.com
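For example, to pull only the image files from the uploads folder into a single local directory, something like the following should work. The flags are standard wget options; the extension list and the D:/images target folder are just illustrative choices:

# -r  : recurse into the directory listing
# -np : do not ascend to the parent directory
# -nd : save everything into one folder instead of recreating the remote tree
# -A  : accept (keep) only files with these suffixes
# -P  : directory prefix to store the downloaded files in
wget -r -np -nd -A "jpg,jpeg,png" -P "D:/images" http://www.thesmokingtire.com/wp-content/uploads/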
  • Wget is a free network utility to retrieve files from the World Wide Web using HTTP and FTP, the two most widely used Internet protocols. It works non-interactively, thus enabling work in the background, after having logged off.

  • The recursive retrieval of HTML pages, as well as FTP sites, is supported -- you can use Wget to make mirrors of archives and home pages, or traverse the web like a WWW robot (Wget understands /robots.txt).

  • Wget works exceedingly well on slow or unstable connections, and will keep retrying until the document is fully retrieved. Resuming an interrupted download from where it left off works on servers (both HTTP and FTP) that support it. Matching of wildcards and recursive mirroring of directories are available when retrieving via FTP. Both HTTP and FTP retrievals can be time-stamped, so Wget can see whether the remote file has changed since the last retrieval and automatically fetch the new version if it has.

  • Wget supports proxy servers, which can lighten the network load, speed up retrieval, and provide access from behind firewalls. If you are behind a firewall that requires the use of a SOCKS-style gateway, you can get the SOCKS library and compile Wget with support for it.

  • Most of the features are configurable, either through command-line options or via the initialization file .wgetrc. Wget also lets you install a global startup file (/etc/wgetrc by default) for site settings; a minimal .wgetrc sketch follows this list.
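As a rough illustration of the .wgetrc format, each line is simply option = value. The option names below are standard wgetrc commands, but the values are hypothetical:

# Minimal .wgetrc sketch (illustrative values)
# Retry each download up to 3 times
tries = 3
# Only re-download files newer than the local copy
timestamping = on
# Default to recursive retrieval
recursive = on
# Keep only files with these suffixes (same as -A on the command line)
accept = jpg,jpeg,png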

See the documentation of the wget command for details.
