How can I read the directory contents on a remote server with PHP?
I have a URL, http://www.mysite.com/images, and the images directory allows directory listings. How can I get the files in that directory with PHP?
Here is an example if you need to read the images over HTTP and the server is Apache:
<?php
$url = 'http://www.mysite.com/images';
$html = file_get_contents($url);

// Pull the file names out of the <td><a href="...">...</a></td> rows
// of Apache's generated index page.
$count = preg_match_all('/<td><a href="([^"]+)">[^<]*<\/a><\/td>/i', $html, $files);

for ($i = 0; $i < $count; ++$i) {
    echo "File: " . $files[1][$i] . "<br />\n";
}
?>
If it is the same server you are running your PHP on, you can use opendir() and readdir().
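For the same-server case, a minimal sketch (the directory path is whatever local path your images live under, and list_dir() is my own helper name, not something from the answers above):

```php
<?php
// Minimal same-server sketch: walk a local directory with opendir()/readdir().
function list_dir(string $dir): array
{
    $files = [];
    if ($handle = opendir($dir)) {
        while (($entry = readdir($handle)) !== false) {
            // Skip the "." and ".." pseudo-entries every directory contains.
            if ($entry !== '.' && $entry !== '..') {
                $files[] = $entry;
            }
        }
        closedir($handle);
    }
    sort($files); // readdir() order is filesystem-dependent
    return $files;
}

// e.g. print_r(list_dir('/var/www/html/images'));
```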
I know this question is very old, but just to get me into the swing of using this forum I thought I'd add my view. I found the following happened (referring to the original answer that uses a regex).
My HTML turned out to be formatted like this:
<td>
<a href="bricks.php">bricks.php</a>
</td>
So I ended up using this:
$count = preg_match_all('/<a href="([^"?\/]+)">[^<]*<\/a>/i', $html, $files);
I wanted to use the following (which tested OK in the online regex testers, but failed to find a match in the PHP code):
$count = preg_match_all('/<td>(?:[\w\n\f])<a href="([^"]+)">[^<]*<\/a>(?:[\w\n\f])<\/td>/i', $html, $files);
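For what it's worth, my reading (not the original poster's) of why the pattern above fails: `(?:[\w\n\f])` matches exactly one word character, newline, or form feed, so it cannot absorb the spaces and multi-character indentation between the tags. Replacing it with `\s*` makes the multi-line markup match:

```php
<?php
// Sample of the multi-line listing markup shown above.
$html = "<td>\n    <a href=\"bricks.php\">bricks.php</a>\n</td>";

// \s* tolerates any run of whitespace (spaces, tabs, newlines) between the
// tags, which is where the single-character [\w\n\f] attempt fell short.
$count = preg_match_all('/<td>\s*<a href="([^"]+)">[^<]*<\/a>\s*<\/td>/i', $html, $files);

print_r($files[1]); // the single capture: bricks.php
```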
You can use a regex to take the URLs from the listing. (No, you can't use DOMDocument, as the listing is not valid HTML.)
You need FTP access (an FTP account on that server). If you have this, then you can log into the server with FTP and use:
opendir()
and
readdir()
to accomplish what you are trying to do.
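A minimal sketch of that approach, assuming PHP's ftp:// stream wrapper (which lets directory functions traverse an FTP directory); the host and credentials below are placeholders, and list_remote_dir() is my own helper name:

```php
<?php
// Hedged sketch: scandir() works on ftp:// URLs through PHP's stream
// wrappers, as well as on plain local paths.
function list_remote_dir(string $url): array
{
    $entries = @scandir($url);
    if ($entries === false) {
        return [];
    }
    // Drop the "." and ".." pseudo-entries and reindex.
    return array_values(array_diff($entries, ['.', '..']));
}

// Placeholder credentials and host:
// print_r(list_remote_dir('ftp://user:pass@www.mysite.com/images'));
```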
If you do not have access to the server, you will need to scrape the site's HTML, and it gets more complex, so I can let somebody else tackle that one... but a Google search for "scrape html site" or something similar turns up plenty of pre-written functions that can do similar things, e.g.:
http://www.thefutureoftheweb.com/blog/web-scrape-with-php-tutorial
http://www.bradino.com/php/screen-scraping/
// Though a late-comer, this one seems a bit more reader-friendly, if not faster.
$url = 'http://whatevasite/images/';

// Strip all markup, then split the plain text on the "Parent Directory"
// link that Apache prints above the file list.
$no_html = strip_tags(file_get_contents($url));
$arr = explode('Parent Directory', $no_html);
$files = explode("\n ", trim($arr[1]));
var_dump($files);