
Downloading website files in PowerShell

I'm trying to get a script to query files on an IIS website, then download those files automatically. So far, I have this:

$webclient = New-Object System.Net.webclient
$source = "http://testsite:8005/"
$destination = "C:\users\administrator\desktop\testfolder\"
#The following line returns the links in the webpage
$testcode1 = $webclient.DownloadString($source) -split "<a\s+" | % { [void]($_ -match '^href=[''"]([^''">\s]*)'); $matches[1] }
foreach ($line in $testcode1) {
    $webclient.downloadfile($source + $line, $destination + $line)
}


I'm not that good at PowerShell yet, and I get some errors, but I manage to download a couple of test files I threw into my wwwroot folder (the web.config file seems undownloadable, so I'd imagine that's one of my errors). When I tried to change my $source value to a subfolder on my site that had some test text files (example = http://testsite:8005/subfolder/), I get errors and no downloads at all. Running my $testcode1 gives me the following links in my subfolder:
/subfolder/test2/txt
/
/subfolder/test1.txt
/subfolder/test2.txt
I don't know why it lists the test2 file twice. I figured my problem was that since it was returning the subfolder/file format, I was getting errors because I was trying to download $source + $line, which would essentially be http://testsite:8005/subfolder/subfolder/test1.txt. But when I tried to remedy that by adding a $root value that was the root directory of my site and doing foreach($line in $testcode1) { $webclient.downloadfile($root + $line, $destination + $line) }, I still get errors.
If some of you high-speed gurus can help show me the error of my ways, I'd be grateful. I am looking to download all the files in each subfolder on my site, which I know would involve some recursive action, but I currently don't have the skill level to do that myself. Thank you in advance for helping me out!
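Since the hrefs in an IIS directory listing come back site-relative (e.g. /subfolder/test1.txt), one rough sketch of the recursive approach the asker describes, untested against the asker's site and with $root and the "/"-suffix convention as assumptions, is to join each href to the site root and recurse into hrefs that end in a slash:

```powershell
# Rough sketch (assumptions: hrefs are site-relative, folder links end in "/",
# and "/" is the parent-directory link to skip).
$root = "http://testsite:8005"
$destination = "C:\users\administrator\desktop\testfolder"
$webclient = New-Object System.Net.WebClient

function Get-Files($relativePath) {
    # Extract hrefs from the directory listing at $root + $relativePath
    $links = $webclient.DownloadString($root + $relativePath) -split "<a\s+" |
        % { [void]($_ -match '^href=[''"]([^''">\s]*)'); $matches[1] }
    foreach ($link in $links) {
        if (-not $link -or $link -eq "/") { continue }   # skip empty/parent links
        if ($link.EndsWith("/")) {
            Get-Files $link                              # recurse into subfolder
        } else {
            $target = Join-Path $destination ($link -replace "/", "\")
            # Create the local folder before downloading into it
            New-Item (Split-Path $target) -ItemType Directory -Force | Out-Null
            $webclient.DownloadFile($root + $link, $target)
        }
    }
}

Get-Files "/"
```

This also sidesteps the doubled-folder problem, because every href is joined to $root rather than to the subfolder URL it was scraped from.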

The best way to download files from a website is to use

Invoke-WebRequest -Uri $url

Once you are able to get hold of the HTML, you can parse the content for the links.

$result = (((Invoke-WebRequest -Uri $url).Links | Where-Object {$_.href -like "http*"}) | select href).href

Give it a try. It's simpler than $webclient = New-Object System.Net.webclient.
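As a sketch of how that link extraction might feed a download loop (the URL, the *.txt filter, and the flat-folder layout are assumptions for illustration, not tested against the asker's server):

```powershell
# Sketch: extract hrefs with Invoke-WebRequest, then fetch each file.
$url = "http://testsite:8005/subfolder/"
$destination = "C:\users\administrator\desktop\testfolder\"

$links = (Invoke-WebRequest -Uri $url).Links |
    Where-Object { $_.href -like "*.txt" } |          # keep only the files we want
    Select-Object -ExpandProperty href

foreach ($href in $links) {
    $name = Split-Path $href -Leaf                    # strip any leading path segments
    Invoke-WebRequest -Uri ($url + $name) -OutFile ($destination + $name)
}
```

Taking only the leaf name avoids the http://testsite:8005/subfolder/subfolder/... doubling the asker ran into.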

This is to augment A_N's answer with two examples.

Download this Stack Overflow question to C:/temp/question.htm:

Invoke-RestMethod -Uri stackoverflow.com/q/19572091/1108891 -OutFile C:/temp/question.htm

Download a simple text document to C:/temp/rfc2616.txt:

Invoke-RestMethod -Uri tools.ietf.org/html/rfc2616 -OutFile C:/temp/rfc2616.txt

I made a simple PowerShell script to clone an OpenBSD package repo. It would probably work for, or could be adapted to, other similar use cases.

GitHub link

# Quick and dirty script to clone a package repo. Only tested against OpenBSD.
[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12
$share = "\\172.16.10.99\wmfbshare\obsd_repo\"
$url = "https://ftp3.usa.openbsd.org/pub/OpenBSD/snapshots/packages/amd64/"
cd $share
$packages = Invoke-WebRequest -Uri $url -UseBasicParsing
$dlfolder = "\\172.16.10.99\wmfbshare\obsd_repo\"
foreach ($package in $packages.links.href){
    if ((get-item $package -ErrorAction SilentlyContinue)){
        write-host "$package already downloaded"
    } else {
        write-host "Downloading $package"
        wget "$url/$package" -outfile "$dlfolder\$package"
    }
}

I would try this:

$webclient = New-Object System.Net.webclient
$source = "http://testsite:8005/"
$destination = "C:\users\administrator\desktop\testfolder\"
#The following line returns the links in the webpage
$testcode1 = $webclient.DownloadString($source) -split "<a\s+" | % { [void]($_ -match '^href=[''"]([^''">\s]*)'); $matches[1] }
foreach ($line in $testcode1) {
    # Use a separate variable: PowerShell variable names are case-insensitive,
    # so reassigning $Destination would clobber $destination on each iteration
    $target = Join-Path $destination $line
    #Create the target's directory if it doesn't exist
    $targetDir = Split-Path $target -Parent
    if (!(Test-Path $targetDir)) {
        New-Item $targetDir -type directory -Force
    }
    $webclient.downloadfile($source + $line, $target)
}

I think your only issue here is that you were grabbing a new file from a new directory and putting it into a folder that didn't exist yet (I could be mistaken).

You can do some additional troubleshooting if that doesn't fix your problem:

Copy each line individually into your PowerShell window and run them up to the foreach loop. Then type out your variable holding all the gold:

    $testcode1

When you enter that into the console, it should spit out exactly what's in there. Then you can do additional troubleshooting like this:

    "Attempting to copy $Source$line to $Destination$line"

And see if it looks the way it should all the way on down. You might have to adjust my code a bit.

-Dale Harris

Disclaimer: the technical posts on this site follow the CC BY-SA 4.0 license; if you need to republish, please credit this site or the original source.
