
C# WebClient disable cache

Good day.

I'm using the WebClient class in my C# application to download the same file every minute. The application then performs a simple check to see whether the file has changed, and if it has, does something with it.
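For illustration, the "has the file changed" check can be as simple as comparing fingerprints of the downloaded bytes. This is just a sketch of that idea; the class and method names are made up for the example, not taken from the question:

```csharp
using System;
using System.Security.Cryptography;

static class FileChangeCheck
{
    // Returns a hex SHA-256 fingerprint of the downloaded bytes;
    // comparing fingerprints avoids keeping the whole previous file around.
    public static string Fingerprint(byte[] data)
    {
        using (var sha = SHA256.Create())
            return BitConverter.ToString(sha.ComputeHash(data));
    }

    // True when the newly downloaded bytes differ from the last snapshot.
    public static bool HasChanged(string previousFingerprint, byte[] newData)
    {
        return previousFingerprint != Fingerprint(newData);
    }
}
```

Of course, this only works if the download itself actually returns fresh bytes, which is exactly what the caching problem below prevents.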

Since this file is downloaded every minute, the WebClient caching system caches it and does not download it again, simply retrieving it from the cache, and that gets in the way of checking whether the downloaded file is new.

So I would like to know how I can disable the caching system of the WebClient class.

I've tried:

Client.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.BypassCache);

I also tried headers:

WebClient.Headers.Add("Cache-Control", "no-cache");

That didn't work either. So how can I disable the cache for good?

Thanks.

EDIT

I also tried the following CacheLevels: NoCacheNoStore, BypassCache, Reload. No effect. However, if I reboot my computer the cache seems to be cleared, but I can't be rebooting the computer every time.

UPDATE in face of recent activity (8 Sep 2012)

The answer marked as accepted solved my issue. To put it simply, I used sockets to download the file, and that solved my issue. Basically, a GET request for the desired file. I won't go into details on how to do it, because I'm sure you can find plenty of "how to" right here on SO to do the same yourself. Although this doesn't mean that my solution is also the best for you, my first advice is to read the other answers and see if any are useful.
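The asker doesn't show the socket code, but the general idea can be sketched like this. Everything below is an illustrative assumption (host, path, and helper names are placeholders), not the asker's actual program:

```csharp
using System;
using System.IO;
using System.Net.Sockets;
using System.Text;

static class RawHttpGet
{
    // Builds a minimal HTTP/1.1 GET request by hand. Because we write
    // the bytes to a raw socket ourselves, none of the WebClient /
    // WinINET caching machinery is involved at all.
    public static string BuildRequest(string host, string path)
    {
        return "GET " + path + " HTTP/1.1\r\n" +
               "Host: " + host + "\r\n" +
               "Cache-Control: no-cache\r\n" +
               "Connection: close\r\n" +
               "\r\n";
    }

    // Sends the request over a plain TCP connection (port 80, no TLS)
    // and returns the raw response: status line, headers, and body.
    public static string Fetch(string host, string path)
    {
        using (var client = new TcpClient(host, 80))
        using (var stream = client.GetStream())
        {
            byte[] request = Encoding.ASCII.GetBytes(BuildRequest(host, path));
            stream.Write(request, 0, request.Length);
            using (var reader = new StreamReader(stream, Encoding.ASCII))
                return reader.ReadToEnd(); // split on "\r\n\r\n" to get the body
        }
    }
}
```

A real implementation would also need to handle redirects, chunked transfer encoding, and HTTPS, which is why this is only a sketch.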

Anyway, since this question has seen some recent activity, I thought about adding this update to include some hints or ideas that I think should be considered by those facing similar problems, who have tried everything they could think of and are sure the problem doesn't lie with their code. The code is the likely culprit in most cases, but sometimes we just don't quite see it; go have a walk and come back after a few minutes, and you will probably see it at point-blank range like it was the most obvious thing in the first place.

Either way, if you're sure, then in that case I advise checking whether your request goes through some other device with caching capabilities (computers, routers, proxies, ...) before it gets to the intended destination.

Consider that most requests go through some of the devices mentioned before, most commonly routers, unless of course you are directly connected to the Internet via your service provider's network.

At one time my own router was caching the file. Odd, I know, but it was the case; whenever I rebooted it or connected directly to the Internet, my caching problem went away. And no, there wasn't any other device connected to the router that could be blamed, only the computer and the router.

And by the way, a general piece of advice, although it mostly applies to those who work on their company's development computers instead of their own: could your development computer, by any chance, be running a caching service of sorts? It is possible.

Furthermore, consider that many high-end websites or services use Content Delivery Networks (CDNs), and depending on the CDN provider, whenever a file is updated or changed, it takes some time for such changes to propagate across the entire network. Therefore, you might have had the bad luck of asking for a file in the middle of an update, when the CDN server closest to you hadn't finished updating.

In any case, especially if you are always requesting the same file over and over, or if you can't find where the problem lies, then, if possible, I advise you to reconsider your approach of requesting the same file time after time, and instead look into building a simple Web Service to satisfy the needs you first thought of satisfying with such a file in the first place.

And if you are considering such an option, I think you will probably have an easier time building a REST-style Web API for your own needs.

I hope this update is useful to you in some way; it certainly would have been for me a while back. Best of luck with your coding endeavors.

You could try appending a random number to your URL as part of a query string each time you download the file. This ensures that the URL is unique each time.

For example:

Random random = new Random();
string url = originalUrl + "?random=" + random.Next().ToString();
webclient.DownloadFile(url, downloadedfileurl);

From the above, I would guess that you have a problem somewhere else. Can you log HTTP requests on the server side? What do you get when you alter some random seed parameter?

Maybe the SERVER caches the file (if the log shows that the request is really triggered every minute).

Do you use ISA or Squid?

What is the HTTP response code for your request?

I know that answering with answers might not be popular, but a comment doesn't allow me this much text :)

EDIT:

Anyway, use the HttpRequest object instead of WebClient, and hopefully (if you place your doubts in WebClient) everything will be solved. If it isn't solved with HttpRequest, then the problem really IS somewhere else.

Further refinement:

Go even lower: How do I Create an HTTP Request Manually in .Net?

This is pure sockets, and if the problem still persists, then open a new question and tag it WTF :)

Try NoCacheNoStore:

Never satisfies a request by using resources from the cache and does not cache resources. If the resource is present in the local cache, it is removed. This policy level indicates to intermediate caches that they should remove the resource. In the HTTP caching protocol, this is achieved using the no-cache cache control directive.

client.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore); 

In some scenarios, network debugging software can cause this issue. To make sure your URL is not cached, you can append a random number as the last parameter to make the URL unique. This random parameter is in most cases ignored by servers (which try to read parameters sent as name-value pairs).

Example: http://www.someserver.com/?param1=val1&ThisIsRandom=RandomValue

Where ThisIsRandom=RandomValue is the newly added parameter.
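One detail worth handling: if the URL already has a query string, the extra parameter must be joined with `&` rather than `?`. A small helper along these lines (the name and the use of a GUID instead of a random number are my own choices, not from the answer) takes care of that:

```csharp
using System;

static class CacheBuster
{
    // Appends a unique value so every request uses a distinct URL and
    // no cache along the way can serve a stale copy. A GUID is used
    // here because, unlike Random, two rapid calls cannot collide.
    public static string MakeUnique(string url)
    {
        char separator = url.Contains("?") ? '&' : '?';
        return url + separator + "ThisIsRandom=" + Guid.NewGuid().ToString("N");
    }
}
```

Usage: `webclient.DownloadFile(CacheBuster.MakeUnique(originalUrl), localPath);`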

Using HttpRequest is definitely the right answer for your problem. However, if you wish to keep your WebBrowser/WebClient object from using cached pages, you should include not just "no-cache" but all of these headers:

<meta http-equiv="Cache-control" content="no-cache">
<meta http-equiv="Cache-control" content="no-store">
<meta http-equiv="Pragma" content="no-cache">
<meta http-equiv="Expires" content="-1">

In IE11, it didn't work for me until I included one or both of the last two.
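Note that the snippet above shows HTML `<meta http-equiv>` tags. When issuing the request yourself with WebClient, the rough equivalent (an assumption on my part, mirroring those directives as request headers plus a no-cache policy) would look like:

```csharp
using System.Net;
using System.Net.Cache;

static class NoCacheClient
{
    // Returns a WebClient configured to bypass the local cache and to
    // ask intermediate caches for a fresh copy on every request.
    public static WebClient Create()
    {
        var client = new WebClient();
        client.CachePolicy = new RequestCachePolicy(RequestCacheLevel.NoCacheNoStore);
        client.Headers[HttpRequestHeader.CacheControl] = "no-cache, no-store";
        client.Headers[HttpRequestHeader.Pragma] = "no-cache";
        return client;
    }
}
```

Keep in mind that WebClient clears custom headers after each request, so they have to be re-added (or the client re-created) before every download.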

None of the methods here seem to solve one problem: if a web page was once accessible and has now been deleted from the server, HttpWebResponse.GetResponse() will give you a response from a cached copy. Until a sufficient period of time has passed, or you restart the computer, it will NOT trigger the expected exception for a 404 "page not found" error, so you cannot know that the web page no longer exists at all.

I tried everything:

  • Setting headers like ("Cache-Control", "no-cache")
  • Setting "request.CachePolicy" to "noCachePolicy"
  • Deleting IE temp/history files.
  • Using a wired Internet connection without a router .......... NOTHING WORKED!

Fortunately, if the web page has changed its content, HttpWebResponse.GetResponse() will give you a fresh page reflecting the change.
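To tell a genuinely deleted page apart from other failures, the WebException raised by GetResponse() can be inspected for a 404 status. This is a sketch of that pattern (the helper names and return strings are illustrative):

```csharp
using System;
using System.Net;
using System.Net.Cache;

static class PageProbe
{
    // Distinguishes a deleted page (HTTP 404) from other failures by
    // looking at the response attached to the WebException, if any.
    public static string Classify(WebException ex)
    {
        var response = ex.Response as HttpWebResponse;
        if (response != null && response.StatusCode == HttpStatusCode.NotFound)
            return "deleted";
        return "error: " + ex.Status;
    }

    // Requests the URL with caching disabled and reports its state.
    public static string Probe(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.CachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
        try
        {
            using (var response = (HttpWebResponse)request.GetResponse())
                return "exists (" + (int)response.StatusCode + ")";
        }
        catch (WebException ex)
        {
            return Classify(ex);
        }
    }
}
```

As the answer above notes, this only helps once the stale cached copy stops being served; until then, GetResponse() never throws at all.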

I guess you will have to use WebRequest/WebResponse rather than WebClient:

    WebRequest request = WebRequest.Create(uri);
    // Define a cache policy for this request only.
    HttpRequestCachePolicy noCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
    request.CachePolicy = noCachePolicy;
    WebResponse response = request.GetResponse();

//below is the function for downloading the file

   public static int DownloadFile(String remoteFilename,
                           String localFilename)
    {
        // Function will return the number of bytes processed
        // to the caller. Initialize to 0 here.
        int bytesProcessed = 0;

        // Assign values to these objects here so that they can
        // be referenced in the finally block
        Stream remoteStream = null;
        Stream localStream = null;
        WebResponse response = null;

        // Use a try/catch/finally block as both the WebRequest and Stream
        // classes throw exceptions upon error
        try
        {
            // Create a request for the specified remote file name
            WebRequest request = WebRequest.Create(remoteFilename);
            // Define a cache policy for this request only. 
            HttpRequestCachePolicy noCachePolicy = new HttpRequestCachePolicy(HttpRequestCacheLevel.NoCacheNoStore);
            request.CachePolicy = noCachePolicy;
            if (request != null)
            {
                // Send the request to the server and retrieve the
                // WebResponse object 
                response = request.GetResponse();

                if (response != null)
                {
                    if (response.IsFromCache)
                    {
                        // The response came from the cache -- do what you want here.
                    }

                    // Once the WebResponse object has been retrieved,
                    // get the stream object associated with the response's data
                    remoteStream = response.GetResponseStream();

                    // Create the local file
                    localStream = File.Create(localFilename);

                    // Allocate a 1k buffer
                    byte[] buffer = new byte[1024];
                    int bytesRead;

                    // Simple do/while loop to read from stream until
                    // no bytes are returned
                    do
                    {
                        // Read data (up to 1k) from the stream
                        bytesRead = remoteStream.Read(buffer, 0, buffer.Length);

                        // Write the data to the local file
                        localStream.Write(buffer, 0, bytesRead);

                        // Increment total bytes processed
                        bytesProcessed += bytesRead;
                    } while (bytesRead > 0);
                }
            }
        }
        catch (Exception e)
        {
            Console.WriteLine(e.Message);
        }
        finally
        {
            // Close the response and streams objects here 
            // to make sure they're closed even if an exception
            // is thrown at some point
            if (response != null) response.Close();
            if (remoteStream != null) remoteStream.Close();
            if (localStream != null) localStream.Close();
        }

        // Return total bytes processed to caller.
        return bytesProcessed;
    }
client.CachePolicy = new RequestCachePolicy(RequestCacheLevel.BypassCache);

should work. Just make sure you clear the cache and delete any temporarily downloaded files in Internet Explorer before running the code, as System.Net and IE both use the same cache.

I had a similar problem with PowerShell using WebClient, which was still present after switching to WebRequest. What I discovered is that the socket is reused, and that causes all sorts of server/network-side caching (and in my case a load balancer got in the way too, which was especially problematic with HTTPS). The way around this is to disable keep-alive, and possibly pipelining, in the WebRequest object as below, which will force a new socket for each request:

# Define funcs
Function httpRequest {
    param([string]$myurl)
    $r = [System.Net.WebRequest]::Create($myurl)
    $r.KeepAlive = $false
    $sr = New-Object System.IO.StreamReader (($r.GetResponse()).GetResponseStream())
    $sr.ReadToEnd()
}
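The C# equivalent of this trick is a sketch like the following; the URL is a placeholder, and disabling `Pipelined` is only shown for completeness since pipelining requires keep-alive anyway:

```csharp
using System.Net;

static class FreshSocketRequest
{
    // Disabling keep-alive sends "Connection: close", forcing a new
    // TCP socket per request and bypassing any connection-level
    // caching by load balancers or proxies.
    public static HttpWebRequest Create(string url)
    {
        var request = (HttpWebRequest)WebRequest.Create(url);
        request.KeepAlive = false;
        request.Pipelined = false;
        return request;
    }
}
```

The trade-off is the cost of a fresh TCP (and TLS) handshake on every request, which is acceptable for a once-a-minute poll.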

Check that you are not being rate-limited! I was getting this back from an nginx server:

403 Forbidden

Rate limited exceeded, please try again in 24 hours.

Here is the program I was using (C#). The console prints out:

C:\Users\jake.scott.WIN-J8OUFV09HQ8\AppData\Local\Temp\2\tmp7CA.tmp
Download file complete
The remote server returned an error: (403) Forbidden.

Since I use the following:

wclient.CachePolicy = new System.Net.Cache.RequestCachePolicy(System.Net.Cache.RequestCacheLevel.NoCacheNoStore);
wclient.Headers.Add("Cache-Control", "no-cache");

I don't get a cached file anymore.

I additionally added this function, which I found, to delete the IE temp files before every call:

private void del_IE_files()
{
    string path = Environment.GetFolderPath(Environment.SpecialFolder.InternetCache);
    //for deleting files

    System.IO.DirectoryInfo DInfo = new DirectoryInfo(path);
    FileAttributes Attr = DInfo.Attributes;
    DInfo.Attributes = FileAttributes.Normal;

    foreach (FileInfo file in DInfo.GetFiles())
    {
        file.Delete();
    }

    foreach (DirectoryInfo dir in DInfo.GetDirectories())
    {
        try
        {
            dir.Delete(true); //delete subdirectories and files
        }
        catch
        {
            // Some cache subdirectories are locked by the system; skip them.
        }
    }
}

If you have access to the web server, open Internet Explorer and go to:

Internet Explorer -> Internet Options -> Browsing History "Settings" -> Temporary Internet Files "never"

Clear the browser cache and voilà, it will work!
