
Load a down server

I need to load/browse a (JavaScript-based) website every day. But the server of this site is down most of the time, so I can't always browse it.

Now, my question: how can I load this site even if its server is down?

EDIT: There is a registration process on this site. When the website is up, I fill in the information (i.e. user name, email, phone, date of birth, address, country and so on...) and submit it. The server then runs all the validation for the registration process. If the submission passes all the validation, it returns a PDF file to download; if any validation fails, no PDF is returned.

In this case, can I cache all of the web content and run it locally to get the registration form, just as if the server were up?

Additionally, would I need to run any script there?

If the page you are trying to access does not contain real-time dynamic content, then you might want to explore one of the page-caching services, like Google's web cache.

This site offers a variety of such services: http://www.cachedpages.com/

Note that a cached page will not be up to date; you will have to check when it was last cached.

A typical use case for such a service: Stack Overflow is currently down, and my search engine returns a Stack Overflow page that might contain the answer to my question. In that case I would open the cached page to bypass the downtime.
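If you want to script that lookup rather than open the cache by hand, here is a minimal sketch, assuming Google still keeps a cached copy of your page and serves it to automated requests (it may not, or it may block scripts); the example.com URL is only a placeholder for the real site.

    import urllib.error
    import urllib.parse
    import urllib.request

    target = "http://example.com/register"  # placeholder for the real page
    cache_url = ("https://webcache.googleusercontent.com/search?q=cache:"
                 + urllib.parse.quote(target, safe=""))

    req = urllib.request.Request(cache_url, headers={"User-Agent": "Mozilla/5.0"})
    try:
        with urllib.request.urlopen(req, timeout=30) as resp:
            # Print the start of the cached page, if Google has one
            print(resp.read().decode("utf-8", errors="replace")[:500])
    except urllib.error.HTTPError as err:
        # Google has no cached copy, or refuses automated requests
        print("No usable cached copy:", err.code)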

Alternatively, you might want to write a small script that runs a "curl"-style fetch against the troublesome web server and caches the content locally on your own computer, so that when the server goes down you can look up the most recently cached version on your own machine.
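A minimal sketch of that fetch-and-cache idea, using only the Python standard library; the URL and cache folder are placeholders, and you would run the script on a schedule (e.g. via cron) while the server is up.

    import pathlib
    import time
    import urllib.request
    from typing import Optional

    URL = "http://example.com/register"        # placeholder for the real site
    CACHE_DIR = pathlib.Path("site_cache")     # local folder for saved copies

    def fetch_and_cache(url: str) -> str:
        """Download the page and save a timestamped copy; return the HTML."""
        req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
        with urllib.request.urlopen(req, timeout=30) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        CACHE_DIR.mkdir(exist_ok=True)
        stamp = time.strftime("%Y%m%d-%H%M%S")
        (CACHE_DIR / ("page-" + stamp + ".html")).write_text(html, encoding="utf-8")
        return html

    def latest_cached() -> Optional[str]:
        """Return the most recently saved copy, or None if nothing is cached yet."""
        copies = sorted(CACHE_DIR.glob("page-*.html"))
        return copies[-1].read_text(encoding="utf-8") if copies else None

    if __name__ == "__main__":
        try:
            html = fetch_and_cache(URL)
            print("Fetched live copy:", len(html), "bytes")
        except OSError as err:                 # covers DNS failure, timeout, HTTP errors
            print("Server unreachable:", err)
            html = latest_cached()
            print("Using last cached copy" if html else "No cached copy available yet")

Keep in mind this only preserves HTML you have already fetched; anything the server computes for you, such as validating the registration form and generating the PDF, still requires the server to be up.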
