mql4: Get data from site
How can I get the "Maintenance" value from a site using an MQL4 script?
As I understand it, I must set up an internet connection, fetch the data from the site, parse it, and extract the value.
Is there a way to do this? I'd be grateful for any example.
MQL4 itself offers no native HTML-parsing support. This means ( if not using a DLL-workaround ) the whole task would require one to re-invent the wheel and build an HTML-parser inside MQL4. Doable, but a waste of anyone's resources.
Once going to use DLL-imported functionalities, one may simply bypass the MQL4 code-execution restrictions and call Windows-API services to fork a sub-process and make things move forwards. However, the Windows-API is, in my opinion, rather a feature-rich interfacing framework for pretty low-level access to elementary services, so you may finally find yourself re-inventing the wheel again, only now "outside" the MQL4 sandbox restrictions.
If you do not restrict your imagination, your project may benefit from rapid prototyping in Python and from setting up peer-to-peer distributed messaging/control in a heterogeneous Python / MQL4 environment.
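To make the idea concrete, here is a minimal sketch of such a Python / MQL4 rendezvous over a plain localhost TCP socket. Everything here is illustrative, not part of any standard bridge: the real MQL4 side would connect via a socket-capable DLL import, which is mocked below by a plain Python client.

```python
import socket
import threading

srv = socket.socket( socket.AF_INET, socket.SOCK_STREAM )
srv.bind( ( "127.0.0.1", 0 ) )           # port 0: let the OS pick a free port
srv.listen( 1 )
HOST, PORT = srv.getsockname()

def push_one_value( aValueSTRING ):
    """Python side: wait for one client, send the scraped value, close."""
    conn, _ = srv.accept()
    conn.sendall( aValueSTRING.encode() )
    conn.close()

t = threading.Thread( target = push_one_value, args = ( "Maintenance=1234.5", ) )
t.start()

# a mock of the MQL4 side, polling for the value:
cli     = socket.create_connection( ( HOST, PORT ) )
payload = cli.recv( 1024 ).decode()
cli.close()
t.join()
srv.close()
print( payload )                          # -> Maintenance=1234.5
```

In a real deployment one would rather use a proper messaging layer ( ZeroMQ is a popular choice for MQL4/Python links ), but the division of labour stays the same: Python scrapes, MQL4 trades.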
Besides other benefits, Python's strength in smart and powerful ( not only ) web-content processing is fabulous, so this distributed approach can open your MQL4 projects into strategically new, unseen dimensions.
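As a small taste of that strength, the "Maintenance" value the question asks about can be pulled out of a page with nothing but the Python 3 standard-library `html.parser` ( the snippet below assumes a hypothetical label/value table layout like `<td>Maintenance</td><td>1234.5</td>`; adapt `handle_data()` to the real page ):

```python
from html.parser import HTMLParser     # Python 3 stdlib, no third-party deps

class MaintenanceScraper( HTMLParser ):
    """Collect the text of the cell that follows a 'Maintenance' label cell."""
    def __init__( self ):
        super().__init__()
        self.grab_next = False         # True once the label cell was seen
        self.value     = None

    def handle_data( self, data ):
        text = data.strip()
        if self.value is not None or not text:
            return                     # already found, or whitespace-only node
        if self.grab_next:
            self.value = text          # the cell right after the label
        elif text == "Maintenance":
            self.grab_next = True

sample  = "<table><tr><td>Maintenance</td><td>1234.5</td></tr></table>"
scraper = MaintenanceScraper()
scraper.feed( sample )
print( scraper.value )                 # -> 1234.5
```

For anything beyond a toy page, third-party tools like BeautifulSoup or lxml make this far more robust, but the stdlib version shows the principle without any installation step.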
Python smart-scraping ( not a dumb-force one ):
def askAtPublisherWebURL( aControlDICT,
                          aURL              = "https://globalde?.?.?.?y?.com/en/products/.../...-DLON?Class_type=class_symbol=???&Class_exchange=???&ps=999&md=03-2014",
                          anOPT             = "ESX",
                          aMaturityDATE     = "03-2014",
                          anEmailRECIPIENT  = "Me.Robot-GWY-2013-PoC@gmail.com",
                          aFileNAME         = "ESX_2014-03_anObservedStateTIMESTAMP[]"
                          ):
    import time, urllib, re, winsound, urllib2                  # late, dirty import ( Python 2 )
    try:
        aReturnFLAG     = True
        anOutputSTRING  = "|TRYING: " + aURL                    # a crash-proof proxy-value for a case an IOError <EXC> would appear
        # --------------------------------------------------------# urllib2 MODE
        anInputHANDLER  = urllib2.urlopen( aURL, None, 120 )    # urllib2 MODE with a 120 [sec] timeout before urllib2.URLError ... still gets stuck during peak-hours ( on aMaturityDATE )
        aListOfLINEs    = anInputHANDLER.readlines()
        anInputHANDLER.close()
        # --------------------------------------------------------# urllib2 MODE
    except urllib2.URLError as anExcREASON:
        aReturnFLAG     = False
        # no RET here // JMP .FINALLY: to log the URLError ...
    except IOError as anExcIOERROR:                             # an IOError <EXC> has appeared, handle with care before JMP .FINALLY:
        aReturnFLAG     = False
    else:                                                       # no IOError nor any other <EXC>, process the <content> .. JMP .FINALLY:
        # ------------------------------------------------------# HTML-processor
        # smart html-processing goes here
        pass
        # ------------------------------------------------------# HTML-processor
    finally:                                                    # in any case, do all this TIDY-UP-BEFORE-EXIT
        # fileIO + pre-exit ops
        # sendMsg4MQL() --> SIG MT4
        return aReturnFLAG                                      # MISRA-motivated single point of RET
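The `sendMsg4MQL() --> SIG MT4` placeholder in the `finally:` block could, as one simple option, drop the scraped value into a file under the MetaTrader `MQL4\Files` sandbox, where an MQL4 script can pick it up with `FileOpen()` / `FileReadString()`. A minimal Python 3 sketch ( the function name, file name, and directory are illustrative; your terminal's data-folder path will differ ):

```python
import os
import tempfile

def sendMsg4MQL( aValueSTRING, anMT4FilesDIR ):
    """Write the scraped value where an MQL4 FileOpen() call can read it.
    anMT4FilesDIR is assumed to be <terminal data folder>\\MQL4\\Files."""
    aPath = os.path.join( anMT4FilesDIR, "maintenance_signal.txt" )
    with open( aPath, "w" ) as aFileHANDLER:
        aFileHANDLER.write( aValueSTRING )
    return aPath

# usage ( a temp dir stands in for the real MQL4\Files folder ):
aDemoDIR  = tempfile.mkdtemp()
aWrittenF = sendMsg4MQL( "1234.5", aDemoDIR )
with open( aWrittenF ) as aFileHANDLER:
    print( aFileHANDLER.read() )           # -> 1234.5
```

On the MQL4 side, something like `FileOpen( "maintenance_signal.txt", FILE_READ|FILE_TXT )` followed by `FileReadString()` would complete the hand-over; polling the file from `OnTimer()` keeps the MQL4 script decoupled from the Python scraper's schedule.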
For hawkish Pythoneers: the post intentionally uses non-PEP-8 source-code formatting, as it is the author's experience that, during a learning phase, this kind of code readability improves focus on the task solution and helps one get used to the underlying concepts, rather than spending effort on formally adhering to typography. I hope the principle of providing help is respected and the non-PEP-8 styling is forgiven in the name of ease of reading.