What are the lowest possible latencies for a FIX engine to send a FIX message from client to server?
I am building a FIX engine in C++, but I don't have a reference point for what would be considered a good performance number. Taking into account the network time and the FIX parsing time, what would be a good time, in microseconds, for a client to send a FIX message to a server? Also, does anyone know the current lowest latencies expected for this simple FIX-message-from-client-to-server operation?
That will depend on how fast your FIX engine can parse bytes into a FixMessage object and, more importantly, on how fast your network code is. Are you writing the network stack too? Writing a FIX engine looks simple from the outside, but it is actually a complex task with many corner cases and features you have to cover. Are you going to support retransmission? Asynchronous audit logging? FIX session timers? Repeating groups inside repeating groups? You should consider using an open-source or commercial FIX engine.
As for the latencies you should expect, I am unaware of any FIX engine that can go below 4.5 microseconds. That's the one-way total time to write a FixMessage object to a ByteBuffer, transfer the ByteBuffer over the network to the server, and have the server read the ByteBuffer from the network and parse it back into a FixMessage object. If you are using a decent FIX engine, the bottleneck will be the network I/O, not the FIX parsing.

To give you some numbers, here are the benchmarks for CoralFIX, a FIX engine written in Java. If you can go below that, please let me know :)
Messages: 1,800,000 (one-way)
Avg Time: 4.774 micros
Min Time: 4.535 micros
Max Time: 69.516 micros
75% = [avg: 4.712 micros, max: 4.774 micros]
90% = [avg: 4.726 micros, max: 4.825 micros]
99% = [avg: 4.761 micros, max: 5.46 micros]
99.9% = [avg: 4.769 micros, max: 7.07 micros]
99.99% = [avg: 4.772 micros, max: 9.481 micros]
99.999% = [avg: 4.773 micros, max: 24.017 micros]
Disclaimer: I am one of the developers of CoralFIX.
For the lowest achievable numbers in principle, do not forget to check the ASIC / FPGA-based FIX-protocol solutions. Any sequential / concurrent serial processing has a hard time becoming faster than a parallel-silicon engine solution.
One may achieve not much better than about a 25 ns resolution on a code-driven measurement such as aStopwatch.start(); callProcessUnderTest(); aStopwatch.stop(), but the real problems and issues are somewhere else.
To have some comparison: a 20 ns latency represents as little as about 5 m of AOC / active optical cable in the 120 Gbps interconnects of colocation houses / HPC clusters (a signal in fibre propagates at roughly 0.2 m/ns, i.e. about 5 ns per metre).
Performance tuning and latency minimisation are both thrilling and remarkable efforts.
At first sight, asking the world for the "lowest possible anything" sounds attractive, as is common in recent media. However, any serious attempt to answer the said "lowest possible" is hard without proper care spent on problem disambiguation, if not outright demystification (especially to avoid marketing-generated promises of receiving answers before one even asks).
It is nothing new under the sun that, for this very reason, immense ITU-T / ITU-R and later IETF efforts have been spent on systematically defining specifications so as to avoid any potential misunderstanding, let alone misinterpretation (be it in definitions of standards, in acceptance-testing procedures, or in specifications of the minimum performance envelope a product / service has to meet so as to be fully interoperable).
So before taking any figure, be it in [ms], [us] or [ns], be sure we all have the same reference setup of a System-Under-Test, and be doubly sure between which two reference points [FROM]-[TO] the presented figure was in fact measured.
________________________________________________________________________
+0 [us]-[__BaseLINE__] a decision to send anything is made @ <localhost>
|
|- <localhost> process wishes to request
| a FIX-MarketData
| Streaming Updates
| for a single item EURCHF
| during LON session opening time
|
|- <localhost> process wishes to request
| a FIX-MarketData
| Single FullRefresh
| for an itemset of:
| { EURUSD, GBPUSD, USDJPY,
| AUDUSD, USDCAD, USDCHF }
| during LON session opening time
|
+ [us]-o======< REFERENCE POINT[A] >===================================
|
|- transfer all DATA to a formatting / assembly process-entity
|
+ [us]-o======< REFERENCE POINT[B] >===================================
|
|- completed a FIX-message payload to be sent
|
+ [us]-o======< REFERENCE POINT[C] >===================================
|
|- completed a FIX-message Header/Trailer/CRC to dispatch
|
+ [us]-o======< REFERENCE POINT[D] >===================================
|
|- inlined SSH/DES/AES cryptor communication service processing
|
+ [us]-o======< REFERENCE POINT[E] >===================================
|
|- L3/2 transport service protocol SAR / PMD
|
+ [us]-o======< REFERENCE POINT[F] >===================================
|
|- L2/1 PHY-device wire-on/off-load process ( NIC / FPGA )-engine
|
+ [us]-o======< REFERENCE POINT[G] >===================================
|
|- E2E transport xmit/delivery processing / latency
|
+ [us]-o======< REFERENCE POINT[H] >===================================
|
|- L1/2 PHY-device on "receiving"-side wire-on/off-load process
|
+ [us]-o======< REFERENCE POINT[I] >===================================
|
|- L2/3 transport recv/handshaking processing / latency
|
+ [us]-o======< REFERENCE POINT[J] >===================================
|
|- inlined SSH/DES/AES decryptor processing
|
+ [us]-o======< REFERENCE POINT[K] >===================================
|
|- incoming FIX-message Header/Trailer/CRC check-in
|
+ [us]-o======< REFERENCE POINT[L] >===================================
|
|- authentication / FIX-Protocol message-counter cross-validation
|
+ [us]-o======< REFERENCE POINT[M] >===================================
|
|- FIX-message requested service content de-mapping
|
+ [us]-o======< REFERENCE POINT[N] >===================================
|
|- FIX-message requested service execution / handling
|
+ [us]-o======< REFERENCE POINT[O] >===================================
|
|- FIX-message requested service response pre-processing / assy
|
+ [us]-o======< REFERENCE POINT[P] >===================================
|
[__FinishLINE__] Ready To Send anything back to <localhost>
|
+ [us]-o======< REFERENCE POINT[Q] >===================================
________|_______________________________________________________________
: SUBTOTAL BEFORE A REQUESTED SERVICE'S RESPONSE-DELIVERY STARTS
________________________________________________________________________
As an inspiration, try to imagine something like a uniform latency-reporting structure for all vendors (or for your project's internal Dev/Test teams), like the example above.
If you need the lowest latency, the FIX client and the FIX server should be on the same server, even in the same application, using an IPC solution such as the Disruptor.
I attended a presentation by the Singapore Stock Exchange a few years ago. They had recently purchased the Nasdaq OMX platform and claimed the lowest matching times at the time, at around 8 microseconds if messages were sent via the native protocol. They then said they support FIX, which would add 2-3 microseconds on top of the matching time...

I guess you can use this 2-3 microsecond number as a sort of minimal FIX overhead that an exchange claiming to be the fastest managed to achieve :)