
What is the relationship between the XDR protocol's endianness and network byte order?

The XDR protocol uses big-endian byte order. How does this relate to the concept of network byte order? Is it a direct consequence, or are they independent?

I guess what I'm really asking is at which ISO/OSI layer network byte order is defined. XDR operates at the presentation layer, so is its use of big-endian byte order caused by the fact that the network byte order standard also covers the presentation layer?

The XDR protocol uses big-endian byte order.

The XDR protocol uses network byte order, which is big-endian.
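
For concreteness, here is a minimal sketch of what that means on the wire. Per RFC 4506, XDR encodes a 32-bit integer as four bytes, most significant byte first, which is exactly the layout the sockets helper htonl() produces. The xdr_encode_u32 helper below is hypothetical, standing in for the real xdr_int()/xdr_u_int() routines from <rpc/xdr.h> (libtirpc on modern Linux):

```c
/* A minimal sketch of XDR's integer wire format (RFC 4506): a 32-bit
 * value is sent as four bytes, most significant byte first -- i.e.
 * network byte order. xdr_encode_u32 is a hypothetical stand-in for
 * the real xdr_int()/xdr_u_int() routines from <rpc/xdr.h>. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>   /* htonl() */

/* Hypothetical helper: lay out a 32-bit value big-endian, as XDR does. */
static void xdr_encode_u32(uint8_t buf[4], uint32_t v)
{
    buf[0] = (uint8_t)(v >> 24);  /* most significant byte first */
    buf[1] = (uint8_t)(v >> 16);
    buf[2] = (uint8_t)(v >> 8);
    buf[3] = (uint8_t)(v);
}

int main(void)
{
    uint8_t  xdr_bytes[4];
    uint32_t wire = htonl(0x12345678u);   /* network byte order */

    xdr_encode_u32(xdr_bytes, 0x12345678u);

    /* The two representations are byte-for-byte identical:
     * 12 34 56 78 on the wire, regardless of host endianness. */
    printf("XDR:   %02x %02x %02x %02x\n",
           xdr_bytes[0], xdr_bytes[1], xdr_bytes[2], xdr_bytes[3]);
    printf("match: %s\n",
           memcmp(xdr_bytes, &wire, 4) == 0 ? "yes" : "no");
    return 0;
}
```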

How does this relate to the concept of network byte order? Is it a direct consequence, or are they independent?

See above.

I guess what I'm really asking is at which ISO/OSI layer network byte order is defined.

It isn't. XDR runs over TCP, and TCP belongs to the TCP/IP model, not the OSI model. XDR fits into the Application layer of that model. If OSI had any relationship with anything in the real world, which it doesn't, XDR would arguably fit into the Presentation layer, along with 3270 and practically nothing else. However, the network byte order of TCP/IP applies to all layers.
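
To illustrate network byte order applying below the application layer: the sockets API itself expects port numbers and IPv4 addresses in sockaddr_in in network byte order, because that is how they appear in the TCP and IP headers. A small sketch (the choice of port 2049, the NFS port, and the example address are just illustrative):

```c
/* A sketch of network byte order below the application layer: the
 * sockets API hands port numbers and IPv4 addresses to the kernel in
 * network byte order (big-endian), because that is how they appear in
 * the TCP and IP headers themselves. */
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>
#include <netinet/in.h>

int main(void)
{
    struct sockaddr_in addr;
    memset(&addr, 0, sizeof addr);

    addr.sin_family = AF_INET;
    addr.sin_port   = htons(2049);  /* NFS port; big-endian in the TCP header */
    inet_pton(AF_INET, "192.0.2.1", &addr.sin_addr);  /* stored big-endian,
                                                         as in the IP header */

    /* sin_port is now laid out most significant byte first:
     * prints "08 01", i.e. 0x0801 == 2049, on any host. */
    printf("port on the wire: %02x %02x\n",
           ((unsigned char *)&addr.sin_port)[0],
           ((unsigned char *)&addr.sin_port)[1]);
    return 0;
}
```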

XDR operates at the presentation layer, so is its use of big-endian byte order caused by the fact that the network byte order standard also covers the presentation layer?

It covers all layers, but it can only be enforced below the application layer; in application data, honouring it is up to the application, which is exactly what XDR does.
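
A sketch of what "not enforced at the application layer" looks like in practice: TCP delivers payload bytes verbatim, so an application that copies a raw int into its send buffer transmits host-order bytes, while one that converts with htonl() first (as XDR-based code does) transmits big-endian bytes. The value and buffer names below are illustrative:

```c
/* A minimal sketch of why network byte order is unenforced at the
 * application layer: TCP delivers payload bytes verbatim, so a raw
 * memcpy of an int sends it in host order (little-endian on x86),
 * while explicit conversion with htonl() yields network byte order. */
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <arpa/inet.h>

int main(void)
{
    uint32_t value = 0x12345678u;
    uint32_t be    = htonl(value);
    uint8_t  raw[4], converted[4];

    memcpy(raw, &value, 4);       /* host order: nothing below fixes this */
    memcpy(converted, &be, 4);    /* network order: the application did it */

    printf("raw:       %02x %02x %02x %02x  (host-dependent)\n",
           raw[0], raw[1], raw[2], raw[3]);
    printf("converted: %02x %02x %02x %02x  (always 12 34 56 78)\n",
           converted[0], converted[1], converted[2], converted[3]);
    return 0;
}
```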
