
C# BinaryReader ReadUTF from Java's DataOutputStream

I've been working on converting Java's DataInputStream and DataOutputStream classes over to C#. I've finished the DataOutputStream class; now the problems are all in the DataInputStream class.

Note: The reason I'm not using C#'s Encoding class is that Java's DataInputStream/DataOutputStream use a custom (modified) UTF-8 encoding.
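
For context, Java's modified UTF-8 differs from standard UTF-8 in a couple of ways, which is what rules out a plain Encoding.UTF8 round-trip. A small illustration (the byte values are written out by hand just to show the wire format):

using System.Text;

// Standard UTF-8 encodes U+0000 as a single 0x00 byte...
byte[] standard = Encoding.UTF8.GetBytes("\0");    // { 0x00 }

// ...whereas Java's writeUTF uses modified UTF-8: U+0000 becomes 0xC0 0x80,
// supplementary characters are written as two 3-byte surrogate encodings,
// and the data is preceded by a 2-byte big-endian length.
byte[] javaStyle = { 0x00, 0x02, 0xC0, 0x80 };     // what writeUTF("\0") puts on the wire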

Basically, I have this C# code, which uses the BinaryReader class:

public String ReadUTF()
    {
        int utflen = this.ReadUnsignedShort ();
        byte[] bytearr = new byte[utflen];
        char[] chararr = new char[utflen];

        int c, char2, char3;
        int count = 0;
        int chararr_count=0;

        this.ReadFully(bytearr, 0, utflen);

        while (count < utflen) {
            c = (int) bytearr[count] & 0xff;
            if (c > 127) break;
            count++;
            chararr[chararr_count++]=(char)c;
        }

        while (count < utflen) {
            c = (int) bytearr[count] & 0xff;
            switch (c >> 4) {
            case 0: case 1: case 2: case 3: case 4: case 5: case 6: case 7:
                /* 0xxxxxxx*/
                count++;
                chararr[chararr_count++]=(char)c;
                break;
            case 12: case 13:
                /* 110x xxxx   10xx xxxx*/
                count += 2;
                if (count > utflen)
                    throw new Exception(
                        "malformed input: partial character at end");
                char2 = (int) bytearr[count-1];
                if ((char2 & 0xC0) != 0x80)
                    throw new Exception(
                        "malformed input around byte " + count);
                chararr[chararr_count++]=(char)(((c & 0x1F) << 6) |
                                                (char2 & 0x3F));
                break;
            case 14:
                /* 1110 xxxx  10xx xxxx  10xx xxxx */
                count += 3;
                if (count > utflen)
                    throw new Exception(
                        "malformed input: partial character at end");
                char2 = (int) bytearr[count-2];
                char3 = (int) bytearr[count-1];
                if (((char2 & 0xC0) != 0x80) || ((char3 & 0xC0) != 0x80))
                    throw new Exception(
                        "malformed input around byte " + (count-1));
                chararr[chararr_count++]=(char)(((c     & 0x0F) << 12) |
                                                ((char2 & 0x3F) << 6)  |
                                                ((char3 & 0x3F) << 0));
                break;
            default:
                /* 10xx xxxx,  1111 xxxx */
                throw new Exception(
                    "malformed input around byte " + count);
            }
        }
        // The number of chars produced may be less than utflen
        return new String(chararr, 0, chararr_count);
    }

Here's my ReadUnsignedShort method:

public int ReadUnsignedShort()
    {
        int ch1 = BinaryReader.Read();
        int ch2 = BinaryReader.Read();
        if ((ch1 | ch2) < 0)
        {
            throw new EndOfStreamException(); // Temp- To be changed
        }
        return (ch1 << 8) + (ch2 << 0); 
    }

Here's the ReadFully method that's used, too:

public void ReadFully(byte[] b, int off, int len)
    {
        if(len < 0)
        {
            throw new IndexOutOfRangeException();
        }

        int n = 0;
        while(n < len) 
        {
            int count = ClientInput.Read(b, off + n, len - n);
            if(count < 0)
            {
                throw new EndOfStreamException(); // Temp - to be changed
            }
            n += count;
        }
    }

With the OutputStream, the problem was that I was using the Write(int) overload instead of Write(byte), but I don't think that's the case here; either that, or I must be blind.
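
For reference, the difference between those two overloads is easy to see with .NET's BinaryWriter: Write(int) emits four little-endian bytes, while Write(byte) emits a single byte. A quick sketch, purely for illustration:

using System;
using System.IO;

var buffer = new MemoryStream();
var writer = new BinaryWriter(buffer);

writer.Write(0x41);        // Write(int):  writes 4 bytes -> 41 00 00 00 (little-endian)
writer.Write((byte)0x41);  // Write(byte): writes 1 byte  -> 41
writer.Flush();

Console.WriteLine(BitConverter.ToString(buffer.ToArray()));   // "41-00-00-00-41"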

If you're interested in how the UTF string is sent, here's the C# conversion of it:

public int WriteUTF(string str)
    {
        int strlen = str.Length;
        int utflen = 0;
        int c, count = 0;

        for(int i = 0; i < strlen; i++) 
        {
            c = str.ToCharArray()[i];
            if((c >= 0x0001) && (c <= 0x007F)) 
            {
                utflen++;
            } 
            else if(c > 0x07FF)
            {
                utflen += 3;
            }
            else
            {
                utflen += 2;
            }
        }

        if(utflen > 65535)
        {
            throw new Exception("Encoded string is too long: " + utflen + " bytes");
        }

        byte[] bytearr = null;
        bytearr = new byte[(utflen*2) + 2];

        bytearr[count++] = (byte) (((uint)utflen >> 8) & 0xFF);
        bytearr[count++] = (byte) (((uint)utflen >> 0) & 0xFF);

        int x = 0;
        for(x = 0; x < strlen; x++) 
        {
            c = str.ToCharArray()[x];
            if (!((c >= 0x0001) && (c <= 0x007F))) break;
            bytearr[count++] = (byte)c;
        }

        for(;x < strlen; x++)
        {
            c = str.ToCharArray()[x];
            if ((c >= 0x0001) && (c <= 0x007F)) 
            {
                bytearr[count++] = (byte)c;
            }
            else if (c > 0x07FF)
            {
                bytearr[count++] = (byte) (0xE0 | ((c >> 12) & 0x0F));
                bytearr[count++] = (byte) (0x80 | ((c >>  6) & 0x3F));
                bytearr[count++] = (byte) (0x80 | ((c >>  0) & 0x3F));
            }
            else
            {
                bytearr[count++] = (byte) (0xC0 | ((c >>  6) & 0x1F));
                bytearr[count++] = (byte) (0x80 | ((c >>  0) & 0x3F));
            }
        }
        ClientOutput.Write (bytearr, 0, utflen+2);
        return utflen + 2;
    }

Hopefully I've provided enough information to get a little help with reading the UTF values; this is really putting a road-block in my project's progress.

If I understand the "question" correctly (such as it is — you say there's a "roadblock" but you fail to explain what exactly the "roadblock" is), you are trying to implement in C# the code to read and write text from the stream. If so, then (and I know if you're new to .NET this isn't immediately obvious) explicitly handling the text encoding yourself is insane.

BinaryReader and BinaryWriter have methods to handle this. When you create the objects, you can pass an Encoding instance (e.g. System.Text.Encoding.UTF8, System.Text.Encoding.Unicode, etc.), which is used to interpret or create binary data for text. You can use BinaryReader.ReadChars(int) to read text and BinaryWriter.Write(char[]) to write text.
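
For example, something along these lines (a minimal sketch using a MemoryStream; note that standard UTF-8 is not byte-for-byte identical to Java's modified UTF-8 for U+0000 and supplementary characters):

using System.IO;
using System.Text;

var stream = new MemoryStream();

// Write characters through an encoding-aware BinaryWriter.
var writer = new BinaryWriter(stream, Encoding.UTF8);
writer.Write("héllo".ToCharArray());             // BinaryWriter.Write(char[])
writer.Flush();

// Read them back with a BinaryReader that uses the same encoding.
stream.Position = 0;
var reader = new BinaryReader(stream, Encoding.UTF8);
string text = new string(reader.ReadChars(5));   // BinaryReader.ReadChars(int) -> "héllo"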

If for some reason that doesn't work, at the very least you can use an Encoding instance directly to interpret or create binary data for text. Encoding.GetString(byte[]) will convert binary to text, and Encoding.GetBytes(string) will convert text to binary. Again, use the specific Encoding instance that matches the actual text encoding you're dealing with.
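
For instance, the length-prefixed layout that DataOutputStream.writeUTF uses can be reproduced with Encoding.UTF8 roughly like this (a sketch only; the helper names are mine, and it ignores the modified-UTF-8 corner cases around U+0000 and supplementary characters):

using System.IO;
using System.Text;

// Write a 2-byte big-endian length followed by the UTF-8 bytes (writeUTF-style framing).
public void WriteUtf(Stream output, string text)
{
    byte[] data = Encoding.UTF8.GetBytes(text);   // text -> binary
    output.WriteByte((byte)(data.Length >> 8));
    output.WriteByte((byte)(data.Length & 0xFF));
    output.Write(data, 0, data.Length);
}

// Read the length, then decode exactly that many bytes back into a string.
public string ReadUtf(Stream input)
{
    int length = (input.ReadByte() << 8) | input.ReadByte();
    byte[] data = new byte[length];
    int read = 0;
    while (read < length)
    {
        int n = input.Read(data, read, length - read);
        if (n <= 0) throw new EndOfStreamException();
        read += n;
    }
    return Encoding.UTF8.GetString(data);         // binary -> text
}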

I have written a C# conversion of Java's DataInputStream and DataOutputStream; you can get them here:

https://bitbucket.org/CTucker1327/c-datastreams/src

To construct these classes you would pass a BinaryWriter or BinaryReader into the constructor.

To construct a DataOutputStream:

DataOutputStream output = new DataOutputStream(new BinaryWriter(stream));

To construct a DataInputStream:

DataInputStream input = new DataInputStream(new BinaryReader(stream));
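
Assuming the port keeps the Java-style method names (WriteUTF and ReadUTF, as in the code earlier in this question), usage over an already-open stream would look roughly like this:

// 'stream' is assumed to be an open, bidirectional Stream (e.g. a NetworkStream).
DataOutputStream output = new DataOutputStream(new BinaryWriter(stream));
output.WriteUTF("hello");

DataInputStream input = new DataInputStream(new BinaryReader(stream));
string message = input.ReadUTF();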
