
Does HttpUtility.UrlEncode match the spec for 'x-www-form-urlencoded'?

Per MSDN

URLEncode converts characters as follows:

  • Spaces ( ) are converted to plus signs (+).
  • Non-alphanumeric characters are escaped to their hexadecimal representation.

This is similar to, but not exactly the same as, the W3C definition:

application/x-www-form-urlencoded

This is the default content type. Forms submitted with this content type must be encoded as follows:

  1. Control names and values are escaped. Space characters are replaced by '+', and then reserved characters are escaped as described in RFC 1738, section 2.2: Non-alphanumeric characters are replaced by '%HH', a percent sign and two hexadecimal digits representing the ASCII code of the character. Line breaks are represented as "CR LF" pairs (i.e., '%0D%0A').

  2. The control names/values are listed in the order they appear in the document. The name is separated from the value by '=' and name/value pairs are separated from each other by '&'.
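The two steps above can be sketched in C# (a hedged illustration using HttpUtility.UrlEncode from System.Web on .NET Framework; the field names are made up):

```csharp
using System;
using System.Linq;
using System.Web; // requires a reference to System.Web

class FormEncodeSketch
{
    // Step 1: escape each name and value; step 2: join with '=' and '&'.
    static string Encode(params (string Name, string Value)[] fields) =>
        string.Join("&", fields.Select(f =>
            HttpUtility.UrlEncode(f.Name) + "=" + HttpUtility.UrlEncode(f.Value)));

    static void Main()
    {
        // Space -> '+', '&' inside a value -> %26, so separators stay unambiguous.
        Console.WriteLine(Encode(("q", "a b"), ("tag", "C&V")));
        // q=a+b&tag=C%26V
    }
}
```

Because each name and value is escaped before the pairs are joined, a literal `&` or `=` inside the data cannot be confused with a separator.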

My question is, has anyone done the work to determine whether URLEncode produces valid x-www-form-urlencoded data?

Well, the documentation you linked to is for IIS 6's Server.UrlEncode, but your title asks about .NET's System.Web.HttpUtility.UrlEncode. Using a tool like Reflector, we can inspect the implementation of the latter and determine whether it meets the W3C spec.

Here is the encoding routine that is ultimately called (note that it is defined for an array of bytes; the overloads that take strings convert them to byte arrays and call this method). You would call this once for each control name and value, so that the reserved characters = and & used as separators are not escaped.

protected internal virtual byte[] UrlEncode(byte[] bytes, int offset, int count)
{
    if (!ValidateUrlEncodingParameters(bytes, offset, count))
    {
        return null;
    }
    int num = 0;  // count of spaces
    int num2 = 0; // count of bytes that need %HH expansion
    for (int i = 0; i < count; i++)
    {
        char ch = (char) bytes[offset + i];
        if (ch == ' ')
        {
            num++;
        }
        else if (!HttpEncoderUtility.IsUrlSafeChar(ch))
        {
            num2++;
        }
    }
    if ((num == 0) && (num2 == 0))
    {
        return bytes;
    }
    byte[] buffer = new byte[count + (num2 * 2)]; // each %HH adds two bytes
    int num4 = 0;
    for (int j = 0; j < count; j++)
    {
        byte num6 = bytes[offset + j];
        char ch2 = (char) num6;
        if (HttpEncoderUtility.IsUrlSafeChar(ch2))
        {
            buffer[num4++] = num6;
        }
        else if (ch2 == ' ')
        {
            buffer[num4++] = 0x2b; // '+'
        }
        else
        {
            buffer[num4++] = 0x25; // '%'
            buffer[num4++] = (byte) HttpEncoderUtility.IntToHex((num6 >> 4) & 15);
            buffer[num4++] = (byte) HttpEncoderUtility.IntToHex(num6 & 15);
        }
    }
    return buffer;
}

public static bool IsUrlSafeChar(char ch)
{
    if ((((ch >= 'a') && (ch <= 'z')) || ((ch >= 'A') && (ch <= 'Z'))) || ((ch >= '0') && (ch <= '9')))
    {
        return true;
    }
    switch (ch)
    {
        case '(':
        case ')':
        case '*':
        case '-':
        case '.':
        case '_':
        case '!':
            return true;
    }
    return false;
}

The first part of the routine counts the number of characters that need to be replaced (spaces and non-URL-safe characters). The second part allocates a new buffer and performs the replacements:

  1. URL-safe characters are kept as-is: a-z A-Z 0-9 ( ) * - . _ !
  2. Spaces are converted to plus signs
  3. All other bytes are converted to %HH (note that IntToHex emits lowercase hex digits)
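These three rules can be checked directly against HttpUtility.UrlEncode (assuming a .NET Framework project referencing System.Web; the outputs shown follow from the routine above):

```csharp
using System;
using System.Web; // requires a reference to System.Web

class UrlEncodeRules
{
    static void Main()
    {
        // 1. URL-safe characters pass through unchanged
        Console.WriteLine(HttpUtility.UrlEncode("Az09()*-._!")); // Az09()*-._!
        // 2. Spaces become plus signs
        Console.WriteLine(HttpUtility.UrlEncode("a b c"));       // a+b+c
        // 3. Everything else becomes %HH, with lowercase hex digits
        Console.WriteLine(HttpUtility.UrlEncode("a/b?c"));       // a%2fb%3fc
    }
}
```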

RFC 1738 states (emphasis mine):

Thus, only alphanumerics, the special characters "$-_.+!*'(),", and
reserved characters used for their reserved purposes may be used
unencoded within a URL.

On the other hand, characters that are not required to be encoded
(including alphanumerics) may be encoded within the scheme-specific
part of a URL, as long as they are not being used for a reserved
purpose.

The set of URL-safe characters allowed by UrlEncode is a subset of the special characters defined in RFC 1738. Namely, the dollar sign ($), single quote ('), plus sign (+), and comma (,) are missing from the safe set and will be encoded by UrlEncode even though the spec says they are safe. Since they may be used unencoded (not must), it still meets the spec to encode them (the second paragraph states that explicitly). Encoding + is in fact necessary for form data, since an unescaped + would be decoded as a space.
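For example, the dollar sign and comma (and also the single quote, which is likewise absent from IsUrlSafeChar) come back escaped even though RFC 1738 lists them among the special characters (again assuming a System.Web reference):

```csharp
using System;
using System.Web; // requires a reference to System.Web

class RfcSpecialsDemo
{
    static void Main()
    {
        // $ = 0x24, , = 0x2C, ' = 0x27 -- all escaped despite being in
        // RFC 1738's special set, which the spec explicitly permits.
        Console.WriteLine(HttpUtility.UrlEncode("$,'")); // %24%2c%27
    }
}
```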

With respect to line breaks: if the input contains a CR LF sequence, it will be escaped as %0d%0a. However, if the input contains only a bare LF, that will be escaped as %0a; the routine performs no normalization of line breaks.
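A minimal sketch of the caller-side normalization this implies (the Replace chain is an illustrative approach, not part of the framework):

```csharp
using System;
using System.Web; // requires a reference to System.Web

class LineBreakDemo
{
    static void Main()
    {
        Console.WriteLine(HttpUtility.UrlEncode("a\r\nb")); // a%0d%0ab
        Console.WriteLine(HttpUtility.UrlEncode("a\nb"));   // a%0ab

        // Normalize bare LF (and CR) to CR LF before encoding, so the
        // output matches the spec's "%0D%0A" requirement.
        string input = "a\nb";
        string normalized = input.Replace("\r\n", "\n").Replace("\r", "\n")
                                 .Replace("\n", "\r\n");
        Console.WriteLine(HttpUtility.UrlEncode(normalized)); // a%0d%0ab
    }
}
```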

Bottom line: it meets the specification, while additionally encoding the characters $ ' , (which RFC 1738 would allow unencoded), and the caller is responsible for providing suitably normalized line breaks in the input.
