AES encryption in JS equivalent of C#

I need to encrypt a string using AES encryption. This encryption was previously done in C#, but it now needs to be converted into JavaScript (to be run in a browser).

The current C# code for encryption is as follows -

public static string EncryptString(string plainText, string encryptionKey)
{
    byte[] clearBytes = Encoding.Unicode.GetBytes(plainText);

    using (Aes encryptor = Aes.Create())
    {
        Rfc2898DeriveBytes pdb = new Rfc2898DeriveBytes(encryptionKey, new byte[] { 0x49, 0x76, 0x61, 0x6e, 0x20, 0x4d, 0x65, 0x64, 0x76, 0x65, 0x64, 0x65, 0x76 });
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(clearBytes, 0, clearBytes.Length);
                cs.Close();
            }
            plainText = Convert.ToBase64String(ms.ToArray());
        }
    }
    return plainText;
}

I have tried to use CryptoJS to replicate the same functionality, but it's not giving me the equivalent encrypted base64 string. Here's my CryptoJS code -

function encryptString(encryptString, secretKey) {
    var iv = CryptoJS.enc.Hex.parse('Ivan Medvedev');
    var key = CryptoJS.PBKDF2(secretKey, iv, { keySize: 256 / 32, iterations: 500 });

    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

The encrypted string has to be sent to a server which will be able to decrypt it. The server is able to decrypt the encrypted string generated from the C# code, but not the one generated from the JS code. I tried to compare the encrypted strings generated by both pieces of code and found that the C# code generates longer encrypted strings. For example, keeping 'Example String' as the plainText and 'Example Key' as the key, I get the following result -

C# - eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
JS - 9ex5i2g+8iUCwdwN92SF+A==

The length of the JS encrypted string is always shorter than the C# one. Is there something I am doing wrong? I just have to replicate the C# code in the JS code.

Update:
My current code after Zergatul's answer is this -

function encryptString(encryptString, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());

    // take first 32 bytes as key (like in C# code)
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    // skip first 32 bytes and take next 16 bytes as IV
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);

    console.log(key.toString());
    console.log(iv.toString());

    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv });
    return encrypted;
}

As illustrated in his/her answer, if the C# code converts the plainText into bytes using ASCII instead of Unicode, both the C# and JS code produce identical results. But since I am not able to modify the decryption code, I have to make the JS code equivalent to the original C# code, which uses Unicode.

So, I tried to see what the difference is between the byte arrays produced by the ASCII and Unicode conversions in C#. Here's what I found -

ASCII Byte Array: [69,120,97,109,112,108,101,32,83,116, 114, 105, 110, 103]
Unicode Byte Array: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0, 114,0, 105,0, 110,0, 103,0]

So there are extra bytes for each character in C# (Unicode allocates twice as many bytes to each character as ASCII does).
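The same difference can be reproduced in CryptoJS (a small sketch; for ASCII-only text the UTF-8 encoder yields the same bytes as Encoding.ASCII in C#, while the UTF-16 LE encoder matches Encoding.Unicode):

// For ASCII-only input, UTF-8 gives one byte per character (like Encoding.ASCII),
// while UTF-16 LE gives two bytes per character (like Encoding.Unicode).
var asciiLike = CryptoJS.enc.Utf8.parse('Example String');
var utf16le = CryptoJS.enc.Utf16LE.parse('Example String');
console.log(asciiLike.sigBytes); // 14
console.log(utf16le.sigBytes);   // 28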

Here's the difference between the Unicode and ASCII conversions in C#, respectively -

ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==

Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=

So, since the generated key and IV have exactly the same byte arrays in both the Unicode and ASCII approaches, they should not produce different output, but somehow they do. I think it's because of clearBytes' length, as its length is used when writing to the CryptoStream.
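A quick length check supports this. With AES in CBC mode and PKCS#7 padding (the .NET and CryptoJS defaults), the 14-byte ASCII plaintext is padded to one 16-byte block, while the 28-byte Unicode plaintext is padded to two blocks, which explains the 24-character vs 44-character base64 strings (a rough sketch; expectedLength is just an illustrative helper, not part of CryptoJS):

// Illustrative helper (not part of CryptoJS): predicts the ciphertext size
// for AES-CBC with PKCS#7 padding, which always pads to the next full block.
function expectedLength(plainTextBytes) {
    var paddedBytes = (Math.floor(plainTextBytes / 16) + 1) * 16;
    var base64Chars = Math.ceil(paddedBytes / 3) * 4; // base64: 4 chars per 3 bytes
    return { paddedBytes: paddedBytes, base64Chars: base64Chars };
}

console.log(expectedLength(14)); // ASCII bytes   -> { paddedBytes: 16, base64Chars: 24 }
console.log(expectedLength(28)); // Unicode bytes -> { paddedBytes: 32, base64Chars: 44 }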

I tried to see what the bytes generated in the JS code look like and found that it uses words, which need to be converted into strings using the toString() method.

keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7

Since I am not able to affect the length of the encrypted string generated by the JS code (there is no direct access to the write stream), I am still stuck here.

Here is an example of how to reproduce the same ciphertext between C# and CryptoJS:

static void Main(string[] args)
{
    byte[] plainText = Encoding.Unicode.GetBytes("Example String"); // this is UTF-16 LE
    string cipherText;
    using (Aes encryptor = Aes.Create())
    {
        var pdb = new Rfc2898DeriveBytes("Example Key", Encoding.ASCII.GetBytes("Ivan Medvedev"));
        encryptor.Key = pdb.GetBytes(32);
        encryptor.IV = pdb.GetBytes(16);
        using (MemoryStream ms = new MemoryStream())
        {
            using (CryptoStream cs = new CryptoStream(ms, encryptor.CreateEncryptor(), CryptoStreamMode.Write))
            {
                cs.Write(plainText, 0, plainText.Length);
                cs.Close();
            }
            cipherText = Convert.ToBase64String(ms.ToArray());
        }
    }

    Console.WriteLine(cipherText);
}

And JS:

var keyBytes = CryptoJS.PBKDF2('Example Key', 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
// take first 32 bytes as key (like in C# code)
var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
// skip first 32 bytes and take next 16 bytes as IV
var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
// use the same encoding as in C# code, to convert string into bytes
var data = CryptoJS.enc.Utf16LE.parse("Example String");
var encrypted = CryptoJS.AES.encrypt(data, key, { iv: iv });
console.log(encrypted.toString());

Both pieces of code return: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=
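To double-check locally, the same ciphertext can be decrypted again with CryptoJS (a sketch, reusing the key, iv, and encrypted variables from the JS snippet above):

// Round-trip check (sketch): decrypt the base64 ciphertext with the same
// derived key/IV and decode the plaintext as UTF-16 LE.
var cipherParams = CryptoJS.lib.CipherParams.create({
    ciphertext: CryptoJS.enc.Base64.parse(encrypted.toString())
});
var decrypted = CryptoJS.AES.decrypt(cipherParams, key, { iv: iv });
console.log(decrypted.toString(CryptoJS.enc.Utf16LE)); // "Example String"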

TL;DR: the final code looks like this -

function encryptString(encryptString, secretKey) {
    encryptString = addExtraByteToChars(encryptString);
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    console.log(keyBytes.toString());
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    var encrypted = CryptoJS.AES.encrypt(encryptString, key, { iv: iv, });
    return encrypted;
}

function addExtraByteToChars(str) {
    let strResult = '';
    for (var i = 0; i < str.length; ++i) {
        strResult += str.charAt(i) + String.fromCharCode(0);
    }
    return strResult;
}

Explanation:

The C# code in Zergatul's answer (thanks to him/her) used ASCII to convert the plainText into bytes, while my C# code used Unicode. Unicode assigns an extra byte to each character in the resulting byte array, which does not affect the generation of the key and IV bytes, but does affect the result, since the length of the encrypted string depends on the length of the bytes generated from the plainText.
This can be seen in the following bytes generated for each of them, using "Example String" and "Example Key" as the plainText and secretKey respectively -

ASCII
clearBytes: [69,120,97,109,112,108,101,32,83,116,114,105,110,103,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eQus9GLPKULh9vhRWOJjog==

Unicode:
clearBytes: [69,0,120,0,97,0,109,0,112,0,108,0,101,0,32,0,83,0,116,0,114,0,105,0,110,0,103,0,]
encryptor.Key: [123,213,18,82,141,249,182,218,247,31,246,83,80,77,195,134,230,92,0,125,232,210,135,115,145,193,140,239,228,225,183,13,]
encryptor.IV: [101,74,46,177,46,233,68,252,83,169,211,13,249,61,118,167,]
Result: eAQO+odxOdGlNRB81SHR2XzJhyWtz6XmQDko9HyDe0w=

The JS result was similar, which confirmed that it was using ASCII byte conversion -

keyBytes: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d654a2eb12ee944fc53a9d30df93d76a7
key: 7bd512528df9b6daf71ff653504dc386e65c007de8d2877391c18cefe4e1b70d
iv: 654a2eb12ee944fc53a9d30df93d76a7  

Thus I just needed to increase the length of the plainText to make it produce the Unicode-equivalent bytes (sorry, not familiar with the exact term). Since Unicode assigns two bytes to each character in the byte array, keeping the second byte as 0, I basically created a gap after each of the plainText's characters and filled that gap with a character whose ASCII value is 0, using the addExtraByteToChars() function. And it made all the difference.

It's a workaround for sure, but it started working for my scenario. I suppose this may or may not prove useful to others, so I am sharing the findings. If anyone can suggest a better implementation of the addExtraByteToChars() function (or the proper term for this conversion, rather than "ASCII to Unicode", or a better, more efficient, less hacky way to do it), please do.
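For what it's worth, a less hacky sketch of the same idea is to reuse the CryptoJS.enc.Utf16LE encoder from the example above instead of padding the string manually; for ASCII-only input it should produce the same bytes as addExtraByteToChars() followed by CryptoJS's default string conversion (encryptStringUtf16 is just an illustrative name):

function encryptStringUtf16(plainText, secretKey) {
    var keyBytes = CryptoJS.PBKDF2(secretKey, 'Ivan Medvedev', { keySize: 48 / 4, iterations: 1000 });
    var key = new CryptoJS.lib.WordArray.init(keyBytes.words, 32);
    var iv = new CryptoJS.lib.WordArray.init(keyBytes.words.splice(32 / 4), 16);
    // Convert the string to UTF-16 LE bytes, like Encoding.Unicode in C#.
    var data = CryptoJS.enc.Utf16LE.parse(plainText);
    return CryptoJS.AES.encrypt(data, key, { iv: iv });
}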
