
Swift HMAC doesn't match NodeJS HMAC, but only sometimes!

I have discovered a HUGE issue in my code, and I have literally no idea what is causing this.

SO, when I send requests to my server I hash a string that's in the request. This is sometimes user input.

My app is multilingual, so I have to support all "ä" chars etc.

So with normal English letters/numbers etc., this hashing method works like a dream. BUT when the string being hashed and compared contains an "ä" or an "ö" (not specifically those; it might be that any char not in the Base64 set will cause this), the hash doesn't match!

This is an absolute and complete disaster, and I had not noticed it until now. I have tried basically everything I know to try to fix this, plus googling, and I am out of luck so far.

I generate the hash in Swift by inputting the string and secretToken into this function and saving the output as an HTTP header:

func hmac(string: String, key: String) -> String {
    var digest = [UInt8](repeating: 0, count: Int(CC_SHA256_DIGEST_LENGTH))
    CCHmac(CCHmacAlgorithm(kCCHmacAlgSHA256), key, key.count, string, string.count, &digest)
    let data = Data(digest)
    return data.map { String(format: "%02hhx", $0) }.joined()
}

How I compare the hash in NodeJS:

if (hashInTheRequest === crypto.createHmac('sha256', secretToken).update(stringToHash).digest('hex')) {
    //Good to go
}

Thanks in advance!

This could be due to a composition issue. You mentioned non-Latin characters, but didn't give any concrete examples of where you had problems.

What is composition?

Unicode aims to be able to represent any character used by humanity. However, many characters are similar, such as u, ü, û and ū. The original idea was to just assign a code point to every possible combination. As one might imagine, this is not the most efficient way to store things. Instead, the "base" character is used, and then a combining character is added to it.

Let's look at an example: ü

ü can be represented as U+00FC, also known as LATIN SMALL LETTER U WITH DIAERESIS.

ü can also be represented as U+0075 (u), followed by U+0308 (◌̈), also known as LATIN SMALL LETTER U, followed by COMBINING DIAERESIS.
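The two representations above can be seen directly in Node. This is a minimal sketch: the literals encode the precomposed and decomposed forms of ü described in the previous two paragraphs.

```javascript
// Precomposed form: LATIN SMALL LETTER U WITH DIAERESIS (one code point).
const precomposed = '\u00FC';
// Decomposed form: LATIN SMALL LETTER U + COMBINING DIAERESIS (two code points).
const decomposed = '\u0075\u0308';

// Visually identical, but not the same code points...
console.log(precomposed === decomposed);                                   // false
// ...until both are brought to the same normalization form (NFC here).
console.log(precomposed.normalize('NFC') === decomposed.normalize('NFC')); // true

// They also encode to different UTF-8 byte sequences.
console.log(Buffer.from(precomposed, 'utf8').length); // 2 bytes
console.log(Buffer.from(decomposed, 'utf8').length);  // 3 bytes
```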

Why is this problematic?

Because hash functions don't know what a string is. All they care about is bytes. As such, a string has to be encoded to a sequence of bytes. As was shown above, there are multiple ways to represent the same logical string, which means that two different systems can encode it to different bytes, thus producing different hashes.

How can I fix this?

You have to explicitly define how the string will be encoded on both platforms, including its Unicode normalization form, to ensure that both produce the exact same bytes.
