
How to change password on ssh key

When I tried to change my password on an ssh key, I received the following error message:

@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
@         WARNING: UNPROTECTED PRIVATE KEY FILE!          @
@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@
Permissions 0644 for '/Users/username/.ssh/id_rsa' are too open. 
It is recommended that your private key files are NOT accessible by others.
This private key will be ignored.

What does it mean? How can I fix this?

Edit: This solved my problem:

sudo chmod 600 ~/.ssh/id_rsa

Choose the algorithm that is the quickest, since you probably care about doing this in real time. Generally, for smaller blocks of data the algorithms compress about the same (give or take a few bytes), mostly because they need to transmit the dictionary or Huffman trees in addition to the payload.

I highly recommend Deflate (used by zlib and Zip) for a number of reasons. The algorithm is quite fast, well tested, BSD licensed, and is the only compression required to be supported by Zip (as per the Info-ZIP Appnote). Beyond the basics, when Deflate determines that the compressed output would be larger than the uncompressed input, there's a STORE mode that adds only 5 bytes per block of data (the maximum block is 64K bytes). Aside from the STORE mode, Deflate supports two different types of Huffman tables (or dictionaries): dynamic and fixed. A dynamic table means the Huffman tree is transmitted as part of the compressed data and is the most flexible choice (for varying types of non-random data). The advantage of a fixed table is that it is known to all decoders and thus doesn't need to be included in the compressed stream. The decompression (or Inflate) code is relatively easy. I've written both Java and JavaScript versions based directly on zlib, and they perform rather well.

The other compression algorithms mentioned have their merits. I prefer Deflate because of its runtime performance, both in the compression step and particularly in the decompression step.

A point of clarification: Zip is not a compression type, it is a container. For doing packet compression, I would bypass Zip and just use the deflate/inflate APIs provided by zlib.

If you want to "compress TCP packets", you might consider using an RFC-standard technique:

  • RFC1978 PPP Predictor Compression Protocol
  • RFC2394 IP Payload Compression Using DEFLATE
  • RFC2395 IP Payload Compression Using LZS
  • RFC3173 IP Payload Compression Protocol (IPComp)
  • RFC3051 IP Payload Compression Using ITU-T V.44 Packet Method
  • RFC5172 Negotiation for IPv6 Datagram Compression Using IPv6 Control Protocol
  • RFC5112 The Presence-Specific Static Dictionary for Signaling Compression (Sigcomp)
  • RFC3284 The VCDIFF Generic Differencing and Compression Data Format
  • RFC2118 Microsoft Point-To-Point Compression (MPPC) Protocol

There are probably other relevant RFCs I've overlooked.

[Image: scatter plot of ASCII message compression]

This is a follow-up to Rick's excellent answer, which I've upvoted. Unfortunately, I couldn't include an image in a comment.

I ran across this question and decided to try deflate on a sample of 500 ASCII messages that ranged in size from 6 to 340 bytes. Each message is a bit of data generated by an environmental monitoring system that gets transported via an expensive (pay-per-byte) satellite link.

The most fun observation is that the crossover point at which messages are smaller after compression is the same as the answer to the Ultimate Question of Life, the Universe, and Everything: 42 bytes.

To try this out on your own data, here's a little bit of node.js to help:

const zlib = require('zlib')

// Substitute your own packet here; this sample stands in for one message.
const data_packet = Buffer.from('station=7,temp=21.4,rh=58,batt=3.7')

const inflate_len = data_packet.length
const deflate_len = zlib.deflateRawSync(data_packet).length
// Positive delta means the "compressed" message actually grew.
const delta = +((deflate_len - inflate_len) / inflate_len * 100).toFixed(0)

console.log('inflated,deflated,delta(%)')
console.log(`${inflate_len},${deflate_len},${delta}`)

All of those algorithms are reasonable to try. As you say, they aren't optimized for tiny files, but your next step is to simply try them. It will likely take only 10 minutes to test-compress some typical packets and see what sizes result. (Try different compression flags too.) From the resulting files you can likely pick out which tool works best.

The candidates you listed are all good first tries. You might also try bzip2.

Sometimes a simple "try them all" approach is a good solution when the tests are easy to do; thinking too much can sometimes slow you down.

You may test bicom. This algorithm is forbidden for commercial use. If you want something for professional or commercial usage, look at "range coding" algorithms.

While fftcc's answer gives detailed instructions on how to make your permissions conform to ssh's requirements, it may be useful to understand just why these requirements exist.

You can think of a pair of private/public keys as a secret and a test.

The secret is the private key: it is only known to you. It is like a door key: it fits exactly one lock. The lock tests the secret: the public key is able to verify the private key. (The actual cryptography is more interesting: the public key can test the private key without knowing it, which a door lock cannot do.)

The door lock is public: everybody can see it and try to put their key in it (and trust me, they do), but it will only ever accept the right one.

If you let people copy the private key (or your door key), they can enter your server (or your house). Therefore, nobody may read that private key.

As explained in the other answer, write permission on any directory above the secret lets a user recursively acquire permissions until they reach the secret, which is why ssh imposes requirements even on the user's home directory above it, which seems weird at first. By the way, I'm not sure whether ssh simply assumes or checks that the directories above the user's home directory are also write-protected from the public: that's not necessarily a given.

From these principles, the necessary permissions for the various files mostly make sense. A certain complication arises because one would assume that some files like authorized_keys, especially on the server side, need third-party read access so that sshd can read them. But sshd runs as root (it opens a socket on a privileged port, after all) and can read anything it desires, independent of file permissions.

By the way, this implies that your local system admin intern can read the secret keys you use to access your bitcoin wallet, chmod 600 my butt. For that reason it is possible to encrypt your private key, which seems... redundant at first, but makes perfect sense if you have seen our admin. This question was actually concerned with encrypting the private key; the permission issues were purely incidental.

ssh-keygen -f ~/.ssh/YOUR_PRIVATE_SSH_KEY -p

If the terminal displays the message Permissions 0644, run the command as root.

If the terminal displays the message failed: Permission denied:

To fix permission issues, first you need to set the correct ownership and permissions for the home directory and the .ssh directory:¹

sudo chown -R user:user $HOME
sudo chmod 750 $HOME
sudo chmod -R 700 $HOME/.ssh

This sets the permissions on all files in .ssh needed to satisfy SSH's requirements. SSH's recommendations and requirements for the individual files in .ssh are listed below (from the manual page):

~/.ssh/id_rsa (or any private key) — These files contain sensitive data [namely your authentication secret] and should be readable by the user, but not accessible by others (read/write/execute), e.g. 0600. ssh will simply ignore a private key file if it is accessible by others.

sudo chmod 600 ~/.ssh/id_rsa

~/.ssh/config — Due to the possibility of abuse, this file must have strict permissions: read/write for the user, and not writable by others; 0644 is enough.

sudo chmod 644 ~/.ssh/config

~/.ssh/authorized_keys — This file is not highly sensitive, but the recommended permissions are read/write for the user and not writable by others: 0644.²

sudo chmod 644 ~/.ssh/authorized_keys

~/.ssh/known_hosts — This file is not highly sensitive, but read/write permissions are recommended for the user and read-only for others, hence 0644.

chmod 644 ~/.ssh/known_hosts

~/.ssh/ — There is no general requirement to keep all the contents of this directory secret, but the recommended permissions are read/write/execute for the user and inaccessible to others: 0700 is enough.

sudo chmod 700 ~/.ssh

~/.ssh/id_rsa.pub (or any public key) — These files are not confidential and can (but need not) be readable by anyone.


¹ Write permission on a directory lets a user change the permissions of the files and directories it contains. .ssh contains the secret private key, which must not be known by anybody except the owner. If a different user had write access to the containing directory (.ssh), they could change the permissions of the secret key in that directory and read the file. This argument applies recursively all the way up to the file system root.

² Even though authorized_keys does not contain strict secrets (all keys in it are public), it specifies who can log in: anybody with one of the (unknown but verifiable) private keys associated with the public keys listed in the file. Therefore, write privilege to authorized_keys must be restricted to the account owner.

I don't think the file size matters much; if I remember correctly, LZW in GIF resets its dictionary every 4K.

ZLIB should be fine. It is used in MCCP.

However, if you really need good compression, I would do an analysis of common patterns and include a dictionary of them in the client, which can yield even higher levels of compression.

I've had luck using the zlib compression libraries directly and not using any file containers. ZIP and RAR have overhead to store things like filenames. I've seen compression this way yield positive results (compressed size less than original) for packets down to 200 bytes.

You can try delta compression. Compression will depend on your data. If you have any encapsulation on the payload, then you can compress the headers.

From the man page:

ssh-keygen -p [-f keyfile] [-m format] [-N new_passphrase]
                   [-P old_passphrase]

I did what Arno Setagaya suggested in his answer: made some sample tests and compared the results.

The compression tests were done using 5 files, each 4096 bytes in size. Each byte inside these 5 files was generated randomly.

IMPORTANT: In real life, the data would not likely be all random, but would tend to have quite a few repeating bytes. Thus, in a real-life application the compression would tend to be a bit better than the following results.

NOTE: Each of the 5 files was compressed by itself (i.e., not together with the other 4 files, which would result in better compression). In the following results I just use the sum of the sizes of the 5 files, for simplicity.

I included RAR just for comparison, even though it is not open source.

Results (from best to worst):

LZOP: 20775 / 20480 * 100 = 101.44% of original size

RAR : 20825 / 20480 * 100 = 101.68% of original size

LZMA: 20827 / 20480 * 100 = 101.69% of original size

ZIP : 21020 / 20480 * 100 = 102.64% of original size

BZIP: 22899 / 20480 * 100 = 111.81% of original size

Conclusion: To my surprise, ALL of the tested algorithms produced output larger than the originals! I guess they are only good for compressing larger files, or files that have a lot of repeating bytes (not random data like the above). Thus I will not be using any type of compression on my TCP packets. Maybe this information will be useful to others who are considering compressing small pieces of data.

EDIT: I forgot to mention that I used the default options (flags) for each of the algorithms.
