
Test SSH connection between Avi Vantage Controller and Service Engine Host

The Avi docs say to add an SSH public key to the known_hosts file on the SE hosts so the controller can log in, then install and start the Service Engine.

I'm pretty sure this isn't working properly. How can I test the ssh connection between the controller and the service engine host(s)? Where is the controller's private key stored?

I am guessing this is in reference to creating a "LinuxServer" cloud in Avi. On Avi, you have to do the following: 1) Configure an SSHUser (Administration > Settings > SSH Key Settings); alternatively, this can also be created from the UI during LinuxServer cloud creation. 2) Create the LinuxServer cloud (Infrastructure > Clouds) with the appropriate hosts and select the SSHUser from the dropdown.

The configured SSH keys are stored encrypted in the Avi Controller DB and are not exposed via the API/REST or on the file system. The Avi Controller modules use the decrypted key to connect to each host and provision the SE.

I suppose the docs are not clear - you don't add the Avi Controller's public key to each host; instead you add "your" own custom SSH key pair to the Avi Controller (via step 1 above) and add the corresponding public key on each host.
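
For example, a minimal sketch of preparing such a key pair (the file name avi_se_key is just illustrative; the host IPs are the ones discussed further below):

# Generate a dedicated key pair on your workstation
ssh-keygen -t rsa -b 2048 -f ~/.ssh/avi_se_key -C "avi-se-access"

# Copy the public key to each SE host, for the user the controller will log in as
ssh-copy-id -i ~/.ssh/avi_se_key.pub root@10.10.22.35
ssh-copy-id -i ~/.ssh/avi_se_key.pub root@10.10.22.71

# Then import the private key (~/.ssh/avi_se_key) as the SSHUser in the
# Avi Controller (Administration > Settings > SSH Key Settings, step 1 above)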

With regards to "testing" the SSH connection, since these are your own keys, you can simply run "ssh -i <path-to-private-key> username@host" to test SSH. Alternatively, the Cloud status will also indicate if SSH using the configured key failed for any reason.
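
For instance, assuming the example key pair above, a quick manual test from any machine that holds the private key would be:

# Key-based login should succeed without a password prompt
ssh -i ~/.ssh/avi_se_key root@10.10.22.35 "hostname && whoami"

# For a non-root user, also confirm passwordless sudo works, since the
# controller needs it to install the SE (the user name "admin" is just an example)
ssh -i ~/.ssh/avi_se_key admin@10.10.22.35 "sudo -n true && echo sudo OK"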

Please refer to http://kb.avinetworks.com/installing-avi-vantage-for-a-linux-server-cloud/ for the complete install guide.

Let me know if your question was related to a different Cloud/Topic.

Adding to what @Siva explained, the status of the connection is displayed on the controller's cloud page (from the menu Infrastructure -> Clouds, click on the cloud where the hosts are added). Also, if you hover the mouse over the State column of a host, you can see the detailed reason for the failure.

This is the Host Status in a Linux server cloud. In this case "Default-Cloud" is a Linux server cloud with 3 hosts, and SSH fails on one of them. In this example the host 10.10.99.199 is a fake entry, i.e. there is no host with that IP, hence SSH fails, whereas 10.10.22.71 and 10.10.22.35 are hosts for which the SSH credentials passed; the Service Engine was then deployed on them and they are ready for Virtual Services (load balancing, SSL termination, etc.) to be placed on them.
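
As an additional sanity check on a host where SSH passed, you can verify that the SE actually came up; this is only a sketch, assuming a Docker-based SE install (the container and unit names may differ on your setup):

# List running containers; the SE is typically deployed as a Docker container
docker ps

# If the avise.service unit (visible in the snippet further below) has been
# installed into systemd, its status can be checked as well
systemctl status avise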

@Davidn Coleman, in the comment you mentioned that you added the public key to authorized_hosts (you need to add the key to authorized_keys). Also, if the user for whom you added the SSH authorization is not root (i.e. /home/user/.ssh/authorized_keys), then make the user a sudoer (add an entry in /etc/sudoers for this user), and make sure the permissions for the .ssh dir and authorized_keys are set correctly (for security reasons and good practice).
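
For example, for a non-root user (a hypothetical user "avi" here), the fix on the SE host would look something like this:

# Put the key into authorized_keys (not authorized_hosts)
mkdir -p /home/avi/.ssh
cat avi_se_key.pub >> /home/avi/.ssh/authorized_keys

# Tighten ownership and permissions so sshd accepts the key
chown -R avi:avi /home/avi/.ssh
chmod 700 /home/avi/.ssh
chmod 600 /home/avi/.ssh/authorized_keys

# Grant the user sudo rights; whether you need NOPASSWD depends on your setup,
# so adjust to your security policy (or add the line to /etc/sudoers via visudo)
echo 'avi ALL=(ALL) NOPASSWD: ALL' > /etc/sudoers.d/avi
chmod 440 /etc/sudoers.d/avi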

The following is the snippet for the host 10.10.22.35.

[root@localhost ~]# ls -lrtha
total 318M
-rw-r--r--.  1 root root  129 Dec 28  2013 .tcshrc
-rw-r--r--.  1 root root  100 Dec 28  2013 .cshrc
-rw-r--r--.  1 root root  176 Dec 28  2013 .bashrc
-rw-r--r--.  1 root root  176 Dec 28  2013 .bash_profile
-rw-r--r--.  1 root root   18 Dec 28  2013 .bash_logout
-rw-------.  1 root root 1.2K May 27 13:56 anaconda-ks.cfg
drwxr-xr-x.  3 root root   17 May 27 14:07 .cache
drwxr-xr-x.  3 root root   17 May 27 14:07 .config
dr-xr-xr-x. 17 root root 4.0K May 31 08:15 ..
drwxr-----.  3 root root   18 May 31 08:25 .pki
-rw-------.  1 root root 1.9K May 31 08:46 .viminfo
drwx------.  2 root root   28 May 31 09:09 .ssh
-rw-r--r--.  1 root root 317M May 31 09:13 se_docker.tgz
-rw-r--r--.  1 root root 1.2M May 31 09:13 dpdk_klms.tar.gz
dr-xr-x---.  6 root root 4.0K May 31 09:14 .
-rw-r--r--.  1 root root 1.1K May 31 09:14 avise.service
-rw-------.  1 root root 3.4K Jun  1 09:14 .bash_history

[root@localhost ~]# ls -lrtha .ssh/
total 8.0K
-rw-r--r--. 1 root root  399 May 31 09:09 authorized_keys
drwx------. 2 root root   28 May 31 09:09 .
dr-xr-x---. 6 root root 4.0K May 31 09:14 ..

[root@localhost ~]# pwd
/root

We will automatically test the SSH connection and display the status as appropriate. For security reasons, the configured private key is not stored in plain-text format anywhere on the file system.

Did you "create" a ssh key or "import" a ssh key - if you imported, you could use plain ssh -i <path-to-imported-private-key user@host from your workstation where the private key resides.

Refer to @Aziz's comment for details on the host status display. Also note the correction about authorized_keys (not authorized_hosts).
