
Ansible cannot ssh into VM created by Vagrant

I have a very simple Vagrantfile:

Vagrant.configure("2") do |config|
  config.vm.define "one" do |one|
    one.vm.box = "centos/7"
  end
  config.ssh.insert_key = false
end

(Note: it created the VM but exited with a failure until I installed the vbguest plugin.)

After the VM was created I wanted to run a simple Ansible job against it. Here is my inventory file (Vagrant forwards port 22 on the guest to 2222 on the host):

[one]
127.0.0.1 ansible_ssh_port=2222 ansible_ssh_user=vagrant ansible_ssh_private_key_file=C:/Users/Lukasz/.vagrant.d/insecure_private_key
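For completeness, the forwarded port and the key path Vagrant uses can be double-checked from the Vagrantfile directory (a quick check; the values printed are whatever Vagrant reports for this machine):

vagrant ssh-config one
# prints HostName, Port (2222 here) and IdentityFile for the "one" machine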

And here's the Docker command (from Windows cmd):

docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv

Finally, here's the output of the command:

<127.0.0.1> ESTABLISH SSH CONNECTION FOR USER: vagrant
<127.0.0.1> SSH: EXEC ssh -vvv -C -o ControlMaster=auto -o ControlPersist=60s -o StrictHostKeyChecking=no -o Port=2222 -o 'IdentityFile="C:/Users/Lukasz/.vagrant.d/insecure_private_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o User=vagrant -o ConnectTimeout=10 -o PreferredAuthentications=privatekey -o ControlPath=/root/.ansible/cp/ansible-ssh-%h-%p-%r 127.0.0.1 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" && echo ansible-tmp-1488381378.63-13786642598588="` echo ~/.ansible/tmp/ansible-tmp-1488381378.63-13786642598588 `" ) && sleep 0'"'"''
fatal: [127.0.0.1]: UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh: OpenSSH_7.2p2 Ubuntu-4ubuntu2.1, OpenSSL 1.0.2g  1 Mar 2016\r\ndebug1: Reading configuration data /etc/ssh/ssh_config\r\ndebug1: /etc/ssh/ssh_config line 19: Applying options for *\r\ndebug1: auto-mux: Trying existing master\r\ndebug1: Control socket \"/root/.ansible/cp/ansible-ssh-127.0.0.1-2222-vagrant\" does not exist\r\ndebug2: resolving \"127.0.0.1\" port 2222\r\ndebug2: ssh_connect_direct: needpriv 0\r\ndebug1: Connecting to 127.0.0.1 [127.0.0.1] port 2222.\r\ndebug2: fd 3 setting O_NONBLOCK\r\ndebug1: connect to address 127.0.0.1 port 2222: Connection refused\r\nssh: connect to host 127.0.0.1 port 2222: Connection refused\r\n",
    "unreachable": true
}

I can SSH into this VM manually with no problem, specifying the user, port and private key.

Am I doing something wrong?

EDIT 1:

I have mounted the folder with the private key: -v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/ and refer to it from the inventory file: ansible_ssh_private_key_file=/home/.ssh/insecure_private_key . I also assigned a static IP in the Vagrantfile and used it with the docker command. The error now is "Connection timed out".

There's a misunderstanding of how loopback addresses work, and an underestimation of how complex the system you are actually running is.

In the scenario described in your question, you are running four machines with four separate network stacks:

  1. a physical Windows machine
  2. a CentOS VM (presumably running under VirtualBox, orchestrated by Vagrant)
  3. a Linux machine that Docker for Windows runs in the background (judging from your sentence "the Docker command (from Windows cmd)")
  4. an Ansible container running on that Docker Linux machine

Each of these machines has its own loopback address (127.0.0.1), which is not accessible from any of the other machines.
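You can see this for yourself: trying to reach port 2222 on 127.0.0.1 from inside a container hits the container's own loopback, where nothing listens, so the connection is refused (a minimal check using the same image as in the question, assuming it ships an SSH client):

docker run --rm williamyeh/ansible:ubuntu14.04 ssh -p 2222 vagrant@127.0.0.1
# ssh: connect to host 127.0.0.1 port 2222: Connection refused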


You have one port mapping:

Vagrant set up a mapping for the CentOS virtual machine under the control of VirtualBox, so that the VM's port 22 is accessible on the Windows machine's loopback address (127.0.0.1), port 2222.

And thus you can connect with an SSH client from Windows to the CentOS machine.
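For example, this is (roughly) the manual connection that works from the Windows prompt, matching the settings in your inventory (the exact command is an assumption based on your description):

ssh -p 2222 -i C:/Users/Lukasz/.vagrant.d/insecure_private_key vagrant@127.0.0.1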


However, Docker for Windows runs a separate Linux machine and configures the docker command so that when you execute docker from the Windows command prompt, you actually work directly on this Linux machine (since you only run containers, you never need to access this Docker host directly, so you can be unaware of its existence).

As if that were not enough, each container you run has its own loopback address (127.0.0.1) as well.

As a result, there is no way the Ansible container can reach the loopback address of your physical Windows machine.


Probably the easiest solution would be to configure the CentOS box to run on a public network with a static IP address (see Vagrant: Public Networks), by adding, for example, the following line to the Vagrantfile:

config.vm.network "public_network", ip: "192.168.0.17"

Then you should use this address in the inventory file and follow Konstantin's advice to make the private key available to the container:

[one]
192.168.0.17 ansible_ssh_user=vagrant ansible_ssh_private_key_file=/path/to/insecure_private_key/mapped/inside/container
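Putting it together, the docker invocation from the question then only needs the key directory mounted into the container; a sketch assuming the key is mapped to /home/.ssh inside the container (paths as in your EDIT 1):

docker run --rm -v /c/Users/Lukasz/ansible/ansible:/home:rw -v /c/Users/Lukasz/.vagrant.d/:/home/.ssh/:ro -w /home williamyeh/ansible:ubuntu14.04 ansible-playbook -i inventory/testvms site.yml --check -vvvv

with ansible_ssh_private_key_file=/home/.ssh/insecure_private_key set in the inventory.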

It seems that you specify a Windows path for ansible_ssh_private_key_file in your inventory, but use this inventory from inside the container.

You should map C:/Users/Lukasz/.vagrant.d/ into your container and set ansible_ssh_private_key_file from the container's perspective.
