There are a lot of automation tools available today. One of those I've really enjoyed working with is Ansible.

One of my favorite features of the tool is that there is no need for any agent deployments. By default it uses SSH, so if your targets are reachable over SSH, you can most likely manage them with Ansible.

Ansible also has a ton of modules already available for cloud platforms (Amazon, Azure, Google, OpenStack, Rackspace, etc.), databases, network devices and more. See the complete module list in the Ansible module documentation.

In this post we'll go through a basic Ansible install, demonstrate how to create two EC2 instances with Ansible, utilize the dynamic inventory capabilities and set up SSH authentication to access and manage those instances.

Installing Ansible

Installing Ansible on a system makes that system the control or management point for all the automation activities you'll perform. I'm starting with a basic Debian build in this example. I'm actually using the Turnkey Core image available at www.turnkeylinux.org if you want to follow these steps exactly.

Start the OS and create a regular user account. Install sudo as well and give the user appropriate sudo permissions in the sudoers file. There are a few different ways to install Ansible, but one of the easiest is to run it straight from a source checkout. We'll use git clone from the user's home directory, which leaves an ansible sub-directory in that location when it completes. The following command will set up Ansible on our system.

$ git clone git://github.com/ansible/ansible.git --recursive
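
If you want to update this checkout later, pulling new commits and refreshing the submodules is enough (a quick sketch):

$ cd ~/ansible
$ git pull --rebase
$ git submodule update --init --recursive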

Ansible requires Python and a number of Python modules, so we'll install some additional system software to cover it all.

$ sudo apt-get install python-setuptools
$ sudo easy_install pip
$ sudo apt-get install python-dev libffi-dev
$ sudo apt-get install build-essential libssl-dev
$ sudo pip install paramiko PyYAML Jinja2 httplib2 six boto

We also need sshpass.

$ sudo apt-get install sshpass

Now that we have everything we need for Ansible installed, we'll do some file creation and environment preparation.

$ mkdir ~/ansible/etc
$ echo "127.0.0.1" > ~/ansible/etc/hosts
$ export ANSIBLE_INVENTORY=~/ansible/etc/hosts
$ source ~/ansible/hacking/env-setup
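
Note that the inventory variable and the env-setup sourcing only apply to your current shell session. If you want them available every time you log in, you can append the same two lines to your ~/.bashrc (a minimal sketch, assuming the paths used above):

export ANSIBLE_INVENTORY=~/ansible/etc/hosts
source ~/ansible/hacking/env-setup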

Let's give Ansible a test run. We'll execute a simple ping against localhost.

$ ansible localhost -m ping --ask-pass
SSH password:
localhost | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

This might fail with the following message:

"msg": "Using a SSH password instead of a key is not possible because Host Key checking is enabled and sshpass does not support this.  Please add this host's fingerprint to your known_hosts file to manage this host."

If so, one workaround is to disable strict host key checking for the SSH client by editing the /etc/ssh/ssh_config file and adding the following:

StrictHostKeyChecking no
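
If you'd rather not change the system-wide SSH client configuration, Ansible can be told to skip host key checking on its own, either through an environment variable or in a ~/.ansible.cfg file (a minimal sketch of either approach):

$ export ANSIBLE_HOST_KEY_CHECKING=False

# or in ~/.ansible.cfg:
[defaults]
host_key_checking = False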

Ansible and EC2

To create EC2 instances with Ansible you'll need a few things in place first. You need an AWS account, an IAM user with access keys created, and a policy attached to that user allowing EC2 administration. You'll also need an SSH key pair already generated and uploaded to AWS in the specific region where you'll be managing your EC2 instances. And finally, there should be a security group that allows incoming port 22 (SSH) connections to your instances.
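
If you prefer the command line over the AWS console for this preparation, the key pair and security group can be created with the AWS CLI along these lines (a sketch only; the names ansible-key and ansible-ssh are examples, and you'd want to tighten the CIDR for real use):

$ aws ec2 create-key-pair --region us-west-1 --key-name ansible-key --query 'KeyMaterial' --output text > ~/.ssh/ansible-key.pem
$ chmod 600 ~/.ssh/ansible-key.pem
$ aws ec2 create-security-group --region us-west-1 --group-name ansible-ssh --description "SSH access for Ansible"
$ aws ec2 authorize-security-group-ingress --region us-west-1 --group-name ansible-ssh --protocol tcp --port 22 --cidr 0.0.0.0/0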

I also need to point out that there could be AWS charges to your account, depending on your specific account details. The resources I'm using in this exercise qualify for the AWS Free Tier, but whether that applies depends entirely on your account. Any charges should be minimal if they do apply, but you'll be responsible for them. Also, if you already have existing EC2 instances in your account, be sure to avoid running any unintentional commands against them during your Ansible testing. Just to be clear on this going in...


Let's get started. First we'll need to export your AWS access keys for the IAM user.

export AWS_SECRET_ACCESS_KEY=your_secret_key_here
export AWS_ACCESS_KEY_ID=your_access_key_here
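
If you'd rather not keep these keys in your shell environment, boto (which the ec2 module uses) can also read them from a ~/.boto file, something like this sketch (values are placeholders):

[Credentials]
aws_access_key_id = your_access_key_here
aws_secret_access_key = your_secret_key_here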

Now we'll create an Ansible playbook (YAML) to start up two (2) EC2 instances. These will be Amazon Linux instances. Ansible will use the access keys in your environment variables to authenticate for this task. The file will look similar to the one below; call it ec2_create_web-db.yml. Keep in mind that if you use a different region, the AMI (image) ID will be different and you'll need to change it accordingly. You also need to provide your own names for the security group and SSH key pair. See the Ansible EC2 module documentation for more details.

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Provision a web server EC2 instance
      ec2:
         key_name: your_key_pair_here
         instance_type: t2.micro
         image: ami-6e84fa0e
         region: us-west-1
         zone: us-west-1a
         group: your_security_group_here
         wait: true
         exact_count: 1
         instance_tags:
            Name: WEB
            AnsibleName: AnsibleWeb
            AnsibleType: AnsibleTest
         count_tag:
            Name: WEB
      register: web

    - name: Show the public ip and dns name for each EC2 instance
      debug: msg="Instance {{ item.id }} IP {{ item.public_ip }} NAME {{ item.public_dns_name }}"
      with_items: "{{ web.instances }}"

    - name: Provision a database server EC2 instance
      ec2:
         key_name: your_key_pair_here
         instance_type: t2.micro
         image: ami-6e84fa0e
         region: us-west-1
         zone: us-west-1a
         group: your_security_group_here
         wait: true
         exact_count: 1
         instance_tags:
            Name: DB
            AnsibleName: AnsibleDatabase
            AnsibleType: AnsibleTest
         count_tag:
            Name: DB
      register: db

    - name: Show the public ip and dns name for each EC2 instance
      debug: msg="Instance {{ item.id }} IP {{ item.public_ip }} NAME {{ item.public_dns_name }}"
      with_items: "{{ db.instances }}"

Let's run the ansible-playbook command to create these two instances. I've trimmed much of the output below since Ansible prints a lot of detail.

$ ansible-playbook ec2_create_web-db.yml

PLAY [localhost] ***************************************************************

TASK [Provision a web server EC2 instance] *************************************
changed: [localhost]
...
TASK [Show the public ip and dns name for each EC2 instance] *******************
    "msg": "Instance i-XXXXXXXX IP xx.xx.xx.xxx NAME ec2-xx-xx-xx-xxx.us-west-1.compute.amazonaws.com"
...
TASK [Provision a database server EC2 instance] ********************************
changed: [localhost]
...
TASK [Show the public ip and dns name for each EC2 instance] *******************
    "msg": "Instance i-XXXXXXXX IP xx.xx.xx.xxx NAME ec2-xx-xx-xx-xxx.us-west-1.compute.amazonaws.com"
...
PLAY RECAP *********************************************************************
localhost                  : ok=4    changed=2    unreachable=0    failed=0

If all goes well, you'll see these two instances in your management console dashboard.

[Screenshot: the two new instances shown in the EC2 management console]

Ansible Dynamic Inventory Setup for EC2 Instances

Since managing a static inventory file for EC2 instances is not very practical, a dynamic inventory script can be used to maintain instance inventory. Ansible provides two files you can download to enable EC2 dynamic inventory.

The first is the ec2.py script file. Download this file and save it to your ~/ansible/etc directory.

$ cd ~/ansible/etc
$ wget http://raw.github.com/ansible/ansible/devel/contrib/inventory/ec2.py

Make it executable.

$ chmod 755 ec2.py

The second file we need is ec2.ini. You may want to look through this file and make changes as necessary. It is well documented. Save it in the same directory.

$ wget https://raw.githubusercontent.com/ansible/ansible/devel/contrib/inventory/ec2.ini

We'll set up environment variables so Ansible uses these files.

$ export ANSIBLE_INVENTORY=~/ansible/etc/ec2.py
$ export EC2_INI_PATH=~/ansible/etc/ec2.ini

Let's test the EC2 dynamic inventory process.

$ ~/ansible/etc/ec2.py --list

You should get a large JSON response with information about ALL your running EC2 instances. If you have running instances beyond what Ansible just created, they will be listed here as well. Depending on how you set up your account access policy, it's possible you may see an error similar to this.

ERROR: "Forbidden", while: getting ElastiCache clusters

You'll need to edit the ec2.ini file and uncomment the elasticache line.

# To exclude ElastiCache instances from the inventory, uncomment and set to False.
elasticache = False
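
Once the listing works, the output is a JSON document grouping your hosts by region, instance type, tags and so on, plus a _meta section of per-host variables. Heavily trimmed and approximate, it looks something like this:

{
  "tag_AnsibleType_AnsibleTest": [
    "xx.xx.xx.xxx",
    "xx.xxx.xx.xx"
  ],
  "us-west-1": [
    "xx.xx.xx.xxx",
    "xx.xxx.xx.xx"
  ],
  "_meta": {
    "hostvars": {
      "xx.xx.xx.xxx": {
        "ec2_instance_type": "t2.micro",
        "ec2_region": "us-west-1"
      }
    }
  }
}

Those group names, especially the tag_* ones, are what we'll use as host patterns.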

We'll do a test with Ansible now. Note the Amazon Linux image uses the ec2-user account for access.

$ ansible -m ping tag_AnsibleType_AnsibleTest -u ec2-user
XX.XXX.XX.X | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}
XX.XXX.XX.XX | UNREACHABLE! => {
    "changed": false,
    "msg": "Failed to connect to the host via ssh.",
    "unreachable": true
}

The inventory script found our two instances, but we don't yet have SSH access. We'll fix that next. Also notice we used one of the tags we added when the instances were created. The tags provide an easy way for us to group instances for various purposes and tasks.
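
As an example, once SSH access is working you could target just the web instance through the group the inventory script builds from its AnsibleName tag. You can also preview which hosts a pattern matches without connecting at all (assuming the tag values from the playbook above):

$ ansible tag_AnsibleName_AnsibleWeb --list-hosts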

Enable SSH Authentication

Now we'll do some SSH authentication setup and test access to the EC2 instances we just created. You'll need the private key that pairs with the public key stored in the AWS region of your instances. My private key is encrypted (highly recommended), so I need to start up the ssh-agent and load my private key so I won't need to provide credentials each time.

Let's start from our home directory. Make sure you have your private key file/identity saved as ~/.ssh/id_rsa (the exact filename will depend on your setup). It MUST be the private key matching the key pair you registered in AWS.

$ cd
$ ssh-agent > .ssh/authdat
$ source .ssh/authdat

Add your identity to the ssh-agent. You'll need to provide the passphrase.

$ ssh-add ~/.ssh/id_rsa
Enter passphrase for ~/.ssh/id_rsa:
Identity added: ~/.ssh/id_rsa (~/.ssh/id_rsa)

# Check it's loaded.
$ ssh-add -l
2048 xx:xx:xx:xx:... ~/.ssh/id_rsa (RSA)

Let's try the Ansible ping command again.

$ ansible -m ping tag_AnsibleType_AnsibleTest -u ec2-user
XX.XXX.XXX.XXX | SUCCESS => {
    "changed": false,
    "ping": "pong"
}
XX.XXX.XXX.XX | SUCCESS => {
    "changed": false,
    "ping": "pong"
}

You should get a ping/pong response for each instance. Let's do something more useful with our instances now: OS updates. Since package updates need root privileges, we'll have Ansible escalate with --become.


$ ansible -u ec2-user --become -m yum -a "name=* state=latest" tag_AnsibleType_AnsibleTest

This should produce a large amount of output for each instance as it is updated. We've completed our first real task on our Ansible-managed EC2 instances! With these basics you should be ready to explore more. Don't forget to stop or terminate these Ansible test instances when you're done to avoid additional charges. Have Ansible do it for you!
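
For example, a minimal teardown playbook (a sketch; substitute the instance IDs reported when the instances were created, and your own region) can use the same ec2 module with state set to absent:

- hosts: localhost
  connection: local
  gather_facts: False

  tasks:
    - name: Terminate the Ansible test instances
      ec2:
         region: us-west-1
         state: absent
         instance_ids:
            - i-xxxxxxxx
            - i-yyyyyyyy
         wait: true

Run it with ansible-playbook just like before, then confirm in the console that the instances move to the terminated state.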
