<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Infrastructure - Cognizant Transmutation</title>
	<atom:link href="https://www.ibd.com/tag/infrastructure/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ibd.com</link>
	<description>Internet Bandwidth Development: Composting the Internet for over Two Decades</description>
	<lastBuildDate>Thu, 05 Aug 2021 05:45:17 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1</generator>

<image>
	<url>https://i0.wp.com/www.ibd.com/wp-content/uploads/2019/01/fullsizeoutput_7ae8.jpeg?fit=32%2C32&#038;ssl=1</url>
	<title>Infrastructure - Cognizant Transmutation</title>
	<link>https://www.ibd.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href="https://pubsubhubbub.appspot.com"/><atom:link rel="hub" href="https://pubsubhubbub.superfeedr.com"/><atom:link rel="hub" href="https://websubhub.com/hub"/><site xmlns="com-wordpress:feed-additions:1">156814061</site>	<item>
		<title>Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</title>
		<link>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/</link>
					<comments>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 03 Jan 2011 07:08:16 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[blogging]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[Wordpress]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=599</guid>

					<description><![CDATA[<p>Updates September 9, 2011 Included the latest Chef Knife ec2 server create argument that sets the EBS Volume to not be deleted on the termination&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/">Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Updates</h2>
<h3>September 9, 2011</h3>
<p>Included the latest Chef Knife ec2 server create argument that sets the EBS Volume to not be deleted on the termination of the EC2 Instance</p>
<h2>Intro</h2>
<p>Up until recently a friend lent me a Virtual Machine in the Cloud for my Blog. I didn&#8217;t have to do anything to manage it. But his company is no longer supporting those machines, so I had to move my blog.<br />
Right around that time Amazon announced their Micro Instances at a very low price. I also wanted to try out the new Opscode Chef knife commands that bootstrap an EC2 instance from scratch, as well as their Chef Server SaaS. So this was a good reason to combine all of these to create my new Blog Instance. And now Amazon even offers a single micro instance free for a year! (You still have to pay for I/O charges, but they are really cheap compared to the instance charges &#8211; unless your blog gets too popular, in which case you&#8217;ll need a bigger server anyway.)<br />
<strong>Spoiler Alert:</strong> It was way too easy and no problem at all! (Though I did end up having to write a few support cookbooks like <em>vsftpd</em>, but now you don&#8217;t have to)</p>
<h3>Some Assumptions for this post</h3>
<ul>
<li>You are using a *nix platform for your local development (i.e. your laptop is a Mac, Linux, *BSD or equivalent) and the target server you want to deploy to is a relatively recent Ubuntu Linux.</li>
<li>You have installed, or will install, a git client on your local development box</li>
<li>You followed the directions or have done the equivalent of the instructions in the Opscode <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages as noted below</li>
</ul>
<h2>Set up an Account on Amazon Web Services</h2>
<p>If you don&#8217;t already have an Amazon EC2 Account, go to the <a href="http://aws.amazon.com/" target="_blank" rel="noopener">Amazon Web Services</a> page and click on the <a href="http://www.amazon.com/gp/aws/registration/registration-form.html" target="_blank" rel="noopener">Sign Up Now button</a>. Create all your user info and then Sign Up for Amazon EC2. You&#8217;ll need to put in credit card info at this point since you&#8217;ll need to pay for the EC2 instance you&#8217;ll be using shortly. After you complete your signup, you&#8217;ll need to get your credentials at the <a href="http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key" target="_blank" rel="noopener">AWS Security Credentials page</a>. Copy down your Access Key ID, then click on Show under the Secret Access Key and copy that as well. You will need these values for your knife.rb file in the following steps.</p>
<h2>Get an Opscode Platform Account</h2>
<p>It&#8217;s free and easy. Just go to the <a href="https://cookbooks.opscode.com/users/new" target="_blank" rel="noopener">Opscode Platform Signup page</a>. Fill in your information and submit. There is no cost for up to 5 client nodes. Once you set up and confirm your account you can go through the <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages, which include how to set up your client development machine (installing Chef Client, Knife and various dependencies) as well as downloading your private key, organization key and your Knife Configuration File. You should go through all 5 steps of the Getting Started section. And please do follow their examples of using git. The rest of this post assumes you have git installed and will use it for your own repository even if you don&#8217;t push it to an upstream git repository.</p>
<p>Once you have completed that, you are ready for the remaining steps of this blog post, which assume you put your chef-repo in the same location as the Opscode instructions suggest (~/chef-repo). If you put it somewhere else, just adjust the path to your chef-repo as appropriate.</p>
<p>It also assumes you got your private user key (<em>your_user_name.pem</em>) and organization validator key (<em>your_organization-validator.pem</em>) and knife.rb in Section 3 of How to Get Started: <a href="http://help.opscode.com/faqs/start/chef-client" target="_blank" rel="noopener">Setting Up a Chef Client</a>. In that section you ran the command <code>knife configure client ./client-config</code> inside your ~/chef-repo/ directory. That will have created ~/chef-repo/.chef and put the keys and knife.rb in that directory.</p>
<p>For this blog post, we will use the username <em>rberger_test</em> and organization name <em>install_wordpress</em>. So the private user key for this example will be <em>rberger_test.pem</em> and the organization validator key will be <em>install_wordpress-validator.pem</em>. You should copy your keys someplace outside of ~/chef-repo that you will not lose. There are ways to <a title="Create a new private user key" href="http://help.opscode.com/faqs/account/getting-a-new-private-key-for-your-opscode-user" target="_blank" rel="noopener">create new ones</a>, but it&#8217;s always easier not to have to. Bottom line: it&#8217;s expected that your keys and the knife.rb will be in your <em>~/chef-repo/.chef</em> directory at this point.</p>
<h2>Set up your Development Environment</h2>
<p>Your development environment is your home or work computer/laptop. It&#8217;s the machine that is local to you. It is on this machine that you put together your Cookbooks. From here you push your cookbooks to the Opscode Chef Server, issue the commands to configure AWS, and launch your AWS instances.</p>
<h3>Tweak up your chef-repo</h3>
<p>I like to keep the &#8220;standard&#8221; chef recipes that get downloaded from git or from cookbook.opscode.com in their own directory (called <em>cookbooks</em>) and all the cookbooks I create or heavily modify in another directory (<em>site-cookbooks</em>). In Step 2 of the How to Get Started: <a href="http://help.opscode.com/faqs/start/user-environment" target="_blank" rel="noopener">Setting Up Your User Environment</a>, they had you create a <em>~/chef-repo</em> directory and populate it from git or from a tar ball. You should add the <em>site-cookbooks</em> directory to your <em>~/chef-repo</em>. We&#8217;re also going to add a <em>README.md</em> to the <em>site-cookbooks</em> directory so that when we create our own git repository the directory will be tracked (git will not add an empty directory to a repository)</p>
<pre class="brush: bash; title: ; notranslate">
cd ~/chef-repo
mkdir site-cookbooks
echo &quot;Directory for customized cookbooks&quot; &gt; site-cookbooks/README.md
</pre>
<p>You will probably also not want to include your <em>.chef</em> directory, with all your keys, in what gets uploaded to any outside chef repository. If you are just keeping things local, you can skip this step. Edit <em>~/chef-repo/.gitignore</em> and add <em>.chef</em> to the file on its own line. You might also want to add <em>client-config</em> to <em>.gitignore</em>, as well as any temporary or backup file suffixes you might have. For instance, if you use Emacs you would add <em>*~</em> (the Emacs backup file suffix), <em>.DS_Store</em> (left by the Mac filesystem), <em>.rake_test_cache</em> (left around by Rake) and <em>metadata.json</em> (a file generated by chef). My <em>.gitignore</em> looks like:</p>
<pre class="brush: bash; title: ; notranslate">
.chef
client-config
*~
.DS_Store
.rake_test_cache
metadata.json
</pre>
<p>If you created the <em>~/chef-repo</em> from the git clone of the Opscode repository, you&#8217;ll want to get rid of the git configuration and history that came from cloning the Opscode chef-repo and create your own git repository:</p>
<pre class="brush: bash; title: ; notranslate">
rm -rf .git
git init
git add -A
git commit -a -m &quot;Created my own basic chef-repo&quot;
</pre>
<p>The above commands remove the old git config that came along when you ran <em>git clone http://github.com/opscode/chef-repo.git</em> as part of the Opscode <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages. The git init, add and commit then create a new local git repository for your own use, not connected to the Opscode repository. You can later add a remote if you want to push your repository and future changes to another git host such as github.com.</p>
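<p>The whole flow can be sketched end-to-end in a throwaway directory first (a hedged sketch: the paths and README content are illustrative, and the inline user.name/user.email settings are only there so the commit succeeds on a machine without git configured):</p>

```shell
# Stand-in for ~/chef-repo; in the real run you would work in ~/chef-repo itself
repo="$(mktemp -d)/chef-repo"
mkdir -p "$repo"
cd "$repo"
echo "Directory for customized cookbooks" > README.md

rm -rf .git        # drop any history cloned from opscode/chef-repo
git init -q        # start a fresh, local-only repository
git add -A
git -c user.name=me -c user.email=me@example.com commit -qm "Created my own basic chef-repo"
git log --oneline  # exactly one commit of your own
```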
<h3>Updating your knife.rb file with Amazon Credentials</h3>
<p>Add the following lines to the end of your ~/chef-repo/.chef/knife.rb file. You should have gotten your AWS Access Key and Secret Access Key when you signed up for Amazon Web Services, but you can always go back and get them at the <a href="http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key" target="_blank" rel="noopener">AWS Security Credentials page</a>. Your final knife.rb should look something like this, apart from the items customized to your setup. In the example below, <em>rberger_test</em> would be replaced by your Opscode User name and <em>install_wordpress</em> by the Opscode Organization name you used when you went through Section 3 of the Opscode How to Get Started: <a href="http://help.opscode.com/faqs/start/chef-client" target="_blank" rel="noopener">Setting Up a Chef Client</a>.</p>
<pre class="brush: ruby; highlight: [12,13]; title: ; notranslate">
current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                &quot;rberger_test&quot;
client_key               &quot;#{current_dir}/rberger_test.pem&quot;
validation_client_name   &quot;install_wordpress-validator&quot;
validation_key           &quot;#{current_dir}/install_wordpress-validator.pem&quot;
chef_server_url          &quot;https://api.opscode.com/organizations/install_wordpress&quot;
cache_type               'BasicFile'
cache_options( :path =&gt; &quot;#{ENV['HOME']}/.chef/checksums&quot; )
cookbook_path            [&quot;#{current_dir}/../cookbooks&quot;, &quot;#{current_dir}/../site-cookbooks&quot;]
knife[:aws_access_key_id]     = &quot;Your Access Key&quot;
knife[:aws_secret_access_key] = &quot;Your Secret Access Key&quot;
</pre>
<p>You can test that your knife.rb is setup enough to access AWS by issuing the command</p>
<pre class="brush: bash; title: ; notranslate">knife ec2 server list</pre>
<p>And you should see something like this (just the heading and no instances, unless you&#8217;ve launched some EC2 instances earlier):</p>
<pre class="brush: bash; title: ; notranslate">
Instance ID      Public IP        Private IP       Flavor        Image         Security Groups   State
</pre>
<h3>Get the Appropriate Cookbooks</h3>
<p>We&#8217;ll get cookbooks using the <a href="http://wiki.opscode.com/display/chef/Knife" target="_blank" rel="noopener">knife command</a> and the <a href="http://cookbooks.opscode.com/" target="_blank" rel="noopener">cookbooks.opscode.com</a> web service. We&#8217;ll be using the following cookbooks:</p>
<ul>
<li>chef</li>
<li>apache2</li>
<li>mysql</li>
<li>openssl</li>
<li>php</li>
<li>postfix</li>
<li>sudo</li>
<li>users</li>
<li>vsftpd</li>
<li>wordpress</li>
</ul>
<p>Use the knife command on your local development machine to pull down the cookbooks you need. The command we&#8217;re using (knife cookbook site vendor COOKBOOK) will automatically download the cookbooks and install them in the ~/chef-repo/cookbooks directory. It will also check them into your git repository as a vendor branch (Stay on the master branch at least until you have installed all the cookbooks).</p>
<pre class="brush: bash; title: ; notranslate">
cd ~/chef-repo
knife cookbook site vendor chef -d
knife cookbook site vendor apache2 -d
knife cookbook site vendor mysql -d
knife cookbook site vendor openssl -d
knife cookbook site vendor php -d
knife cookbook site vendor postfix -d
knife cookbook site vendor sudo -d
knife cookbook site vendor users -d
knife cookbook site vendor vsftpd -d
knife cookbook site vendor wordpress -d
</pre>
<p>Those commands will download all the cookbooks and any other cookbook dependencies they may have into your ~/chef-repo/cookbooks directory and check each one in as a git branch in your repo. If you do an ls on your ~/chef-repo/cookbooks directory you should see:</p>
<pre class="brush: plain; title: ; notranslate">
README.md        bluepill         couchdb       java           php            runit      users      xml
apache2          build-essential  daemontools   mysql          postfix        sudo       vsftpd     zlib
apt              chef             erlang        openssl        rabbitmq_chef  ucspi-tcp  wordpress
</pre>
<p>And if you do a git branch you should see your master branch as the current and a chef-vendor- for each of the cookbooks you installed:</p>
<pre class="brush: plain; title: ; notranslate">
  chef-vendor-apache2
  chef-vendor-apt
  chef-vendor-bluepill
  chef-vendor-build-essential
  chef-vendor-chef
  chef-vendor-couchdb
  chef-vendor-daemontools
  chef-vendor-erlang
  chef-vendor-java
  chef-vendor-mysql
  chef-vendor-openssl
  chef-vendor-php
  chef-vendor-postfix
  chef-vendor-rabbitmq_chef
  chef-vendor-runit
  chef-vendor-sudo
  chef-vendor-ucspi-tcp
  chef-vendor-users
  chef-vendor-vsftpd
  chef-vendor-wordpress
  chef-vendor-xml
  chef-vendor-zlib
* master
</pre>
<p>If you ever want to update these standard cookbooks, you can just redo the <code>knife cookbook site vendor COOKBOOK</code> command.</p>
<h2>Create site-cookbooks to extend standard cookbooks</h2>
<p>It is standard practice to put the official cookbooks in the <em>~/chef-repo/cookbooks</em> directory, as we just did in the previous step. Any cookbook overrides, extensions or custom cookbooks go into the <em>~/chef-repo/site-cookbooks</em> directory. If you create a cookbook directory in <em>~/chef-repo/site-cookbooks</em> with the same name as a cookbook in the <em>~/chef-repo/cookbooks</em> directory, the files, templates and/or recipes in the site-cookbooks copy will override the matching files, templates and/or recipes in the cookbook of the same name under <em>~/chef-repo/cookbooks</em>. We will now extend two of the cookbooks: sudo and wordpress.</p>
<h3>Extend the Sudo cookbook so it&#8217;s suitable for EC2</h3>
<p>The standard sudo cookbook creates a sudoers file that requires passwords for sudo. Most EC2 environments do not allow password logins and require that you log in only with ssh keys. So we need to modify the sudo cookbook to create the sudoers file with the NOPASSWD flag set for all the users we want to have sudo powers. We just need to override the template file used in the standard sudo cookbook.</p>
<p>First, make a directory for the new template in your site-cookbooks directory:</p>
<pre class="brush: plain; title: ; notranslate">
mkdir -p site-cookbooks/sudo/templates/default
</pre>
<p>Copy the following into site-cookbooks/sudo/templates/default/sudoers.erb:</p>
<pre class="brush: plain; title: ; notranslate">
#
# /etc/sudoers
#
# Generated by Chef for &lt;%= node[:fqdn] %&gt;
#

Defaults !lecture,tty_tickets,!fqdn

# User privilege specification
root  ALL=(ALL) ALL

&lt;% node[:authorization][:sudo][:users].each do |user| -%&gt;
&lt;%= user %&gt; ALL=(ALL) NOPASSWD:ALL
&lt;% end -%&gt;

# Members of the sysadmin group may gain root privileges
%sysadmin ALL=(ALL) NOPASSWD:ALL

&lt;% node[:authorization][:sudo][:groups].each do |group| -%&gt;
# Members of the group '&lt;%= group %&gt;' may gain root privileges
%&lt;%= group %&gt; ALL=(ALL) NOPASSWD:ALL
&lt;% end -%&gt;
</pre>
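<p>To sanity-check what that template expands to, here is a sketch that renders a trimmed-down version of it with plain ERB, using the same attribute shape the sudo cookbook reads (the usernames are the ones from the role file later in this post):</p>

```ruby
require 'erb'

# Attribute shape the sudoers template iterates over
node = { authorization: { sudo: { users: %w(rberger_test ubuntu), groups: [] } } }

template = <<~ERB
  root  ALL=(ALL) ALL
  <% node[:authorization][:sudo][:users].each do |u| -%>
  <%= u %>  ALL=(ALL) NOPASSWD:ALL
  <% end -%>
ERB

# Each user in the role's sudo list gets a passwordless sudo entry
puts ERB.new(template, trim_mode: '-').result(binding)
```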
<h3>Fix a bug in the latest version of the Standard Mysql Cookbook</h3>
<p>As I was writing this post, Opscode came out with a new version of the Mysql Cookbook that seems to have a bug with Chef Client version 0.9.12. It may be fixed by the time you read this. If you are running Chef 0.9.12, check line 59 of cookbooks/mysql/recipes/client.rb. Change</p>
<pre class="brush: plain; title: ; notranslate">
if platform_version.to_f &gt;= 5.0
</pre>
<p>to:</p>
<pre class="brush: plain; title: ; notranslate">
if node.platform_version.to_f &gt;= 5.0
</pre>
<h3>Extend the WordPress cookbook to do some custom actions</h3>
<p>We need to do a few custom actions after we install wordpress, the main one being to change the ownership of the wordpress directory and most of its files to the user <em>blog</em>.</p>
<p>We need to add a user named <em>blog</em> whose home directory is the wordpress directory. We will use this <em>blog</em> user to do automatic updates to wordpress. It will use vsftpd for secure ftp and will have access only to the wordpress directory.</p>
<p>We also need to add a swap file to the server. We could create a new cookbook to hold this, as it&#8217;s not really wordpress related, but because this is such a simple system we will just add a new recipe to wordpress to handle these miscellaneous actions.</p>
<h4>Create a recipe to add the blog user and change ownership of the wordpress directory</h4>
<p>First make the directories in site-cookbooks for extending the wordpress cookbook:</p>
<pre class="brush: plain; title: ; notranslate">
mkdir -p site-cookbooks/wordpress/recipes
mkdir -p site-cookbooks/wordpress/attributes
mkdir -p site-cookbooks/wordpress/templates/default
</pre>
<p>Create and edit the file site-cookbooks/wordpress/attributes/wordpress.rb and put the following in it (note: this attributes file must have a different name than the one used in the standard wordpress cookbook):</p>
<pre class="brush: ruby; title: ; notranslate">
default[:wordpress][:blog_updater][:username] = &quot;blog&quot;

::Chef::Node.send(:include, Opscode::OpenSSL::Password)

default[:wordpress][:blog_updater][:password] = secure_password
# hash set by recipe or manually using makepasswd
default[:wordpress][:blog_updater][:hash] = nil

# For creating the swap partition. Swap_size is in GB
default[:wordpress][:gb_swap_size] = 2
default[:wordpress][:swap_file] = &quot;/swap_file&quot;
</pre>
<p>This will set <em>[:wordpress][:blog_updater][:username]</em> to &#8220;blog&#8221;. This is the default for the username that will have the ability to use vsftpd to update wordpress and its plugins. We actually override this in the wordpress.rb role file, but we put a default here as well for good practice (i.e. the cookbook will work even if someone doesn&#8217;t override the value in a role).</p>
<p>The <em>::Chef::Node.send(:include, Opscode::OpenSSL::Password)</em> line is there so we can use the Chef mechanism to create an auto-generated password (<em>secure_password</em>). We then use that mechanism to set the default password for the <em>blog_updater</em>.</p>
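<p>If you are curious what <em>secure_password</em> gives you, a rough stand-in looks like this (an assumption for illustration: the real Opscode helper also returns a random alphanumeric string, though its exact length and alphabet may differ):</p>

```ruby
require 'securerandom'

# Illustrative stand-in for Opscode's secure_password helper:
# a random alphanumeric string usable as a default account password.
def secure_password(length = 20)
  chars = ('a'..'z').to_a + ('A'..'Z').to_a + ('0'..'9').to_a
  Array.new(length) { chars[SecureRandom.random_number(chars.size)] }.join
end

puts secure_password
```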
<p>Create and edit site-cookbooks/wordpress/recipes/blog_user.rb and put the following as the contents:</p>
<pre class="brush: ruby; title: ; notranslate">
# Get the password cryptographic hash for node[:wordpress][:blog_updater][:password]
package &quot;makepasswd&quot;
package &quot;libshadow-ruby1.8&quot;
if node[:wordpress][:blog_updater][:hash].nil? || node[:wordpress][:blog_updater][:hash].empty?
  cmd = &quot;echo #{node[:wordpress][:blog_updater][:password]} | /usr/bin/makepasswd --clearfrom=- --crypt-md5 |awk '{ print $2 }'&quot;
  ruby_block &quot;create_blog_updater_pw&quot; do
    block do
      node.set[:wordpress][:blog_updater][:hash] = `#{cmd}`.chomp
    end
    action :create
  end
end

# Create the blog_updater user with their home directory being the wordpress directory and the group as the same group as the Apache runtime group
user &quot;#{node[:wordpress][:blog_updater][:username]}&quot; do
  home &quot;#{node[:wordpress][:dir]}&quot;
  gid &quot;#{node[:apache][:user]}&quot;
  shell &quot;/bin/bash&quot;
  supports :manage_home =&gt; true
  unless node[:wordpress][:blog_updater][:hash].nil? || node[:wordpress][:blog_updater][:hash].empty?
    password &quot;#{node[:wordpress][:blog_updater][:hash]}&quot;
  end
end

# Change the ownership of the wordpress directory so that the blog user can update
execute &quot;chown wordpress home for blog user&quot; do
  cwd &quot;#{node[:wordpress][:dir]}&quot;
  user &quot;root&quot;
  command &quot;chown -R #{node[:wordpress][:blog_updater][:username]}:#{node[:apache][:user]} #{node[:wordpress][:dir]}&quot;
  not_if { node[:wordpress][:dir].nil? || node[:wordpress][:dir].empty? || (not File.exists?(node[:wordpress][:dir])) }
end
</pre>
<p>The above code will create the blog_updater as a Linux user on the target system and set its home directory to the wordpress directory. This is to make it work with vsftpd.</p>
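<p>The nil/empty guard that appears twice in the recipe is what keeps it idempotent: the password hash is generated on the first run, stored back on the node, and left alone afterwards. The guard itself is just:</p>

```ruby
# The guard used in blog_user.rb: generate (or set) a password hash
# only when the node does not already carry one.
def needs_hash?(hash)
  hash.nil? || hash.empty?
end

puts needs_hash?(nil)          # first run: no hash stored yet
puts needs_hash?("")           # treated the same as missing
puts needs_hash?("$1$ab$xyz")  # later runs: hash already stored
```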
<h4>Create a template to override the default wordpress apache config</h4>
<p>The standard WordPress cookbook sets the Apache ServerName to the FQDN of the EC2 Public DNS and sets the ServerAlias to the EC2 Private DNS FQDN. This is pretty useless. We would like the cookbook to set the ServerAlias to FQDNs based on our own DNS names. To do this without overriding the whole standard WordPress cookbook, we can override one template and name it <em>site-cookbooks/wordpress/templates/default/wordpress.conf.erb</em>:</p>
<pre class="brush: plain; title: ; notranslate">
&lt;VirtualHost *:80&gt;
  ServerName &lt;%= @params[:server_name] %&gt;
  ServerAlias &lt;% @node[:wordpress][:server_aliases].each do |a| -%&gt;&lt;%= a %&gt; &lt;% end -%&gt;
  DocumentRoot &lt;%= @params[:docroot] %&gt;

  &lt;Directory &lt;%= @params[:docroot] %&gt;&gt;
    Options FollowSymLinks
    AllowOverride FileInfo
    Order allow,deny
    Allow from all
  &lt;/Directory&gt;

  &lt;Directory /&gt;
    Options FollowSymLinks
    AllowOverride None
  &lt;/Directory&gt;

  LogLevel info
  ErrorLog &lt;%= node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-error.log
  CustomLog &lt;%= node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-access.log combined

  RewriteEngine On
  RewriteLog &lt;%= node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-rewrite.log
  RewriteLogLevel 0
&lt;/VirtualHost&gt;
</pre>
<p>The key change is the ServerAlias line: <code>@node[:wordpress][:server_aliases]</code> now adds any aliases specified by this attribute, which we set in the wordpress.rb role file. We also change AllowOverride to FileInfo for the docroot.</p>
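<p>A quick sketch of how that ServerAlias line expands, assuming the aliases set in the wordpress.rb role file (here joined with spaces rather than looped, for brevity):</p>

```ruby
require 'erb'

# @node stands in for the node object Chef exposes to templates
@node = { wordpress: { server_aliases: %w(test.ibd.com wordpress-test.ibd.com) } }

line = ERB.new(
  "ServerAlias <%= @node[:wordpress][:server_aliases].join(' ') %>"
).result(binding)
puts line
```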
<h4>Create a recipe to add a swap file to the server</h4>
<p>The t1.micro instance only has 613MB of RAM. You can easily run out of that with a WordPress blog. So we have a recipe to add a swap file system utilizing some of the space on the EBS Volume. This recipe creates a 2GB file called /swap_file using dd and then uses the mkswap and swapon commands to turn that file into swap space. The recipe also updates the /etc/fstab file so that the swap file will be mounted again if the instance reboots.</p>
<p>Create and edit the file site-cookbooks/wordpress/recipes/add_swap.rb with the following content:</p>
<pre class="brush: ruby; title: ; notranslate">
mb_block_size = 100
count = (node[:wordpress][:gb_swap_size] * 1024) / mb_block_size
bash &quot;add_swap&quot; do
  user &quot;root&quot;
  code &lt;&lt;-EOH
    dd if=/dev/zero of=#{node[:wordpress][:swap_file]} bs=#{mb_block_size}M count=#{count}
    mkswap #{node[:wordpress][:swap_file]}
    swapon #{node[:wordpress][:swap_file]}
  EOH
  not_if { File.exists?(node[:wordpress][:swap_file]) }
end

# Update /etc/fstab so the swap file is mounted again on reboot
template &quot;/etc/fstab&quot; do
  source &quot;fstab.erb&quot;
end
</pre>
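<p>The block arithmetic at the top of the recipe is worth a sanity check: dd writes <em>count</em> blocks of <em>mb_block_size</em> MB each, so the attribute defaults work out to roughly a 2GB file:</p>

```ruby
# Same arithmetic as the recipe, with the attribute defaults inlined
mb_block_size = 100
gb_swap_size  = 2                       # default[:wordpress][:gb_swap_size]
count = (gb_swap_size * 1024) / mb_block_size

puts count                              # blocks dd will write
puts count * mb_block_size              # total size in MB (~2GB)
```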
<p>Create and edit the file site-cookbooks/wordpress/templates/default/fstab.erb and put the following content:</p>
<pre class="brush: plain; title: ; notranslate">
# /etc/fstab: static file system information.
#
proc                   /proc           proc   nodev,noexec,nosuid     0       0
&lt;%= node[:wordpress][:swap_file] %&gt;       none            swap   sw                      0       0
/dev/sda1              /               ext3   defaults                0       0
/dev/sda2              /mnt            auto   defaults,nobootwait,comment=cloudconfig 0       0
</pre>
<h3>Create WordPress Role</h3>
<p>This example will use a single role named <em>wordpress</em>. Use your favorite editor to create a file in your repo at roles/wordpress.rb with the following contents (substitute your domain for ibd.com, change hostnames such as test and wordpress-test to names appropriate for your blog, and replace <em>rberger_test</em> with the userid you want to use to log into your server via ssh):</p>
<pre class="brush: ruby; title: ; notranslate">
name &quot;wordpress&quot;
description &quot;Blog using wordpress&quot;
recipes &quot;apt&quot;, &quot;build-essential&quot;, &quot;chef::client_service&quot;, &quot;users::sysadmins&quot;,
        &quot;sudo&quot;, &quot;postfix&quot;, &quot;mysql::server&quot;, &quot;wordpress&quot;, &quot;wordpress::blog_user&quot;,
        &quot;wordpress::add_swap&quot;, &quot;vsftpd&quot;

override_attributes(
  &quot;postfix&quot; =&gt; {&quot;myhostname&quot; =&gt; &quot;test.ibd.com&quot;, &quot;mydomain&quot; =&gt; &quot;ibd.com&quot;},
  &quot;authorization&quot; =&gt; {
    &quot;sudo&quot; =&gt; {
      &quot;groups&quot; =&gt; [],
      &quot;users&quot; =&gt; [&quot;rberger_test&quot;, &quot;ubuntu&quot;]
    }
  },
  &quot;wordpress&quot; =&gt; {
     &quot;server_aliases&quot; =&gt; %w(test.ibd.com wordpress-test.ibd.com),
     &quot;version&quot; =&gt; &quot;3.0.4&quot;,
     &quot;checksum&quot; =&gt; &quot;c68588ca831b76ac8342d783b7e3128c9f4f75aad39c43a7f2b33351634b74de&quot;,
     &quot;blog_updater&quot; =&gt; {
       &quot;username&quot; =&gt; &quot;blog&quot;,
       &quot;password&quot; =&gt; &quot;big-secret&quot;
     }
   },
   &quot;vsftpd&quot; =&gt; {&quot;chroot_users&quot; =&gt; %w(blog)}
)
</pre>
<p>The recipes line will be used to determine which cookbook/recipes (order is important) should be loaded by Chef when the chef-client is run on your new server.</p>
<ul>
<li><strong>apt: </strong>Configures various APT components on Debian-like systems.</li>
<li><strong>build-essential: </strong>Installs C compiler / build tools</li>
<li><strong>chef::client_service:</strong> Sets up a Chef client daemon to run periodically</li>
<li><strong>users::sysadmins:</strong> Creates users with ssh authorized keys. Requires a data bag to be configured with user info</li>
<li><strong>sudo:</strong> Installs sudo and configures the /etc/sudoers file</li>
<li><strong>postfix: </strong>Installs and configures postfix for outgoing email</li>
<li><strong>mysql::server: </strong>Installs &amp; configures packages required for mysql servers</li>
<li><strong>wordpress:</strong> Installs and configures WordPress according to the instructions at http://codex.wordpress.org/Installing_WordPress</li>
<li><strong>wordpress::blog_user:</strong> Custom add-on recipe to add a user named &#8220;blog&#8221; to use with vsftpd for automatic wordpress and plugin updates</li>
<li><strong>wordpress::add_swap:</strong> Custom add-on recipe to add a swap partition to the instance</li>
<li><strong>vsftpd:</strong> Very Basic installation and configuration of vsftpd to support Secure (SSL) SFTP</li>
</ul>
<p>The <em>override_attributes</em> are used to configure various cookbooks.</p>
<ul>
<li><strong>postfix</strong> &#8211; Parameters for the postfix cookbook. Mainly sets the host and domain name to be meaningful</li>
<li><strong>authorization</strong> &#8211; Configures the sudo cookbook. Tells which users and groups should have sudo capability</li>
<li><strong>wordpress</strong> &#8211; Some of these override values in the base cookbook and others configure the site-cookbook additions
<ul>
<li><strong>server_aliases</strong> &#8211; Sets aliases for the blog name. These will be used as ServerAlias names in the apache config.</li>
<li><strong>version</strong> &#8211; The version of wordpress to download.</li>
<li><strong>checksum</strong> &#8211; The checksum of the tar image of the wordpress download.</li>
<li><strong>blog_updater</strong> &#8211; Info needed to create a user that will do auto updates to wordpress via vsftpd
<ul>
<li><strong>username</strong> &#8211; The username of the user</li>
<li><strong>password</strong> &#8211; The password to create for the user</li>
</ul>
</li>
</ul>
</li>
<li><strong>vsftpd</strong> &#8211; Sets which user should be allowed to access via ftp and have their home directory chroot&#8217;d (should match the username under wordpress =&gt; blog_updater above).</li>
</ul>
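<p>Putting the run list and override attributes described above together, a minimal roles/wordpress.rb could look like the following sketch, written here as a shell heredoc so it can be pasted into ~/chef-repo. The attribute keys and values are inferred from the descriptions above (the checksum is omitted and the hostnames are placeholders), so check them against the actual cookbooks before uploading:</p>

```shell
# Sketch only: attribute names and values below are inferred from the
# descriptions in this post, not copied from the actual repo; adjust
# them to match your cookbooks before uploading.
mkdir -p roles
cat > roles/wordpress.rb <<'EOF'
name "wordpress"
description "Self-contained WordPress server"
run_list(
  "recipe[users::sysadmins]",
  "recipe[sudo]",
  "recipe[postfix]",
  "recipe[mysql::server]",
  "recipe[wordpress]",
  "recipe[wordpress::blog_user]",
  "recipe[wordpress::add_swap]",
  "recipe[vsftpd]"
)
override_attributes(
  "postfix" => { "myhostname" => "blog", "mydomain" => "example.com" },
  "authorization" => { "sudo" => { "groups" => ["sysadmin"], "passwordless" => true } },
  "wordpress" => {
    "server_aliases" => ["blog.example.com", "www.example.com"],
    "version" => "3.0.4",
    "blog_updater" => { "username" => "blog", "password" => "big-secret" }
  },
  "vsftpd" => { "users" => ["blog"] }
)
EOF
```

<p>This is just a starting point; the next section shows how to upload whatever role file you end up with.</p>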
<h3>Upload the cookbooks and roles to Opscode Chef Platform</h3>
<p>Run the following commands while you are in ~/chef-repo. This will upload the wordpress role and all the cookbooks in your chef-repo to your account on the Opscode Chef Platform:</p>
<pre class="brush: plain; title: ; notranslate">
knife role from file roles/wordpress.rb
knife cookbook upload -a
</pre>
<h3>Create the Users databag</h3>
<p>The <em>users</em> <em>cookbook</em> will take info from an Opscode Chef Server Data Bag named <em>users</em>. There can be an item for each user that you want to create a login for. The standard Opscode <em>users cookbook</em> expects the users set up in the data bags to be in the group sysadmin and to have the ability to sudo and gain root powers.</p>
<p>We&#8217;ll need to create an item for each user you would like to have on your system; I suggest you make at least one for yourself. Here is the data bag item I used for my setup. I don&#8217;t show the ssh key; you&#8217;ll have to substitute your own public ssh key (the one you will use to ssh to the server) for &lt;<em>your public ssh key&gt;</em>. Having an ssh key is a requirement, as described in the next section on the <em>sudo</em> <em>cookbook</em>.</p>
<p>Here is the JSON representation of my user data bag item. Create the directory ~/chef-repo/data_bags/users and put the following JSON in the file ~/chef-repo/data_bags/users/&lt;username&gt;.json, where &lt;username&gt; is the username you want on the target system. The id will be the name of the item in the data bag and will become your username (in this case <em>rberger_test</em>). You will also need to include the public ssh key you want associated with this user; you need to have created an ssh keypair (private and public) locally using something like ssh-keygen. You don&#8217;t really need the openid and should be able to set it to an empty string (&#8220;&#8221;):</p>
<pre class="brush: plain; title: rberger_test.json; notranslate">
{
  &quot;id&quot;: &quot;rberger_test&quot;,
  &quot;comment&quot;: &quot;Robert J. Berger&quot;,
  &quot;uid&quot;: 2001,
  &quot;groups&quot;: &quot;sysadmin&quot;,
  &quot;shell&quot;: &quot;/bin/bash&quot;,
  &quot;openid&quot;: &quot;rberger_test.myopenid.com&quot;,
  &quot;ssh_keys&quot;: &quot;&lt;your public ssh key&gt;&quot;
}
</pre>
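<p>As a concrete sketch, here is how the directory and item file could be created for a hypothetical user &#8220;alice&#8221; (the username, uid, comment, and ssh key value are all placeholders; substitute your own):</p>

```shell
# Hypothetical user "alice"; substitute your own values and a real public key.
mkdir -p ~/chef-repo/data_bags/users
cat > ~/chef-repo/data_bags/users/alice.json <<'EOF'
{
  "id": "alice",
  "comment": "Alice Example",
  "uid": 2002,
  "groups": "sysadmin",
  "shell": "/bin/bash",
  "openid": "",
  "ssh_keys": "ssh-rsa AAAAB3...placeholder... alice@example.com"
}
EOF
# Sanity-check that the item is valid JSON before uploading it with knife
python3 -m json.tool ~/chef-repo/data_bags/users/alice.json > /dev/null
```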
<p>You will need to create the users databag and then upload your version of the user JSON (rberger_test.json in the example) to the Chef server with the following commands:</p>
<pre class="brush: plain; title: ; notranslate">
knife data bag create users
knife data bag from file users data_bags/users/rberger_test.json
</pre>
<p>With Amazon EC2 instances it&#8217;s best to allow access only via ssh keys, without passwords. Since logins are protected by ssh keys and the users have no passwords, you need to make sure sudo is set up to let specific users (sysadmins) invoke sudo without a password. The users cookbook creates such a user based on the users data bag, but the sudo cookbook does not set up sudoers to work without a password. We will modify the sudoers.erb template later. Make sure you don&#8217;t deploy without this modification, as the default sudo cookbook will make it impossible to sudo on an EC2 instance after it runs.</p>
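<p>For reference, the passwordless sudo we want out of the modified sudoers.erb template boils down to a sudoers entry like the following sketch (the exact line the sudo cookbook&#8217;s template renders may differ):</p>

```
# Members of the sysadmin group may run any command without a password
%sysadmin ALL=(ALL) NOPASSWD:ALL
```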
<h2>Configure AWS</h2>
<p>You can do most of the following using a GUI web app such as <a href="https://console.aws.amazon.com/ec2/home" target="_blank" rel="noopener">Amazon&#8217;s AWS console</a>, the Firefox plugin <a href="http://aws.amazon.com/developertools/609?_encoding=UTF8&amp;jiveRedirect=1" target="_blank" rel="noopener">ElasticFox</a>, other such GUI tools, or the command line <a href="http://aws.amazon.com/developertools/351?_encoding=UTF8&amp;queryArg=searchQuery&amp;x=0&amp;fromSearch=1&amp;y=0&amp;searchPath=developertools&amp;searchQuery=ec2-api-tools" target="_blank" rel="noopener">ec2-api-tools</a>. For now, we&#8217;ll show how to do this with the Amazon AWS Console.</p>
<h3>Set up Security Group</h3>
<p>Add a wordpress group that enables ssh, http, and https. You should open at least http and https to all IP addresses (represented by Source IP: 0.0.0.0/0). You can open up ssh to every IP or just to your own development network or host; in this example we&#8217;ll open it up to the world. Note: by default ping (ICMP) is not enabled, so you cannot ping your instance. You can enable ping by adding a line with any Connection Method, Protocol set to ICMP, From Port and To Port set to -1, and Source IP 0.0.0.0/0.</p>
<p><img decoding="async" loading="lazy" class="alignleft wp-image-686 size-medium" title="AWS Management Console Security Group" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group-300x223.jpg?resize=300%2C223" alt="" width="300" height="223" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=300%2C223&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=150%2C111&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=400%2C298&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?w=816&amp;ssl=1 816w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></p>
<figure id="attachment_687" aria-describedby="caption-attachment-687" style="width: 300px" class="wp-caption aligncenter"><img decoding="async" loading="lazy" class="wp-image-687 size-medium" title="Enter Security Group Name" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name-300x171.jpg?resize=300%2C171" alt="" width="300" height="171" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=300%2C171&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=150%2C85&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=400%2C228&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?w=541&amp;ssl=1 541w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-687" class="wp-caption-text">Enter the name and description of the Security Group</figcaption></figure>
<figure id="attachment_688" aria-describedby="caption-attachment-688" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-688 size-medium" title="Setting ports" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports-300x184.jpg?resize=300%2C184" alt="" width="300" height="184" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=300%2C184&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=150%2C92&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=400%2C245&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?w=986&amp;ssl=1 986w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-688" class="wp-caption-text">Set the Ports that are to be enabled (Select the Connection Method, enter the Source IP, and click Save)</figcaption></figure>
<h3>Generate an SSH Key Pair for accessing your instance[s]</h3>
<p>You need to use the Amazon Key Pair generator to generate a key that will be used to make initial ssh connections to your new instances after they are created. You can do this on the AWS Management Console&#8217;s EC2 Key Pairs page:</p>
<figure id="attachment_694" aria-describedby="caption-attachment-694" style="width: 300px" class="wp-caption aligncenter"><img decoding="async" loading="lazy" class="wp-image-694 size-medium" title="AWS Management Console Key Pairs" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-300x175.jpg?resize=300%2C175" alt="" width="300" height="175" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=300%2C175&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=150%2C87&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=400%2C234&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?w=894&amp;ssl=1 894w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-694" class="wp-caption-text">Navigate to the Key Pairs page and click on Create Key Pair</figcaption></figure>
<p>You can name the key pair anything, but since you may want to use this key pair to access this and future instances, you might want to name it something general like aws-east. Here we&#8217;re going to name it something more specific, aws-wordpress, just for this example.</p>
<figure id="attachment_695" aria-describedby="caption-attachment-695" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-695 size-medium" title="Create Key Pair naming" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming-300x183.jpg?resize=300%2C183" alt="" width="300" height="183" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?resize=300%2C183&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?resize=150%2C91&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?w=354&amp;ssl=1 354w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-695" class="wp-caption-text">Enter the name for the key</figcaption></figure>
<figure id="attachment_697" aria-describedby="caption-attachment-697" style="width: 300px" class="wp-caption alignnone"><img decoding="async" loading="lazy" class="wp-image-697 size-medium" title="keypair created message" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message-300x198.jpg?resize=300%2C198" alt="" width="300" height="198" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?resize=300%2C198&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?resize=150%2C99&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?w=353&amp;ssl=1 353w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-697" class="wp-caption-text">After the key pair is created, make sure to save the private key that is downloaded automatically</figcaption></figure>
<p>At this point a file named aws-wordpress.pem will have been downloaded by your browser. Make sure not to lose it! Put it into your ~/.ssh directory and chmod it to 0600:</p>
<pre class="brush: plain; title: ; notranslate">
chmod 0600 ~/.ssh/aws-wordpress.pem
</pre>
<p>The final Key Pairs page on the AWS Management Console should look something like:</p>
<figure id="attachment_696" aria-describedby="caption-attachment-696" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-696 size-medium" title="Final Keypair display" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display-300x177.jpg?resize=300%2C177" alt="" width="300" height="177" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=300%2C177&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=150%2C88&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=400%2C236&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?w=887&amp;ssl=1 887w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-696" class="wp-caption-text">Final Key Pair Page</figcaption></figure>
<h2>Create the Instance and Bootstrap Chef on the Instance</h2>
<p>The Chef knife command can launch EC2 (and other cloud) instances. This process automatically installs chef and all its dependencies after the instance is created. If all goes well, it then loads and executes your roles and cookbooks on the instance, creating your server.</p>
<p>You can see what options are available to this command:</p>
<pre class="brush: plain; title: ; notranslate">
# knife ec2 server create --help
knife ec2 server create (options)
    -Z, --availability-zone ZONE     The Availability Zone
    -A, --aws-access-key-id KEY      Your AWS Access Key ID
    -K SECRET,                       Your AWS API Secret Access Key
        --aws-secret-access-key
        --user-data USER_DATA_FILE   The EC2 User Data file to provision the instance with
        --bootstrap-version VERSION  The version of Chef to install
    -N, --node-name NAME             The Chef node name for your new node
        --server-url URL             Chef Server URL
    -k, --key KEY                    API Client Key
        --color                      Use colored output
    -c, --config CONFIG              The configuration file to use
        --defaults                   Accept default values for all questions
    -d, --distro DISTRO              Bootstrap a distro using a template
        --ebs-no-delete-on-term      Do not delete EBS volumn on instance termination
        --ebs-size SIZE              The size of the EBS volume in GB, for EBS-backed instances
    -e, --editor EDITOR              Set the editor to use for interactive commands
    -E, --environment ENVIRONMENT    Set the Chef environment
    -f, --flavor FLAVOR              The flavor of server (m1.small, m1.medium, etc)
    -F, --format FORMAT              Which format to use for output
    -i IDENTITY_FILE,                The SSH identity file used for authentication
        --identity-file
    -I, --image IMAGE                The AMI for the server
        --no-color                   Don't use colors in the output
    -n, --no-editor                  Do not open EDITOR, just accept the data as is
        --no-host-key-verify         Disable host key verification
    -u, --user USER                  API Client Username
        --prerelease                 Install the pre-release chef gems
        --print-after                Show the data after a destructive operation
        --region REGION              Your AWS region
    -r, --run-list RUN_LIST          Comma separated list of roles/recipes to apply
    -G, --groups X,Y,Z               The security groups for this server
    -S, --ssh-key KEY                The AWS SSH key id
    -P, --ssh-password PASSWORD      The ssh password
    -x, --ssh-user USERNAME          The ssh username
    -s, --subnet SUBNET-ID           create node in this Virtual Private Cloud Subnet ID (implies VPC mode)
        --template-file TEMPLATE     Full path to location of template to use
    -V, --verbose                    More verbose output. Use twice for max verbosity
    -v, --version                    Show chef version
    -y, --yes                        Say yes to all prompts for confirmation
    -h, --help                       Show this message
</pre>
<p>The actual command we&#8217;ll use is:</p>
<pre class="brush: plain; title: ; notranslate">
knife ec2 server create --run-list 'role[wordpress]' --node-name test-wordpress --flavor t1.micro \
--identity-file ~/.ssh/aws-wordpress.pem --image ami-a2f405cb --groups wordpress \
--ssh-key aws-wordpress --ssh-user ubuntu --ebs-no-delete-on-term
</pre>
<h3>Details of knife command to launch instance</h3>
<p><strong>role[wordpress]: </strong>The role[s] given to this instance. More than one can be specified as an ordered, space-separated list of strings: &#8216;role[role0]&#8217; &#8216;role[role1]&#8217; &#8230;</p>
<p><strong>&#8211;node-name test-wordpress:</strong> The name of the instance. Used by Chef to name the Node and Client</p>
<p><strong>&#8211;flavor t1.micro:</strong> The <a href="http://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener">EC2 Instance Type</a>. Here we are using the smallest type. This is the only one that is <a href="http://aws.amazon.com/free/" target="_blank" rel="noopener">&#8220;free&#8221;</a></p>
<p><strong>&#8211;identity-file ~/.ssh/aws-wordpress.pem:</strong> The path to the ssh private key that was downloaded earlier from the AWS Management Console. You could potentially not include this if you added the key to your ssh-agent.</p>
<p><strong>&#8211;image ami-a2f405cb: </strong>The Amazon Machine Image assigned to this instance. It is the image of the root file system for the instance and thus determines what OS and software is booted when the instance is started. In this case it is the Canonical Ubuntu 10.04 32-bit AMI. You can find the latest Ubuntu AMIs for each region at the top of the home page of <a href="http://alestic.com/" target="_blank" rel="noopener">Eric Hammond&#8217;s super helpful site</a>.</p>
<p><strong>&#8211;groups wordpress:</strong> The Security Group[s] to be assigned to this instance. In this case it&#8217;s &#8220;wordpress&#8221;. Multiple groups can be assigned as a comma-separated list.</p>
<p><strong>&#8211;ssh-key aws-wordpress: </strong>The name of the SSH Key Pair that was downloaded from the AWS Management Console</p>
<p><strong>&#8211;ssh-user ubuntu: </strong>The user name for ssh access; this AMI uses &#8220;ubuntu&#8221;. AMIs are usually configured to allow only a single user to ssh by default. Different AMIs use different names, such as root or ec2-user.</p>
<p><strong>&#8211;ebs-no-delete-on-term: </strong>By default, the EBS volume is deleted when the EC2 instance is terminated. Adding this flag makes the EBS volume continue to exist after the EC2 instance has been terminated. You want this for your final deployed site so that if something goes wrong with the EC2 instance you will still have your EBS volume and can use it to create a new EC2 instance without losing your data. (That is the topic of another tutorial, though!)</p>
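<p>Since every flag above is just a value, you can also build the same command from shell variables, in the same cut-and-paste style used later in this post. This sketch only assembles and prints the command; it does not run knife:</p>

```shell
# Values from this example; substitute your own.
node_name=test-wordpress
flavor=t1.micro
identity=~/.ssh/aws-wordpress.pem
image=ami-a2f405cb
groups=wordpress
ssh_key=aws-wordpress
ssh_user=ubuntu

# Assemble the launch command (backslash-newlines inside the quotes
# are line continuations, so $cmd is a single line).
cmd="knife ec2 server create --run-list 'role[wordpress]' \
 --node-name $node_name --flavor $flavor --identity-file $identity \
 --image $image --groups $groups --ssh-key $ssh_key --ssh-user $ssh_user \
 --ebs-no-delete-on-term"
echo "$cmd"
```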
<h3>Successful launch results</h3>
<p>After you fire off the knife ec2 server create command, you&#8217;ll see something like:</p>
<pre class="brush: plain; title: ; notranslate">
[WARN] Fog::AWS::EC2#new is deprecated, use Fog::AWS::Compute#new instead (/Library/Ruby/Gems/1.8/gems/chef-0.9.12/lib/chef/knife/ec2_server_create.rb:145:in `run')
Instance ID: i-d10ae5bd
Flavor: t1.micro
Image: ami-a2f405cb
Availability Zone: us-east-1b
Security Groups: wordpress
SSH Key: aws-wordpress

Waiting for server..............
Public DNS Name: ec2-184-73-44-17.compute-1.amazonaws.com
Public IP Address: 184.73.44.17
Private DNS Name: domU-12-31-39-10-60-17.compute-1.internal
Private IP Address: 10.198.99.229

Waiting for sshd...done
INFO: Bootstrapping Chef on ec2-184-73-44-17.compute-1.amazonaws.com
</pre>
<p>That will be followed by loads of debugging info as the knife command bootstraps chef and its related packages and gems. This can go on for 10 to 20 minutes. Eventually you&#8217;ll see something along the lines of:</p>
<pre class="brush: plain; title: ; notranslate">
Instance ID: i-d10ae5bd
Flavor: t1.micro
Image: ami-a2f405cb
Availability Zone: us-east-1b
Security Groups: wordpress
SSH Key: aws-wordpress
Public DNS Name: ec2-184-73-44-17.compute-1.amazonaws.com
Public IP Address: 184.73.44.17
Private DNS Name: domU-12-31-39-10-60-17.compute-1.internal
Private IP Address: 10.198.99.229
Run List: role[wordpress]
</pre>
<p>Look just before this block to check whether Chef finished running the wordpress-related cookbooks cleanly. If you don&#8217;t see any errors within a page above the last block, all is ok. The last few lines should look something like:</p>
<pre class="brush: plain; title: ; notranslate">
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Chef Run complete in 10.945359 seconds
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: cleaning the checksum cache
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Running report handlers
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Report handlers complete
</pre>
<p>If there are errors, you&#8217;ll have to debug your cookbooks which is beyond the scope of this post.</p>
<p>Now you should be able to log into your instance either as the default ubuntu user or as the user you created in the wordpress role and the users data bag (rberger_test in this example):</p>
<pre class="brush: plain; title: ; notranslate">
# Using the ubuntu user and an explicit ssh key
ssh -i ~/.ssh/aws-wordpress.pem ubuntu@ec2-184-73-44-17.compute-1.amazonaws.com

# Using the user created by the cookbook and a key that is already on your ssh-agent
ssh rberger_test@ec2-184-73-44-17.compute-1.amazonaws.com
</pre>
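<p>As a convenience, an entry like this sketch in your local ~/.ssh/config (the host alias and DNS name here are just this example&#8217;s values) lets you connect with a plain <em>ssh wordpress</em>:</p>

```
Host wordpress
    HostName ec2-184-73-44-17.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws-wordpress.pem
```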
<h2>Configure DNS to have preferred FQDNs point to your instance</h2>
<p>You can access your site using the Amazon Public DNS name, but that would not be good in general. You probably want to access it via a URL like <em>http://www.mydomain.com</em>. To do this you must configure your DNS to add a CNAME mapping your FQDN to the Amazon Public DNS name. How this is done is specific to your DNS service provider. Bottom line: you want a CNAME, not an A record (i.e. an alias of your FQDN for the Amazon Public DNS name, not an A record that uses the Amazon IP address). There are some issues with using an A record with Amazon. You probably won&#8217;t see them in a simple situation such as hosting a single instance, but once you have many instances that need to talk to each other, using the CNAME will make life easier.</p>
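<p>In BIND zone file terms (your DNS provider&#8217;s interface will vary, and the names here are placeholders from this example), the record you want looks like:</p>

```
; A CNAME alias pointing at the Amazon Public DNS name, not an A record with the IP
www.mydomain.com.   300   IN   CNAME   ec2-184-73-44-17.compute-1.amazonaws.com.
```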
<h2>Installing your WordPress Blog</h2>
<p>At this point you should be able to access your new instance via http. The initial screen will be the WordPress setup dialog. You should be able to access it via http using the Amazon Public DNS name or any CNAME aliases you created and also added to the wordpress.rb role file&#8217;s override attribute (wordpress =&gt; server_aliases). You should see something like:</p>
<figure id="attachment_717" aria-describedby="caption-attachment-717" style="width: 266px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-717 size-medium" title="WordPress › Installation" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation-266x300.jpg?resize=266%2C300" alt="" width="266" height="300" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=266%2C300&amp;ssl=1 266w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=133%2C150&amp;ssl=1 133w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=400%2C449&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?w=775&amp;ssl=1 775w" sizes="(max-width: 266px) 100vw, 266px" data-recalc-dims="1" /><figcaption id="caption-attachment-717" class="wp-caption-text">WordPress startup installation page</figcaption></figure>
<p>It is possible to move an existing WordPress Blog to this new instance but that is beyond the scope of this post.</p>
<h2>Happily Ever After</h2>
<p>By default, the chef-client runs every half hour on the instance. If you change any of the cookbooks and push them up to the Opscode Chef Server, those changes will be propagated to the instance the next time the chef-client runs.</p>
<p>This is the way to maintain the server: by updating or adding cookbooks, you define the state of the server, and the server will converge to that state when the chef-client runs. The inverse is also true: if you change something directly on the server and the service you changed is managed by Chef, your direct changes could be reverted the next time the chef-client runs.</p>
<p>You shouldn&#8217;t need to, but you can stop the chef-client by running the following command while ssh&#8217;d to the instance:</p>
<pre class="brush: plain; title: ; notranslate">
sudo /etc/init.d/chef-client stop
</pre>
<p>That will be reset (i.e. automatic chef-client runs will be re-enabled) if you reboot. You can permanently disable the automatic running of chef-client by running the following commands while ssh&#8217;d into the instance:</p>
<pre class="brush: plain; title: ; notranslate">
cd /etc/init.d
sudo update-rc.d -f chef-client remove
</pre>
<h3>Using the WordPress Automatic Upgrade Mechanism</h3>
<p>At this point you should be able to use your wordpress blog as normal. You should be able to use the automatic update feature of WordPress to update WordPress itself and the plugins. When you are asked to supply the Connection Information, put in:</p>
<ul>
<li><strong>Hostname</strong>: The public FQDN of the host (either the EC2 Public DNS Name or one of the DNS CNAMEs you set up)</li>
<li><strong>FTP Username</strong>: &#8220;blog&#8221; (or whatever you set node[:wordpress][:blog_updater][:username] to in the wordpress.rb role file)</li>
<li><strong>FTP Password</strong>: &#8220;big-secret&#8221; (or whatever you <strong>should</strong> have set node[:wordpress][:blog_updater][:password] to in the wordpress.rb role file)</li>
<li><strong>Connection Type</strong>: FTPS (SSL)</li>
</ul>
<p>For instance, for the Plugin Update Page:</p>
<figure id="attachment_746" aria-describedby="caption-attachment-746" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-746 size-medium" title="Upgrade Plugins ‹ WordPress Test — WordPress" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress-300x198.jpg?resize=300%2C198" alt="" width="300" height="198" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=300%2C198&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=150%2C99&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=400%2C265&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?w=771&amp;ssl=1 771w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-746" class="wp-caption-text">Upgrade Plugins Connection Information</figcaption></figure>
<p>That should work and be secure using the vsftpd server that we installed automatically.</p>
<p>Hopefully all will work well for you. I will try to answer questions but can&#8217;t guarantee quick response here. A great resource is the Opscode Chef IRC channel <a href="irc://irc.freenode.net/chef" target="_blank" rel="noopener">irc.freenode.net #chef</a>. And of course the <a href="http://wiki.opscode.com/" target="_blank" rel="noopener">Opscode Chef Wiki</a> and the <a href="http://help.opscode.com/home" target="_blank" rel="noopener">Opscode Support Site</a>.</p>
<h3>Source Code at Github</h3>
<p>You can get all the source for this at <a href="https://github.com/rberger/ibd-wordpress-repo" target="_blank" rel="noopener">https://github.com/rberger/ibd-wordpress-repo</a></p><p>The post <a href="https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/">Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/feed/</wfw:commentRss>
			<slash:comments>28</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">599</post-id>	</item>
		<item>
		<title>Copy an EBS AMI image to another Amazon EC2 Region</title>
		<link>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/</link>
					<comments>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 15 Mar 2010 08:45:24 +0000</pubDate>
				<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=551</guid>

					<description><![CDATA[<p>Since I&#8217;ve already created an image I liked in the us-west-1 region, I would like to reuse it in other regions. Turns out there is&#8230;</p>
<p>The post <a href="https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/">Copy an EBS AMI image to another Amazon EC2 Region</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Since I&#8217;ve already created an image I liked in the us-west-1 region, I would like to reuse it in other regions. Turns out there is no mechanism within Amazon EC2 to do that. (See <a href="http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?FAQ_Regions_Availability_Zones.html" target="_self">How do I launch an Amazon EBS volume from a snapshot across Regions?</a>). I did find <a href="http://citizen428.net/archives/420-Move-EC2-AMIs-between-regions.html" target="_self">one post</a> that talked a bit about how it can be done &#8220;out of band&#8221;. So I figured I would give that a try instead of doing a full recreation in the new region.</p>
<h2>Prepare the Source Instance and Volume</h2>
<h3>Start an instance in the source region</h3>
<p>Here I&#8217;ll start an instance in us-west-1a, where I have the EBS image I want to copy. In this case I&#8217;ll use the same image I want to copy, but it could be any image as long as it&#8217;s in the same region as the EBS AMI image that is to be copied. We are going to use the instance info to figure out some parameters for creating the new AMI, though, so if you don&#8217;t make the source instance the same AMI as the one you are copying, you will need to supply some of the parameters yourself.</p>
<p>You can use a tool like ElasticFox to do the following instance creation; here we&#8217;ll do it with the command line tools.</p>
<h3>Set some Shell source variables on host machine</h3>
<p>To make these instructions usable as a cookbook, we&#8217;ll set some shell variables once; all the following instructions use them, so you can just cut and paste the commands into your shell.</p>
<pre>src_keypair=id_runa-staging-us-west
src_fullpath_keypair=~/.ssh/runa/id_runa-staging-us-west
src_availability_zone=us-west-1a
src_instance_type=m1.large
src_region=us-west-1
src_origin_ami=ami-1f4e1f5a
src_device=/dev/sdh
src_dir=/src
src_user=ubuntu</pre>
<h3>Start up the source instance and capture the instanceid</h3>
<pre>src_instanceid=$(ec2-run-instances \
  --key $src_keypair \
  --availability-zone $src_availability_zone \
  --instance-type $src_instance_type \
  $src_origin_ami \
  --region $src_region  | \
  egrep ^INSTANCE | cut -f2)
echo "src_instanceid=$src_instanceid"

# Wait for the instance to move to the “running” state
while src_public_fqdn=$(ec2-describe-instances --region $src_region "$src_instanceid" | \
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $src_public_fqdn; do echo -n .; sleep 1; done
echo src_public_fqdn=$src_public_fqdn</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo src_public_fqdn=$src_public_fqdn
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com</pre>
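<p>The wait loop above is a general pattern: poll a command once a second until it prints something. Here is a minimal sketch of the same idea as a reusable function; the file-based stand-in for ec2-describe-instances is just for demonstration.</p>

```shell
# Poll "$@" once a second until its stdout is non-empty, then print it.
# Same shape as the while/test -z loop above.
wait_for_output() {
  while out=$("$@") && [ -z "$out" ]; do
    printf '.'
    sleep 1
  done
  echo "$out"
}

# Stand-in for ec2-describe-instances: a file that already holds the answer,
# so the loop exits on its first check.
tmpfile=$(mktemp)
echo 'ec2-184-72-2-93.us-west-1.compute.amazonaws.com' > "$tmpfile"
fqdn=$(wait_for_output cat "$tmpfile")
rm -f "$tmpfile"
echo "fqdn=$fqdn"
```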
<h3>Create a volume from the EBS AMI snapshot</h3>
<p>Normally, when you start an EBS AMI instance, EC2 automatically creates a volume from the snapshot associated with the AMI. Here we create the volume from the snapshot ourselves.</p>
<pre># Get the volume id
ec2-describe-instances --region $src_region "$src_instanceid" &gt; /tmp/src_instance_info
src_volumeid=$(egrep ^BLOCKDEVICE /tmp/src_instance_info | cut -f3); echo $src_volumeid
# Now get the snapshot id from the volume id
ec2-describe-volumes --region $src_region $src_volumeid | egrep ^VOLUME &gt; /tmp/volume_info
src_snapshotid=$(cut -f4 /tmp/volume_info)
echo $src_snapshotid
src_size=$(cut -f3 /tmp/volume_info)
echo $src_size
# Create a new volume from the snapshot
src_volumeid=$(ec2-create-volume --region $src_region --snapshot $src_snapshotid -z $src_availability_zone | egrep ^VOLUME | cut -f2)
echo $src_volumeid</pre>
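<p>As a sanity check on those cut field numbers, here is the same parsing run against a made-up VOLUME line (the ids are illustrative; the tab-separated field order of the old ec2-describe-volumes output is VOLUME, volume id, size, snapshot id, zone, status, timestamp):</p>

```shell
# A made-up ec2-describe-volumes VOLUME line, tab-separated.
volume_info=$(printf 'VOLUME\tvol-6e7fee06\t15\tsnap-1a2b3c4d\tus-west-1a\tavailable\t2010-03-14T09:02:58+0000')

# cut counts tab-separated fields from 1, so the size is field 3
# and the snapshot id is field 4.
src_size=$(echo "$volume_info" | cut -f3)
src_snapshotid=$(echo "$volume_info" | cut -f4)
echo "size=${src_size}GB snapshot=$src_snapshotid"
```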
<h3>Attach the EBS image of the AMI you want to copy</h3>
<p>Now we&#8217;ll attach the new volume to the running source instance so it can be mounted as a plain filesystem. In this case it holds the same image as the one we launched, but it doesn&#8217;t have to be the same image or even the same architecture.</p>
<pre>ec2-attach-volume --region $src_region $src_volumeid -i $src_instanceid -d $src_device</pre>
<p>You should see something like:</p>
<pre>ATTACHMENT	vol-6e7fee06	i-fb0804be	/dev/sdh	attaching	2010-03-14T09:02:58+0000</pre>
<h2>Prepare the Destination Instance and Volume</h2>
<h3>Set some Shell destination variables on host machine</h3>
<p>You&#8217;ll want to tune these to your needs. This example makes the destination size the same as the source. You could make the destination an arbitrary size as long as it fits the source data.</p>
<pre>dst_keypair=runa-production-us-east
dst_fullpath_keypair=~/.ssh/runa/id_runa-production-us-east
dst_availability_zone=us-east-1b
dst_instance_type=m1.large
dst_region=us-east-1
dst_origin_ami=ami-7d43ae14
dst_size=$src_size
dst_device=/dev/sdh
dst_dir=/dst
dst_user=ubuntu</pre>
<h3>Start up the destination instance and capture the dst_instanceid</h3>
<pre>dst_instanceid=$(ec2-run-instances \
  --key $dst_keypair \
  --availability-zone $dst_availability_zone \
  --instance-type $dst_instance_type \
  $dst_origin_ami \
  --region $dst_region  | \
  egrep ^INSTANCE | cut -f2)
echo "dst_instanceid=$dst_instanceid"

# Wait for the instance to move to the “running” state
while dst_public_fqdn=$(ec2-describe-instances --region $dst_region "$dst_instanceid" | \
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $dst_public_fqdn; do echo -n .; sleep 1; done
echo dst_public_fqdn=$dst_public_fqdn</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo dst_public_fqdn=$dst_public_fqdn
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com</pre>
<h3>Create an empty destination volume</h3>
<pre>dst_volumeid=$(ec2-create-volume --region $dst_region --size $dst_size -z $dst_availability_zone | egrep ^VOLUME | cut -f2)
echo $dst_volumeid</pre>
<h3>Attach the empty destination volume</h3>
<p>Now we&#8217;ll attach the empty volume we just created to the running destination instance. We&#8217;ll format and mount it in a later step.</p>
<pre>ec2-attach-volume --region $dst_region $dst_volumeid -i $dst_instanceid -d $dst_device</pre>
<p>You should see something like:</p>
<pre>ATTACHMENT	vol-450ed02c	i-65be1f0e	/dev/sdh	attaching	2010-03-14T09:39:20+0000</pre>
<h2>Copy the data from the Source Volume to the Destination Volume</h2>
<h3>Copy your credentials to the source machine</h3>
<p>We&#8217;re going to use rsync to copy from the source to the destination, tunneled through ssh. This eliminates any issues with EC2 security groups, but it does mean you have to copy an ssh private key to the source machine so that it can access the destination machine via ssh.</p>
<pre>scp -i $src_fullpath_keypair $dst_fullpath_keypair ${src_user}@${src_public_fqdn}:.ssh</pre>
<h3>Mount the source and destination volumes on their instances</h3>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mkdir -p $src_dir
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mount $src_device $src_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkfs.ext3 -F $dst_device
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkdir -p $dst_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mount $dst_device $dst_dir</pre>
<h3>Get the FQDN of the Amazon internal address of the destination machine</h3>
<p>We&#8217;re assuming that the dst instance is the us-east equivalent of the us-west source base AMI, so we can use its kernel and ramdisk to build the new AMI later.</p>
<pre>ec2-describe-instances --region $dst_region "$dst_instanceid" &gt; /tmp/dst_instance_info
dst_internal_fqdn=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f5); echo $dst_internal_fqdn
dst_kernel=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f13); echo $dst_kernel
dst_ramdisk=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f14) ;echo $dst_ramdisk</pre>
<h2>Commands to run on the source machine</h2>
<p>You could do the rsync by logging into the source machine and running the commands below. I tried to drive this entirely with ssh commands from my local host, but the first ssh from source to destination has to be interactively authenticated, which was a blocker for me. Instead, you can log into the source machine and sudo ssh to the destination machine once (sudo ssh is needed because the rsync has to run with sudo, and ssh keys and known hosts are stored separately for the sudo user and the regular user).<br />
I&#8217;ll show both ways.<br />
Here&#8217;s how you can ssh to the source machine:</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}</pre>
<h3>Set up some shell variables on the source machine shell environment</h3>
<pre># This is the key you just copied over
dst_fullpath_keypair=~/.ssh/id_runa-production-us-east
dst_keypair=runa-production-us-east
# You need to use the public FQDNs since the copy is cross-region
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com
dst_user=ubuntu
src_user=ubuntu
src_dir=/src
dst_dir=/dst</pre>
<h3>Do the rsync</h3>
<p>We are using the rsync options</p>
<ul>
<li><strong>P</strong> Keep partial transferred files and Show Progress</li>
<li><strong>H</strong> Preserve Hard Links</li>
<li><strong>A</strong> Preserve ACLs</li>
<li><strong>X</strong> Preserve extended attributes</li>
<li><strong>a</strong> Archive mode</li>
<li><strong>z</strong> Compress files for transfer</li>
</ul>
<pre>rsync -PHAXaz --rsh "ssh -i /home/${src_user}/.ssh/id_${dst_keypair}" --rsync-path "sudo rsync" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/</pre>
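<p>The fiddly part is the nested quoting: the outer shell hands one quoted string to --rsh (the ssh transport, with the destination key) and another to --rsync-path (the command the <em>remote</em> side runs, so sudo happens there). Here is a sketch that just assembles and prints the command; all the values are the example ones from above.</p>

```shell
# Example values from the variables set earlier in this post
src_user=ubuntu
dst_user=ubuntu
dst_keypair=runa-production-us-east
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com
src_dir=/src
dst_dir=/dst

# --rsh: how rsync reaches the destination; --rsync-path: run rsync under
# sudo on the remote side so it can write root-owned files.
cmd="rsync -PHAXaz --rsh \"ssh -i /home/${src_user}/.ssh/id_${dst_keypair}\" --rsync-path \"sudo rsync\" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/"
echo "$cmd"
```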
<h2>If you want to do the rsync from your local host</h2>
<p>I found that I still had to log into the source instance first:</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}</pre>
<p>and then on the source instance do:</p>
<pre>sudo ssh -i /home/${src_user}/.ssh/id_${dst_keypair} ${dst_user}@${dst_public_fqdn}</pre>
<p>and accept the &#8220;<em>The authenticity of host</em>&#8221; prompt for the first time, so the destination host is added to the sudo user&#8217;s known hosts.<br />
Then, back on your local host, you can issue the remote command that runs on the source instance and rsyncs to the destination host:</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo "rsync -PHAXaz --rsh \"ssh -i /home/${src_user}/.ssh/id_${dst_keypair}\" --rsync-path \"sudo rsync\" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/"</pre>
<h2>Complete the new AMI from your Local Host</h2>
<p>The remaining steps will be done back on your local host. This assumes that the shell variables we set up earlier are still there.</p>
<h3>Some Cleanup for new Region</h3>
<p>Ubuntu ties its apt sources to the region you are in, so we have to update the apt sources for the new region.<br />
We&#8217;ll do this by chrooting into the mounted /dst directory and running commands as if they were being run on an instance booted from the /dst image. We might as well bring the packages up to date at the same time.</p>
<pre># Allow network access from chroot environment
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo cp /etc/resolv.conf $dst_dir/etc/

# Upgrade the system and install packages
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t proc none /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t devpts none /dev/pts

cat &lt;&lt;EOF &gt; /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
scp -i $dst_fullpath_keypair /tmp/policy-rc.d ${dst_user}@${dst_public_fqdn}:/tmp
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mv /tmp/policy-rc.d $dst_dir/usr/sbin/policy-rc.d

ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} chmod 755 $dst_dir/usr/sbin/policy-rc.d

# This has to be done to set up the Locale &amp; apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir /usr/bin/ec2-set-defaults

# Update the apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get update

# Optionally update the packages
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get dist-upgrade -y

# Optionally update your gems
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update --system
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update</pre>
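<p>The policy-rc.d file dropped in above is what keeps apt from starting daemons inside the chroot: invoke-rc.d consults /usr/sbin/policy-rc.d and treats exit status 101 as &#8220;action forbidden&#8221;. A quick local check that the guard script behaves as intended (using a temp file rather than the real path):</p>

```shell
# Recreate the guard in a temp file and confirm it exits with status 101,
# the code invoke-rc.d interprets as "do not start this service".
guard=$(mktemp)
printf '#!/bin/sh\nexit 101\n' > "$guard"
chmod 755 "$guard"
status=0
"$guard" || status=$?
echo "policy-rc.d exit status: $status"
rm -f "$guard"
```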
<h4>Clean up from the building of the image</h4>
<pre>ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chroot $dst_dir umount /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir umount /dev/pts
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E rm -f $dst_dir/usr/sbin/policy-rc.d</pre>
<h3>There are a few more shell variables we&#8217;ll need</h3>
<p>I got the kernel and ramdisk from the destination instance, since it was launched from the alestic.com us-east-1 equivalent of the us-west-1 base AMI we are copying from.</p>
<pre># Some info for creating the name and description
codename=karmic
release=9.10
tag=server

# Make sure you set this as appropriate
# 64bit
arch=x86_64

# You will need to set the aki and ari values based on the actual base AMI you used
# They will be different for different regions.  These are set for x86_64 and us-east-1
ebsopts="--kernel=${dst_kernel} --ramdisk=${dst_ramdisk}"
ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"

now=$(date +%Y%m%d-%H%M)
# Make this specific to what you are making
chef_version="0.8.6"
prefix=runa-chef-${chef_version}-ubuntu-${release}-${codename}-${tag}-${arch}-${now}
description="Runa Chef ${chef_version} Ubuntu $release $codename $tag $arch $now"</pre>
<h3>Snapshot the Destination Volume and register the new AMI in the destination region</h3>
<pre># Unmount the destination filesystem
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo umount $dst_dir

# Detach the Destination Volume (it may speed up the snapshot)
ec2-detach-volume --region $dst_region "$dst_volumeid"

# Make the snapshot
dst_snapshotid=$(ec2-create-snapshot --region $dst_region -d "$description" $dst_volumeid | cut -f2)

# Wait for snapshot to complete. This can take a while
while ec2-describe-snapshots --region $dst_region "$dst_snapshotid" | grep -q pending
  do echo -n .; sleep 1; done

# Register the Destination Snapshot as a new AMI in the Destination Region
new_ami=$(ec2-register \
  --region $dst_region \
  --architecture $arch \
  --name "$prefix" \
  --description "$description" \
  $ebsopts \
  --snapshot "$dst_snapshotid")
echo $new_ami</pre>
<h2>Conclusion</h2>
<p>You should now have a shiny new AMI in your destination region. Use the value of $new_ami to start a new instance in your destination region using your favorite tool or technique.</p><p>The post <a href="https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/">Copy an EBS AMI image to another Amazon EC2 Region</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/feed/</wfw:commentRss>
			<slash:comments>13</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">551</post-id>	</item>
		<item>
		<title>Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</title>
		<link>https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Wed, 03 Mar 2010 06:50:57 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Ruby / Rails]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=513</guid>

					<description><![CDATA[<p>Updates Mar 3, 2010 Added call to script ec2-set-defaults that is normally called on ec2 init that sets the locale and apt sources for EC&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/">Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Updates</h2>
<ul>
<li><strong>Mar 3, 2010</strong> Added a call to the script <em>ec2-set-defaults</em>, normally called at ec2 init, which sets the locale and apt sources for the EC2 Availability Zone</li>
</ul>
<h2>Introduction</h2>
<p>Opscode has officially released 0.8.x of Chef. It is now even more fabulous. I&#8217;ve been using the pre-release version for the last couple of months, and it is rock steady and very powerful. I&#8217;ll have a post up soon on how I used it to deploy a fairly complicated cloud stack: multiple Rails/Mysql/Nginx/Unicorn/Postfix apps on the front end, and a back end made up of a Clojure/Swarmiji distributed-processing swarm, HBase/Hadoop, Redis, and RabbitMQ.</p>
<p>But first, I needed to upgrade my Amazon EC2 AMIs for the officially released Chef 0.8.x. I also wanted to try the EBS Boot image as a basis for the AMI.</p>
<p>This is an update to my earlier post, <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/" target="_blank">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a>, but now using the official Opscode 0.8.x Gems instead of building your own. A lot of the content is the same, but you can consider this as mostly superseding the older post except where mentioned otherwise. This version will use EBS Boot AMIs as per Eric Hammond&#8217;s tutorial <a href="http://alestic.com/2010/01/ec2-ebs-boot-ubuntu" target="_blank">Building EBS Boot AMIs Using Canonical&#8217;s Downloadable EC2 Images</a>. Much of this blog post is taken from Eric&#8217;s, but in the context of creating a Chef Client base AMI and a Chef Server. Note that <a href="http://thecloudmarket.com/owner/345069653647--opscode" target="_blank">Opscode now has their own AMIs,</a> including ones for Chef 0.8.4, but as of this writing they do not have AMIs for Amazon us-west.</p>
<h2>Setup</h2>
<h3>Prerequisites</h3>
<p>On your host development machine (i.e. your laptop or whatever machine you are developing from) you should have already installed:</p>
<ul>
<li>ec2-api-tools and ec2-ami-tools (these assume you have a modern Java run time setup)</li>
<li>chef-0.8.4 or later chef client gem (which implies the entire ruby 1.8.x and rubygems toolchain)</li>
</ul>
<h3>Set some Shell variables on host machine</h3>
<p>To make these instructions work as a cookbook, we&#8217;ll set some shell variables once; all the later commands use these variables, so you can simply cut and paste the commands into your shell.</p>
<pre>keypair=id_runa-staging-us-west
fullpath_keypair=~/.ssh/runa/id_runa-staging-us-west
availability_zone=us-west-1a
instance_type=m1.large
region=us-west-1

# Pick one of these two AMIs (Note that it will be different for different Amazon Regions)
# 32bit AMI
origin_ami=ami-fd5100b8
#64bit AMI
origin_ami=ami-ff5100ba</pre>
<h3>Start up an instance and capture the instanceid</h3>
<pre>instanceid=$(ec2-run-instances \
  --key $keypair \
  --availability-zone $availability_zone \
  --instance-type $instance_type \
  $origin_ami \
  --region $region  |
  egrep ^INSTANCE | cut -f2)
echo "instanceid=$instanceid"</pre>
<h3>Wait for the instance to move to the “running” state</h3>
<pre>while host=$(ec2-describe-instances --region $region "$instanceid" |
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $host; do echo -n .; sleep 1; done
echo host=$host</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo host=$host
host=ec2-184-72-2-93.us-west-1.compute.amazonaws.com</pre>
<h3>Upload your certs</h3>
<p>This assumes that your Amazon certs are in ~/.ec2</p>
<pre>rsync                            \
 --rsh="ssh -i $fullpath_keypair" \
 --rsync-path="sudo rsync"      \
 ~/.ec2/{cert,pk}-*.pem         \
 ubuntu@$host:/mnt/</pre>
<h3>Connect to the instance</h3>
<pre>ssh -i $fullpath_keypair ubuntu@$host</pre>
<h3>Update the Amazon ec2 tools on the instance</h3>
<pre>export DEBIAN_FRONTEND=noninteractive
echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
  sudo tee /etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list &amp;&amp;
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873 &amp;&amp;
sudo apt-get update &amp;&amp;
sudo -E apt-get dist-upgrade -y &amp;&amp;
sudo -E apt-get install -y ec2-api-tools</pre>
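<p>A side note on the pattern used above: the reason for "echo ... | sudo tee FILE" rather than "sudo echo ... &gt; FILE" is that the redirection would be performed by your non-root shell before sudo ever runs, and so would fail on a root-owned file; tee does the writing from inside the sudo. Here is the tee half of the pattern, with a temp file standing in for the root-owned sources list so no sudo is needed:</p>

```shell
# tee writes its stdin to the file and also echoes it, just as in the
# sources.list.d line above.
listfile=$(mktemp)
echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
  tee "$listfile"
```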
<h3>Set some parameters on instance shell environment</h3>
<p>Again this makes it easier to cut and paste the instructions.</p>
<pre>codename=karmic
release=9.10
tag=server
region=us-west-1
availability_zone=us-west-1a
if [ $(uname -m) = 'x86_64' ]; then
  arch=x86_64
  arch2=amd64
  # You will need to set the aki and ari values based on the actual base AMI you used
  # It will be different for different regions.  These are set for us-west-1
  ebsopts="--kernel=aki-7f3c6d3a --ramdisk=ari-cf2e7f8a"
  ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"
else
  arch=i386
  arch2=i386
  # You will need to set the aki and ari values based on the actual base AMI you used
  # It will be different for different regions. These are set for us-west-1
  ebsopts="--kernel=aki-773c6d32 --ramdisk=ari-c12e7f84"
  ebsopts="$ebsopts --block-device-mapping /dev/sda2=ephemeral0"
fi</pre>
<h3>Download and unpack the latest released Ubuntu server image file</h3>
<p>This contains the output of vmbuilder as run by Canonical.</p>
<pre>imagesource=http://uec-images.ubuntu.com/releases/$codename/release/unpacked/ubuntu-$release-$tag-uec-$arch2.img.tar.gz
image=/mnt/$codename-$tag-uec-$arch2.img
imagedir=/mnt/$codename-$tag-uec-$arch2
wget -O- $imagesource |
  sudo tar xzf - -C /mnt
sudo mkdir -p $imagedir
sudo mount -o loop $image $imagedir</pre>
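<p>Note the shape of the wget line: wget -O- streams the tarball to stdout and tar xzf - unpacks it as it arrives, so the compressed archive never has to sit on disk next to its contents. The same pipeline with a tiny local archive (and cat) standing in for the download:</p>

```shell
# Build a small stand-in archive
workdir=$(mktemp -d)
mkdir -p "$workdir/payload"
echo "hello" > "$workdir/payload/file.txt"
tar czf "$workdir/image.tar.gz" -C "$workdir" payload

# Stream-unpack it, as wget -O- | tar xzf - does with the real image
extractdir=$(mktemp -d)
cat "$workdir/image.tar.gz" | tar xzf - -C "$extractdir"
ls "$extractdir/payload"
```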
<h3>Bring the packages on the instance up to date</h3>
<pre># Allow network access from chroot environment
sudo cp /etc/resolv.conf $imagedir/etc/

# Fix what I consider to be a bug in vmbuilder
sudo rm -f $imagedir/etc/hostname

# Add multiverse
sudo perl -pi -e 's%(universe)$%$1 multiverse%' \
$imagedir/etc/ec2-init/templates/sources.list.tmpl

# Add Alestic PPA for runurl package (handy in user-data scripts)
echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main" |
sudo tee $imagedir/etc/apt/sources.list.d/alestic-ppa.list
sudo chroot $imagedir \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571

# Add ubuntu-on-ec2/ec2-tools PPA for updated ec2-ami-tools
echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
sudo tee $imagedir/etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list
sudo chroot $imagedir \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873

# Upgrade the system and install packages
sudo chroot $imagedir mount -t proc none /proc
sudo chroot $imagedir mount -t devpts none /dev/pts

cat &lt;&lt;EOF &gt; /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
sudo mv /tmp/policy-rc.d $imagedir/usr/sbin/policy-rc.d

chmod 755 $imagedir/usr/sbin/policy-rc.d
export DEBIAN_FRONTEND=noninteractive

# It seems this has to be done to set up the Locale &amp; apt sources
sudo -E chroot $imagedir /usr/bin/ec2-set-defaults

# Update the apt sources and packages
sudo chroot $imagedir apt-get update &amp;&amp;
sudo -E chroot $imagedir apt-get dist-upgrade -y &amp;&amp;
sudo -E chroot $imagedir apt-get install -y runurl ec2-ami-tools</pre>
<h2>Install Chef Client and other customizations</h2>
<h3>Install Ruby and needed packages</h3>
<pre><code>sudo -E chroot $imagedir apt-get -y install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb \
build-essential wget ssl-cert git-core rake librspec-ruby libxml-ruby \
thin couchdb zlib1g-dev libxml2-dev emacs23-nox</code></pre>
<h4>Install Rubygems</h4>
<p>Rubygems will be installed from source, since debian/ubuntu try to control rubygems upgrades. If you don&#8217;t care, you can install it via apt-get install rubygems.</p>
<pre><code>cd $imagedir/tmp
wget http://rubyforge.org/frs/download.php/69365/rubygems-1.3.6.tgz
tar zxf rubygems-1.3.6.tgz
cd rubygems-1.3.6
sudo -E chroot $imagedir ruby /tmp/rubygems-1.3.6/setup.rb
cd ..
sudo rm -rf rubygems-1.3.6
sudo -E chroot $imagedir ln -sfv /usr/bin/gem1.8 /usr/bin/gem
sudo -E chroot $imagedir gem sources -a http://gems.opscode.com
sudo -E chroot $imagedir gem sources -a http://gemcutter.org
sudo -E chroot $imagedir gem install chef
</code></pre>
<h3>Use Opscode Chef Solo Bootstrap to configure the Chef Client</h3>
<p>The following will set up all the default paths and directories, as well as install and configure runit to start and monitor the chef-client. Originally I shied away from runit, but this time I&#8217;m staying as Opscode-vanilla as possible, and they like runit.</p>
<h4>Create the solo.rb file</h4>
<p>All of the following files should be created under $imagedir, since we are going to run the bootstrap chrooted to $imagedir.</p>
<p>Create $imagedir/solo.rb with an editor and put in the following:</p>
<pre>file_cache_path "/tmp/chef-solo"
cookbook_path "/tmp/chef-solo/cookbooks"
recipe_url "http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz"</pre>
<h4>Create the chef.json file</h4>
<p>Create $imagedir/chef.json with the following. (set the server_fqdn to the chef server you are using):</p>
<pre>{
  "bootstrap": {
    "chef": {
      "url_type": "http",
      "init_style": "runit",
      "path": "/srv/chef",
      "serve_path": "/srv/chef",
      "server_fqdn": "chef-server-staging.runa.com"
    }
  },
  "run_list": [ "recipe[bootstrap::client]" ]
}</pre>
<h4>Run the chef-solo command</h4>
<pre>sudo -E chroot $imagedir chef-solo -c solo.rb -j chef.json \
  -r http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz</pre>
<p>I had to run it 3 times before it completed with no errors.<br />
After it succeeds, clean up the chef-solo files:</p>
<pre>sudo rm $imagedir/{solo.rb,chef.json}</pre>
<h3>Update the client config file</h3>
<p>The Chef Solo Client bootstrap process creates an /etc/chef/client.rb that is not ideal for Amazon EC2. The following will replace that:</p>
<pre><code>mkdir -p /etc/chef
chown root:root /etc/chef
chmod 755 /etc/chef
</code></pre>
<p>Put the following in /etc/chef/client.rb:</p>
<pre><code>
# Chef Client Config File
# Automatically grabs configuration from ohai ec2 metadata.

require 'ohai'
require 'json'

o = Ohai::System.new
o.all_plugins
chef_config = JSON.parse(o[:ec2][:userdata])
if chef_config.kind_of?(Array)
  chef_config = chef_config[o[:ec2][:ami_launch_index]]
end

log_level        :info
log_location     STDOUT
node_name        o[:ec2][:instance_id]
chef_server_url  chef_config["chef_server"]

unless File.exists?("/etc/chef/client.pem")
  File.open("/etc/chef/validation.pem", "w", 0600) do |f|
    f.print(chef_config["validation_key"])
  end
end

if chef_config.has_key?("attributes")
  File.open("/etc/chef/client-config.json", "w") do |f|
    f.print(JSON.pretty_generate(chef_config["attributes"]))
  end
  json_attribs "/etc/chef/client-config.json"
end

validation_key "/etc/chef/validation.pem"
validation_client_name chef_config["validation_client_name"]

Mixlib::Log::Formatter.show_time = true
</code></pre>
<h2>Finish creating the new image</h2>
<h3>Clean up from the building of the image</h3>
<pre>sudo chroot $imagedir umount /proc
sudo chroot $imagedir umount /dev/pts
sudo rm -f $imagedir/usr/sbin/policy-rc.d</pre>
<h3>Copy the image files to a new EBS volume, snapshot and register the snapshot</h3>
<pre>size=15 # root disk in GB
now=$(date +%Y%m%d-%H%M)
prefix=runa-chef-0.8.4-ubuntu-$release-$codename-$tag-$arch-$now
description="Runa Chef 0.8.4 Ubuntu $release $codename $tag $arch $now"
export EC2_CERT=$(echo /mnt/cert-*.pem)
export EC2_PRIVATE_KEY=$(echo /mnt/pk-*.pem)

volumeid=$(ec2-create-volume --region $region --size $size \
  --availability-zone $availability_zone | cut -f2)

instanceid=$(wget -qO- http://instance-data/latest/meta-data/instance-id)

ec2-attach-volume --region $region --device /dev/sdi --instance "$instanceid" "$volumeid"

while [ ! -e /dev/sdi ]; do echo -n .; sleep 1; done

sudo mkfs.ext3 -F /dev/sdi
ebsimage=$imagedir-ebs
sudo mkdir $ebsimage
sudo mount /dev/sdi $ebsimage

sudo tar -cSf - -C $imagedir . | sudo tar xvf - -C $ebsimage
sudo umount $ebsimage

ec2-detach-volume --region $region "$volumeid"
snapshotid=$(ec2-create-snapshot --region $region "$volumeid" | cut -f2)

ec2-delete-volume --region $region "$volumeid"

# This takes a while
while ec2-describe-snapshots --region $region "$snapshotid" | grep -q pending
  do echo -n .; sleep 1; done

ec2-register \
  --region $region \
  --architecture $arch \
  --name "$prefix" \
  --description "$description" \
  $ebsopts \
  --snapshot "$snapshotid"</pre>
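<p>The tar-to-tar pipe in the middle of that block copies the image tree file-by-file onto the EBS volume rather than imaging the device: -S preserves sparse files, and -C sets the working directory on each side of the pipe. A local sketch of the same copy between two temp directories:</p>

```shell
# Copy a small tree the same way the image is copied onto the EBS volume.
srcdir=$(mktemp -d)
dstdir=$(mktemp -d)
mkdir -p "$srcdir/etc"
echo "test" > "$srcdir/etc/conf"
tar -cSf - -C "$srcdir" . | tar xf - -C "$dstdir"
ls "$dstdir/etc"
```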
<h2>Afterward</h2>
<p>That will get you an AMI that you can now use as a chef-client. You can use the directions from the section <em>Creating a Chef Server from your new Image</em> in the previous article: <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/" target="_blank">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a>.</p><p>The post <a href="https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/">Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">513</post-id>	</item>
		<item>
		<title>Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</title>
		<link>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/</link>
					<comments>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Tue, 12 Jan 2010 09:00:21 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Runa]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=333</guid>

					<description><![CDATA[<p>Changes Since Original 1/13/10: Fix various minor inaccuracies and improved description on how to set up the chef-server. Also removed nanite as a requirement (its&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/">Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Changes Since Original</h2>
<ul>
<li>1/13/10: Fix various minor inaccuracies and improved description on how to set up the chef-server. Also removed nanite as a requirement (its no longer used)</li>
<li>1/17/10: Add the requirement to build and install mixlib-authentication for the chef-client</li>
<li>1/21/10: Added a mkdir for /var/log/chef</li>
<li>1/22/10: Added step to insure that /tmp permissions are set</li>
</ul>
<h2>Introduction</h2>
<p>Here&#8217;s my experience setting up an Amazon EC2 AMI and Instance for a Chef Server and Client. It is based mostly on <a href="http://loftninjas.org/" target="_blank">Bryan Mclellan (btm)</a>&#8216;s post of Nov 24, 2009 <a href="http://blog.loftninjas.org/2009/11/24/installing-chef-08-alpha-on-ubuntu-karmic/" target="_blank">Installing Chef 0.8 alpha on Ubuntu Karmic</a> and  his more up to date <a href="http://gist.github.com/242523" target="_blank">GIST: chef 0.8 alpha installation</a>. It has a slightly different focus and is a bit stale if you are building your own 0.8 gems from the <a href="http://github.com/opscode/chef" target="_blank">source</a>.</p>
<h2>Instantiate an Amazon EC2 Instance</h2>
<p>We&#8217;ll start with the Canonical Ubuntu 9.10 Karmic AMI. I always go to <a href="http://alestic.com/" target="_blank">Eric Hammond&#8217;s site  alestic.com</a> to get the pointers to the right AMIs. In this case we&#8217;re using a 32bit image for the US-West Region: ami-7d3c6d38 US-East 32bit: ami-1515f67c. You can use the US-West 64bit image ami-7b3c6d3e, US-East 64bit: ami-ab15f6c2</p>
<p>Start the instance from your local dev machine using the command line ec2-api-tools (available as a package or directly from <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351" target="_blank">Amazon</a>) or using something like the Firefox <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609" target="_blank">Elasticfox</a> extension, and then ssh into the instance so that you can do the following steps on it. For the sake of this example, let&#8217;s say that the Public DNS name of the instance you started is ec2-204-222-170-10.us-west-1.compute.amazonaws.com and that the ssh keypair you associated with this new instance is now on your local dev machine in ~/.ssh/gsg-keypair</p>
<h2>Prerequisite preparation</h2>
<p>The first set of steps needs to be done on the instance you just created, so log in via ssh:</p>
<pre>ssh -i ~/.ssh/gsg-keypair ec2-204-222-170-10.us-west-1.compute.amazonaws.com</pre>
<h3>If on Amazon us-west</h3>
<p>There is a bug in the current us-west Canonical AMI where it does not use the us-west apt server. So you have to correct the apt sources.list:</p>
<pre><code>sudo sed -i.bak '1,$s/us.ec2.archive.ubuntu.com/us-west-1.ec2.archive.ubuntu.com/' \
/etc/apt/sources.list</code></pre>
<h3>For all cases</h3>
<pre><code>sudo sed -i.bak2 '1,$s/universe/universe multiverse/' /etc/apt/sources.list
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y install emacs23 # Of course this is the first package to install!</code></pre>
<pre><code># Will need these to manipulate ec2 images
sudo apt-get -y install ec2-api-tools ec2-ami-tools </code></pre>
<h3>Set up the ruby environment and install rubygems</h3>
<h4>Install Ruby and needed packages</h4>
<pre><code>sudo apt-get -y install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb \
build-essential wget ssl-cert git-core rake librspec-ruby libxml-ruby \
thin couchdb zlib1g-dev libxml2-dev</code></pre>
<h4>Install Rubygems</h4>
<p>Rubygems will be installed from source, since the Debian/Ubuntu packages try to control rubygems upgrades. If you don&#8217;t care, you can install it via <code>apt-get install rubygems</code>.</p>
<pre><code>cd /tmp
wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
tar zxf rubygems-1.3.5.tgz
cd rubygems-1.3.5
sudo ruby setup.rb
sudo ln -sfv /usr/bin/gem1.8 /usr/bin/gem
sudo gem sources -a http://gems.opscode.com
sudo gem sources -a http://gemcutter.org</code></pre>
<h4>Install Prerequisite Gems</h4>
<pre><code>sudo gem install cucumber merb-core jeweler uuidtools \
json libxml-ruby --no-ri --no-rdoc</code></pre>
<h3>Building and Installing Chef Related Gems</h3>
<p>Until there are final 0.8.x Chef gems, you will have to build them on your local machine and upload them to this instance. On your dev machine (this example builds things in ~/src, but it could be anywhere appropriate), follow these instructions to build all the gems and install the ones you need locally. You will use your local dev machine to develop and manage cookbooks and to manage a remote chef-server:</p>
<pre><code>mkdir ~/src
cd ~/src
git clone git://github.com/opscode/chef.git
git clone git://github.com/opscode/ohai.git
git clone git://github.com/opscode/mixlib-log.git
git clone git://github.com/opscode/mixlib-authentication.git
# Need to get mixlib-log for client &amp; server and
# mixlib-authentication for the client from git till the 1.1.0 update hits
# See http://tickets.opscode.com/browse/CHEF-823
cd mixlib-log
sudo rake install
cd ../mixlib-authentication
sudo rake install
cd ../ohai
sudo rake install
cd ../chef
rake gem
# Now cd into ~/src/chef/chef to install the chef client/dev gem on your local machine
cd chef
rake install </code></pre>
<p>Upload the gems needed for the client to your instance. From ~/src on your local dev machine do:</p>
<pre>scp -i ~/.ssh/gsg-keypair chef/chef/pkg/chef-0.8.0.gem  ohai/pkg/ohai-0.3.7.gem \
mixlib-authentication/pkg/mixlib-authentication-1.1.0.gem \
mixlib-log/pkg/mixlib-log-1.1.0.gem  ec2-204-222-170-10.us-west-1.compute.amazonaws.com:</pre>
<h2>Set up the Chef Client on the new Instance</h2>
<p>Now back in your home directory on the instance ec2-204-222-170-10.us-west-1.compute.amazonaws.com install the gems you just copied over:</p>
<pre><code>sudo gem install mixlib-log-1.1.0.gem ohai-0.3.7.gem
sudo gem install chef-0.8.0.gem </code></pre>
<h3>Create the client config file</h3>
<pre><code>sudo mkdir -p /var/log/chef
sudo mkdir -p /etc/chef
sudo chown root:root /etc/chef
sudo chmod 755 /etc/chef
</code></pre>
<p>Put the following in /etc/chef/client.rb:</p>
<pre><code># Chef Client Config File

require 'ohai'
require 'json'

o = Ohai::System.new
o.all_plugins
chef_config = JSON.parse(o[:ec2][:userdata])
if chef_config.kind_of?(Array)
  chef_config = chef_config[o[:ec2][:ami_launch_index]]
end

log_level        :info
log_location     "/var/log/chef/client.log"
chef_server_url  chef_config["chef_server"]
registration_url chef_config["chef_server"]
openid_url       chef_config["chef_server"]
template_url     chef_config["chef_server"]
remotefile_url   chef_config["chef_server"]
search_url       chef_config["chef_server"]
role_url         chef_config["chef_server"]
client_url       chef_config["chef_server"]

node_name        o[:ec2][:instance_id]

unless File.exists?("/etc/chef/client.pem")
  File.open("/etc/chef/validation.pem", "w") do |f|
    f.print(chef_config["validation_key"])
  end
end

if chef_config.has_key?("attributes")
  File.open("/etc/chef/client-config.json", "w") do |f|
    f.print(JSON.pretty_generate(chef_config["attributes"]))
  end
  json_attribs "/etc/chef/client-config.json"
end

validation_key "/etc/chef/validation.pem"
validation_client_name chef_config["validation_client_name"]

Mixlib::Log::Formatter.show_time = true</code></pre>
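<p>The client.rb above expects the instance&#8217;s EC2 user data to be a JSON object (or an array of such objects, one per AMI launch index) carrying the Chef server URL, the validation client name, the validation key material, and optionally extra attributes. A hypothetical example of that user data (the host, port, names, and key are placeholders, not values from this post):</p>

```json
{
  "chef_server": "http://ec2-204-203-51-20.us-west-1.compute.amazonaws.com:4000",
  "validation_client_name": "example-validator",
  "validation_key": "-----BEGIN RSA PRIVATE KEY-----\n...key material...\n-----END RSA PRIVATE KEY-----\n",
  "attributes": { "run_list": [ "role[base]" ] }
}
```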
<h4>Set up the /etc/init.d/chef-client</h4>
<p>Copy the example init.d script. (You can also use runit instead, but we&#8217;re not going to describe that here.)</p>
<pre><code>sudo cp /usr/lib/ruby/gems/1.8/gems/chef-0.8.0/distro/debian/etc/init.d/chef-client /etc/init.d
cd /etc/init.d
sudo update-rc.d chef-client defaults</code></pre>
<h4>Create an Init script to set /tmp to proper permissions</h4>
<p>It looks like the Canonical Images will not  have /tmp with proper permissions if you exclude /tmp from your bundle process. Eric Hammond <a href="https://developer.amazonwebservices.com/connect/message.jspa?messageID=160098" target="_blank">recommends</a> doing the following.</p>
<p>Create a file /etc/init.d/ec2-mkdir-tmp with the following contents:</p>
<pre>#!/bin/sh
#
# ec2-mkdir-tmp Create /tmp if missing (as it's nice to bundle without it).
#
mkdir -p    /tmp
chmod 01777 /tmp</pre>
<p>Then set up the /etc/rc dirs to launch this on boot:</p>
<pre>sudo chmod a+x /etc/init.d/ec2-mkdir-tmp
sudo ln -s /etc/init.d/ec2-mkdir-tmp /etc/rcS.d/S36ec2-mkdir-tmp</pre>
<h2>Build the EC2 Image</h2>
<p>The always amazingly helpful <a href="http://www.anvilon.com/" target="_blank">Eric Hammond</a> has a post, <a href="http://alestic.com/2009/06/ec2-ami-bundle" target="_blank">Creating a New Image for EC2 by Rebundling a Running Instance</a>, that describes the basics of how to do this. The following is pretty much a direct synopsis with minimal explanation. See his blog post for more details.</p>
<h3>Clean up potential security holes</h3>
<p>Remove stuff you don&#8217;t want to freeze into your image.</p>
<pre><code>sudo rm -f /root/.*hist* $HOME/.*hist*
sudo rm -f /var/log/*.gz</code></pre>
<h3>Copy AWS Certs to Instance</h3>
<p>Back on your local development system, copy your Amazon certificates to the instance.</p>
<pre><code>
remotehost=&lt;ec2-instance-hostname&gt;
remoteuser=ubuntu
scp -i &lt;private-ssh-key&gt; \
  &lt;path-to-certs&gt;/{cert,pk}-*.pem \
  $remoteuser@$remotehost:/tmp
</code></pre>
<h3>Create the new Image on the Instance</h3>
<p>Back on the ec2 instance, you&#8217;ll do the following to create the image.</p>
<h4>Define where to store the image on S3</h4>
<p>This assumes you have an S3 account set up on AWS. You don&#8217;t have to have created the bucket already. Set some bash variables that will be used by the commands that follow. You should set the prefix to something meaningful; below is what I used as an example. You&#8217;ll want to make it unique to your environment, and the bucket name must be globally unique across all of Amazon S3.</p>
<pre><code>bucket=runa-west-amis
prefix=runa-ubuntu-9.10-i386-20100101-base</code></pre>
<h4>Define your AWS credentials and target processor</h4>
<pre><code>export AWS_USER_ID=&lt;your-value&gt;
export AWS_ACCESS_KEY_ID=&lt;your-value&gt;
export AWS_SECRET_ACCESS_KEY=&lt;your-value&gt;

if [ $(uname -m) = 'x86_64' ]; then
  arch=x86_64
else
  arch=i386
fi
</code></pre>
<h4>Bundle the files</h4>
<p>This also runs on the current instance and will bundle everything on the instance file system, except for the dirs specified with the -e flag, into a copy of the image under /mnt:</p>
<pre><code>sudo -E ec2-bundle-vol           \
  -r $arch                       \
  -d /mnt                        \
  -p $prefix                     \
  -u $AWS_USER_ID                \
  -k /tmp/pk-*.pem               \
  -c /tmp/cert-*.pem             \
  -s 10240                       \
  -e /mnt,/tmp,/root/.ssh,/home/ubuntu/.ssh
</code></pre>
<h5>If you are deploying to US-West-1 AWS Region</h5>
<p>It looks like the Amazon EC2 AMI tools are not fully aware of us-west yet, so you have to do this extra step for now. You&#8217;ll have to change the <code>--kernel</code> and <code>--ramdisk</code> values to the ones appropriate for your kernel. You can inspect the values used by the AMI you booted the original instance from, either with ElasticFox or with this command (specify the AMI you want to check and the region it&#8217;s in):</p>
<pre>ec2-describe-images ami-7d3c6d38   -C /tmp/cert-*.pem -K /tmp/pk-*.pem --region us-west-1</pre>
<p>Then execute the following command, specifying the right kernel and ramdisk:</p>
<pre><code>sudo -E ec2-migrate-manifest        \
  -c /tmp/cert-*.pem             \
  -k /tmp/pk-*.pem               \
  -m /mnt/$prefix.manifest.xml   \
  --access-key $AWS_ACCESS_KEY_ID  \
  --secret-key $AWS_SECRET_ACCESS_KEY \
  --kernel aki-773c6d32          \
  --ramdisk ari-713c6d34         \
  --region us-west-1</code></pre>
<h4>Upload the bundle to a bucket on S3:</h4>
<pre><code>sudo -E ec2-upload-bundle        \
    -b $bucket                   \
    -m /mnt/$prefix.manifest.xml \
    -a $AWS_ACCESS_KEY_ID        \
    -s $AWS_SECRET_ACCESS_KEY    \
    --location us-west-1
</code></pre>
<p>You may be prompted with something like:</p>
<pre><code>You are bundling in one region, but uploading to another. If the kernel or ramdisk associated with this AMI are not in the target region, AMI registration will fail.
You can use the ec2-migrate-manifest tool to update your manifest file with a kernel and ramdisk that exist in the target region.
Are you sure you want to continue? [y/N]
</code></pre>
<p>You should enter <code>y</code> and press return to accept.</p>
<h4>Register the AMI</h4>
<p>Back on your local development machine:</p>
<pre><code>ec2-register $bucket/$prefix.manifest.xml --region us-west-1</code></pre>
<p>The output of this will be the AMI ID of your new image. You can use it to launch instances of your new AMI.</p>
<p>You now have a private AMI image you can start just like any other image. If you want to make it public:</p>
<pre><code>ec2-modify-image-attribute &lt;ami-id&gt; -l -a all</code></pre>
<h2>Using the new AMI Image</h2>
<p>You can now use this AMI as the basis for Chef clients and also as the basis to create a Chef Server. Use the Amazon EC2 tools, ElasticFox, or whatever your favorite tool is for managing EC2 instances to first create a Chef Server. After that you can create clients and have them load their roles and recipes from the Chef Server. Once you have a Chef Server, you can use the knife ec2 instance commands to create user data that includes a run list, credentials, and other JSON that can be passed to the general EC2 tools to build specific instances.</p>
<h3>Creating a Chef Server from your new Image</h3>
<p>Using an EC2 tool like the ec2-api-tools or ElasticFox, create a new instance based on the AMI created earlier. You should use at least a c1.medium, as the m1.small is just too painfully wimpy to use. Assume the new instance has the Public DNS name: <code>ec2-204-203-51-20.us-west-1.compute.amazonaws.com</code><br />
Copy the chef server gems from the ~/src directory in your local dev environment to the new instance:</p>
<pre><code>scp -i ~/.ssh/gsg-keypair chef/*/pkg/*.gem \
ec2-204-203-51-20.us-west-1.compute.amazonaws.com:</code></pre>
<p>ssh to the new instance and do the following:</p>
<pre><code>sudo gem install chef-server-0.8.0.gem chef-server-api-0.8.0.gem \
chef-server-webui-0.8.0.gem chef-solr-0.8.0.gem</code></pre>
<h4>Set things up to use bootstrap client using chef-solo</h4>
<p>We&#8217;ll be using the last part of BTM&#8217;s GIST, and danielsdeleo (Dan DeLeo)&#8217;s <a href="http://github.com/danielsdeleo/cookbooks/tree/08boot/bootstrap" target="_blank">bootstrap cookbook</a> and chef-solo to set up this initial server.</p>
<pre><code>mkdir -p /tmp/chef-solo
cd /tmp/chef-solo
git clone git://github.com/danielsdeleo/cookbooks.git
cd cookbooks
git checkout 08boot
</code></pre>
<p>Create ~/chef.json:</p>
<pre><code>{
  "bootstrap": {
    "chef": {
      "url_type": "http",
      "init_style": "runit",
      "path": "/srv/chef",
      "serve_path": "/srv/chef",
      "server_fqdn": "localhost"
    }
  },
  "recipes": "bootstrap::server"
}
</code></pre>
<p>Create ~/solo.rb with the following content:</p>
<pre><code>file_cache_path "/tmp/chef-solo"
cookbook_path "/tmp/chef-solo/cookbooks"
# End of ~/solo.rb file
</code></pre>
<p>Run chef-solo, which will execute the chef bootstrap recipes using the bootstrap params in ~/chef.json to actually set up and configure this chef server.</p>
<p>If you installed rubygems with the Ubuntu apt package, you may have to specify the path:</p>
<pre><code>/var/lib/gems/1.8/bin/</code></pre>
<p>instead of:</p>
<pre><code>/usr/bin</code></pre>
<p>for the knife and various chef commands in the following code.</p>
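<p>Alternatively, you can prepend the gem bin directory to your PATH once, and then invoke chef-solo and knife without an explicit path (a convenience on my part, not part of the original instructions):</p>

```shell
# If rubygems came from the Ubuntu package, gem binaries live in
# /var/lib/gems/1.8/bin; put it first so knife/chef-solo resolve there.
export PATH=/var/lib/gems/1.8/bin:$PATH
echo "${PATH%%:*}"
```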
<pre><code>/usr/bin/chef-solo -j ~/chef.json -c ~/solo.rb -l debug</code></pre>
<p>You will see a lot of Debug statements go by and it will take several minutes to complete. It should complete with something like:</p>
<pre><code>[Thu, 14 Jan 2010 00:19:38 +0000] INFO: Chef Run complete in 38.59808 seconds
[Thu, 14 Jan 2010 00:19:38 +0000] DEBUG: Exiting</code></pre>
<h5>Set up basic cookbooks</h5>
<p>The following will install the standard cookbooks on the chef server</p>
<pre><code>cd
git clone git://github.com/opscode/chef-repo.git
cd chef-repo
rm cookbooks/README
git clone git://github.com/opscode/cookbooks.git
</code></pre>
<p>Now upload the standard cookbooks using the credentials set up by the bootstrap process (user chef-webui)</p>
<pre><code>knife cookbook upload --all -u chef-webui \
-k /etc/chef/webui.pem -o cookbooks
</code></pre>
<h5>Start up the Chef Server Web UI</h5>
<p>Due to a bug (<a href="http://tickets.opscode.com/browse/CHEF-839" target="_blank">CHEF-839</a>) you have to run this twice; the first run creates the admin user:</p>
<pre><code>sudo /usr/bin/chef-server-webui -p 4002</code></pre>
<p>But the first time will abort with an error message like:</p>
<pre><code>Loading init file from /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/init-webui.rb
Loading /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/environments/development.rb
~ Loaded slice 'ChefServerWebui' ...
WARN: HTTP Request Returned 404 Not Found: Cannot load user admin
~ Compiling routes...
~ Could not find resource model Node
~ Could not find resource model Client
~ Could not find resource model Role
~ Could not find resource model Search
~ Could not find resource model Cookbook
~ Could not find resource model Client
~ Could not find resource model Databag
~ Could not find resource model DatabagItem
/usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/init-webui.rb:32: uninitialized constant OpenID (NameError)
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `call'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `each'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:99:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/server.rb:172:in `bootup'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/server.rb:42:in `start'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core.rb:173:in `start'
from /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/bin/chef-server-webui:76
from /usr/bin/chef-server-webui:19:in `load'
from /usr/bin/chef-server-webui:19</code></pre>
<p>Then run it again to actually start the WebUI and have it run in the background. You might want to start it in <a href="http://www.gnu.org/software/screen/" target="_blank">screen</a> for now, or possibly redirect its output to a log file. The following example sends the output of the command to a log file; you&#8217;ll want to check that log file after starting to make sure there were no errors.</p>
<pre><code>sudo sh -c '/usr/bin/chef-server-webui -p 4002 &gt; /var/log/chef-server-webui.log' &amp;</code></pre>
<p>If you look at the output of a ps, you&#8217;ll see the shell command above, but the real work is being done by a merb instance with the port you specified (4002):</p>
<pre><code>#ps ax | grep webui
5533 pts/0    S      0:00 sh -c /usr/bin/chef-server-webui -p 4002 &gt; /var/log/chef-server-webui.log
#ps ax | grep merb
3694 ?        Sl     0:55 merb : worker (port 4000)
5534 pts/0    Sl     0:07 merb : worker (port 4002)</code></pre>
<p>The first merb worker is the chef-server itself, the second is the WebUI server.</p>
<h4>Accessing the Chef Web UI</h4>
<p>You can access the Chef Web UI web server using a web browser at the IP address / Public DNS name of this server that was just set up. Assuming the Public DNS is</p>
<pre><code>ec2-204-203-51-20.us-west-1.compute.amazonaws.com</code></pre>
<p>Assuming that you set up this instance to allow access to port 4002 from the IP address of your local dev machine, you should be able to reach the Web UI at</p>
<pre><code>http://ec2-204-203-51-20.us-west-1.compute.amazonaws.com:4002</code></pre>
<p>You can allow access to port 4002 from specific IP address ranges by updating your <a href="http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/distributed-firewall-concepts.html" target="_blank">security group</a>. You can do that with ElasticFox (easy) or via the <a href="http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/distributed-firewall-examples.html" target="_blank">command line tools</a> (a pain for a one-off). Eventually you (or hopefully Opscode) will set up an Apache or nginx reverse proxy, Passenger, or equivalent to allow normal port 80/443 http/https access.</p>
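<p>For the command-line route, the change is a single ec2-authorize call. A sketch with placeholder group and CIDR values, assembled and echoed rather than executed since running it needs live AWS credentials:</p>

```shell
# Hypothetical security-group update: open the WebUI port to one IP.
# (-P protocol, -p port, -s source CIDR are standard ec2-api-tools flags.)
group=default
cidr=203.0.113.5/32   # replace with your dev machine's IP
auth="ec2-authorize $group -P tcp -p 4002 -s $cidr --region us-west-1"
echo "$auth"
```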
<h2>Conclusion</h2>
<p>You should now be able to use knife in your local dev environment to develop cookbooks, upload roles and cookbooks to your new Chef Server, and spin up new chef-cookbook-driven instances. Use the knife documentation from the Opscode main wiki <a href="http://wiki.opscode.com/display/chef/Knife" target="_blank">Knife Page</a>, <strong>NOT</strong> the docs in the Alpha Forums / Getting Started With Opscode / <a href="http://opscode.zendesk.com/forums/58858/entries/53988" target="_blank">Knife &#8211; Commandline API</a>, as the latter is obsolete relative to the version you built from the opscode git repository. There is also a man page, and <code>knife --help</code> gives you pretty much the same correct info as the wiki.</p>
<p>I hope to have a follow-up post on how to do this in more detail.</p>
<p>Feel free to leave comments if you find problems or have questions.</p><p>The post <a href="https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/">Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">333</post-id>	</item>
		<item>
		<title>Building Opscode Chef 0.8.x from HEAD of the git repo</title>
		<link>https://www.ibd.com/howto/using-opscode-chef-0-8-x-alpha-from-head-of-the-git-repo/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Wed, 23 Dec 2009 02:55:44 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Runa]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Bleeding Edge]]></category>
		<category><![CDATA[Iaas]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[Saas]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=324</guid>

					<description><![CDATA[<p>Update: I am having problems using the chef dev tools/client from the HEAD of the chef git repository with the Opscode Alpha Server service. I&#8217;m&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/using-opscode-chef-0-8-x-alpha-from-head-of-the-git-repo/">Building Opscode Chef 0.8.x from HEAD of the git repo</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2><strong>Update</strong><strong>:</strong></h2>
<p>I am having problems using the chef dev tools/client from the HEAD of the chef git repository with the Opscode Alpha Server service. I&#8217;m not sure if it&#8217;s me or if the latest version of the chef client from HEAD is incompatible with the Alpha Server Service. So the following is still useful for understanding how to build from HEAD, but it will not work with the Opscode Alpha SaaS server. It will work with the server you build from HEAD. See the next article <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/" target="_self">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a> for info on creating a Chef client and server on EC2.</p>
<h2>Introduction</h2>
<p><a href="http://www.opscode.com">Opscode</a> is introducing a pretty major set of changes in <a href="http://www.opscode.com/chef/">Chef</a> in the <a href="http://github.com/opscode/chef">0.8 release</a>. It&#8217;s a major step forward that changes how one interacts with Chef (as well as fixing some major bugs that alone make it worth the move). The <a href="http://www.opscode.com/blog/2009/10/07/preview-chef-0-8-and-the-opscode-platform/">Opscode Alpha Program introduces</a> a new service where Opscode runs the actual Chef Server as a service.</p>
<p>This post will describe setting up your User/Dev environment by building your own Chef Client / Dev Gems from the latest HEAD of the Chef repo from Github. It assumes that you did sign up for the Alpha program and have access to the Opscode Alpha Server. Though much of it would be the same if you were running your own chef server also built from the latest source from github. This post does not show how to actually use Chef and the chef-client on a target node. Hope to have a post on that in the next few days.</p>
<p>The documentation on how to move to and use Chef 0.8 is still very sparse, so I figured I would jot down some of the things we are learning as we apply this to our infrastructure at <a href="http://www.runa.com">Runa</a>. If any of you OpsChefs out there see something wrong or something I left out, let me know in the comments or via email.</p>
<h2>The Opscode Chef Alpha Environment</h2>
<p>If you are in the Opscode Alpha program, you would have been given login[s] and some pem keys. I won&#8217;t go into the details of this since they do have pretty good docs on setting this up (if you have an alpha login you can get them at <a href="http://opscode.zendesk.com/forums/58858/entries/49336">http://opscode.zendesk.com/forums/58858/entries/49336</a>). It&#8217;s probably a good idea to follow these and start with their 0.8.0 gem to make sure you are talking with the Alpha Server before trying to use the Chef Git Repository to build your own gems.</p>
<p>The Alpha instructions use a Chef gem that is frozen at 0.8.0. But the Chef folks have already progressed much further than the Oct 29th release of 0.8.0 in the Chef Git Repository.</p>
<p>The HEAD of the Git Repository has many changes since 0.8.0. Some big ones include:</p>
<ul>
<li>The Knife sub commands are completely different</li>
<li>There is now a Chef Shell (A REPL like irb but for the chef client)</li>
<li>Lots of Bug Fixes</li>
</ul>
<p>And if we&#8217;re going to be on the bleeding edge, we might as well go all the way! So the rest of this blog will be about using the Chef HEAD branch from the Chef git repository. We&#8217;ll still use the Alpha Chef Server at least to start with.</p>
<h2>Configuring your Dev Environment</h2>
<h3>Prerequisites</h3>
<p>I&#8217;m using Mac OS X 10.6 (snow leopard). Our target environments are Ubuntu Linux on Amazon EC2. But assuming you have *nix, Ruby and Ruby Gems set up on your environment it should generally be the same (don&#8217;t know about people stuck in the Legacy Windows environment though).</p>
<p>So you will need to have installed and know how to use:</p>
<ul>
<li>Ruby</li>
<li>RubyGems</li>
<li>Git</li>
</ul>
<p>And the following Ruby Gems should be installed (I think this is the minimum you need; these will include their own dependencies):</p>
<ul>
<li>rake</li>
<li>rspec</li>
<li>cucumber</li>
<li>uuidtools</li>
<li>nanite</li>
<li>gemcutter</li>
<li>jeweler</li>
</ul>
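<p>Assuming RubyGems is already working, one way to install that list in a single command (assembled and echoed here; run it yourself once you are happy with the list):</p>

```shell
# Hypothetical one-shot install of the prerequisite gems listed above.
gems="rake rspec cucumber uuidtools nanite gemcutter jeweler"
cmd="sudo gem install $gems --no-ri --no-rdoc"
echo "$cmd"
```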
<p>You will need http://gems.opscode.com as a gem source for the following. You can use the command:</p>
<pre><code>sudo gem sources -a http://gems.opscode.com</code></pre>
<ul>
<li>mixlib-authentication</li>
</ul>
<h3>Getting and building the code/GEMs for the Dev Environment</h3>
<p>The instructions in the README.rdoc of the Chef Git Repository are out of date as of now (Dec 20, 2009). The instructions on the wiki, <a href="http://wiki.opscode.com/display/chef/Installing+Chef+from+HEAD">Installing Chef from HEAD</a>, are more accurate. Even though the published mixlib gems carry the same version numbers as the repository, I found that I needed to install the mixlib libraries from source.</p>
<h4>Getting and building Ohai &amp; Mixlib Gems from Github</h4>
<p>We won&#8217;t be making any changes in these, so we&#8217;ll just git clone and build it:</p>
<pre><code>cd <em>to where you want to keep your local repositories</em>
git clone git://github.com/opscode/ohai.git
cd ohai
sudo rake install
cd ..
git clone git://github.com/opscode/mixlib-config.git
cd mixlib-config
sudo rake install
cd ..
git clone git://github.com/opscode/mixlib-log.git
cd mixlib-log
sudo rake install
cd ..
git clone git://github.com/opscode/mixlib-cli.git
cd mixlib-cli
sudo rake install
cd ..
</code></pre>
<h4>Getting the Chef code from github</h4>
<p>You can get the <a href="http://github.com/opscode/chef">Chef repository from github</a>. The readme there has most of the info you need.</p>
<p>If you plan to submit any patches or other changes back to Opscode, or you would like to have your own repository of this, you can fork the Opscode repository into your own Github account. This is what I did and will demonstrate below. If you don&#8217;t want any hardcore forking action, you can just git clone the opscode repository as shown here (assuming your current working directory is where you want the local directory repository placed. It will be named using the default &#8220;chef&#8221;):</p>
<pre><code>git clone git://github.com/opscode/chef.git</code></pre>
<p>If you have forked into your own github account (mine is rberger), you would git clone using the &#8220;Your Clone URL&#8221;:</p>
<pre><code>git clone git@github.com:rberger/chef.git rberger-chef</code></pre>
<p>This assumes you want your local directory name for the repository to be rberger-chef, just so you can distinguish it from the official opscode one. (I will refer to the top of the local repository as rberger-chef from now on).</p>
<h4>What&#8217;s in the Chef Git Repository</h4>
<p>Change directory into the local repository and do an ls. You&#8217;ll see that there are several components here.</p>
<pre><code>
$ cd rberger-chef
$ ls
CHANGELOG         README.rdoc       chef-server       chef-solr         scripts
LICENSE           Rakefile          chef-server-api   cucumber.yml
NOTICE            chef              chef-server-webui features
</code></pre>
<p>There are 2 main trees:</p>
<ul>
<li><strong>chef</strong>: chef-client and dev gem</li>
<li><strong>chef-server</strong>: Chef Server gem. Used only if you build your own server
<ul>
<li><strong>chef-server-api</strong>: Implements the REST interface sub-system as part of the full Chef Server</li>
<li><strong>chef-server-webui</strong>: Implements the WebUI as part of the full Chef Server</li>
<li><strong>chef-solr</strong>: Implements the Solr search sub-system as part of the full Chef Server</li>
<li><strong>features</strong>: Not 100% sure of everything it&#8217;s used for; definitely used for the cucumber tests, but it is part of the Server as far as I can tell</li>
</ul>
</li>
</ul>
<p>For now we are only interested in the chef tree, which will be used to set up the local dev environment. We&#8217;re not going to follow the outdated instructions in the README.rdoc at the root of the chef repository, which assume you are setting up the whole stack on the dev machine. We&#8217;re just going to install the chef client and tools from the chef sub-tree on the dev machine.</p>
<p>This post will not describe how to build or use the chef-server, though you can pretty much build everything by running</p>
<pre><code>sudo rake install</code></pre>
<p>from the top of the distro. There are more gem dependencies that need to be installed before you can build the chef-server trees.</p>
<h4>Building and Installing the Chef Client / Dev tools</h4>
<p>Change directory to the chef subdirectory so you should be in rberger-chef/chef (or if you have a direct clone of the opscode chef repository: chef/chef)</p>
<pre><code>cd chef</code></pre>
<h4 style="text-decoration: line-through;">Some minor tweaks to the Source</h4>
<p>(shef is now included in the executables in the latest repository and setting my own sub-version number was lame)</p>
<p style="text-decoration: line-through;">I have done a few mods to the source. Mainly to set the version number to something that will not conflict with the official numbering now or when new releases come out and to have shef be installed by the gem.</p>
<ol style="text-decoration: line-through;">
<li>Changed line 30 in the Rakefile to <code>s.executables  = %w( chef-client chef-solo knife shef )</code> so the install puts shef in /usr/bin</li>
<li>Changed line 7 in the Rakefile to <code>CHEF_VERSION = "0.8.0.1"</code></li>
<li>Change line 30 in lib/chef.rb to <code>VERSION = '0.8.0.1'</code></li>
</ol>
<h5>Build and install</h5>
<pre><code>rake install</code></pre>
<p>It&#8217;s going to eventually ask for your sudo password, as it needs sudo to do the gem install. The run should look something like:</p>
<pre><code>(in /Users/rberger/work/Chef/rberger-chef/chef)
mkdir -p pkg
WARNING:  no rubyforge_project specified
WARNING:  description and summary are identical
  Successfully built RubyGem
  Name: chef
  Version: 0.8.0.1
  File: chef-0.8.0.1.gem
mv chef-0.8.0.1.gem pkg/chef-0.8.0.1.gem
sudo gem install pkg/chef-0.8.0.1 --no-rdoc --no-ri
Password:
Building native extensions.  This could take a while...
Successfully installed eventmachine-0.12.10
Successfully installed amqp-0.6.5
Successfully installed thor-0.12.0
Successfully installed deep_merge-0.1.0
Successfully installed moneta-0.6.0
Successfully installed chef-0.8.0.1
6 gems installed
</code></pre>
<h3>Using Chef with the Opscode Alpha SaaS Server</h3>
<p>This just touches on some of the things that are described in <a href="http://opscode.zendesk.com/forums/58858/entries/49336">The Official Guide to Getting Started With Opscode</a></p>
<h4>Setting up your Dev Environment</h4>
<p>It&#8217;s not clear whether you really have to do everything described in that document if you are building the latest release from the chef repository and using the ~/.chef/knife.rb config described below. For instance, I didn&#8217;t have to set the OPSCODE_USER and OPSCODE_KEY environment variables, since they are now set in knife.rb, nor did I have to create /etc/chef/client.rb. Even without the global Chef config I was able to use most of the knife commands, though some, like fetching the ec2 instance data, seemed to need the organization validation key to be in /etc/chef/validation.pem.</p>
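<p>For reference, a minimal sketch of the legacy environment-variable setup that the Getting Started document describes (the user name and key path here are the ones from my setup; with the knife.rb config described below they are no longer needed):</p>
<pre><code># Legacy setup from the Getting Started doc; knife.rb makes these unnecessary.
# The user name and key path are from my setup -- substitute your own.
export OPSCODE_USER=rberger
export OPSCODE_KEY=$HOME/.chef/rberger.pem
</code></pre>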
<h5>Copy your assigned validation key to /etc/chef</h5>
<p>With your Opscode Alpha welcome material you should have received your user keys and a key for your organization. Copy your organization key (in our case runa.pem) to /etc/chef/validation.pem. You will probably have to create the /etc/chef directory first.</p>
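<p>Concretely, the copy looks something like this. It is a privileged setup fragment; the ~/runa.pem download location is an assumption, so use wherever you saved your organization key:</p>
<pre><code># ~/runa.pem is an assumed download location; substitute your org key's path
sudo mkdir -p /etc/chef
sudo cp ~/runa.pem /etc/chef/validation.pem
sudo chmod 600 /etc/chef/validation.pem
</code></pre>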
<h5>The User Chef/Knife config</h5>
<p>You must create a knife config file in your home directory at ~/.chef/knife.rb, and point its <code>client_key</code> line at the user key you got from Opscode. The configuration parameters are described on the <a href="http://wiki.opscode.com/display/chef/Knife">Knife Wiki Page</a>. For instance, my config file:</p>
<pre><code>log_level        :info
log_location     STDOUT
node_name        'rberger'
client_key       '/Users/rberger/.chef/rberger.pem'
chef_server_url  "https://api.opscode.com/organizations/runa"
cache_type       'BasicFile'
cache_options( :path =&gt; '/Users/rberger/.chef/checksums' )
</code></pre>
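<p>Since knife.rb is evaluated as plain Ruby, hard-coded paths like the ones above can be computed instead; a small sketch (the .chef/rberger.pem filename is taken from the example config above, so adjust it to your own key):</p>
<pre><code># knife.rb is plain Ruby, so the key path can be derived rather than hard-coded.
# "rberger.pem" is the filename from the example config above.
home = File.expand_path("~")
client_key_path = File.join(home, ".chef", "rberger.pem")
# In knife.rb you would then write:  client_key client_key_path
</code></pre>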
<p>Once you have this set up you can use knife and the chef rake commands. You can test things out with something like:</p>
<pre><code>knife client list</code></pre>
<p>This should return an empty list, assuming you haven&#8217;t set up any clients on this server yet.</p>
<p>The first real useful command you want to do is to upload your cookbooks to the Opscode Server:</p>
<pre><code>cd <em>to where your chef cookbook repository is</em>
rake upload_cookbooks</code></pre>
<p>You can also do it with just knife:</p>
<pre><code>knife cookbook upload -a</code></pre>
<p>This may take a while, as it will upload all the cookbooks in <code>cookbooks</code> and <code>site-cookbooks</code> in your current repository.</p>
<p>After that you can upload a single cookbook:</p>
<pre><code>knife cookbook upload <em>cookbook_name</em></code></pre>
<p>Just remember that the knife documentation on the Alpha site no longer applies to the knife you get when building from the HEAD of the chef git repository. Strangely enough, the <a href="http://wiki.opscode.com/display/chef/Knife">knife documentation on the wiki</a> is accurate.</p>
<h2>Conclusion</h2>
<p>Once you&#8217;ve been through it, it&#8217;s all quite simple. I hope to post some more on using 0.8.0+ soon. See a more recent blog post for building your own Chef Server: <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a>.</p><p>The post <a href="https://www.ibd.com/howto/using-opscode-chef-0-8-x-alpha-from-head-of-the-git-repo/">Building Opscode Chef 0.8.x from HEAD of the git repo</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">324</post-id>	</item>
	</channel>
</rss>
