<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>AWS - Cognizant Transmutation</title>
	<atom:link href="https://www.ibd.com/tag/aws/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ibd.com</link>
	<description>Internet Bandwidth Development: Composting the Internet for over Two Decades</description>
	<lastBuildDate>Wed, 01 Sep 2021 00:42:36 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1</generator>

<image>
	<url>https://i0.wp.com/www.ibd.com/wp-content/uploads/2019/01/fullsizeoutput_7ae8.jpeg?fit=32%2C32&#038;ssl=1</url>
	<title>AWS - Cognizant Transmutation</title>
	<link>https://www.ibd.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href="https://pubsubhubbub.appspot.com"/><atom:link rel="hub" href="https://pubsubhubbub.superfeedr.com"/><atom:link rel="hub" href="https://websubhub.com/hub"/><site xmlns="com-wordpress:feed-additions:1">156814061</site>	<item>
		<title>Accessing AppSync APIs that require Cognito Login outside of Amplify</title>
		<link>https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Wed, 01 Sep 2021 00:36:18 +0000</pubDate>
				<category><![CDATA[AWS]]></category>
		<category><![CDATA[AppSync]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Cognito]]></category>
		<guid isPermaLink="false">https://www.ibd.com/howto/access-appsync-outside-amplify-2/</guid>

					<description><![CDATA[<p>Access your AppSync GraphQL APIs that require Cognito Logins with arbitrary tools outside of Amplify Apps</p>
<p>The post <a href="https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/">Accessing AppSync APIs that require Cognito Login outside of Amplify</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>The Need</h2>
<p>You have this great Amplify App using AppSync GraphQL. You eventually find that you need to access the data in your AppSync GraphQL database from tools other than your Amplify App. It&#8217;s easy if your AppSync API is protected only by an API Key. But that isn&#8217;t great security for your data!</p>
<p>One way to protect your AppSync data is to use <a href="https://docs.amplify.aws/lib/graphqlapi/authz/q/platform/js/#cognito-user-pools">Cognito User Pools</a>. Amplify makes it pretty transparent if you are using Amplify to build your clients. AppSync lets you do really nice <a href="https://docs.aws.amazon.com/appsync/latest/devguide/security-authorization-use-cases.html">table and record level access control based on logins and roles</a>.</p>
<p>What happens if you want to access that data from something other than an Amplify based client? How do you &#8220;login&#8221; and get the JWT credentials you need to access your AppSync APIs?</p>
<h2>Use AWS CLI</h2>
<p>The most general way is to use the AWS CLI to effectively login and retrieve the JWT credentials that can then be passed in the headers of any requests you make to your AppSync APIs.</p>
<p>Unfortunately it&#8217;s not as easy as just having your login and password. It also depends on how you configured your Cognito User Pool and its related Client Apps.</p>
<h3>Cognito User Pool Client App</h3>
<p>You can have multiple Client Apps specified for your Cognito User Pool. I suggest having one dedicated to these external applications. That way you can have custom configuration just for this use and not disrupt your main Amplify apps. You can also easily turn it off if you need to.</p>
<p><img decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/User-pool-app-clients.png?ssl=1" alt="User Pool Client Apps" title="User Pool Client Apps" data-recalc-dims="1"/></p>
<p>In my case I created a new client app <code>shoppabdbe800b-rob-test2</code> as a way to test a client app with no <code>App Client Secret</code>. This makes it easier to access from the command line, as you do not have to generate a Secret Hash (I describe how to deal with that below).</p>
<p><img decoding="async" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2021/08/app-client-config-no-secret.png?ssl=1" alt="App Client Config with no secret" title="App Client Config with no secret" data-recalc-dims="1"/></p>
<p>If you want to allow admin level access (i.e. a user with admin permissions) you need to check <code>Enable username password auth for admin APIs for authentication (ALLOW_ADMIN_USER_PASSWORD_AUTH)</code>.</p>
<p>If you want to allow regular users to log in you must also select <code>Enable username password based authentication (ALLOW_USER_PASSWORD_AUTH)</code>.</p>
<p>The defaults for the other fields should be ok. Be sure to save your changes.</p>
<h3>Minimal IAM permissions</h3>
<p>As far as I can tell, these are the minimal IAM permissions to make the aws <code>cognito-idp</code> command work for admin and regular users of AppSync (replace the Resource arn with the arn of the user pool[s] you want to control):</p>
<pre><code>{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "VisualEditor0",
            "Effect": "Allow",
            "Action": [
                "cognito-idp:AdminInitiateAuth",
                "cognito-idp:AdminGetUser"
            ],
            "Resource": "arn:aws:cognito-idp:us-east-1:XXXXXXXXXXXXX:userpool/us-east-1_XXXXXXXXX"
        },
        {
            "Sid": "VisualEditor1",
            "Effect": "Allow",
            "Action": [
                "cognito-idp:GetUser",
                "cognito-idp:InitiateAuth"
            ],
            "Resource": "*"
        }
    ]
}</code></pre>
<h3>Get the Credentials with no App Client Secret</h3>
<p>This example is if you did not set the App Client Secret.</p>
<p>You should now be able to get the JWT credentials from the AWS CLI.</p>
<p>This assumes you have <a href="https://docs.aws.amazon.com/cli/latest/userguide/cli-chap-configure.html">set up your</a> <code>~/.aws/credentials</code> file, or whatever is appropriate for your command line environment, so that you have the permissions to access this service.</p>
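<p>For reference, a minimal <code>~/.aws/credentials</code> file has this shape (the values below are placeholders, not real keys):</p>
<pre><code>[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
region = us-east-1</code></pre>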
<ul>
<li>When using the <code>ADMIN_USER_PASSWORD_AUTH</code> auth flow:</li>
</ul>
<pre><code>aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username1,PASSWORD=XXXXXXXXXXXXX &gt; creds.json</code></pre>
<ul>
<li>When using the <code>USER_PASSWORD_AUTH</code> auth flow:</li>
</ul>
<pre><code>aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username2,PASSWORD=XXXXXXXXXXXX &gt; creds.json</code></pre>
<p>Of course replace the <code>XXXX</code>&#8216;s with the actual values.</p>
<ul>
<li><code>user-pool-id</code> &#8211; The pool id found at the top of the <em>User Pool Client Apps</em> page</li>
<li><code>client-id</code> &#8211; The <code>client-id</code> of the <code>app client</code> you are using</li>
<li><code>USERNAME</code> &#8211; The Username normally used to log in to your Amplify app</li>
<li><code>PASSWORD</code> &#8211; The Password normally used to log in to your Amplify app</li>
</ul>
<p>The results will be in <code>creds.json</code>. (Omit the <code>&gt; creds.json</code> redirect if you just want to see the results on the terminal.)</p>
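<p>A small Python helper, call it <code>id_token</code> (a hypothetical name for illustration), can pull the token out of <code>creds.json</code>, assuming the CLI&#8217;s usual response shape where the tokens sit under <code>AuthenticationResult</code>:</p>
<pre><code class="language-python">import json

def id_token(path="creds.json"):
    """Return the Cognito IdToken from a creds.json written by the
    aws cognito-idp initiate-auth / admin-initiate-auth commands above.
    Assumes the tokens are nested under AuthenticationResult."""
    with open(path) as f:
        return json.load(f)["AuthenticationResult"]["IdToken"]</code></pre>
<p>You could then pass that value as the <code>Authorization</code> header on your GraphQL requests.</p>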
<h3>Get the Credentials when there is an App Client Secret</h3>
<p>This assumes you have an App Client that has an <code>app secret key</code> set.</p>
<p>The main thing here is you need to generate a <code>secret hash</code> to send along with the command.</p>
<p>You can do that by creating a little python program to generate it for you when you need it:</p>
<pre><code class="language-python">#!/usr/bin/env python3

import sys
import hmac, hashlib, base64

if len(sys.argv) == 4:
    username = sys.argv[1]
    app_client_id = sys.argv[2]
    # SecretHash = Base64(HMAC-SHA256(key=app_client_secret, msg=username + app_client_id))
    message = bytes(username + app_client_id, 'utf-8')
    key = bytes(sys.argv[3], 'utf-8')
    secret_hash = base64.b64encode(hmac.new(key, message, digestmod=hashlib.sha256).digest()).decode()
    print("SECRET HASH:", secret_hash)
else:
    print("usage: ", sys.argv[0], " &lt;username&gt; &lt;app_client_id&gt; &lt;app_client_secret&gt;")</code></pre>
<p>Save the file someplace that you can execute it from like <code>~/bin/app-client-secret-hash</code> and make it executable (<code>chmod a+x ~/bin/app-client-secret-hash</code>).</p>
<p>You will need:</p>
<ul>
<li><code>app-client-id</code> &#8211; The <code>client-id</code> of the <code>app client</code> you are using</li>
<li><code>app-client-secret</code> &#8211; The secret of the <code>app client</code> you are using (it&#8217;s on the App Client page of the User Pool)</li>
<li><code>USERNAME</code> &#8211; The Username normally used to log in to your Amplify app</li>
</ul>
<p>To use:</p>
<pre><code>~/bin/app-client-secret-hash  &lt;username&gt; &lt;app_client_id&gt; &lt;app_client_secret&gt;</code></pre>
<p>Where, of course, you replace the arguments with the actual values.</p>
<p>The result is a <code>secret-hash</code> you will use in the following command to get the actual JWT credentials:</p>
<pre><code>aws cognito-idp admin-initiate-auth --user-pool-id us-east-1_XXXXXXXXXX --auth-flow ADMIN_USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=username3,PASSWORD='secret password',SECRET_HASH='secret-hash' &gt; creds.json</code></pre>
<p>You could do the same thing with <code>USER_PASSWORD_AUTH</code> if you need that instead:</p>
<pre><code>aws cognito-idp initiate-auth --auth-flow USER_PASSWORD_AUTH --client-id XXXXXXXXXXXXX --auth-parameters USERNAME=rob+admin,PASSWORD=XXXXXXXXX,SECRET_HASH='secret-hash' &gt; creds.json</code></pre>
<h2>Using the Credentials</h2>
<p>How you use these credentials depends on what tool you are using and how you are trying to access your AppSync APIs.</p>
<h3>From some Javascript</h3>
<p>You can just add in the <code>IdToken</code> from the <code>creds.json</code> as an <code>Authorization</code> header when you build the request:</p>
<pre><code class="language-javascript">function graphQLFetcher(graphQLParams) {
  const APPSYNC_API_URL = "TYPE_YOUR_APPSYNC_URL";
  const credentialsAppSync = {
    Authorization: "eyJraWQiOiI1dVUwMld...",
  };
  return fetch(APPSYNC_API_URL, {
    method: "post",
    headers: {
      Accept: "application/json",
      "Content-Type": "application/json",
      ...credentialsAppSync,
    },
    body: JSON.stringify(graphQLParams),
    credentials: "omit",
  }).then(function (response) {
    return response.json().catch(function () {
      return response.text();
    });
  });
}</code></pre>
<p>If you are using some GraphQL tool that needs to access your AppSync APIs, the tool should have a way for you to supply the token, and it will add it as an <code>Authorization</code> header for its own requests.</p>
<p>Do let me know if you have some examples of tools that would make use of this.</p>
<h2>References</h2>
<ul>
<li><a href="https://aws.amazon.com/blogs/mobile/appsync-graphiql-local/" title="Explore AWS AppSync APIs with GraphiQL from your local machine">Explore AWS AppSync APIs with GraphiQL from your local machine</a></li>
<li><a href="https://aws.amazon.com/premiumsupport/knowledge-center/cognito-unable-to-verify-secret-hash/">How do I troubleshoot &#8220;Unable to verify secret hash for client &lt;client-id&gt;&#8221; errors from my Amazon Cognito user pools API?</a></li>
</ul><p>The post <a href="https://www.ibd.com/scalable-deployment/aws/access-appsync-outside-amplify-2/">Accessing AppSync APIs that require Cognito Login outside of Amplify</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1803</post-id>	</item>
		<item>
		<title>CLI to Switch Amazon AWS Shell Environment Credentials</title>
		<link>https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 16 Jun 2014 04:54:11 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Bash]]></category>
		<category><![CDATA[CLI]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=1616</guid>

					<description><![CDATA[<p>I work with many different AWS IAM Accounts and need to easily switch between these accounts. The good news is the AWS CLI tools now support a standard config file (~/.aws/config) that allows you to create profiles for multiple accounts in one config file. You can select them when using the aws-cli with the --profile flag. But many other&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/">CLI to Switch Amazon AWS Shell Environment Credentials</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><a href="http://docs.aws.amazon.com/cli/latest/userguide/cli-chap-getting-started.html#cli-multiple-profiles" target="_blank" rel="noopener"><img decoding="async" loading="lazy" class="alignleft wp-image-1625 size-full" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2014/06/AwsCreds.png?resize=300%2C300" alt="AwsCreds" width="300" height="300" data-recalc-dims="1" /></a>I work with many different AWS IAM Accounts and need to easily switch between these accounts. The good news is the AWS CLI tools now support a standard config file (<code>~/.aws/config</code>) that allows you to create profiles for multiple accounts in one config file. You can select them when using the <code>aws-cli</code> with the <code>--profile</code> flag.</p>
<p>But many other tools don&#8217;t yet support the new format config file or multiple profiles. They do, however, support shell environment variables. So I wrote a simple Ruby script that:</p>
<ul>
<li>Allows you to specify the profile name as an argument</li>
<li>Reads in the config file <code>~/.aws/config</code></li>
<li>Outputs the export statements for publishing the environment variables
<ul>
<li>You can eval the output to set the environment of your current shell session</li>
</ul>
</li>
</ul>
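<p>The steps above can be sketched in Python with the standard library <code>configparser</code> (an illustration only, with a hypothetical <code>export_lines</code> helper; the actual script I use is the Ruby version below):</p>
<pre><code class="language-python">import configparser

def export_lines(path, profile_input=None):
    """Read an ~/.aws/config-style file and return the shell `export`
    statements for the requested profile (the default profile if none given)."""
    config = configparser.ConfigParser()
    config.read(path)
    # The default credentials live in [default]; named ones in [profile NAME]
    section = "default" if profile_input in (None, "", "default") else f"profile {profile_input}"
    key_id = config[section]["aws_access_key_id"]
    secret = config[section]["aws_secret_access_key"]
    return [
        f"export AWS_ACCESS_KEY_ID={key_id}",
        f"export AWS_SECRET_ACCESS_KEY={secret}",
    ]</code></pre>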
<p>So if you had a config file <code>~/.aws/config</code> that looked like this:</p>
<pre class="brush: plain; light: false; title: ~/.aws/config; notranslate">
[default]
aws_access_key_id=AKI***********2A
aws_secret_access_key=jt41************************************p
region=us-east-1

[profile foo]
aws_access_key_id=0K***************K82
aws_secret_access_key=2b+***********************************1g
region=us-east-1

[profile bar]
aws_access_key_id=AKI**************GA
aws_secret_access_key=MG************************************/d
region=us-east-1
</pre>
<p>If you don&#8217;t specify any argument to the command it will output the default profile:</p>
<pre class="brush: bash; title: ; notranslate">
 $ aws_switch
export AWS_ACCESS_KEY_ID=AKI***********2A
export AWS_SECRET_ACCESS_KEY=jt41************************************p
export AMAZON_ACCESS_KEY_ID=AKI***********2A
export AMAZON_SECRET_ACCESS_KEY=jt41************************************p
export AWS_ACCESS_KEY=AKI***********2A
export AWS_SECRET_KEY=jt41************************************p
</pre>
<p>If you specify a profile (in this case <code>foo</code>):</p>
<pre class="brush: bash; title: ; notranslate">
$ aws_switch foo
export AWS_ACCESS_KEY_ID=0K***************K82
export AWS_SECRET_ACCESS_KEY=2b+***********************************1g
export AMAZON_ACCESS_KEY_ID=0K***************K82
export AMAZON_SECRET_ACCESS_KEY=2b+***********************************1g
export AWS_ACCESS_KEY=0K***************K82
export AWS_SECRET_KEY=2b+***********************************1g
</pre>
<p>You would actually use it by eval&#8217;ing the output of <code>aws_switch</code> so it sets the variables in the environment of your current shell:</p>
<pre class="brush: bash; title: ; notranslate">
eval `aws_switch foo`
</pre>
<p>Here&#8217;s the code for <code>aws_switch</code>. Put it someplace in your <code>$PATH</code> and make sure to <code>chmod 0755</code> the file so it&#8217;s executable:</p>
<pre class="brush: ruby; light: false; title: aws_switch; notranslate">
#!/usr/bin/env ruby
require 'inifile'

configs = IniFile.load(File.join(File.expand_path('~'), '.aws', 'config'))

profile_name_input = ARGV[0]
profile_name = case profile_name_input
               when 'default', nil, &quot;&quot;
                 'default'
               else
                 &quot;profile #{profile_name_input}&quot;
               end

id = configs[profile_name]['aws_access_key_id']
key = configs[profile_name]['aws_secret_access_key']

puts &quot;export AWS_ACCESS_KEY_ID=#{id}&quot;
puts &quot;export AWS_SECRET_ACCESS_KEY=#{key}&quot;
puts &quot;export AMAZON_ACCESS_KEY_ID=#{id}&quot;
puts &quot;export AMAZON_SECRET_ACCESS_KEY=#{key}&quot;
puts &quot;export AWS_ACCESS_KEY=#{id}&quot;
puts &quot;export AWS_SECRET_KEY=#{key}&quot;
</pre><p>The post <a href="https://www.ibd.com/howto/cli-to-switch-amazon-aws-shell-environment-credentials/">CLI to Switch Amazon AWS Shell Environment Credentials</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">1616</post-id>	</item>
		<item>
		<title>Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</title>
		<link>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/</link>
					<comments>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 03 Jan 2011 07:08:16 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[blogging]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[Wordpress]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=599</guid>

					<description><![CDATA[<p>Updates September 9, 2011 Included the latest Chef Knife ec2 server create argument that sets the EBS Volume to not be deleted on the termination of the EC2 Instance Intro Up until recently a friend lent me a Virtual Machine in he Cloud for my Blog. I didn&#8217;t have to do anything to manage it. But his company is no&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/">Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Updates</h2>
<h3>September 9, 2011</h3>
<p>Included the latest Chef Knife ec2 server create argument that sets the EBS Volume to not be deleted on the termination of the EC2 Instance</p>
<h2>Intro</h2>
<p>Up until recently a friend lent me a Virtual Machine in the Cloud for my Blog. I didn&#8217;t have to do anything to manage it. But his company is no longer supporting those machines, so I had to move my blog.<br />
Right around that time Amazon announced their Micro Instances at a very low price. I also wanted to try out the new Opscode Chef knife commands that bootstrap an EC2 instance from scratch as well as their Chef Server SaaS. So this was a good reason to combine all these to create my new Blog Instance. And now Amazon even offers the ability to have a single micro instance free for a year! (You still have to pay for I/O charges but they are really cheap compared to the instance charges, unless you have a blog that is too popular, but then you&#8217;ll need a bigger server anyway)<br />
<strong>Spoiler Alert:</strong> It was way too easy and no problem at all! (Though I did end up having to write a few support cookbooks like <em>vsftpd</em>, but now you don&#8217;t have to)</p>
<h3>Some Assumptions for this post</h3>
<ul>
<li>You are using a *nix platform for your local development (i.e. your laptop is a Mac, Linux, *BSD or equivalent) and that the target server you want to deploy to is a relatively recent Ubuntu Linux.</li>
<li>You have or will install a git client on your local development box</li>
<li>You followed the directions or have done the equivalent of the instructions in the Opscode <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages as noted below</li>
</ul>
<h2>Set up an Account on Amazon Web Services</h2>
<p>If you don&#8217;t already have an Amazon EC2 Account, go to the <a href="http://aws.amazon.com/" target="_blank" rel="noopener">Amazon Web Services</a> page and click on the <a href="http://www.amazon.com/gp/aws/registration/registration-form.html" target="_blank" rel="noopener">Sign Up Now button</a>. Create all your user info and then Sign Up for Amazon EC2. You&#8217;ll need to put in credit card info at this point since you&#8217;ll need to pay for the EC2 instance you&#8217;ll be using shortly. After you complete your signup, you&#8217;ll need to get your credentials at the <a href="http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key" target="_blank" rel="noopener">AWS Security Credentials page</a>. Copy down your Access Key ID and click on Show under the Secret Access Key and get that as well. You will need these values to put into your knife.rb file, which you will get to in the following steps.</p>
<h2>Get an Opscode Platform Account</h2>
<p>It&#8217;s free and easy. Just go to the <a href="https://cookbooks.opscode.com/users/new" target="_blank" rel="noopener">Opscode Platform Signup page</a>. Fill in your information and submit. There is no cost for up to 5 client nodes. Once you set up and confirm your account you can go through the <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages, which include how to set up your client development machine (installing Chef Client, Knife and various dependencies) as well as downloading your private key, organization key and your Knife Configuration File. You should go through all 5 steps of the Getting Started section. And please do follow their examples of using git. The rest of this post assumes you have git installed and will use it for your own repository even if you don&#8217;t push it to an upstream git repository.</p>
<p>Once you have completed that you will be ready to use the remaining steps of this blog post. The remaining steps will assume you put your chef-repo in the same location as the Opscode instructions suggested (~/chef-repo). If you put it somewhere else, just adjust your path to your chef-repo as appropriate.</p>
<p>It also assumes you got your private user key (<em>your_user_name.pem</em>) and organization validator key (<em>your_organization-validator.pem</em>) and knife.rb in Section 3 of How to Get Started: <a href="http://help.opscode.com/faqs/start/chef-client" target="_blank" rel="noopener">Setting Up a Chef Client</a>. In that section you ran the command <code>knife configure client ./client-config</code> inside your ~/chef-repo/ directory. That will have created ~/chef-repo/.chef and put the keys and knife.rb in that directory.</p>
<p>For the use of this blog post, we will use the username: <em>rberger_test</em> and organization name: <em>install_wordpress</em>. So the private user key name for this example will be: <em>rberger_test.pem</em> and the organization validator key will be called <em>install_wordpress-validator.pem</em>. You should copy your keys someplace outside of ~/chef-repo that you will not lose. There are ways to <a title="Create a new private user key" href="http://help.opscode.com/faqs/account/getting-a-new-private-key-for-your-opscode-user" target="_blank" rel="noopener">create new ones</a>, but it&#8217;s always easier not to have to. Bottom line is, it&#8217;s expected that your keys and the knife.rb will be in your <em>~/chef-repo/.chef</em> directory at this point.</p>
<h2>Set up your Development Environment</h2>
<p>Your development environment is your home or work computer/laptop. It&#8217;s the machine that is local to you. It is on this machine that you put together your Cookbooks. From here you push your cookbooks to the Opscode Chef Server, issue the commands to configure AWS, and launch your AWS instances.</p>
<h3>Tweak up your chef-repo</h3>
<p>I like to keep the &#8220;standard&#8221; chef recipes that get downloaded from git or from cookbooks.opscode.com in their own directory (called <em>cookbooks</em>) and all the cookbooks I create or highly modify in another directory (<em>site-cookbooks</em>). In Step 2 of the How to Get Started: <a href="http://help.opscode.com/faqs/start/user-environment" target="_blank" rel="noopener">Setting Up Your User Environment</a>, they had you create a <em>~/chef-repo</em> directory and populate it from git or from a tar ball. You should add the <em>site-cookbooks</em> directory to your <em>~/chef-repo</em>. We&#8217;re also going to add a <em>README.md</em> to the <em>site-cookbooks</em> directory so that when we create our own git repository the directory will be there (an empty directory will not be added to a git repository).</p>
<pre><pre class="brush: bash; title: ; notranslate">
cd ~/chef-repo
mkdir site-cookbooks
echo &quot;Directory for customized cookbooks&quot; &gt; site-cookbooks/README.md
</pre>
<p>You will probably also not want to include your <em>.chef</em> directory, with all your keys, in what gets uploaded to any outside chef repository. If you are just keeping things local, you can skip this step. Edit <em>~/chef-repo/.gitignore</em> and add <em>.chef</em> to the file on its own line. You might also want to add <em>client-config</em> to <em>.gitignore</em>, as well as any temporary or backup file suffixes you might have. For instance if you use Emacs, you would add <em>*~</em> (the Emacs backup file suffix), <em>.DS_Store</em> (which is left behind by the Mac filesystem), <em>.rake_test_cache</em> (which is left around by Rake) and <em>metadata.json</em> (a file generated by chef). My <em>.gitignore</em> looks like:</p>
<pre class="brush: bash; title: ; notranslate">
.chef
client-config
*~
.DS_Store
.rake_test_cache
metadata.json
</pre>
<p>If you created the <em>~/chef-repo</em> from the git clone of the Opscode repository, you&#8217;ll want to get rid of the git configuration and history from the cloning of the Opscode chef-repo and create your own git repository:</p>
<pre class="brush: bash; title: ; notranslate">
rm -rf .git
git init
git add -A
git commit -a -m &quot;Created my own basic chef-repo&quot;
</pre>
<p>The above commands will have removed the old git config that came when you did the git clone <em>http://github.com/opscode/chef-repo.git</em> command as part of the Opscode <a href="http://help.opscode.com/faqs/start/how-to-get-started" target="_blank" rel="noopener">How to Get Started</a> pages. The git init, add and commit will create a new local git repository for your own use not connected to the opscode repository. You can then add a remote repository if you want to be able to push your repository and future changes to another git repository such as github.com.</p>
<h3>Updating your knife.rb file with Amazon Credentials</h3>
<p>Add the following lines to the end of your ~/chef-repo/.chef/knife.rb file. You should have gotten your AWS Access Key and Secret Access key when you signed up to Amazon Web Services, but you can always go back and get them at the <a href="http://aws-portal.amazon.com/gp/aws/developer/account/index.html?action=access-key" target="_blank" rel="noopener">AWS Security Credentials page</a>. Your final knife.rb should look something like this, except for the various items that are customized to your setup. In the example below <em>rberger_test</em> would be replaced by your Opscode User name and <em>install_wordpress</em> would be replaced by your Opscode Organization name that was used when you went through Section 3 of the Opscode How to Get Started: <a href="http://help.opscode.com/faqs/start/chef-client" target="_blank" rel="noopener">Setting Up a Chef Client</a>.</p>
<pre class="brush: ruby; highlight: [12,13]; title: ; notranslate">
current_dir = File.dirname(__FILE__)
log_level                :info
log_location             STDOUT
node_name                &quot;rberger_test&quot;
client_key               &quot;#{current_dir}/rberger_test.pem&quot;
validation_client_name   &quot;install_wordpress-validator&quot;
validation_key           &quot;#{current_dir}/install_wordpress-validator.pem&quot;
chef_server_url          &quot;https://api.opscode.com/organizations/install_wordpress&quot;
cache_type               'BasicFile'
cache_options( :path =&gt; &quot;#{ENV['HOME']}/.chef/checksums&quot; )
cookbook_path            [&quot;#{current_dir}/../cookbooks&quot;, &quot;#{current_dir}/../site-cookbooks&quot;]
knife[:aws_access_key_id]     = &quot;Your Access Key&quot;
knife[:aws_secret_access_key] = &quot;Your Secret Access Key&quot;
</pre>
<p>You can test that your knife.rb is set up enough to access AWS by issuing the command:</p>
<pre class="brush: bash; title: ; notranslate">knife ec2 server list</pre>
<p>And you should see something like this (just the heading and no instances, unless you&#8217;ve launched some EC2 instances earlier):</p>
<pre class="brush: bash; title: ; notranslate">
Instance ID      Public IP        Private IP       Flavor           Image            Security Groups  State
</pre>
<h3>Get the Appropriate Cookbooks</h3>
<p>We&#8217;ll get cookbooks using the <a href="http://wiki.opscode.com/display/chef/Knife" target="_blank" rel="noopener">knife command</a> and the <a href="http://cookbooks.opscode.com/" target="_blank" rel="noopener">cookbooks.opscode.com</a> web service. We&#8217;ll be using the following cookbooks:</p>
<ul>
<li>chef</li>
<li>apache2</li>
<li>mysql</li>
<li>openssl</li>
<li>php</li>
<li>postfix</li>
<li>sudo</li>
<li>users</li>
<li>vsftpd</li>
<li>wordpress</li>
</ul>
<p>Use the knife command on your local development machine to pull down the cookbooks you need. The command we&#8217;re using (knife cookbook site vendor COOKBOOK) will automatically download the cookbooks and install them in the ~/chef-repo/cookbooks directory. It will also check them into your git repository as a vendor branch (Stay on the master branch at least until you have installed all the cookbooks).</p>
<pre><pre class="brush: bash; title: ; notranslate">
cd ~/chef-repo
knife cookbook site vendor chef -d
knife cookbook site vendor apache2 -d
knife cookbook site vendor mysql -d
knife cookbook site vendor openssl -d
knife cookbook site vendor php -d
knife cookbook site vendor postfix -d
knife cookbook site vendor sudo -d
knife cookbook site vendor users -d
knife cookbook site vendor vsftpd -d
knife cookbook site vendor wordpress -d
</pre>
<p>Those commands will download all the cookbooks and any other cookbook dependencies they may have into your ~/chef-repo/cookbooks directory and check each one in as a git branch in your repo. If you do an ls on your ~/chef-repo/cookbooks directory you should see:</p>
<pre><pre class="brush: plain; title: ; notranslate">
README.md       bluepill         couchdb       java           php            runit        users      xml
apache2         build-essential  daemontools   mysql          postfix        sudo         vsftpd     zlib
apt             chef             erlang        openssl        rabbitmq_chef  ucspi-tcp    wordpress
</pre>
<p>And if you do a git branch you should see master as the current branch plus a chef-vendor-&lt;cookbook&gt; branch for each of the cookbooks you installed:</p>
<pre><pre class="brush: plain; title: ; notranslate">
  chef-vendor-apache2
  chef-vendor-apt
  chef-vendor-bluepill
  chef-vendor-build-essential
  chef-vendor-chef
  chef-vendor-couchdb
  chef-vendor-daemontools
  chef-vendor-erlang
  chef-vendor-java
  chef-vendor-mysql
  chef-vendor-openssl
  chef-vendor-php
  chef-vendor-postfix
  chef-vendor-rabbitmq_chef
  chef-vendor-runit
  chef-vendor-sudo
  chef-vendor-ucspi-tcp
  chef-vendor-users
  chef-vendor-vsftpd
  chef-vendor-wordpress
  chef-vendor-xml
  chef-vendor-zlib
* master
</pre>
<p>If you ever want to update these standard cookbooks, you can simply re-run the <code>knife cookbook site vendor COOKBOOK</code> command.</p>
<h2>Create site-cookbooks to extend standard cookbooks</h2>
<p>It is standard practice to put the official cookbooks in the <em>~/chef-repo/cookbooks</em> directory, as we did in the previous step, while any cookbook overrides, extensions or custom cookbooks go into <em>~/chef-repo/site-cookbooks</em>. If you create a cookbook directory in <em>~/chef-repo/site-cookbooks</em> with the same name as a cookbook in <em>~/chef-repo/cookbooks</em>, the files, templates and/or recipes in the site-cookbooks copy will override the matching files, templates and/or recipes in the cookbook of the same name under <em>~/chef-repo/cookbooks</em>. We will now extend two of the cookbooks: sudo and wordpress.</p>
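<p>As a concrete sketch of this lookup rule (run against a scratch directory rather than a real chef-repo; the point is just that the override mirrors the relative path of the file it replaces):</p>

```ruby
require 'fileutils'
require 'tmpdir'

# Sketch of the cookbooks vs. site-cookbooks layout, in a throwaway directory
repo = Dir.mktmpdir
FileUtils.mkdir_p File.join(repo, 'cookbooks/sudo/templates/default')       # the vendored cookbook
FileUtils.mkdir_p File.join(repo, 'site-cookbooks/sudo/templates/default')  # our override: same relative path
FileUtils.touch   File.join(repo, 'site-cookbooks/sudo/templates/default/sudoers.erb')

# Chef will prefer the site-cookbooks copy of sudoers.erb over the vendored one
puts File.exist?(File.join(repo, 'site-cookbooks/sudo/templates/default/sudoers.erb'))
# => true
```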
<h3>Extend the Sudo cookbook so it&#8217;s suitable for EC2</h3>
<p>The standard sudo cookbook creates a sudoers file that requires a password to invoke sudo. Most EC2 environments do not allow password logins and require that you log in only with ssh keys. So we need to modify the sudo cookbook to create the sudoers file with the NOPASSWD flag set for all the users we want to have sudo powers. We just need to override the template file used in the standard sudo cookbook.</p>
<p>First, make a directory for the new template in your site-cookbooks directory:</p>
<pre><pre class="brush: plain; title: ; notranslate">
mkdir -p site-cookbooks/sudo/templates/default
</pre>
<p>Copy the following into site-cookbooks/sudo/templates/default/sudoers.erb:</p>
<pre><pre class="brush: plain; title: ; notranslate">
#
# /etc/sudoers
#
# Generated by Chef for &lt;%= node[:fqdn] %&gt;
#

Defaults !lecture,tty_tickets,!fqdn

# User privilege specification
root  ALL=(ALL) ALL

&lt;% @sudoers_users.each do |user| -%&gt;
&lt;%= user %&gt; ALL=(ALL) NOPASSWD:ALL
&lt;% end -%&gt;

# Members of the sysadmin group may gain root privileges
%sysadmin ALL=(ALL) NOPASSWD:ALL

&lt;% @sudoers_groups.each do |group| -%&gt;
# Members of the group '&lt;%= group %&gt;' may gain root privileges
%&lt;%= group %&gt; ALL=(ALL) NOPASSWD:ALL
&lt;% end -%&gt;

</pre>
<h3>Fix a bug in the latest version of the Standard Mysql Cookbook</h3>
<p>As I was writing this post, Opscode came out with a new version of the Mysql cookbook that seems to have a bug with Chef Client version 0.9.12. It may be fixed by the time you read this. If you are running Chef 0.9.12, check line 59 of cookbooks/mysql/recipes/client.rb. Change</p>
<pre><pre class="brush: plain; title: ; notranslate">
if platform_version.to_f &gt;= 5.0
</pre>
<p>to:</p>
<pre><pre class="brush: plain; title: ; notranslate">
if node.platform_version.to_f &gt;= 5.0
</pre>
<h3>Extend the WordPress cookbook to do some custom actions</h3>
<p>We need to do a few custom actions after we install wordpress, the main one being to change the ownership of the wordpress directory and most of its files to the user <em>blog</em>.</p>
<p>We need to add a user named <em>blog</em> whose home directory is the same as the wordpress directory. We will use this <em>blog</em> user to do automatic updates to wordpress. It will use vsftpd for secure ftp and will have access only to the wordpress directory.</p>
<p>We also need to add a swap file to the server. We could create a new cookbook for this, as it&#8217;s not really wordpress related, but because this is such a simple system, we will just add a new recipe to wordpress to handle these miscellaneous actions.</p>
<h4>Create a recipe to add the blog user and change ownership of the wordpress directory</h4>
<p>First make the directories in site-cookbooks for extending the wordpress cookbook:</p>
<pre><pre class="brush: plain; title: ; notranslate">
mkdir -p site-cookbooks/wordpress/recipes
mkdir -p site-cookbooks/wordpress/attributes
mkdir -p site-cookbooks/wordpress/templates/default
</pre>
<p>Create and edit the file site-cookbooks/wordpress/attributes/wordpress.rb and put the following in it (note: this attributes file must have a different name from the one used in the standard wordpress cookbook):</p>
<pre><pre class="brush: plain; title: ; notranslate">
default[:wordpress][:blog_updater][:username] = &quot;blog&quot;

::Chef::Node.send(:include, Opscode::OpenSSL::Password)

default[:wordpress][:blog_updater][:password] = secure_password
# hash set by recipe or manually using makepasswd
default[:wordpress][:blog_updater][:hash] = nil

# For creating the swap partition. Swap_size is in GB
default[:wordpress][:gb_swap_size] = 2
default[:wordpress][:swap_file] = &quot;/swap_file&quot;
</pre>
<p>This sets <em>[:wordpress][:blog_updater][:username]</em> to &#8220;blog&#8221;, the default username that will have the ability to use vsftpd to update wordpress and its plugins. We actually override this in the wordpress.rb role file, but we put a default here as well for good practice (i.e., the cookbook will work even if someone doesn&#8217;t override the value in a role).</p>
<p>The <em>::Chef::Node.send(:include, Opscode::OpenSSL::Password)</em> line is there so we can use the Chef mechanism to create an auto-generated password (<em>secure_password</em>). We then use that mechanism to set the default password for the <em>blog_updater</em>.</p>
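<p>For illustration, the MD5-crypt hash that the next recipe obtains by shelling out to makepasswd can also be produced in plain Ruby. This is a hypothetical stand-in, not part of the cookbook, and it assumes a glibc crypt(3) as on the Ubuntu target:</p>

```ruby
require 'securerandom'

# Stand-in for: echo PASSWORD | makepasswd --clearfrom=- --crypt-md5
password = 'big-secret'
salt = SecureRandom.alphanumeric(8)   # 8-character salt, similar to what makepasswd generates
hash = password.crypt("$1$#{salt}$")  # the "$1$" prefix selects MD5-crypt in glibc's crypt(3)
puts hash                             # something like $1$xxxxxxxx$...; suitable for /etc/shadow
```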
<p>Create and edit site-cookbooks/wordpress/recipes/blog_user.rb and put the following as its contents:</p>
<pre><pre class="brush: plain; title: ; notranslate">
# Get the password cryptographic hash for node[:wordpress][:blog_updater][:password]
package &quot;makepasswd&quot;
package &quot;libshadow-ruby1.8&quot;
if node[:wordpress][:blog_updater][:hash].nil? || node[:wordpress][:blog_updater][:hash].empty?
  cmd = &quot;echo #{node[:wordpress][:blog_updater][:password]} | /usr/bin/makepasswd --clearfrom=- --crypt-md5 |awk '{ print $2 }'&quot;
  ruby_block &quot;create_blog_updater_pw&quot; do
    block do
      node.set[:wordpress][:blog_updater][:hash] = `#{cmd}`.chomp
    end
    action :create
  end
end

# Create the blog_updater user with their home directory being the wordpress directory and the group as the same group as the Apache runtime group
user &quot;#{node[:wordpress][:blog_updater][:username]}&quot; do
  home &quot;#{node[:wordpress][:dir]}&quot;
  gid &quot;#{node[:apache][:user]}&quot;
  shell &quot;/bin/bash&quot;
  supports :manage_home =&gt; true
  unless node[:wordpress][:blog_updater][:hash].nil? || node[:wordpress][:blog_updater][:hash].empty?
    password &quot;#{node[:wordpress][:blog_updater][:hash]}&quot;
  end
end

# Change the ownership of the wordpress directory so that the blog user can update
execute &quot;chown wordpress home for blog user&quot; do
  cwd &quot;#{node[:wordpress][:dir]}&quot;
  user &quot;root&quot;
  command &quot;chown -R #{node[:wordpress][:blog_updater][:username]}:#{node[:apache][:user]} #{node[:wordpress][:dir]}&quot;
  not_if { node[:wordpress][:dir].nil? || node[:wordpress][:dir].empty? || (not File.exists?(node[:wordpress][:dir])) }
end
</pre>
<p>The above code will create the blog_user as a Linux user on the target system and set its home directory to be the wordpress directory. This is to make it work with vsftpd.</p>
<h4>Create a template to override the default wordpress apache config</h4>
<p>The standard WordPress cookbook sets the Apache ServerName to the FQDN of the EC2 Public DNS and sets the ServerAlias to the EC2 Private DNS FQDN. This is pretty useless. We would like the cookbook to set the ServerAlias to FQDNs based on our own DNS names. To do this without overriding the whole standard WordPress cookbook, we can override one template and name it: <em>site-cookbooks/wordpress/templates/default/wordpress.conf.erb</em>.</p>
<pre><pre class="brush: plain; title: ; notranslate">
&lt;VirtualHost *:80&gt;
  ServerName &lt;%= @params[:server_name] %&gt;
  ServerAlias &lt;%= @node[:wordpress][:server_aliases].join(&quot; &quot;) %&gt;
  DocumentRoot &lt;%= @params[:docroot] %&gt;

  &lt;Directory &lt;%= @params[:docroot] %&gt;&gt;
    Options FollowSymLinks
    AllowOverride FileInfo
    Order allow,deny
    Allow from all
  &lt;/Directory&gt;

  &lt;Directory /&gt;
    Options FollowSymLinks
    AllowOverride None
  &lt;/Directory&gt;

  LogLevel info
  ErrorLog &lt;%= @node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-error.log
  CustomLog &lt;%= @node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-access.log combined

  RewriteEngine On
  RewriteLog &lt;%= @node[:apache][:log_dir] %&gt;/&lt;%= @params[:name] %&gt;-rewrite.log
  RewriteLogLevel 0
&lt;/VirtualHost&gt;
</pre>
<p>The key change is the ServerAlias line, where <code>@node[:wordpress][:server_aliases]</code> now adds any aliases specified by this attribute, which we set in the wordpress.rb role file. We also change AllowOverride to FileInfo for the docroot.</p>
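<p>To see what that one changed line does, here is how ERB expands a ServerAlias line from the attribute (values taken from the wordpress role created below; the local variable name here is illustrative):</p>

```ruby
require 'erb'

# server_aliases as set in the role's override_attributes
server_aliases = %w(test.ibd.com wordpress-test.ibd.com)
template = 'ServerAlias <%= server_aliases.join(" ") %>'
puts ERB.new(template).result(binding)
# => ServerAlias test.ibd.com wordpress-test.ibd.com
```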
<h4>Create a recipe to add a swap file to the server</h4>
<p>The t1.micro instance only has 613MB of RAM. You can easily run out of that with a WordPress blog. So we have a recipe that adds a swap file system utilizing some space on the EBS volume. This recipe creates a 2GB file called /swap_file using dd and then uses the mkswap and swapon commands to turn that file into swap space. The recipe also updates the /etc/fstab file so that the swap file will be enabled again if the instance reboots.</p>
<p>Create and edit the file site-cookbooks/wordpress/recipes/add_swap.rb with the following content:</p>
<pre><pre class="brush: plain; title: ; notranslate">
mb_block_size = 100
count = (node[:wordpress][:gb_swap_size] * 1024) / mb_block_size

bash &quot;add_swap&quot; do
  user &quot;root&quot;
  not_if &quot;swapon -s | grep #{node[:wordpress][:swap_file]}&quot;
  code &lt;&lt;-EOH
    dd if=/dev/zero of=#{node[:wordpress][:swap_file]} bs=#{mb_block_size}M count=#{count}
    mkswap #{node[:wordpress][:swap_file]}
    swapon #{node[:wordpress][:swap_file]}
  EOH
end

# Re-render /etc/fstab so the swap file is enabled again after a reboot
template &quot;/etc/fstab&quot; do
  source &quot;fstab.erb&quot;
  variables(:swap_file =&gt; node[:wordpress][:swap_file])
end
</pre>
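<p>The arithmetic at the top of the recipe just converts the swap size in GB into a dd block count (the values shown are the defaults from the attributes file):</p>

```ruby
gb_swap_size  = 2     # node[:wordpress][:gb_swap_size]
mb_block_size = 100   # dd writes the swap file in 100MB blocks
count = (gb_swap_size * 1024) / mb_block_size
puts count            # dd writes 20 blocks of 100MB, i.e. a ~2GB /swap_file
# => 20
```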
<p>Create and edit the file site-cookbooks/wordpress/templates/default/fstab.erb and put the following content:</p>
<pre><pre class="brush: plain; title: ; notranslate">
# /etc/fstab: static file system information.
#
proc                   /proc           proc   nodev,noexec,nosuid     0       0
&lt;%= @swap_file %&gt;      none            swap   sw                      0       0
/dev/sda1              /               ext3   defaults                0       0
/dev/sda2              /mnt            auto   defaults,nobootwait,comment=cloudconfig 0       0
</pre>
<h3>Create WordPress Role</h3>
<p>This example will use a single role named <em>wordpress</em>. Use your favorite editor to create a file in your repo at roles/wordpress.rb with the following contents (substitute your domain for ibd.com, change the hostnames such as test and wordpress-test to names appropriate for your blog, and replace <em>rberger_test</em> with the userid you want to use to log into your server via ssh):</p>
<pre><pre class="brush: ruby; title: ; notranslate">
name &quot;wordpress&quot;
description &quot;Blog using wordpress&quot;
recipes &quot;apt&quot;, &quot;build-essential&quot;, &quot;chef::client_service&quot;, &quot;users::sysadmins&quot;,
        &quot;sudo&quot;, &quot;postfix&quot;, &quot;mysql::server&quot;, &quot;wordpress&quot;, &quot;wordpress::blog_user&quot;,
        &quot;wordpress::add_swap&quot;, &quot;vsftpd&quot;

override_attributes(
  &quot;postfix&quot; =&gt; {&quot;myhostname&quot; =&gt; &quot;test.ibd.com&quot;, &quot;mydomain&quot; =&gt; &quot;ibd.com&quot;},
  &quot;authorization&quot; =&gt; {
    &quot;sudo&quot; =&gt; {
      &quot;groups&quot; =&gt; [],
      &quot;users&quot; =&gt; [&quot;rberger_test&quot;, &quot;ubuntu&quot;]
    }
  },
  &quot;wordpress&quot; =&gt; {
     &quot;server_aliases&quot; =&gt; %w(test.ibd.com wordpress-test.ibd.com),
     &quot;version&quot; =&gt; &quot;3.0.4&quot;,
     &quot;checksum&quot; =&gt; &quot;c68588ca831b76ac8342d783b7e3128c9f4f75aad39c43a7f2b33351634b74de&quot;,
     &quot;blog_updater&quot; =&gt; {
       &quot;username&quot; =&gt; &quot;blog&quot;,
       &quot;password&quot; =&gt; &quot;big-secret&quot;
     }
   },
   &quot;vsftpd&quot; =&gt; {&quot;chroot_users&quot; =&gt; %w(blog)}
)
</pre>
<p>The recipes line determines which cookbooks/recipes (order is important) will be loaded by Chef when the chef-client runs on your new server.</p>
<ul>
<li><strong>apt: </strong>Configures various APT components on Debian-like systems.</li>
<li><strong>build-essential: </strong>Installs C compiler / build tools</li>
<li><strong>chef::client_service:</strong> Sets up a Chef client daemon to run periodically</li>
<li><strong>users::sysadmins:</strong> Creates users with ssh authorized keys. Requires a databag to be configured with users info</li>
<li><strong>sudo:</strong> Installs sudo and configures the /etc/sudoers file</li>
<li><strong>postfix: </strong>Installs and configures postfix for outgoing email</li>
<li><strong>mysql::server: </strong>Installs &amp; configures packages required for mysql servers</li>
<li><strong>wordpress:</strong> Installs and configures WordPress according to the instructions at http://codex.wordpress.org/Installing_WordPress</li>
<li><strong>wordpress::blog_user:</strong> Custom add-on recipe to add a user named &#8220;blog&#8221; to use with vsftpd for automatic wordpress and plugin updates</li>
<li><strong>wordpress::add_swap:</strong> Custom add-on recipe to add a swap partition to the instance</li>
<li><strong>vsftpd:</strong> Very Basic installation and configuration of vsftpd to support Secure (SSL) SFTP</li>
</ul>
<p>The <em>override_attributes</em> are used to configure various cookbooks.</p>
<ul>
<li><strong>postfix</strong> &#8211; Parameters for the postfix cookbook. Mainly sets the host and domain name to be meaningful</li>
<li><strong>authorization</strong> &#8211; Configures the sudo cookbook. Tells which users and groups should have sudo capability</li>
<li><strong>wordpress</strong> &#8211; Some of these override values in the base cookbook and others in the site-cookbook version
<ul>
<li><strong>server_aliases</strong> &#8211; Sets aliases for the blog name. Will be used as ServerAlias names in the apache config.</li>
<li><strong>version</strong> &#8211; The version of wordpress to download.</li>
<li><strong>checksum</strong> &#8211; The checksum of the tar image of the wordpress download.</li>
<li><strong>blog_updater</strong> &#8211; Info needed to create a user that will do auto updates to wordpress via vsftpd
<ul>
<li><strong>username</strong> &#8211; The username of the user</li>
<li><strong>password</strong> &#8211; The password to create for the user</li>
</ul>
</li>
</ul>
</li>
<li><strong>vsftpd</strong> &#8211; Sets which users should be allowed access via ftp with their home directory chroot&#8217;d (should be the same as the wordpress blog_updater username).</li>
</ul>
<h3>Upload the cookbooks and roles to Opscode Chef Platform</h3>
<p>Run the following commands while you are in ~/chef-repo. This will upload the wordpress role and all the cookbooks in your chef-repo to your account on the Opscode Chef Platform:</p>
<pre><pre class="brush: plain; title: ; notranslate">
knife role from file roles/wordpress.rb
knife cookbook upload -a
</pre>
<h3>Create the Users databag</h3>
<p>The <em>users cookbook</em> takes info from an Opscode Chef Server Data Bag named <em>users</em>, with an item for each user you want to create a login for. The standard Opscode <em>users cookbook</em> expects the users set up in the data bag to be in the group sysadmin and to have the ability to sudo and gain root powers.</p>
<p>We&#8217;ll need to create an item for each user you would like to have on your system. I suggest you make at least one for yourself. Here is the data bag I used for my setup. I don&#8217;t show the ssh key; you&#8217;ll have to substitute the public ssh key you will use to ssh to the server for <em>&lt;your public ssh key&gt;</em>. It&#8217;s a requirement that you have an ssh key, as described in the next section on the <em>sudo cookbook</em>.</p>
<p>Here is the JSON representation of my user data bag item. Create the directory ~/chef-repo/data_bags/users and put the following JSON in the file ~/chef-repo/data_bags/users/&lt;username&gt;.json (where &lt;username&gt; is the username you want to have on the target system). The id will be the name of the item in the data bag and will become your username (in this case <em>rberger_test</em>). You will also need to include the public ssh key you want associated with this user; you need to have created an ssh keypair (private and public) locally using something like ssh-keygen. You don&#8217;t really need the openid; you should be able to set it to an empty string (&#8220;&#8221;):</p>
<pre><pre class="brush: plain; title: rberger_test.json; notranslate">
{
  &quot;id&quot;: &quot;rberger_test&quot;,
  &quot;comment&quot;: &quot;Robert J. Berger&quot;,
  &quot;uid&quot;: 2001,
  &quot;groups&quot;: &quot;sysadmin&quot;,
  &quot;shell&quot;: &quot;/bin/bash&quot;,
  &quot;openid&quot;: &quot;rberger_test.myopenid.com&quot;,
  &quot;ssh_keys&quot;: &quot;&quot;
}
</pre>
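<p>A quick local sanity check before uploading (plain Ruby, not part of the cookbooks): the item must be valid JSON, and its id must contain only letters, digits, hyphens and underscores, since knife uses it as the data bag item name:</p>

```ruby
require 'json'

# The data bag item shown above, with placeholder values
raw = <<-JSON
{
  "id": "rberger_test",
  "comment": "Robert J. Berger",
  "uid": 2001,
  "groups": "sysadmin",
  "shell": "/bin/bash",
  "openid": "",
  "ssh_keys": ""
}
JSON

item = JSON.parse(raw)
# Chef requires data bag item ids to match this pattern
raise 'bad data bag item id' unless item['id'] =~ /\A[\-[:alnum:]_]+\z/
puts item['id']
# => rberger_test
```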
<p>You will need to create the users data bag and then upload your version of the user JSON (rberger_test.json in the example) to the Chef server with the following commands:</p>
<pre><pre class="brush: plain; title: ; notranslate">
knife data bag create users
knife data bag from file users data_bags/users/rberger_test.json
</pre>
<p>With Amazon EC2 instances it&#8217;s best to allow access only with ssh keys, not passwords. Since logins are protected by ssh keys and the users have no passwords, you need to make sure sudo is set up so that specific users (sysadmins) can invoke sudo without a password. The users cookbook creates such users based on the users data bag, but the standard sudo cookbook does not set up sudoers to work without passwords, which is why we overrode the sudoers.erb template earlier. Make sure you don&#8217;t deploy without this modification, as the default sudo cookbook will make it impossible to sudo on an EC2 instance after it runs.</p>
<h2>Configure AWS</h2>
<p>You can do most of the following using a GUI web app such as <a href="https://console.aws.amazon.com/ec2/home" target="_blank" rel="noopener">Amazon&#8217;s AWS console</a>, the Firefox plugin <a href="http://aws.amazon.com/developertools/609?_encoding=UTF8&amp;jiveRedirect=1" target="_blank" rel="noopener">ElasticFox</a>, other such GUI tools, or the command line <a href="http://aws.amazon.com/developertools/351?_encoding=UTF8&amp;queryArg=searchQuery&amp;x=0&amp;fromSearch=1&amp;y=0&amp;searchPath=developertools&amp;searchQuery=ec2-api-tools" target="_blank" rel="noopener">ec2-api-tools</a>. For now, we&#8217;ll show how to do this with the Amazon AWS Console.</p>
<h3>Set up Security Group</h3>
<p>Add a WordPress group that enables ssh, http and https. You should open at least http and https to all IP addresses (represented by Source IP: 0.0.0.0/0). You can decide to open up ssh to every IP or just to your own development network or host; in this example we&#8217;ll open it up to the world. Note: by default ping (ICMP) is not enabled, so you cannot ping your instance. You can enable ping by adding a line with any Connection Method, Protocol set to ICMP, From Port and To Port set to -1, and Source IP 0.0.0.0/0.</p>
<p><img decoding="async" loading="lazy" class="alignleft wp-image-686 size-medium" title="AWS Management Console Security Group" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group-300x223.jpg?resize=300%2C223" alt="" width="300" height="223" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=300%2C223&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=150%2C111&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?resize=400%2C298&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-Security-Group.jpg?w=816&amp;ssl=1 816w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></p>
<figure id="attachment_687" aria-describedby="caption-attachment-687" style="width: 300px" class="wp-caption aligncenter"><img decoding="async" loading="lazy" class="wp-image-687 size-medium" title="Enter Security Group Name" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name-300x171.jpg?resize=300%2C171" alt="" width="300" height="171" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=300%2C171&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=150%2C85&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?resize=400%2C228&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Enter-Security-Group-Name.jpg?w=541&amp;ssl=1 541w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-687" class="wp-caption-text">Enter the name and description of the Security Group</figcaption></figure>
<figure id="attachment_688" aria-describedby="caption-attachment-688" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-688 size-medium" title="Setting ports" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports-300x184.jpg?resize=300%2C184" alt="" width="300" height="184" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=300%2C184&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=150%2C92&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?resize=400%2C245&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Setting-ports.jpg?w=986&amp;ssl=1 986w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-688" class="wp-caption-text">Set the Ports that are to be enabled (Select the Connection Method, enter the Source IP, and click Save)</figcaption></figure>
<h3>Generate an SSH Key Pair for accessing your instance[s]</h3>
<p>You need to use the Amazon Key Pair generator to generate a key that will be used to make initial ssh connections to your new instances after they are created. You can do this on the AWS Management Console&#8217;s EC2 Key Pairs page:</p>
<figure id="attachment_694" aria-describedby="caption-attachment-694" style="width: 300px" class="wp-caption aligncenter"><img decoding="async" loading="lazy" class="wp-image-694 size-medium" title="AWS Management Console Key Pairs" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console-300x175.jpg?resize=300%2C175" alt="" width="300" height="175" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=300%2C175&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=150%2C87&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?resize=400%2C234&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/AWS-Management-Console.jpg?w=894&amp;ssl=1 894w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-694" class="wp-caption-text">Navigate to the Key Pairs page and click on Create Key Pair</figcaption></figure>
<p>You can name the key pair anything, but since you may want to use it to access this and future instances, you might want to name it something general like aws-east. Here we&#8217;re going to name it something more specific, aws-wordpress, just for this example.</p>
<figure id="attachment_695" aria-describedby="caption-attachment-695" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-695 size-medium" title="Create Key Pair naming" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming-300x183.jpg?resize=300%2C183" alt="" width="300" height="183" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?resize=300%2C183&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?resize=150%2C91&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Create-Key-Pair-naming.jpg?w=354&amp;ssl=1 354w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-695" class="wp-caption-text">Enter the name for the key</figcaption></figure>
<figure id="attachment_697" aria-describedby="caption-attachment-697" style="width: 300px" class="wp-caption alignnone"><img decoding="async" loading="lazy" class="wp-image-697 size-medium" title="keypair created message" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message-300x198.jpg?resize=300%2C198" alt="" width="300" height="198" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?resize=300%2C198&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?resize=150%2C99&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/keypair-created-message.jpg?w=353&amp;ssl=1 353w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-697" class="wp-caption-text">After the key pair is created, make sure to save the private key that is downloaded automatically</figcaption></figure>
<p>At this point a file named aws-wordpress.pem will have been downloaded by your browser. Make sure not to lose it! Put it into your ~/.ssh directory and chmod it to 0600:</p>
<pre><pre class="brush: plain; title: ; notranslate">
chmod 0600 ~/.ssh/aws-wordpress.pem
</pre>
<p>The final Key Pairs page on the AWS Management Console should look something like:</p>
<figure id="attachment_696" aria-describedby="caption-attachment-696" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-696 size-medium" title="Final Keypair display" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display-300x177.jpg?resize=300%2C177" alt="" width="300" height="177" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=300%2C177&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=150%2C88&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?resize=400%2C236&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Final-Keypair-display.jpg?w=887&amp;ssl=1 887w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-696" class="wp-caption-text">Final Key Pair Page</figcaption></figure>
<h2>Create the Instance and Bootstrap Chef on the Instance</h2>
<p>The Chef Knife command has the ability to launch EC2 (and other cloud) instances. This process automatically installs chef and all its dependencies after the instance is created. If all goes well, it then loads and executes your roles and cookbooks on the instance creating your server.</p>
<p>You can see what options are available to this command:</p>
<pre><pre class="brush: plain; title: ; notranslate">
# knife ec2 server create --help
knife ec2 server create (options)
    -Z, --availability-zone ZONE     The Availability Zone
    -A, --aws-access-key-id KEY      Your AWS Access Key ID
    -K SECRET,                       Your AWS API Secret Access Key
        --aws-secret-access-key
        --user-data USER_DATA_FILE   The EC2 User Data file to provision the instance with
        --bootstrap-version VERSION  The version of Chef to install
    -N, --node-name NAME             The Chef node name for your new node
        --server-url URL             Chef Server URL
    -k, --key KEY                    API Client Key
        --color                      Use colored output
    -c, --config CONFIG              The configuration file to use
        --defaults                   Accept default values for all questions
    -d, --distro DISTRO              Bootstrap a distro using a template
        --ebs-no-delete-on-term      Do not delete EBS volumn on instance termination
        --ebs-size SIZE              The size of the EBS volume in GB, for EBS-backed instances
    -e, --editor EDITOR              Set the editor to use for interactive commands
    -E, --environment ENVIRONMENT    Set the Chef environment
    -f, --flavor FLAVOR              The flavor of server (m1.small, m1.medium, etc)
    -F, --format FORMAT              Which format to use for output
    -i IDENTITY_FILE,                The SSH identity file used for authentication
        --identity-file
    -I, --image IMAGE                The AMI for the server
        --no-color                   Don't use colors in the output
    -n, --no-editor                  Do not open EDITOR, just accept the data as is
        --no-host-key-verify         Disable host key verification
    -u, --user USER                  API Client Username
        --prerelease                 Install the pre-release chef gems
        --print-after                Show the data after a destructive operation
        --region REGION              Your AWS region
    -r, --run-list RUN_LIST          Comma separated list of roles/recipes to apply
    -G, --groups X,Y,Z               The security groups for this server
    -S, --ssh-key KEY                The AWS SSH key id
    -P, --ssh-password PASSWORD      The ssh password
    -x, --ssh-user USERNAME          The ssh username
    -s, --subnet SUBNET-ID           create node in this Virtual Private Cloud Subnet ID (implies VPC mode)
        --template-file TEMPLATE     Full path to location of template to use
    -V, --verbose                    More verbose output. Use twice for max verbosity
    -v, --version                    Show chef version
    -y, --yes                        Say yes to all prompts for confirmation
    -h, --help                       Show this message
</pre>
<p>The actual command we&#8217;ll use is:</p>
<pre><pre class="brush: plain; title: ; notranslate">
knife ec2 server create --run-list 'role[wordpress]' --node-name test-wordpress --flavor t1.micro \
--identity-file ~/.ssh/aws-wordpress.pem --image ami-a2f405cb --groups wordpress \
--ssh-key aws-wordpress --ssh-user ubuntu --ebs-no-delete-on-term
</pre>
<h3>Details of knife command to launch instance</h3>
<p><strong>role[wordpress]: </strong>The role(s) given to this instance. More than one can be specified as an ordered, space-separated list of strings: &#8216;role[role0]&#8217; &#8216;role[role1]&#8217; &#8230;</p>
<p><strong>&#8211;node-name test-wordpress:</strong> The name of the instance. Used by Chef to name the Node and Client</p>
<p><strong>&#8211;flavor t1.micro:</strong> The <a href="http://aws.amazon.com/ec2/instance-types/" target="_blank" rel="noopener">EC2 Instance Type</a>. Here we are using the smallest type. This is the only one that is <a href="http://aws.amazon.com/free/" target="_blank" rel="noopener">&#8220;free&#8221;</a></p>
<p><strong>&#8211;identity-file ~/.ssh/aws-wordpress.pem:</strong> The path to the ssh private key that was downloaded earlier from the AWS Management Console. You could potentially not include this if you added the key to your ssh-agent.</p>
<p><strong>&#8211;image ami-a2f405cb: </strong>The Amazon Machine Image assigned to this instance. It is the image of the root file system for the instance and thus determines what OS and software is booted when the instance is started. In this case it is the Canonical Ubuntu 10.04 32-bit AMI. You can find the latest Ubuntu AMIs for each region at the top of the home page of <a href="http://alestic.com/" target="_blank" rel="noopener">Eric Hammond&#8217;s super helpful site</a>.</p>
<p><strong>&#8211;groups wordpress:</strong> The Security Group(s) to be assigned to this instance; in this case it&#8217;s &#8220;wordpress&#8221;. Multiple groups can be assigned as a comma-separated list.</p>
<p><strong>&#8211;ssh-key aws-wordpress: </strong>The name of the SSH Key Pair that was downloaded from the AWS Management Console</p>
<p><strong>&#8211;ssh-user ubuntu: </strong>The user name for ssh access. This AMI uses &#8220;ubuntu&#8221;. AMIs are usually configured to allow only a single user to ssh in by default; different AMIs use different names, such as root or ec2-user.</p>
<p><strong>&#8211;ebs-no-delete-on-term: </strong>By default, the EBS volume is deleted when the EC2 instance is terminated. Adding this flag makes the EBS volume persist after the EC2 instance has been terminated. You want this for your final deployed site so that if something goes wrong with the EC2 instance you will still have your EBS volume and can use it to create a new EC2 instance without losing your data. (That is the topic of another tutorial though!)</p>
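<p>Since you&#8217;ll likely run variations of this command as you iterate, it can help to build it up from shell variables. This is just an illustrative sketch (the variable names are my own; the values come from this walkthrough); echoing the assembled command lets you eyeball it before running it:</p>

```shell
# Values from this walkthrough; substitute your own
run_list='role[wordpress]'
node_name=test-wordpress
flavor=t1.micro
identity_file=~/.ssh/aws-wordpress.pem
image=ami-a2f405cb
groups=wordpress
ssh_key=aws-wordpress
ssh_user=ubuntu

# Assemble the same knife invocation shown above; echo it for review,
# then run it with: eval "$cmd"
cmd="knife ec2 server create --run-list '$run_list' --node-name $node_name \
--flavor $flavor --identity-file $identity_file --image $image --groups $groups \
--ssh-key $ssh_key --ssh-user $ssh_user --ebs-no-delete-on-term"
echo "$cmd"
```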
<h3>Successful launch results</h3>
<p>After you fire off the knife ec2 server create command, you&#8217;ll see something like:</p>
<pre><pre class="brush: plain; title: ; notranslate">
[WARN] Fog::AWS::EC2#new is deprecated, use Fog::AWS::Compute#new instead (/Library/Ruby/Gems/1.8/gems/chef-0.9.12/lib/chef/knife/ec2_server_create.rb:145:in `run')
Instance ID: i-d10ae5bd
Flavor: t1.micro
Image: ami-a2f405cb
Availability Zone: us-east-1b
Security Groups: wordpress
SSH Key: aws-wordpress

Waiting for server..............
Public DNS Name: ec2-184-73-44-17.compute-1.amazonaws.com
Public IP Address: 184.73.44.17
Private DNS Name: domU-12-31-39-10-60-17.compute-1.internal
Private IP Address: 10.198.99.229

Waiting for sshd...done
INFO: Bootstrapping Chef on ec2-184-73-44-17.compute-1.amazonaws.com
</pre>
<p>That will be followed by loads of debugging info as the knife command bootstraps chef and its related packages and gems. This can go on for 10 to 20 minutes. Eventually you&#8217;ll see something along the lines of:</p>
<pre><pre class="brush: plain; title: ; notranslate">
Instance ID: i-d10ae5bd
Flavor: t1.micro
Image: ami-a2f405cb
Availability Zone: us-east-1b
Security Groups: wordpress
SSH Key: aws-wordpress
Public DNS Name: ec2-184-73-44-17.compute-1.amazonaws.com
Public IP Address: 184.73.44.17
Private DNS Name: domU-12-31-39-10-60-17.compute-1.internal
Private IP Address: 10.198.99.229
Run List: role[wordpress]
</pre>
<p>You can look just above this block to confirm that Chef finished running the wordpress-related cookbooks successfully. If you don&#8217;t see any errors within a page or so above it, all is well. The last few lines should look something like:</p>
<pre><pre class="brush: plain; title: ; notranslate">
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Chef Run complete in 10.945359 seconds
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: cleaning the checksum cache
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Running report handlers
[Mon, 03 Jan 2011 07:23:34 +0000] INFO: Report handlers complete
</pre>
<p>If there are errors, you&#8217;ll have to debug your cookbooks which is beyond the scope of this post.</p>
<p>Now you should be able to log into your instance either as the default ubuntu user or as the user created by the wordpress role and the Users Databag (rberger_test in this example):</p>
<pre><pre class="brush: plain; title: ; notranslate">
# Using the ubuntu user and an explicit ssh key
ssh -i ~/.ssh/aws-wordpress.pem ubuntu@ec2-184-73-44-17.compute-1.amazonaws.com

# Using the user created by the cookbook and a key that is already on your ssh-agent
ssh rberger_test@ec2-184-73-44-17.compute-1.amazonaws.com
</pre>
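<p>If you get tired of typing the long Public DNS name, an ssh_config stanza saves keystrokes. This is optional and purely illustrative (the alias name is made up; the hostname and key path are from this example); the sketch writes to a temp file so it is safe to try as-is, but in practice you would append the stanza to ~/.ssh/config:</p>

```shell
# Write an ssh host alias; point cfg at ~/.ssh/config for real use
cfg=$(mktemp)
cat >> "$cfg" <<'EOF'
Host test-wordpress
    HostName ec2-184-73-44-17.compute-1.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/aws-wordpress.pem
EOF
cat "$cfg"
# With this in ~/.ssh/config you could just run: ssh test-wordpress
```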
<h2>Configure DNS to have preferred FQDNs point to your instance</h2>
<p>You can access your site using the Amazon Public DNS name, but in general that&#8217;s not what you want. You probably want to reach it via a URL like <em>http://www.mydomain.com</em>. To do this you must configure your DNS to add a CNAME mapping your FQDN to the Amazon Public DNS name. How this is done is specific to your DNS service provider. The bottom line is that you want a CNAME, not an A record (i.e., an alias of your FQDN for the Amazon Public DNS name, not an A record pointing at the Amazon IP address). There are some issues with using an A record with Amazon. You probably won&#8217;t see them in a simple situation such as hosting a single instance, but once you have many instances that need to talk to each other, using the CNAME will make life easier.</p>
<h2>Installing your WordPress Blog</h2>
<p>At this point you should be able to access your new instance via http. The initial screen will be the WordPress setup dialog. You should be able to reach it using the Amazon Public DNS name or any CNAME aliases you created and also added to the override attribute (wordpress =&gt; server_aliases) in the wordpress.rb role file. You should see something like:</p>
<figure id="attachment_717" aria-describedby="caption-attachment-717" style="width: 266px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-717 size-medium" title="WordPress › Installation" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation-266x300.jpg?resize=266%2C300" alt="" width="266" height="300" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=266%2C300&amp;ssl=1 266w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=133%2C150&amp;ssl=1 133w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?resize=400%2C449&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/WordPress-›-Installation.jpg?w=775&amp;ssl=1 775w" sizes="(max-width: 266px) 100vw, 266px" data-recalc-dims="1" /><figcaption id="caption-attachment-717" class="wp-caption-text">WordPress startup installation page</figcaption></figure>
<p>It is possible to move an existing WordPress Blog to this new instance but that is beyond the scope of this post.</p>
<h2>Happily Ever After</h2>
<p>By default, the chef-client runs every half hour on the instance. If you change any of the cookbooks and push them up to the Opscode Chef Server, those changes will be propagated to the instance the next time the chef-client runs.</p>
<p>This is the way to maintain the server: by updating or adding cookbooks, you define the state of the server, and the server converges to that state when the chef-client runs. The inverse is also true: if you change something on the server directly and the service you changed is managed by Chef, your direct changes could be reverted the next time the chef-client runs.</p>
<p>You shouldn&#8217;t need to, but you can disable the chef-client from running automatically by running the following command while ssh&#8217;d into the instance:</p>
<pre><pre class="brush: plain; title: ; notranslate">
sudo /etc/init.d/chef-client stop
</pre>
<p>That will be reset (i.e., automatic chef-client runs will be re-enabled) if you reboot. You can permanently disable the automatic running of chef-client by running the following commands while ssh&#8217;d into the instance:</p>
<pre><pre class="brush: plain; title: ; notranslate">
cd /etc/init.d
sudo update-rc.d -f chef-client remove
</pre>
<h3>Using the WordPress Automatic Upgrade Mechanism</h3>
<p>At this point you should be able to use your wordpress blog as normal. You should be able to use the automatic update feature of WordPress to update WordPress itself and the plugins. When you are asked to supply the Connection Information, put in:</p>
<ul>
<li><span style="font-size: 10px;"><strong>Hostname</strong>: The Public FQDN of the host (ether the EC2 Public DNS Name or one of the DNS CNAMEs you set up)</span></li>
<li><span style="font-size: 10px;"><strong>FTP Username</strong>: &#8220;blog&#8221; (or whatever you set node[:wordpress][:blog_updater][:username] in the wordpress.rb role file)</span></li>
<li><span style="font-size: 10px;"><strong>FTP Password</strong>: &#8220;big-secret&#8221; (or whatever you <strong>SHOULD</strong> have set node[:wordpress][:blog_updater][:password] to in the wordpress.rb role file)</span></li>
<li><span style="font-size: 10px;"><strong>Connection Type</strong>: FTPS (SSL)</span></li>
</ul>
<p>For instance, for the Plugin Update Page:</p>
<figure id="attachment_746" aria-describedby="caption-attachment-746" style="width: 300px" class="wp-caption alignleft"><img decoding="async" loading="lazy" class="wp-image-746 size-medium" title="Upgrade Plugins ‹ WordPress Test — WordPress" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress-300x198.jpg?resize=300%2C198" alt="" width="300" height="198" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=300%2C198&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=150%2C99&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?resize=400%2C265&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/10/Upgrade-Plugins-‹-Wordpress-Test-—-WordPress.jpg?w=771&amp;ssl=1 771w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /><figcaption id="caption-attachment-746" class="wp-caption-text">Upgrade Plugins Connection Information</figcaption></figure>
<p>That should work and be secure using the vsftpd server that we installed automatically.</p>
<p>Hopefully all will work well for you. I will try to answer questions but can&#8217;t guarantee a quick response here. A great resource is the Opscode Chef IRC channel <a href="irc://irc.freenode.net/chef" target="_blank" rel="noopener">irc.freenode.net #chef</a>. And of course the <a href="http://wiki.opscode.com/" target="_blank" rel="noopener">Opscode Chef Wiki</a> and the <a href="http://help.opscode.com/home" target="_blank" rel="noopener">Opscode Support Site</a>.</p>
<h3>Source Code at Github</h3>
<p>You can get all the source for this at <a href="https://github.com/rberger/ibd-wordpress-repo" target="_blank" rel="noopener">https://github.com/rberger/ibd-wordpress-repo</a></p><p>The post <a href="https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/">Deploy WordPress to Amazon EC2 Micro Instance with Opscode Chef</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/deploy-wordpress-to-amazon-ec2-micro-instance-with-opscode-chef/feed/</wfw:commentRss>
			<slash:comments>28</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">599</post-id>	</item>
		<item>
<title>Modifying Jets3t S3 GUI tool to work with Walrus (Eucalyptus S3)</title>
		<link>https://www.ibd.com/howto/getting-the-jet3t-s3-gui-tool-to-work-with-walrus-eucalyptus-s3/</link>
					<comments>https://www.ibd.com/howto/getting-the-jet3t-s3-gui-tool-to-work-with-walrus-eucalyptus-s3/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Fri, 18 Jun 2010 01:37:18 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[Eucalyptus]]></category>
		<category><![CDATA[S3]]></category>
		<category><![CDATA[ubuntu]]></category>
		<category><![CDATA[Walrus]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=582</guid>

<description><![CDATA[<p>Jets3t (pronounced &#8220;jet-set&#8221;) is a free, open-source Java toolkit and application suite for the Amazon Simple Storage Service (Amazon S3) and Amazon CloudFront content delivery network. For some reason almost all the standard tools for accessing S3 will not easily work with the Eucalyptus equivalent to S3 called Walrus. I am used to using the excellent S3Fox add-on for Firefox&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/getting-the-jet3t-s3-gui-tool-to-work-with-walrus-eucalyptus-s3/">Modifying Jets3t S3 GUI tool to work with Walrus (Eucalyptus S3)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
<content:encoded><![CDATA[<p><a href="http://jets3t.s3.amazonaws.com/index.html" target="_blank" rel="noopener">Jets3t</a> (pronounced &#8220;jet-set&#8221;) is a free, open-source Java toolkit and application suite for the <a href="http://www.amazon.com/s3" target="_blank" rel="noopener">Amazon Simple Storage Service (Amazon S3)</a> and <a href="http://www.amazon.com/cloudfront" target="_blank" rel="noopener">Amazon CloudFront</a> content delivery network. For some reason almost all the standard tools for accessing S3 will not easily work with the <a href="http://open.eucalyptus.com/" target="_blank" rel="noopener">Eucalyptus</a> equivalent to S3 called <a href="http://open.eucalyptus.com/wiki/EucalyptusWalrusInteracting_v1.6" target="_blank" rel="noopener">Walrus</a>. I am used to using the excellent S3Fox add-on for Firefox and wanted some GUI tool that had similar capabilities. I was able to piece together how to get Jets3t to work with Eucalyptus Walrus. This article puts it all together in one place.</p>
<p>The basic build procedure is based on the <a href="http://bitbucket.org/jmurty/jets3t/wiki/Build_Instructions" target="_blank" rel="noopener">instructions</a> for downloading and building Jet3t from Source. I got the hints for what to change to make things work with Walrus from the <a href="http://groups.google.com/group/jets3t-users/browse_thread/thread/49e1296ed110f0ab/6872154bfd96e8b8" target="_blank" rel="noopener">Jets3t Users Forum article <em>eucalyptus walrus</em></a>. And an almost unrelated <a href="http://getsatisfaction.com/cloudera/topics/hadoop_in_eucalyptus_private_cloud" target="_blank" rel="noopener">article <em>hadoop in eucalyptus private cloud</em></a> in the Cloudera Support Forum. Search on the page for the section that says <em>my jets3t file has these values</em>.</p>
<h2>Prerequisites</h2>
<p>These instructions assume you are on Ubuntu (I had 10.04 Lucid), though it should be easy to adapt them to any platform that supports Java.</p>
<p>You&#8217;ll need to install</p>
<ul>
<li>Mercurial (hg)</li>
<li>Sun Java 6 (I couldn&#8217;t get it to work with openjdk-6)</li>
<li>Ant</li>
</ul>
<p>You can use the command:</p>
<pre><code>sudo apt-get install mercurial sun-java6-jdk ant1.8</code></pre>
<h2>Get the Source of Jets3t with Mercurial</h2>
<p>cd to wherever you want the source directory to live, then use the following commands to clone a local Mercurial repository of the source and cd into the repository directory <code>jets3t</code>:</p>
<pre><code>hg clone http://bitbucket.org/jmurty/jets3t/
cd jets3t
</code></pre>
<h2>Edit files to make jets3t work with Walrus</h2>
<p>Use your favorite editor (emacs of course <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" />) to make the following edits.</p>
<h3>Edit <code>LoginCredentialsPanel.java</code> to not check the length of login credentials</h3>
<p>Jets3t enforces the length of the Access Key and Access Secret Key. But some newer versions of Walrus do not fit the assumptions. This edit eliminates the checks.</p>
<p>This should be around line 230 of <code>src/org/jets3t/apps/cockpit/gui/LoginCredentialsPanel.java</code>. Comment out the lines that have <code>errors.add</code>. It should look something like the following after you comment out the two lines with <code>errors.add</code>.</p>
<pre><code>    if (getAWSAccessKey().length() == 20) {
        // Correct length for AWS Access Key
    } else if (getAWSAccessKey().length() == 22) {
        // Correct length for Eucalyptus ID
    } else {
        // errors.add("Access Key must have 20 or 22 characters");
    }

    if (getAWSSecretKey().length() == 40) {
        // Correct length for AWS Secret Key
    } else if (getAWSSecretKey().length() == 38) {
        // Correct length for Eucalyptus Secret Key
    } else {
        //  errors.add("Secret Key must have 40 or 38 characters");
    }
</code></pre>
<h3>Edit <code>jets3t.properties</code> to use parameters for accessing Walrus instead of AWS S3</h3>
<p>You&#8217;ll want to set the following values for Walrus access in <code>jets3t/configs/jets3t.properties</code>. <code>s3service.s3-endpoint</code> should be set to the fully qualified domain name of the host that runs Walrus (I believe you could use an IP address). I had to set <code>s3service.https-only</code> to false since I don&#8217;t know what it would take to set up SSL/TLS between the Java environment and the Walrus environment. If you do, let me know!</p>
<pre><code>
s3service.https-only=false
s3service.s3-endpoint=your_walrus_host_name
s3service.s3-endpoint-http-port=8773
s3service.s3-endpoint-https-port=8443
s3service.disable-dns-buckets=true
s3service.s3-endpoint-virtual-path=/services/Walrus
</code></pre>
<h3>Optionally edit <code>build.properties</code></h3>
<p>Edit this if you want to mark the build version in a way that distinguishes it from the standard version, or to change the debug level.<br />
I changed the version to <code>version=0.7.4-runa</code>.</p>
<h2>Build Jets3t with your changes to work with Walrus</h2>
<p>The following uses the default target <em>dist</em>, which will create a target tree in the top-level directory <em>dist</em>.</p>
<pre><code>ant
</code></pre>
<p>If that works (it will say <em>BUILD SUCCESSFUL</em> at the end), there will be a directory <em>dist/jets3t-0.7.4-runa</em> (or whatever you set the version value to in build.properties). You should be able to:</p>
<pre><code>cd dist/jets3t-0.7.4-runa/bin
bash cockpit.sh &amp;
</code></pre>
<p>This should start up an application window and a window for you to enter your Eucalyptus credentials. Select the <em>Direct Login</em> tab and enter your Eucalyptus Access Key and Access Secret Key.</p>
<p><a href="https://i0.wp.com/blog2.ibd.com/wp-content/uploads/2010/06/Cockpit-Login-2.jpg"><img decoding="async" loading="lazy" class="alignleft wp-image-586 size-medium" title="Jets3t Cockpit Login" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/Cockpit-Login-2-300x239.jpg?resize=300%2C239" alt="Jets3t Cockpit Login" width="300" height="239" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/Cockpit-Login-2.jpg?resize=300%2C239&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/Cockpit-Login-2.jpg?w=499&amp;ssl=1 499w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p>
<p>After you click ok, you should see your Walrus buckets!</p>
<p><a href="https://i0.wp.com/blog2.ibd.com/wp-content/uploads/2010/06/JetS3t-Cockpit-_-admin.jpg"><img decoding="async" loading="lazy" class="wp-image-585 size-medium alignright" title="JetS3t Cockpit" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/JetS3t-Cockpit-_-admin-300x208.jpg?resize=300%2C208" alt="JetS3t Cockpit" width="300" height="208" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/JetS3t-Cockpit-_-admin.jpg?resize=300%2C208&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2010/06/JetS3t-Cockpit-_-admin.jpg?w=799&amp;ssl=1 799w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></a></p>
<p>Once it all works you can use the <em>Store Credentials</em> option on the login window to store your credentials on Walrus and use a login/password to access Walrus. But that is optional.</p><p>The post <a href="https://www.ibd.com/howto/getting-the-jet3t-s3-gui-tool-to-work-with-walrus-eucalyptus-s3/">Modifying Jets3t S3 GUI tool to work with Walrus (Eucalyptus S3)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/getting-the-jet3t-s3-gui-tool-to-work-with-walrus-eucalyptus-s3/feed/</wfw:commentRss>
			<slash:comments>4</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">582</post-id>	</item>
		<item>
		<title>Copy an EBS AMI image to another Amazon EC2 Region</title>
		<link>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/</link>
					<comments>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 15 Mar 2010 08:45:24 +0000</pubDate>
				<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=551</guid>

					<description><![CDATA[<p>Since I&#8217;ve already created an image I liked in the us-west-1 region, I would like to reuse it in other regions. Turns out there is no mechanism within Amazon EC2 to do that. (See How do I launch an Amazon EBS volume from a snapshot across Regions?). I did find one post that talked a bit about how it can&#8230;</p>
<p>The post <a href="https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/">Copy an EBS AMI image to another Amazon EC2 Region</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>Since I&#8217;ve already created an image I liked in the us-west-1 region, I would like to reuse it in other regions. Turns out there is no mechanism within Amazon EC2 to do that. (See <a href="http://docs.amazonwebservices.com/AWSEC2/latest/DeveloperGuide/index.html?FAQ_Regions_Availability_Zones.html" target="_self">How do I launch an Amazon EBS volume from a snapshot across Regions?</a>). I did find <a href="http://citizen428.net/archives/420-Move-EC2-AMIs-between-regions.html" target="_self">one post</a> that talked a bit about how it can be done &#8220;out of band&#8221;. So I figured I would give that a try instead of doing a full recreation in the new region.</p>
<h2>Prepare the Source Instance and Volume</h2>
<h3>Start an instance in the source region</h3>
<p>Here I&#8217;ll start an instance in us-west-1a, where I have the EBS image I want to copy. In this case I&#8217;ll use the image I want to copy, but it could be any image as long as it&#8217;s in the same region as the EBS AMI image to be copied. We are going to use the instance info to figure out some parameters for creating the new AMI, so if the source instance is not the same AMI as the one you are copying, you will need to supply some of those parameters yourself.</p>
<p>You can use a tool like ElasticFox to create the instances below. Here we&#8217;ll do it with the command line tools.</p>
<h3>Set some Shell source variables on host machine</h3>
<p>To make these instructions work like a cookbook, we&#8217;ll set some shell variables once; all the subsequent instructions use the variables, so you can just cut and paste them into your shell.</p>
<pre>src_keypair=id_runa-staging-us-west
src_fullpath_keypair=~/.ssh/runa/id_runa-staging-us-west
src_availability_zone=us-west-1a
src_instance_type=m1.large
src_region=us-west-1
src_origin_ami=ami-1f4e1f5a
src_device=/dev/sdh
src_dir=/src
src_user=ubuntu</pre>
<h3>Start up the source instance and capture the instanceid</h3>
<pre>src_instanceid=$(ec2-run-instances \
  --key $src_keypair \
  --availability-zone $src_availability_zone \
  --instance-type $src_instance_type \
  $src_origin_ami \
  --region $src_region  | \
  egrep ^INSTANCE | cut -f2)
echo "src_instanceid=$src_instanceid"

# Wait for the instance to move to the “running” state
while src_public_fqdn=$(ec2-describe-instances --region $src_region "$src_instanceid" | \
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $src_public_fqdn; do echo -n .; sleep 1; done
echo src_public_fqdn=$src_public_fqdn</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo src_public_fqdn=$src_public_fqdn
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com</pre>
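<p>The while loop above is just polling: ec2-describe-instances is called repeatedly until field 4 of the tab-separated INSTANCE line (the public FQDN) is non-empty. Reduced to a self-contained sketch with a stub standing in for the EC2 call (the instance ID and FQDN are the sample values from this walkthrough), the pattern looks like this:</p>

```shell
# Stub for ec2-describe-instances: empty output for the first two calls,
# then an INSTANCE line with the public FQDN in field 4 (tab-separated).
# A file holds the call count because each call runs in a subshell.
tries_file=$(mktemp)
echo 0 > "$tries_file"
fake_describe() {
  n=$(($(cat "$tries_file") + 1))
  echo "$n" > "$tries_file"
  if [ "$n" -ge 3 ]; then
    printf 'INSTANCE\ti-fb0804be\tami-1f4e1f5a\tec2-184-72-2-93.us-west-1.compute.amazonaws.com\n'
  fi
}

# Same shape as the real loop: keep polling while the FQDN field is empty
while src_public_fqdn=$(fake_describe | egrep ^INSTANCE | cut -f4) && \
  test -z "$src_public_fqdn"; do echo -n .; done
echo
echo "src_public_fqdn=$src_public_fqdn"
rm -f "$tries_file"
```

Against the real API you would keep the `sleep 1` from the loop above so you don't hammer the endpoint.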
<h3>Create a volume from the EBS AMI snapshot</h3>
<p>Normally when starting an EBS AMI instance, EC2 automatically creates a volume from the snapshot associated with the AMI. Here we create the volume from the snapshot ourselves.</p>
<pre># Get the volume id
ec2-describe-instances --region $src_region "$src_instanceid" &gt; /tmp/src_instance_info
src_volumeid=$(egrep ^BLOCKDEVICE /tmp/src_instance_info | cut -f3); echo $src_volumeid
# Now get the snapshot id from the volume id
ec2-describe-volumes --region $src_region $src_volumeid | egrep ^VOLUME &gt; /tmp/volume_info
src_snapshotid=$(cut -f4 /tmp/volume_info)
echo $src_snapshotid
src_size=$(cut -f3 /tmp/volume_info)
echo $src_size
# Create a new volume from the snapshot
src_volumeid=$(ec2-create-volume --region $src_region --snapshot $src_snapshotid -z $src_availability_zone | egrep ^VOLUME | cut -f2)
echo $src_volumeid</pre>
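<p>ec2-describe-volumes prints a tab-separated VOLUME line; assuming the layout of the legacy ec2-api-tools (field 2 the volume id, field 3 the size in GB, field 4 the snapshot id), you can sanity-check which cut field holds what against a sample line before running this for real (the ids here are made up):</p>

```shell
# A sample VOLUME line in the shape ec2-describe-volumes prints (tab-separated)
printf 'VOLUME\tvol-6e7fee06\t15\tsnap-12345678\tus-west-1a\tavailable\t2010-03-14T09:02:58+0000\n' \
  > /tmp/volume_info_sample
size=$(cut -f3 /tmp/volume_info_sample)
snapshotid=$(cut -f4 /tmp/volume_info_sample)
echo "size=$size snapshotid=$snapshotid"
```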
<h3>Mount the EBS Image of the AMI you want to copy</h3>
<p>Now we&#8217;ll attach the volume we just created to the running source instance (the actual mount comes later). In this case it&#8217;s built from the same image we launched, but it doesn&#8217;t have to be the same image or even the same architecture.</p>
<pre>ec2-attach-volume --region $src_region $src_volumeid -i $src_instanceid -d $src_device</pre>
<p>You should see something like:</p>
<pre>ATTACHMENT	vol-6e7fee06	i-fb0804be	/dev/sdh	attaching	2010-03-14T09:02:58+0000</pre>
<h2>Prepare the Destination Instance and Volume</h2>
<h3>Set some Shell destination variables on host machine</h3>
<p>You&#8217;ll want to tune these to your needs. This example makes the destination size the same as the source. You could make the destination an arbitrary size as long as it fits the source data.</p>
<pre>dst_keypair=runa-production-us-east
dst_fullpath_keypair=~/.ssh/runa/id_runa-production-us-east
dst_availability_zone=us-east-1b
dst_instance_type=m1.large
dst_region=us-east-1
dst_origin_ami=ami-7d43ae14
dst_size=$src_size
dst_device=/dev/sdh
dst_dir=/dst
dst_user=ubuntu</pre>
<h3>Start up the destination instance and capture the dst_instanceid</h3>
<pre>dst_instanceid=$(ec2-run-instances \
  --key $dst_keypair \
  --availability-zone $dst_availability_zone \
  --instance-type $dst_instance_type \
  $dst_origin_ami \
  --region $dst_region  | \
  egrep ^INSTANCE | cut -f2)
echo "dst_instanceid=$dst_instanceid"

# Wait for the instance to move to the “running” state
while dst_public_fqdn=$(ec2-describe-instances --region $dst_region "$dst_instanceid" | \
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $dst_public_fqdn; do echo -n .; sleep 1; done
echo dst_public_fqdn=$dst_public_fqdn</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo dst_public_fqdn=$dst_public_fqdn
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com</pre>
<h3>Create an empty destination volume</h3>
<pre>dst_volumeid=$(ec2-create-volume --region $dst_region --size $dst_size -z $dst_availability_zone | egrep ^VOLUME | cut -f2)
echo $dst_volumeid</pre>
<h3>Attach the empty destination volume</h3>
<p>Now we&#8217;ll attach the empty destination volume to the running destination instance (we&#8217;ll make a filesystem on it and mount it in a later step):</p>
<pre>ec2-attach-volume --region $dst_region $dst_volumeid -i $dst_instanceid -d $dst_device</pre>
<p>You should see something like:</p>
<pre>ATTACHMENT	vol-450ed02c	i-65be1f0e	/dev/sdh	attaching	2010-03-14T09:39:20+0000</pre>
<h2>Copy the data from the Source Volume to the Destination Volume</h2>
<h3>Copy your credentials to the source machine</h3>
<p>We&#8217;re going to use rsync to copy from the source to the destination, tunneled through ssh. This eliminates any issues with EC2 security groups, but it does mean you have to copy to the source machine an ssh private key that can access the destination machine via ssh.</p>
<pre>scp -i $src_fullpath_keypair $dst_fullpath_keypair ${src_user}@${src_public_fqdn}:.ssh</pre>
<h3>Mount the source and destination volumes on their instances</h3>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mkdir -p $src_dir
ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo mount $src_device $src_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkfs.ext3 -F $dst_device
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mkdir -p $dst_dir
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mount $dst_device $dst_dir</pre>
<h3>Get the FQDN of the Amazon internal address of the destination machine</h3>
<p>We&#8217;re assuming that the destination instance was launched from the us-east equivalent of the us-west source base AMI, so we can use its kernel and ramdisk when we build the new AMI later.</p>
<pre>ec2-describe-instances --region $dst_region "$dst_instanceid" &gt; /tmp/dst_instance_info
dst_internal_fqdn=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f5); echo $dst_internal_fqdn
dst_kernel=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f13); echo $dst_kernel
dst_ramdisk=$(egrep ^INSTANCE /tmp/dst_instance_info | cut -f14) ;echo $dst_ramdisk</pre>
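<p>As a sanity check on those cut field numbers, here&#8217;s a runnable sketch. The INSTANCE line below is a shortened, hypothetical sample of the tab-separated output of ec2-describe-instances (the field values are made up), and the same egrep/cut pipeline pulls out the internal FQDN, kernel, and ramdisk fields.</p>

```shell
# Hypothetical sample of a tab-separated INSTANCE line from
# ec2-describe-instances; the field values are made up for illustration.
sample=$(printf 'INSTANCE\ti-65be1f0e\tami-ab15f6c2\tec2-184-73-71-160.compute-1.amazonaws.com\tip-10-1-2-3.ec2.internal\trunning\tmy-keypair\t0\t\tm1.large\t2010-03-14T09:00:00+0000\tus-east-1a\taki-b51cf9dc\tari-b31cf9da')

# Same pipeline as above: field 5 is the internal FQDN,
# field 13 the kernel (aki), field 14 the ramdisk (ari).
dst_internal_fqdn=$(printf '%s\n' "$sample" | egrep '^INSTANCE' | cut -f5)
dst_kernel=$(printf '%s\n' "$sample" | egrep '^INSTANCE' | cut -f13)
dst_ramdisk=$(printf '%s\n' "$sample" | egrep '^INSTANCE' | cut -f14)
echo "$dst_internal_fqdn $dst_kernel $dst_ramdisk"
```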
<h2>Commands to run on the source machine</h2>
<p>You could do the rsync by logging into the source machine and running the commands below. I tried to drive this entirely with remote ssh commands, but the first ssh from the source to the destination has to be interactively authenticated, which was a blocker for me. You can log into the source machine and then sudo ssh to the destination machine (it has to be sudo ssh, since the rsync has to run with sudo and the keys and known hosts are stored separately for the sudo user and the regular user).<br />
I&#8217;ll show both ways.<br />
Here&#8217;s how you can ssh to the source machine:</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}</pre>
<h3>Set up some shell variables on the source machine shell environment</h3>
<pre># This is the key you just copied over
dst_fullpath_keypair=~/.ssh/id_runa-production-us-east
dst_keypair=runa-production-us-east
# You need to use the public FQDNs since this is cross-region
src_public_fqdn=ec2-184-72-2-93.us-west-1.compute.amazonaws.com
dst_public_fqdn=ec2-184-73-71-160.compute-1.amazonaws.com
dst_user=ubuntu
src_user=ubuntu
src_dir=/src
dst_dir=/dst</pre>
<h3>Do the rsync</h3>
<p>We are using the following rsync options:</p>
<ul>
<li><strong>P</strong> Keep partially transferred files and show progress</li>
<li><strong>H</strong> Preserve hard links</li>
<li><strong>A</strong> Preserve ACLs</li>
<li><strong>X</strong> Preserve extended attributes</li>
<li><strong>a</strong> Archive mode</li>
<li><strong>z</strong> Compress files for transfer</li>
</ul>
<pre>rsync -PHAXaz --rsh "ssh -i /home/${src_user}/.ssh/id_${dst_keypair}" --rsync-path "sudo rsync" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/</pre>
<h2>If you want to do the rsync from your local host</h2>
<p>I found that I still had to log into the source instance</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn}</pre>
<p>and then on the source instance do:</p>
<pre>sudo ssh -i /home/${src_user}/.ssh/id_${dst_keypair} ${dst_user}@${dst_public_fqdn}</pre>
<p>and accept &#8220;<em>The authenticity of host</em>&#8221; the first time, so the destination host ends up in the sudo user&#8217;s known hosts.<br />
Then, back on your local host, you can issue the remote command that will run on the source instance and rsync to the destination host:</p>
<pre>ssh -i $src_fullpath_keypair ${src_user}@${src_public_fqdn} sudo "rsync -PHAXaz --rsh \"ssh -i /home/${src_user}/.ssh/id_${dst_keypair}\" --rsync-path \"sudo rsync\" ${src_dir}/ ${dst_user}@${dst_public_fqdn}:${dst_dir}/"</pre>
<h2>Complete the new AMI from your Local Host</h2>
<p>The remaining steps will be done back on your local host. This assumes that the shell variables we set up earlier are still there.</p>
<h3>Some Cleanup for new Region</h3>
<p>Ubuntu ties its apt sources to the region you are in, so we have to update the apt sources for the new region.<br />
We&#8217;ll do this by chrooting into the mounted /dst directory and running commands as if they were being run on an AMI booted from the /dst image. We might as well update to the latest packages at the same time.</p>
<pre># Allow network access from chroot environment
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo cp /etc/resolv.conf $dst_dir/etc/

# Upgrade the system and install packages
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t proc none /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir mount -t devpts none /dev/pts

cat &lt;&lt;EOF &gt; /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
scp -i $dst_fullpath_keypair /tmp/policy-rc.d ${dst_user}@${dst_public_fqdn}:/tmp
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo mv /tmp/policy-rc.d $dst_dir/usr/sbin/policy-rc.d

ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chmod 755 $dst_dir/usr/sbin/policy-rc.d

# This has to be done to set up the Locale &amp; apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir /usr/bin/ec2-set-defaults

# Update the apt sources
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get update

# Optionally update the packages
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} DEBIAN_FRONTEND=noninteractive sudo -E chroot $dst_dir apt-get dist-upgrade -y

# Optionally update your gems
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update --system
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir gem update</pre>
<h4>Clean up from the building of the image</h4>
<pre>ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo chroot $dst_dir umount /proc
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E chroot $dst_dir umount /dev/pts
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo -E rm -f $dst_dir/usr/sbin/policy-rc.d</pre>
<h3>There are a few more shell variables we&#8217;ll need</h3>
<p>We got the kernel and ramdisk from the destination instance, since it was launched from the alestic.com us-east-1 equivalent of the us-west-1 base AMI we are copying from.</p>
<pre># Some info for creating the name and description
codename=karmic
release=9.10
tag=server

# Make sure you set this as appropriate
# 64bit
arch=x86_64

# The kernel (aki) and ramdisk (ari) values come from the destination
# instance queried above. They differ between regions and architectures;
# these are for x86_64 and us-east-1
ebsopts="--kernel=${dst_kernel} --ramdisk=${dst_ramdisk}"
ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"

now=$(date +%Y%m%d-%H%M)
# Make this specific to what you are making
chef_version="0.8.6"
prefix=runa-chef-${chef_version}-ubuntu-${release}-${codename}-${tag}-${arch}-${now}
description="Runa Chef ${chef_version} Ubuntu $release $codename $tag $arch $now"</pre>
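<p>To see what these variables produce, here&#8217;s a quick runnable sketch of the naming convention with a fixed timestamp (in the real run $now comes from date):</p>

```shell
# Deterministic version of the naming above: a fixed timestamp stands
# in for $(date +%Y%m%d-%H%M) so the resulting name is predictable.
codename=karmic
release=9.10
tag=server
arch=x86_64
chef_version="0.8.6"
now=20100314-0939
prefix=runa-chef-${chef_version}-ubuntu-${release}-${codename}-${tag}-${arch}-${now}
echo "$prefix"
```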
<h3>Snapshot the Destination Volume and register the new AMI in the destination region</h3>
<pre># Unmount the destination filesystem
ssh -i $dst_fullpath_keypair ${dst_user}@${dst_public_fqdn} sudo umount $dst_dir

# Detach the Destination Volume (it may speed up the snapshot)
ec2-detach-volume --region $dst_region "$dst_volumeid"

# Make the snapshot
dst_snapshotid=$(ec2-create-snapshot --region $dst_region -d "$description" $dst_volumeid | cut -f2)

# Wait for snapshot to complete. This can take a while
while ec2-describe-snapshots --region $dst_region "$dst_snapshotid" | grep -q pending
  do echo -n .; sleep 1; done

# Register the Destination Snapshot as a new AMI in the Destination Region
new_ami=$(ec2-register \
  --region $dst_region \
  --architecture $arch \
  --name "$prefix" \
  --description "$description" \
  $ebsopts \
  --snapshot "$dst_snapshotid")
echo $new_ami</pre>
<h2>Conclusion</h2>
<p>You should now have a shiny new AMI in your destination region. Use the value of $new_ami to start a new instance there, using your favorite tool or technique.</p><p>The post <a href="https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/">Copy an EBS AMI image to another Amazon EC2 Region</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/scalable-deployment/copy-an-ebs-ami-image-to-another-amazon-ec2-region/feed/</wfw:commentRss>
			<slash:comments>13</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">551</post-id>	</item>
		<item>
		<title>Simple update and clone an Amazon EC2 EBS Boot image</title>
		<link>https://www.ibd.com/scalable-deployment/simple-update-and-clone-an-amazon-ec2-ebs-boot-image/</link>
					<comments>https://www.ibd.com/scalable-deployment/simple-update-and-clone-an-amazon-ec2-ebs-boot-image/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Fri, 05 Mar 2010 07:54:41 +0000</pubDate>
				<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=533</guid>

					<description><![CDATA[<p>Introduction Well there is already an update to Chef&#8217;s Ohai library. At first I thought, &#8220;Oh no, I have to generate another EC2 image&#8221;. But then I remember reading that you can update and clone a running EBS boot image. One of the cool features of using an Amazon EC2 instance that boots from an EBS Snapshot is that its&#8230;</p>
<p>The post <a href="https://www.ibd.com/scalable-deployment/simple-update-and-clone-an-amazon-ec2-ebs-boot-image/">Simple update and clone an Amazon EC2 EBS Boot image</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Introduction</h2>
<p>Well there is already an update to Chef&#8217;s Ohai library. At first I thought, &#8220;Oh no, I have to generate another EC2 image&#8221;. But then I remember reading that you can update and clone a running EBS boot image.</p>
<p>One of the cool features of booting an Amazon EC2 instance from an EBS snapshot is that it&#8217;s easy to create a new boot image from a running instance, assuming the instance is itself booted from an EBS image.</p>
<h2>Prerequisites</h2>
<p>The following expects that you have a recent copy of the Amazon ec2-api-tools on the instance, and a recent version of the ec2-api-tools on your host development system.</p>
<h2>Start up an instance, make changes</h2>
<p>Start up an instance you can use as a base, for example the one we created in <em>Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</em>.</p>
<h3>Get the name of the instance</h3>
<p>First you will need the instance id of the instance you want to copy. You can use Elasticfox or another tool, or run the following command on the instance:</p>
<pre>wget -qO- http://instance-data/latest/meta-data/instance-id</pre>
<h2>On another host</h2>
<p>The rest of the instructions will be run on your host development system (not the system you are copying). This way you don&#8217;t have to put your Amazon certs onto the machine you are cloning (you don&#8217;t want those keys to end up on the cloned image).</p>
<h3>Create some shell defines</h3>
<p>To make the instructions easier to follow, we&#8217;ll define some shell variables used in the commands below. Tune them for your environment.</p>
<pre># This will be the instance id of the running instance you want to clone
instanceid=i-07202042

# Some info for creating the name and description
codename=karmic
release=9.10
tag=server
region=us-west-1
availability_zone=us-west-1a

# Make sure you set this as appropriate.
# Use only one of the following two pairs; pasting both leaves arch=i386.
# 64bit
arch=x86_64
arch2=amd64
# 32bit
arch=i386
arch2=i386
now=$(date +%Y%m%d-%H%M)

# Make this specific to what you are making
prefix=runa-chef-0.8.4-ubuntu-$release-$codename-$tag-$arch-$now
description="Runa Chef 0.8.4 Ubuntu $release $codename $tag $arch $now"</pre>
<h3>Get the info about your running instance</h3>
<p>Use Elasticfox, your favorite tool, or the following commands to get the volume id, kernel, and ramdisk of the instance:</p>
<pre>ec2-describe-instances --region $region "$instanceid" &gt; /tmp/instance_info
volumeid=$(egrep ^BLOCKDEVICE /tmp/instance_info | cut -f3); echo $volumeid
kernel=$(egrep ^INSTANCE /tmp/instance_info | cut -f13); echo $kernel
ramdisk=$(egrep ^INSTANCE /tmp/instance_info | cut -f14) ;echo $ramdisk</pre>
<h3>Shutdown the instance</h3>
<p>It&#8217;s not clear if you really need to do this, but when I first tried without shutting down the instance, the snapshots took forever.</p>
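<p>The post doesn&#8217;t show the stop command itself. Assuming the same ec2-api-tools used elsewhere, stopping (not terminating, so the EBS root volume survives) and waiting for the stopped state would look something like the sketch below; a stub stands in for ec2-describe-instances so the polling pattern itself is runnable anywhere.</p>

```shell
# Sketch (assumption: ec2-api-tools, as used elsewhere in this post).
# The real commands would be:
#   ec2-stop-instances --region $region "$instanceid"
#   ec2-describe-instances --region $region "$instanceid"
# A stub emulating the INSTANCE line stands in below so the wait
# loop can actually run.
state=stopping
describe_stub() {
  printf 'INSTANCE\ti-07202042\tami-fd5100b8\t%s\n' "$state"
}

polls=0
while describe_stub | egrep '^INSTANCE' | grep -qv stopped; do
  polls=$((polls+1))
  [ "$polls" -ge 3 ] && state=stopped   # stub: reports stopped after 3 polls
done
echo "instance stopped after $polls polls"
```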
<h3>Create a new snapshot</h3>
<pre>snapshotid=$(ec2-create-snapshot --region $region -d "$description" $volumeid | cut -f2)</pre>
<h3>Register the new image</h3>
<pre>ec2reg --region $region -s $snapshotid -a $arch --kernel $kernel --ramdisk $ramdisk -d "$description" -n "$prefix"</pre>
<p>The output of this command will be the new AMI id. After this completes, the image and snapshot can be used to create new instances.</p><p>The post <a href="https://www.ibd.com/scalable-deployment/simple-update-and-clone-an-amazon-ec2-ebs-boot-image/">Simple update and clone an Amazon EC2 EBS Boot image</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/scalable-deployment/simple-update-and-clone-an-amazon-ec2-ebs-boot-image/feed/</wfw:commentRss>
			<slash:comments>1</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">533</post-id>	</item>
		<item>
		<title>Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</title>
		<link>https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Wed, 03 Mar 2010 06:50:57 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Ruby / Rails]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=513</guid>

					<description><![CDATA[<p>Updates Mar 3, 2010 Added call to script ec2-set-defaults that is normally called on ec2 init that sets the locale and apt sources for EC availability Zone Introduction Opscode has officially released 0.8.x of Chef. It is now even more fabulous. I&#8217;ve been using the pre-release version for the last couple of months and it is rock steady and very&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/">Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Updates</h2>
<ul>
<li><strong>Mar 3, 2010</strong> Added a call to the script <em>ec2-set-defaults</em>, normally run at EC2 init, which sets the locale and apt sources for the EC2 availability zone</li>
</ul>
<h2>Introduction</h2>
<p>Opscode has officially released 0.8.x of Chef. It is now even more fabulous. I&#8217;ve been using the pre-release version for the last couple of months, and it is rock steady and very powerful. I&#8217;ll have a post soon on how I used it to deploy a fairly complicated cloud stack: multiple Rails/Mysql/Nginx/Unicorn/Postfix apps on the front end, and a back end made up of a Clojure/Swarmiji distributed processing swarm, HBase/Hadoop, Redis, and RabbitMQ.</p>
<p>But first, I needed to upgrade my Amazon EC2 AMIs for the officially released Chef 0.8.x. I also wanted to try the EBS Boot image as a basis for the AMI.</p>
<p>This is an update to my earlier post, <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/" target="_blank">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a>, but now using the official Opscode 0.8.x gems instead of building your own. A lot of the content is the same, but you can consider this as mostly superseding the older post except where mentioned otherwise. This version uses EBS boot AMIs as per Eric Hammond&#8217;s tutorial <a href="http://alestic.com/2010/01/ec2-ebs-boot-ubuntu" target="_blank">Building EBS Boot AMIs Using Canonical&#8217;s Downloadable EC2 Images</a>. Much of this blog post is taken from Eric&#8217;s, but in the context of creating a Chef client base AMI and a Chef server. Note that <a href="http://thecloudmarket.com/owner/345069653647--opscode" target="_blank">Opscode now has their own AMIs,</a> including ones for Chef 0.8.4, but as of this writing they do not have AMIs for Amazon us-west.</p>
<h2>Setup</h2>
<h3>Prerequisites</h3>
<p>On your host development machine (i.e. your laptop or whatever machine you are developing from) you should have already installed:</p>
<ul>
<li>ec2-api-tools and ec2-ami-tools (these assume you have a modern Java runtime set up)</li>
<li>chef-0.8.4 or later chef client gem (which implies the entire Ruby 1.8.x and rubygems toolchain)</li>
</ul>
<h3>Set some Shell variables on host machine</h3>
<p>To make these instructions usable as a cookbook, we&#8217;ll set some shell variables once; all the subsequent commands use them, so you can just cut and paste the commands into your shell.</p>
<pre>keypair=id_runa-staging-us-west
fullpath_keypair=~/.ssh/runa/id_runa-staging-us-west
availability_zone=us-west-1a
instance_type=m1.large
region=us-west-1

# Pick one of these two AMIs (Note that it will be different for different Amazon Regions)
# 32bit AMI
origin_ami=ami-fd5100b8
#64bit AMI
origin_ami=ami-ff5100ba</pre>
<h3>Start up an instance and capture the instanceid</h3>
<pre>instanceid=$(ec2-run-instances \
  --key $keypair \
  --availability-zone $availability_zone \
  --instance-type $instance_type \
  $origin_ami \
  --region $region  |
  egrep ^INSTANCE | cut -f2)
echo "instanceid=$instanceid"</pre>
<h3>Wait for the instance to move to the “running” state</h3>
<pre>while host=$(ec2-describe-instances --region $region "$instanceid" |
  egrep ^INSTANCE | cut -f4) &amp;&amp; test -z $host; do echo -n .; sleep 1; done
echo host=$host</pre>
<p>This should loop till you see something like:</p>
<pre>$ echo host=$host
host=ec2-184-72-2-93.us-west-1.compute.amazonaws.com</pre>
<h3>Upload your certs</h3>
<p>This assumes that your Amazon certs are in ~/.ec2</p>
<pre>rsync                            \
 --rsh="ssh -i $fullpath_keypair" \
 --rsync-path="sudo rsync"      \
 ~/.ec2/{cert,pk}-*.pem         \
 ubuntu@$host:/mnt/</pre>
<h3>Connect to the instance</h3>
<pre>ssh -i $fullpath_keypair ubuntu@$host</pre>
<h3>Update the Amazon ec2 tools on the instance</h3>
<pre>export DEBIAN_FRONTEND=noninteractive
echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
  sudo tee /etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list &amp;&amp;
sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873 &amp;&amp;
sudo apt-get update &amp;&amp;
sudo -E apt-get dist-upgrade -y &amp;&amp;
sudo -E apt-get install -y ec2-api-tools</pre>
<h3>Set some parameters on instance shell environment</h3>
<p>Again this makes it easier to cut and paste the instructions.</p>
<pre>codename=karmic
release=9.10
tag=server
region=us-west-1
availability_zone=us-west-1a
if [ $(uname -m) = 'x86_64' ]; then
  arch=x86_64
  arch2=amd64
  # You will need to set the aki and ari values based on the actual base AMI you used
  # They will be different for different regions. These are set for us-west-1
  ebsopts="--kernel=aki-7f3c6d3a --ramdisk=ari-cf2e7f8a"
  ebsopts="$ebsopts --block-device-mapping /dev/sdb=ephemeral0"
else
  arch=i386
  arch2=i386
  # You will need to set the aki and ari values based on the actual base AMI you used
  # They will be different for different regions. These are set for us-west-1
  ebsopts="--kernel=aki-773c6d32 --ramdisk=ari-c12e7f84"
  ebsopts="$ebsopts --block-device-mapping /dev/sda2=ephemeral0"
fi</pre>
<h3>Download and unpack the latest released Ubuntu server image file</h3>
<p>This contains the output of vmbuilder as run by Canonical.</p>
<pre>imagesource=http://uec-images.ubuntu.com/releases/$codename/release/unpacked/ubuntu-$release-$tag-uec-$arch2.img.tar.gz
image=/mnt/$codename-$tag-uec-$arch2.img
imagedir=/mnt/$codename-$tag-uec-$arch2
wget -O- $imagesource |
  sudo tar xzf - -C /mnt
sudo mkdir -p $imagedir
sudo mount -o loop $image $imagedir</pre>
<h3>Bring the packages on the instance up to date</h3>
<pre># Allow network access from chroot environment
sudo cp /etc/resolv.conf $imagedir/etc/

# Fix what I consider to be a bug in vmbuilder
sudo rm -f $imagedir/etc/hostname

# Add multiverse
sudo perl -pi -e 's%(universe)$%$1 multiverse%' \
$imagedir/etc/ec2-init/templates/sources.list.tmpl

# Add Alestic PPA for runurl package (handy in user-data scripts)
echo "deb http://ppa.launchpad.net/alestic/ppa/ubuntu karmic main" |
sudo tee $imagedir/etc/apt/sources.list.d/alestic-ppa.list
sudo chroot $imagedir \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys BE09C571

# Add ubuntu-on-ec2/ec2-tools PPA for updated ec2-ami-tools
echo "deb http://ppa.launchpad.net/ubuntu-on-ec2/ec2-tools/ubuntu karmic main" |
sudo tee $imagedir/etc/apt/sources.list.d/ubuntu-on-ec2-ec2-tools.list
sudo chroot $imagedir \
apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 9EE6D873

# Upgrade the system and install packages
sudo chroot $imagedir mount -t proc none /proc
sudo chroot $imagedir mount -t devpts none /dev/pts

cat &lt;&lt;EOF &gt; /tmp/policy-rc.d
#!/bin/sh
exit 101
EOF
sudo mv /tmp/policy-rc.d $imagedir/usr/sbin/policy-rc.d

sudo chmod 755 $imagedir/usr/sbin/policy-rc.d
export DEBIAN_FRONTEND=noninteractive

# It seems this has to be done to set up the Locale &amp; apt sources
sudo -E chroot $imagedir /usr/bin/ec2-set-defaults

# Update the apt sources and packages
sudo chroot $imagedir apt-get update &amp;&amp;
sudo -E chroot $imagedir apt-get dist-upgrade -y &amp;&amp;
sudo -E chroot $imagedir apt-get install -y runurl ec2-ami-tools</pre>
<h2>Install Chef Client and other customizations</h2>
<h3>Install Ruby and needed packages</h3>
<pre><code>sudo -E chroot $imagedir apt-get -y install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb \
build-essential wget ssl-cert git-core rake librspec-ruby libxml-ruby \
thin couchdb zlib1g-dev libxml2-dev emacs23-nox</code></pre>
<h4>Install Rubygems</h4>
<p>Rubygems will be installed from source, since Debian/Ubuntu try to control rubygems upgrades. If you don&#8217;t care, you can install it via apt-get install rubygems.</p>
<pre><code>cd $imagedir/tmp
wget http://rubyforge.org/frs/download.php/69365/rubygems-1.3.6.tgz
tar zxf rubygems-1.3.6.tgz
cd rubygems-1.3.6
sudo -E chroot $imagedir ruby /tmp/rubygems-1.3.6/setup.rb
cd ..
sudo rm -rf rubygems-1.3.6
sudo -E chroot $imagedir ln -sfv /usr/bin/gem1.8 /usr/bin/gem
sudo -E chroot $imagedir gem sources -a http://gems.opscode.com
sudo -E chroot $imagedir gem sources -a http://gemcutter.org
sudo -E chroot $imagedir gem install chef
</code></pre>
<h3>Use Opscode Chef Solo Bootstrap to configure the Chef Client</h3>
<p>The following will set up all the default paths and directories, as well as install and configure runit to start and monitor the chef-client. Originally I shied away from runit, but this time I&#8217;m staying as Opscode-vanilla as possible, and they like runit.</p>
<h4>Create the solo.rb file</h4>
<p>All of the following files should be created under $imagedir, since we are going to run chef-solo chrooted to $imagedir.</p>
<p>Create $imagedir/solo.rb with an editor and put in the following:</p>
<pre>file_cache_path "/tmp/chef-solo"
cookbook_path "/tmp/chef-solo/cookbooks"
recipe_url "http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz"</pre>
<h4>Create the chef.json file</h4>
<p>Create $imagedir/chef.json with the following. (set the server_fqdn to the chef server you are using):</p>
<pre>{
  "bootstrap": {
    "chef": {
      "url_type": "http",
      "init_style": "runit",
      "path": "/srv/chef",
      "serve_path": "/srv/chef",
      "server_fqdn": "chef-server-staging.runa.com"
    }
  },
  "run_list": [ "recipe[bootstrap::client]" ]
}</pre>
<h4>Run the chef-solo command</h4>
<pre>sudo -E chroot $imagedir chef-solo -c solo.rb -j chef.json \
  -r http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz</pre>
<p>I had to run it 3 times before it completed with no errors.<br />
After it does work, clean up the chef-solo stuff:</p>
<pre>sudo rm $imagedir/{solo.rb,chef.json}</pre>
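<p>Since the bootstrap may take a few attempts, you could wrap it in a retry loop instead of rerunning by hand. A sketch: a stub that fails twice stands in for the real chroot&#8217;ed chef-solo command so the loop itself is runnable anywhere; swap the real command in on the instance.</p>

```shell
# Retry wrapper sketch. On the instance you would replace the stub with:
#   sudo -E chroot $imagedir chef-solo -c solo.rb -j chef.json \
#     -r http://s3.amazonaws.com/chef-solo/bootstrap-latest.tar.gz
attempts=0
bootstrap_stub() {
  attempts=$((attempts+1))
  [ "$attempts" -ge 3 ]   # stub: fails twice, then succeeds
}

tries=0
until bootstrap_stub; do
  tries=$((tries+1))
  if [ "$tries" -ge 5 ]; then echo "giving up" >&2; break; fi
done
echo "bootstrap succeeded after $attempts attempts"
```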
<h3>Update the client config file</h3>
<p>The Chef Solo Client bootstrap process creates an /etc/chef/client.rb that is not ideal for Amazon EC2. The following will replace that:</p>
<pre><code>mkdir -p /etc/chef
chown root:root /etc/chef
chmod 755 /etc/chef
</code></pre>
<p>Put the following in /etc/chef/client.rb:</p>
<pre><code>
# Chef Client Config File
# Automatically grabs configuration from ohai ec2 metadata.

require 'ohai'
require 'json'

o = Ohai::System.new
o.all_plugins
chef_config = JSON.parse(o[:ec2][:userdata])
if chef_config.kind_of?(Array)
  chef_config = chef_config[o[:ec2][:ami_launch_index]]
end

log_level        :info
log_location     STDOUT
node_name        o[:ec2][:instance_id]
chef_server_url  chef_config["chef_server"]

unless File.exists?("/etc/chef/client.pem")
  File.open("/etc/chef/validation.pem", "w", 0600) do |f|
    f.print(chef_config["validation_key"])
  end
end

if chef_config.has_key?("attributes")
  File.open("/etc/chef/client-config.json", "w") do |f|
    f.print(JSON.pretty_generate(chef_config["attributes"]))
  end
  json_attribs "/etc/chef/client-config.json"
end

validation_key "/etc/chef/validation.pem"
validation_client_name chef_config["validation_client_name"]

Mixlib::Log::Formatter.show_time = true
</code></pre>
<h2>Finish creating the new image</h2>
<h3>Clean up from the building of the image</h3>
<pre>sudo chroot $imagedir umount /proc
sudo chroot $imagedir umount /dev/pts
sudo rm -f $imagedir/usr/sbin/policy-rc.d</pre>
<h3>Copy the image files to a new EBS volume, snapshot and register the snapshot</h3>
<pre>size=15 # root disk in GB
now=$(date +%Y%m%d-%H%M)
prefix=runa-chef-0.8.4-ubuntu-$release-$codename-$tag-$arch-$now
description="Runa Chef 0.8.4 Ubuntu $release $codename $tag $arch $now"
export EC2_CERT=$(echo /mnt/cert-*.pem)
export EC2_PRIVATE_KEY=$(echo /mnt/pk-*.pem)

volumeid=$(ec2-create-volume --region $region --size $size \
  --availability-zone $availability_zone | cut -f2)

instanceid=$(wget -qO- http://instance-data/latest/meta-data/instance-id)

ec2-attach-volume --region $region --device /dev/sdi --instance "$instanceid" "$volumeid"

while [ ! -e /dev/sdi ]; do echo -n .; sleep 1; done

sudo mkfs.ext3 -F /dev/sdi
ebsimage=$imagedir-ebs
sudo mkdir $ebsimage
sudo mount /dev/sdi $ebsimage

sudo tar -cSf - -C $imagedir . | sudo tar xvf - -C $ebsimage
sudo umount $ebsimage

ec2-detach-volume --region $region "$volumeid"
snapshotid=$(ec2-create-snapshot --region $region "$volumeid" | cut -f2)

ec2-delete-volume --region $region "$volumeid"

# This takes a while
while ec2-describe-snapshots --region $region "$snapshotid" | grep -q pending
  do echo -n .; sleep 1; done

ec2-register \
  --region $region \
  --architecture $arch \
  --name "$prefix" \
  --description "$description" \
  $ebsopts \
  --snapshot "$snapshotid"</pre>
<h2>Afterward</h2>
<p>That will get you an AMI that you can now use as a chef-client. You can use the directions from the section <em>Creating a Chef Server from your new Image</em> in the previous article: <a href="http://blog2.ibd.com/scalable-deployment/creating-an-amazon-ami-for-chef-0-8/" target="_blank">Creating an Amazon EC2 AMI for Opscode Chef 0.8</a>.</p><p>The post <a href="https://www.ibd.com/howto/using-the-official-opscode-0-8-x-gems-to-build-ec2-ami-chef-client-and-server/">Using the Official Opscode 0.8.x Gems to build EC2 AMI Chef Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">513</post-id>	</item>
		<item>
		<title>Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</title>
		<link>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/</link>
					<comments>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Tue, 12 Jan 2010 09:00:21 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Runa]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Infrastructure]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=333</guid>

					<description><![CDATA[<p>Changes Since Original 1/13/10: Fix various minor inaccuracies and improved description on how to set up the chef-server. Also removed nanite as a requirement (its no longer used) 1/17/10: Add the requirement to build and install mixlib-authentication for the chef-client 1/21/10: Added a mkdir for /var/log/chef 1/22/10: Added step to insure that /tmp permissions are set Introduction Here&#8217;s my experience&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/">Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h2>Changes Since Original</h2>
<ul>
<li>1/13/10: Fix various minor inaccuracies and improved description on how to set up the chef-server. Also removed nanite as a requirement (its no longer used)</li>
<li>1/17/10: Add the requirement to build and install mixlib-authentication for the chef-client</li>
<li>1/21/10: Added a mkdir for /var/log/chef</li>
<li>1/22/10: Added step to insure that /tmp permissions are set</li>
</ul>
<h2>Introduction</h2>
<p>Here&#8217;s my experience setting up an Amazon EC2 AMI and Instance for a Chef Server and Client. It is based mostly on <a href="http://loftninjas.org/" target="_blank">Bryan Mclellan (btm)</a>&#8216;s post of Nov 24, 2009 <a href="http://blog.loftninjas.org/2009/11/24/installing-chef-08-alpha-on-ubuntu-karmic/" target="_blank">Installing Chef 0.8 alpha on Ubuntu Karmic</a> and  his more up to date <a href="http://gist.github.com/242523" target="_blank">GIST: chef 0.8 alpha installation</a>. It has a slightly different focus and is a bit stale if you are building your own 0.8 gems from the <a href="http://github.com/opscode/chef" target="_blank">source</a>.</p>
<h2>Instantiate an Amazon EC2 Instance</h2>
<p>We&#8217;ll start with the Canonical Ubuntu 9.10 Karmic AMI. I always go to <a href="http://alestic.com/" target="_blank">Eric Hammond&#8217;s site alestic.com</a> for pointers to the right AMIs. In this case we&#8217;re using a 32bit image for the US-West region, ami-7d3c6d38 (US-East 32bit: ami-1515f67c). You can instead use the US-West 64bit image ami-7b3c6d3e (US-East 64bit: ami-ab15f6c2).</p>
<p>Start the instance from your local dev machine using the command line ec2-api-tools (available as a package or directly from <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=351" target="_blank">Amazon</a>) or using something like the Firefox extension <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=609" target="_blank">Elasticfox</a>, then ssh into the instance so that you can do the following steps on it. For the sake of this example, let&#8217;s say the Public DNS name for the instance you started is ec2-204-222-170-10.us-west-1.compute.amazonaws.com and the ssh keypair you associated with the new instance is on your local dev machine in ~/.ssh/gsg-keypair.</p>
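<p>For reference, the command-line version might look like the sketch below. The flags are an assumption modeled on the ec2-api-tools usage in my later posts, and since ec2-run-instances needs your AWS credentials, a hypothetical captured INSTANCE line stands in for its output so the id-extraction step can run anywhere.</p>

```shell
# The real invocation would be something like:
#   ec2-run-instances --key gsg-keypair --instance-type m1.small \
#     --region us-west-1 ami-7d3c6d38
# It prints a tab-separated INSTANCE line; the sample below is made up.
sample=$(printf 'INSTANCE\ti-0a1b2c3d\tami-7d3c6d38\tpending\tgsg-keypair')
instanceid=$(printf '%s\n' "$sample" | egrep '^INSTANCE' | cut -f2)
echo "instanceid=$instanceid"
```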
<h2>Prerequisite preparation</h2>
<p>The first set of steps need to be done on the instance you just created so login via ssh:</p>
<pre>ssh -i ~/.ssh/gsg-keypair ec2-204-222-170-10.us-west-1.compute.amazonaws.com</pre>
<h3>If on Amazon us-west</h3>
<p>There is a bug in the current us-west Canonical AMI where it does not use the us-west apt server. So you have to correct the apt sources.list:</p>
<pre><code>sudo sed -i.bak '1,$s/us.ec2.archive.ubuntu.com/us-west-1.ec2.archive.ubuntu.com/' \
/etc/apt/sources.list</code></pre>
<h3>For all cases</h3>
<pre><code>sudo sed -i.bak2 '1,$s/universe/universe multiverse/' /etc/apt/sources.list
sudo apt-get -y update
sudo apt-get -y upgrade
sudo apt-get -y install emacs23 # Of course this is the first package to install!</code></pre>
<pre><code># Will need these to manipulate ec2 images
sudo apt-get -y install ec2-api-tools ec2-ami-tools </code></pre>
<h3>Set up the ruby environment and install rubygems</h3>
<h4>Install Ruby and needed packages</h4>
<pre><code>sudo apt-get -y install ruby ruby1.8-dev libopenssl-ruby1.8 rdoc ri irb \
build-essential wget ssl-cert git-core rake librspec-ruby libxml-ruby \
thin couchdb zlib1g-dev libxml2-dev</code></pre>
<h4>Install Rubygems</h4>
<p>Rubygems will be installed from source, since Debian/Ubuntu try to control rubygems upgrades. If you don&#8217;t care, you can install it via apt-get install rubygems.</p>
<pre><code>cd /tmp
wget http://rubyforge.org/frs/download.php/60718/rubygems-1.3.5.tgz
tar zxf rubygems-1.3.5.tgz
cd rubygems-1.3.5
sudo ruby setup.rb
sudo ln -sfv /usr/bin/gem1.8 /usr/bin/gem
sudo gem sources -a http://gems.opscode.com
sudo gem sources -a http://gemcutter.org</code></pre>
<h4>Install Prerequisite Gems</h4>
<pre><code>sudo gem install cucumber merb-core jeweler uuidtools \
json libxml-ruby --no-ri --no-rdoc</code></pre>
<h3>Building and Installing Chef Related Gems</h3>
<p>Until there are final 0.8.x Chef gems, you will have to build them on your local machine and upload them to this instance. On your dev machine (this example builds things in ~/src, but it could be anywhere appropriate) follow these instructions to build all the gems and install the ones you need locally. You will use your local dev machine to develop and manage cookbooks and to manage a remote chef-server:</p>
<pre><code>mkdir ~/src
cd ~/src
git clone git://github.com/opscode/chef.git
git clone git://github.com/opscode/ohai.git
git clone git://github.com/opscode/mixlib-log
git clone git://github.com/opscode/mixlib-authentication.git
# Need to get mixlib-log for client &amp; server and
# mixlib-authentication for the client from git till the 1.1.0 update hits
# See http://tickets.opscode.com/browse/CHEF-823
cd mixlib-log
sudo rake install
cd ../mixlib-authentication
sudo rake install
cd ../ohai
sudo rake install
cd ../chef
rake gem
# Now cd into ~/src/chef/chef to install the chef client/dev gem on your local machine
cd chef
rake install </code></pre>
<p>Upload the gems needed for the client to your instance. From ~/src on your local dev machine do:</p>
<pre>scp -i ~/.ssh/gsg-keypair chef/chef/pkg/chef-0.8.0.gem  ohai/pkg/ohai-0.3.7.gem \
mixlib-authentication/pkg/mixlib-authentication-1.1.0.gem \
mixlib-log/pkg/mixlib-log-1.1.0.gem  ec2-204-222-170-10.us-west-1.compute.amazonaws.com:</pre>
<h2>Set up the Chef Client on the new Instance</h2>
<p>Now back in your home directory on the instance ec2-204-222-170-10.us-west-1.compute.amazonaws.com install the gems you just copied over:</p>
<pre><code>sudo gem install mixlib-log-1.1.0.gem mixlib-authentication-1.1.0.gem ohai-0.3.7.gem
sudo gem install chef-0.8.0.gem </code></pre>
<h3>Create the client config file</h3>
<pre><code>sudo mkdir /var/log/chef
sudo mkdir /etc/chef
sudo chown root:root /etc/chef
sudo chmod 755 /etc/chef
</code></pre>
<p>Put the following in /etc/chef/client.rb:</p>
<pre><code># Chef Client Config File

require 'ohai'
require 'json'

o = Ohai::System.new
o.all_plugins
chef_config = JSON.parse(o[:ec2][:userdata])
if chef_config.kind_of?(Array)
  chef_config = chef_config[o[:ec2][:ami_launch_index]]
end

log_level        :info
log_location     "/var/log/chef/client.log"
chef_server_url  chef_config["chef_server"]
registration_url chef_config["chef_server"]
openid_url       chef_config["chef_server"]
template_url     chef_config["chef_server"]
remotefile_url   chef_config["chef_server"]
search_url       chef_config["chef_server"]
role_url         chef_config["chef_server"]
client_url       chef_config["chef_server"]

node_name        o[:ec2][:instance_id]

unless File.exists?("/etc/chef/client.pem")
  File.open("/etc/chef/validation.pem", "w") do |f|
    f.print(chef_config["validation_key"])
  end
end

if chef_config.has_key?("attributes")
  File.open("/etc/chef/client-config.json", "w") do |f|
    f.print(JSON.pretty_generate(chef_config["attributes"]))
  end
  json_attribs "/etc/chef/client-config.json"
end

validation_key "/etc/chef/validation.pem"
validation_client_name chef_config["validation_client_name"]

Mixlib::Log::Formatter.show_time = true</code></pre>
<h4>Set up the /etc/init.d/chef-client</h4>
<p>Copy the example init.d script (You can also use runit instead, but we&#8217;re not going to describe that here)</p>
<pre><code>cp /usr/lib/ruby/gems/1.8/gems/chef-0.8.0/distro/debian/etc/init.d/chef-client /etc/init.d
cd /etc/init.d
update-rc.d chef-client defaults</code></pre>
<h4>Create an Init script to set /tmp to proper permissions</h4>
<p>It looks like the Canonical images will not have /tmp with proper permissions if you exclude /tmp from your bundle process. Eric Hammond <a href="https://developer.amazonwebservices.com/connect/message.jspa?messageID=160098" target="_blank">recommends</a> doing the following.</p>
<p>Create a file /etc/init.d/ec2-mkdir-tmp with the following contents:</p>
<pre>#!/bin/sh
#
# ec2-mkdir-tmp Create /tmp if missing (as it's nice to bundle without it).
#
mkdir -p    /tmp
chmod 01777 /tmp</pre>
<p>Then set up the /etc/rc dirs to launch this on boot:</p>
<pre>chmod a+x /etc/init.d/ec2-mkdir-tmp
ln -s /etc/init.d/ec2-mkdir-tmp /etc/rcS.d/S36ec2-mkdir-tmp</pre>
<h2>Build the EC2 Image</h2>
<p>The always amazingly helpful <a href="http://www.anvilon.com/" target="_blank">Eric Hammond</a> has a post, <a href="http://alestic.com/2009/06/ec2-ami-bundle" target="_blank">Creating a New Image for EC2 by Rebundling a Running Instance</a>, that describes the basics of how to do this. The following is pretty much a direct synopsis with minimal explanation. See his blog post for more details.</p>
<h3>Clean up potential security holes</h3>
<p>Remove stuff you don&#8217;t want to freeze into your image.</p>
<pre><code>sudo rm -f /root/.*hist* $HOME/.*hist*
sudo rm -f /var/log/*.gz</code></pre>
<h3>Copy AWS Certs to Instance</h3>
<p>Back on your local development system, copy your Amazon certificates to the instance.</p>
<pre><code>
remotehost=&lt;ec2-instance-hostname&gt;
remoteuser=ubuntu
scp -i &lt;private-ssh-key&gt; \
  &lt;path-to-certs&gt;/{cert,pk}-*.pem \
  $remoteuser@$remotehost:/tmp
</code></pre>
<h3>Create the new Image on the Instance</h3>
<p>Back on the ec2 instance, you&#8217;ll do the following to create the image.</p>
<h4>Define where to store the image on S3</h4>
<p>This assumes you have an S3 account setup on AWS. You don&#8217;t have to have already created the bucket. Set some bash variables that will be used by the commands that follow. You should set the prefix to something that is meaningful. Below is what I used as an example. You&#8217;ll want to make it unique to your environment. The Bucket name must be Globally unique across all of Amazon S3.</p>
<pre><code>bucket=runa-west-amis
prefix=runa-ubuntu-9.10-i386-20100101-base</code></pre>
<h4>Define your AWS credentials and target processor</h4>
<pre><code>export AWS_USER_ID=&lt;your-value&gt;
export AWS_ACCESS_KEY_ID=&lt;your-value&gt;
export AWS_SECRET_ACCESS_KEY=&lt;your-value&gt;

if [ $(uname -m) = 'x86_64' ]; then
  arch=x86_64
else
  arch=i386
fi
</code></pre>
<h4>Bundle the files</h4>
<p>This also runs on the current instance, and will bundle everything on the instance file system (except for the dirs specified with the -e flag) into a copy of the image under /mnt:</p>
<pre><code>sudo -E ec2-bundle-vol           \
  -r $arch                       \
  -d /mnt                        \
  -p $prefix                     \
  -u $AWS_USER_ID                \
  -k /tmp/pk-*.pem               \
  -c /tmp/cert-*.pem             \
  -s 10240                       \
  -e /mnt,/tmp,/root/.ssh,/home/ubuntu/.ssh
</code></pre>
<h5>If you are deploying to US-West-1 AWS Region</h5>
<p>Looks like the Amazon ec2 ami tools are not fully aware of us-west yet, so you have to do this extra step for now. You&#8217;ll have to change the --kernel and --ramdisk to the ones appropriate for your kernel. You can inspect the values used by the AMI you used to boot the original instance, either with ElasticFox or with the following command (specify the AMI you want to check and the region it is in):</p>
<pre>ec2-describe-images ami-7d3c6d38   -C /tmp/cert-*.pem -K /tmp/pk-*.pem --region us-west-1</pre>
<p>Then execute the following command and specify the right kernel and ramdisk</p>
<pre><code>sudo -E ec2-migrate-manifest        \
  -c /tmp/cert-*.pem             \
  -k /tmp/pk-*.pem               \
  -m /mnt/$prefix.manifest.xml   \
  --access-key $AWS_ACCESS_KEY_ID  \
  --secret-key $AWS_SECRET_ACCESS_KEY \
  --kernel aki-773c6d32          \
  --ramdisk ari-713c6d34         \
  --region us-west-1</code></pre>
<h4>Upload the bundle to a bucket on S3:</h4>
<pre><code>sudo -E ec2-upload-bundle        \
    -b $bucket                   \
    -m /mnt/$prefix.manifest.xml \
    -a $AWS_ACCESS_KEY_ID        \
    -s $AWS_SECRET_ACCESS_KEY    \
    --location us-west-1
</code></pre>
<p>You may be prompted with something like:</p>
<pre><code>You are bundling in one region, but uploading to another. If the kernel or ramdisk associated with this AMI are not in the target region, AMI registration will fail.
You can use the ec2-migrate-manifest tool to update your manifest file with a kernel and ramdisk that exist in the target region.
Are you sure you want to continue? [y/N]
</code></pre>
<p>You should enter y and press return to accept.</p>
<h4>Register the AMI</h4>
<p>Back on your local development machine:</p>
<pre><code>ec2-register $bucket/$prefix.manifest.xml --region us-west-1</code></pre>
<p>The output of this will be the AMI id of your new image. You can use it to instantiate your new AMI.</p>
<p>You now have a private AMI image you can start just like any other image. If you want to make it public:</p>
<pre><code>ec2-modify-image-attribute &lt;ami-id&gt; -l -a all </code></pre>
<h2>Using the new AMI Image</h2>
<p>You can now use this instance as the basis for chef clients, and also as the basis to create a Chef Server. Use the Amazon EC2 tools, ElasticFox, or whatever your favorite tool for managing EC2 instances is, first to create a Chef Server instance. After that you can create clients and have them load their roles and recipes from the chef server. Once you have a Chef Server, you can use the knife ec2 instance command to create user data that includes a run list, credentials and other JSON that can be passed to the general ec2 tools to build specific instances.</p>
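<p>As a rough sketch of what such user data might look like, here is a hypothetical JSON file whose keys mirror what the /etc/chef/client.rb shown earlier reads out of the EC2 user data (chef_server, validation_key, validation_client_name, attributes). The server URL, validator name, key material, and file path below are all made-up placeholders, not values from knife:</p>

```shell
# Hypothetical user data for a chef client instance; the keys match what the
# /etc/chef/client.rb shown earlier parses from o[:ec2][:userdata].
# The URL, validator name, and key contents are placeholders.
cat > /tmp/chef-user-data.json <<'EOF'
{
  "chef_server": "http://ec2-204-203-51-20.us-west-1.compute.amazonaws.com:4000",
  "validation_client_name": "chef-validator",
  "validation_key": "-----BEGIN RSA PRIVATE KEY-----\n...\n-----END RSA PRIVATE KEY-----",
  "attributes": { "run_list": [ "role[base]" ] }
}
EOF
# You would then pass this file as user data when launching the instance,
# e.g. via the -f flag of ec2-run-instances.
```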
<h3>Creating a Chef Server from your new Image</h3>
<p>Using an EC2 tool like ec2-tools or elasticfox, create a new instance based on the AMI created earlier. You should use at least a c1.medium as the m1.small is just too painfully wimpy to use. Assume the new instance has the Public DNS name: <code>ec2-204-203-51-20.us-west-1.compute.amazonaws.com</code><br />
Copy the chef server gems to the new instance from the ~/src directory in your local dev environment to the new instance:</p>
<pre><code>scp -i ~/.ssh/gsg-keypair chef/*/pkg/*.gem \
ec2-204-203-51-20.us-west-1.compute.amazonaws.com:</code></pre>
<p>ssh to the new instance and do the following:</p>
<pre><code>sudo gem install chef-server-0.8.0.gem chef-server-api-0.8.0.gem \
chef-server-webui-0.8.0.gem chef-solr-0.8.0.gem</code></pre>
<h4>Set things up to use bootstrap client using chef-solo</h4>
<p>We&#8217;ll be using the last part of BTM&#8217;s GIST, and danielsdeleo (Dan DeLeo)&#8217;s <a href="http://github.com/danielsdeleo/cookbooks/tree/08boot/bootstrap" target="_blank">bootstrap cookbook</a> and chef-solo to set up this initial server.</p>
<pre><code>mkdir -p /tmp/chef-solo
cd /tmp/chef-solo
git clone git://github.com/danielsdeleo/cookbooks.git
cd cookbooks
git checkout 08boot
</code></pre>
<p>Create ~/chef.json:</p>
<pre><code>{
  "bootstrap": {
    "chef": {
      "url_type": "http",
      "init_style": "runit",
      "path": "/srv/chef",
      "serve_path": "/srv/chef",
      "server_fqdn": "localhost"
    }
  },
  "recipes": "bootstrap::server"
}
</code></pre>
<p>Create ~/solo.rb with the following content:</p>
<pre><code>file_cache_path "/tmp/chef-solo"
cookbook_path "/tmp/chef-solo/cookbooks"
# End of ~/solo.rb file
</code></pre>
<p>Run chef-solo, which will execute the chef bootstrap recipes using the bootstrap params in ~/chef.json to actually set up and configure this chef server.</p>
<p>If you had installed rubygems with the ubuntu apt package you may have to specify the path:</p>
<pre><code>/var/lib/gems/1.8/bin/</code></pre>
<p>instead of:</p>
<pre><code>/usr/bin</code></pre>
<p>for the knife and various chef commands in the following code.</p>
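<p>For example, a minimal way to handle the apt-installed rubygems case is to put that bin directory on your PATH (assuming the Debian/Ubuntu gems 1.8 layout), so the chef and knife commands resolve without full paths:</p>

```shell
# If rubygems came from the Ubuntu package, gem-installed executables such as
# chef-solo and knife land in /var/lib/gems/1.8/bin rather than /usr/bin.
export PATH=/var/lib/gems/1.8/bin:$PATH
```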
<pre><code>/usr/bin/chef-solo -j ~/chef.json -c ~/solo.rb -l debug</code></pre>
<p>You will see a lot of Debug statements go by and it will take several minutes to complete. It should complete with something like:</p>
<pre><code>[Thu, 14 Jan 2010 00:19:38 +0000] INFO: Chef Run complete in 38.59808 seconds
[Thu, 14 Jan 2010 00:19:38 +0000] DEBUG: Exiting</code></pre>
<h5>Setup basic cookbooks</h5>
<p>The following will install the standard cookbooks on the chef server</p>
<pre><code>cd
git clone git://github.com/opscode/chef-repo.git
cd chef-repo
rm cookbooks/README
git clone git://github.com/opscode/cookbooks.git
</code></pre>
<p>Now upload the standard cookbooks using the credentials set up by the bootstrap process (user chef-webui)</p>
<pre><code>knife cookbook upload --all -u chef-webui \
-k /etc/chef/webui.pem -o cookbooks
</code></pre>
<h5>Startup the Chef Server web ui</h5>
<p>Due to a bug (http://tickets.opscode.com/browse/CHEF-839) you have to run this twice; the first time will create the admin user:</p>
<pre><code>sudo /usr/bin/chef-server-webui -p 4002</code></pre>
<p>But the first time will abort with an error message like:</p>
<pre><code>Loading init file from /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/init-webui.rb
Loading /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/environments/development.rb
~ Loaded slice 'ChefServerWebui' ...
WARN: HTTP Request Returned 404 Not Found: Cannot load user admin
~ Compiling routes...
~ Could not find resource model Node
~ Could not find resource model Client
~ Could not find resource model Role
~ Could not find resource model Search
~ Could not find resource model Cookbook
~ Could not find resource model Client
~ Could not find resource model Databag
~ Could not find resource model DatabagItem
/usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/config/init-webui.rb:32: uninitialized constant OpenID (NameError)
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `call'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `each'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:1258:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/bootloader.rb:99:in `run'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/server.rb:172:in `bootup'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core/server.rb:42:in `start'
from /usr/lib/ruby/gems/1.8/gems/merb-core-1.0.15/lib/merb-core.rb:173:in `start'
from /usr/lib/ruby/gems/1.8/gems/chef-server-0.8.0/bin/chef-server-webui:76
from /usr/bin/chef-server-webui:19:in `load'
from /usr/bin/chef-server-webui:19</code></pre>
<p>Then run it again to actually start the WebUI and have it run in the background. You might want to start it in <a href="http://www.gnu.org/software/screen/" target="_blank">screen</a> for now, or redirect its output to a log file. The following example sends the output of the command to a log file. You&#8217;ll want to check that log file after starting to make sure there were no errors.</p>
<pre><code>sudo sh -c '/usr/bin/chef-server-webui -p 4002 &gt; /var/log/chef-server-webui.log' &amp;</code></pre>
<p>If you look at the output of a ps, you&#8217;ll see the shell command above, but the real work is being done by a merb instance with the port you specified (4002):</p>
<pre><code>#ps ax | grep webui
5533 pts/0    S      0:00 sh -c /usr/bin/chef-server-webui -p 4002 &gt; /var/log/chef-server-webui.log
#ps ax | grep merb
3694 ?        Sl     0:55 merb : worker (port 4000)
5534 pts/0    Sl     0:07 merb : worker (port 4002)</code></pre>
<p>The first merb worker is the chef-server itself, the second is the WebUI server.</p>
<h5>Accessing the Chef Web UI</h5>
<p>You can access the Chef Web UI web server using a web browser at the IP address / Public DNS name of this server that was just set up. Assuming the Public DNS is</p>
<pre><code>ec2-204-203-51-20.us-west-1.compute.amazonaws.com</code></pre>
<p>Assuming that you set up this instance to allow you to access port 4002 from the IP address of your local dev machine, you should be able to access the Web UI at</p>
<pre><code>http://ec2-204-203-51-20.us-west-1.compute.amazonaws.com:4002</code></pre>
<p>You can allow access to port 4002 from specific IP address ranges by updating your <a href="http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/distributed-firewall-concepts.html" target="_blank">security group</a>. You can do that with ElasticFox (easy) or via the <a href="http://docs.amazonwebservices.com/AWSEC2/2007-08-29/DeveloperGuide/distributed-firewall-examples.html" target="_blank">command line tools</a> (a pain for a one off). Eventually you (or hopefully Opscode) will set up an apache or nginx reverse proxy, Passenger or equivalent to allow normal port 80/443 http/https access.</p>
<h2>Conclusion</h2>
<p>You should now be able to use knife in your local dev environment to develop cookbooks, upload roles and cookbooks to your new Chef Server, and spin up new chef cookbook driven instances. You should use the knife documentation from the Opscode main wiki <a href="http://wiki.opscode.com/display/chef/Knife" target="_blank">Knife Page</a>, <strong>NOT</strong> the docs in the Alpha Forums / Getting Started With Opscode / <a href="http://opscode.zendesk.com/forums/58858/entries/53988" target="_blank">Knife &#8211; Commandline API</a>, as the latter is out of date relative to the version you built from the opscode git repository. There is also a man page, and knife --help gives you pretty much the same correct info as the wiki.</p>
<p>I hope to have a follow up post covering this in more detail.</p>
<p>Feel free to leave comments if you find problems or have questions.</p><p>The post <a href="https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/">Creating an Amazon EC2 AMI for Opscode Chef 0.8 Client and Server</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/creating-an-amazon-ami-for-chef-0-8/feed/</wfw:commentRss>
			<slash:comments>7</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">333</post-id>	</item>
		<item>
		<title>Experience installing Hbase 0.20.0 Cluster on Ubuntu 9.04 and EC2</title>
		<link>https://www.ibd.com/howto/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/</link>
					<comments>https://www.ibd.com/howto/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sat, 05 Sep 2009 01:34:41 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Runa]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Cloud Computing]]></category>
		<category><![CDATA[EC2]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=237</guid>

					<description><![CDATA[<p>NOTE (Sep 7 2009): Updated info on need to use Amazon Private DNS Names and clarified the need for the masters, slaves and regionservers files. Also updated to use HBase 0.20.0 Release Candidate 3 Introduction As someone who has &#8220;skipped&#8221; Java and wants to learn as little as possible about it, and as one who has not had much experience&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/">Experience installing Hbase 0.20.0 Cluster on Ubuntu 9.04 and EC2</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p><strong>NOTE (Sep 7 2009):</strong> Updated info on need to use Amazon Private DNS Names and clarified the need for the masters, slaves and regionservers files. Also updated to use HBase 0.20.0 Release Candidate 3</p>
<h2>Introduction</h2>
<p>As someone who has &#8220;skipped&#8221; Java and wants to learn as little as possible about it, and as one who has not had much experience with Hadoop so far, HBase deployment has a big learning curve. So some of the things I describe below may be obvious to those who have had experience in those domains.</p>
<h2>Where&#8217;s the docs for HBase 0.20</h2>
<p>If you go to the HBase wiki, you will find that there is not much documentation on the 0.20 version. This puzzled me, since all the twittering, blog posting and other buzz was about people using 0.20 even though it&#8217;s &#8220;pre-release&#8221;.</p>
<p>One of the great things about going to meetups such as the <a title="HBase Meetup" href="http://www.meetup.com/hbaseusergroup/" target="_blank">HBase Meetup</a> is you can talk to the folks who actually wrote the thing and ask them &#8220;Where is the documentation for HBase 0.20?&#8221;</p>
<p>Turns out it&#8217;s in the HBase 0.20.0 distribution, in the docs directory. The easiest thing is to get the <a href="http://people.apache.org/~stack/hbase-0.20.0-candidate-3" target="_blank">pre-built 0.20.0 release candidate 3</a>. If you download the source from the version control repository, you have to build the documentation using Ant. If you are a Java/Ant kind of person it might not be hard, but just to build the docs you have to meet some dependencies first.</p>
<h2>What we learnt with 0.19.x</h2>
<p>We have been learning a lot about making an HBase Cluster work at a basic level. I had a lot of problems getting 0.19.x running beyond a single node in Pseudo Distributed mode. I think a lot of my problems were just not getting how it all fit together with Hadoop and what the different startup/shutdown scripts did.</p>
<p>Then we finally tried the <a href="http://issues.apache.org/jira/browse/HBASE-838" target="_blank">HBase EC2 Scripts</a>, even though they use an AMI based on Fedora 8 and seemed wired to 0.19.0. It&#8217;s a pretty nice script if you want an opinionated HBase cluster set up, and it did educate us on how to get a cluster to go. It has a bit of strangeness in having a script, /root/hbase_init, that is called at boot time to configure all the hadoop and hbase conf files and then call the hadoop and hbase startup scripts. Something like this is kind of needed for Amazon EC2, since you don&#8217;t really know what the IP Address/FQDN is until boot time.</p>
<p>The scripts also set up an Amazon Security Group for the cluster master and one for the rest of the cluster. I believe it then uses this as a way to identify the group as well.</p>
<p>The main thing we did get, mostly by going through the /root/hbase_init script, was figuring out the process for bringing up Hadoop/HBase as a cluster.</p>
<p>We did build a staging cluster with this script, and were able to pretty easily change the scripts to use 0.19.3 instead of 0.19.0. But its opinions were different than ours on many things. Plus, after talking to the folks at the HBase Meetup, and having all sorts of weird problems with our app on 0.19.3, we were convinced that our future is in HBase 0.20. And 0.20 introduces some new things, like using Zookeeper to manage Master selection, so it seems not worth it for us to continue to use this script. Though it helped our learning quite a bit!</p>
<h2>Building an HBase 0.20.0 Cluster</h2>
<p>This post will use the HBase pre-built Release Candidate 3 and the prebuild standard Hadoop 0.20.0.</p>
<p>This post will show how to do all this &#8220;by hand&#8221;. Hopefully we&#8217;ll have an article on how to do all this with Chef sometime soon.</p>
<p>The HBase folks say that you really should have at least 5 regionservers and one master. The master and several of the regionservers can also run the zookeeper quorum. Of course the master server is also going to run the Hadoop Namenode and Secondary Namenode. Then the 5 other nodes run the Hadoop HDFS datanodes as well as the HBase regionservers. When you build out larger clusters, you will probably want to dedicate machines to Zookeepers and hot-standby HBase Masters. The Namenode is still the Single Point of Failure (SPOF); rumour has it that this will be fixed in Hadoop 0.21.</p>
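<p>The topology above boils down to a few node-list files in the Hadoop and HBase conf directories. As a sketch only (the hostnames here are invented; your real files will use your instances&#8217; Amazon private DNS names, as discussed below):</p>

```shell
# One master (namenode + HBase master), five regionservers that double as
# HDFS datanodes. All hostnames below are hypothetical.
mkdir -p /tmp/cluster-sketch
printf 'domU-master.compute-1.internal\n' > /tmp/cluster-sketch/masters
printf 'domU-node%d.compute-1.internal\n' 1 2 3 4 5 > /tmp/cluster-sketch/slaves
# HBase reads its own regionservers file; here it matches the HDFS slaves.
cp /tmp/cluster-sketch/slaves /tmp/cluster-sketch/regionservers
```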
<p>We&#8217;re not using Map/Reduce yet, so we won&#8217;t go into that, but it&#8217;s just a matter of different startup scripts to make the same nodes do Map/Reduce as well as HDFS and HBase.</p>
<p>In this example, we&#8217;re installing and running everything as Root. It can also be done as a special user like hadoop as described in the earlier blog post <a href="http://blog2.ibd.com/scalable-deployment/hadoop-hdfs-an…base-on-ubuntu/" target="_blank">Hadoop, HDFS and Hbase on Ubuntu &amp; Macintosh Leopard</a></p>
<h2>Getting the pre-requisites in order</h2>
<p>We started with the vanilla <a href="http://alestic.com/" target="_blank">alestic</a> Ubuntu 9.04 Jaunty 64Bit Server AMI: <a href="http://developer.amazonwebservices.com/connect/entry.jspa?externalID=1951&amp;categoryID=101" target="_blank">ami-5b46a732</a> and instantiated 6 High CPU Large Instances. You really want as much memory and as many cores as you can get. You can do the following by hand, or combine it with the shell scripting described below in the section <em>Installing Hadoop and HBase</em>.</p>
<pre>apt-get update
apt-get upgrade</pre>
<p>Then added via apt-get install:</p>
<pre>apt-get install sun-java6-jdk</pre>
<h3>Downloading Hadoop and HBase</h3>
<p>You can use the production Hadoop 0.20.0 release. You can find it at the mirrors listed at http://www.apache.org/dyn/closer.cgi/hadoop/core/. The example below uses one mirror:</p>
<pre>wget http://mirror.cloudera.com/apache/hadoop/core/hadoop-0.20.0/hadoop-0.20.0.tar.gz</pre>
<p>You can download the HBase 0.20.0 Release Candidate 3 in prebuilt form from <a href="http://people.apache.org/~stack/hbase-0.20.0-candidate-3/" target="_blank">http://people.apache.org/~stack/hbase-0.20.0-candidate-3/</a> (You can get the source out of version control: <a href="http://hadoop.apache.org/hbase/version_control.html" target="_blank">http://hadoop.apache.org/hbase/version_control.html</a>, but you&#8217;ll have to figure out how to build it.)</p>
<pre>wget http://people.apache.org/~stack/hbase-0.20.0-candidate-3/hbase-0.20.0.tar.gz</pre>
<h3>Installing Hadoop and HBase</h3>
<p>Assuming that you are running in your home directory on the master server and that the target for the versioned packages is in /mnt/pkgs and that there will be a link in /mnt for the path to the home for hadoop and hbase:</p>
<p>You can do some simple shell scripting to run the following on all the nodes at once:</p>
<p>Create a file named &#8220;servers&#8221; containing the fully qualified domain names of all your servers, including &#8220;localhost&#8221; for the master.</p>
<p>Make sure you can ssh to all the servers from the master. Ideally you are using ssh keys. On master:</p>
<pre>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub &gt;&gt; ~/.ssh/authorized_keys</pre>
<p>On each of your region servers make sure that the id_dsa.pub is also in their authorized_keys (don&#8217;t delete any other keys you have in the authorized keys!)</p>
<p>Now with a bit of shell command line scripting you can install on all your servers at once:</p>
<pre>for host in `cat servers`
 do
 echo $host
 ssh $host 'apt-get update; apt-get upgrade; apt-get install sun-java6-jdk'
 scp ~/hadoop-0.20.0.tar.gz ~/hbase-0.20.0.tar.gz $host:
 ssh $host 'mkdir -p /mnt/pkgs; cd /mnt/pkgs; tar xzf ~/hadoop-0.20.0.tar.gz; tar xzf ~/hbase-0.20.0.tar.gz; ln -s /mnt/pkgs/hadoop-0.20.0 /mnt/hadoop; ln -s /mnt/pkgs/hbase-0.20.0 /mnt/hbase'
done</pre>
<h4>Use Amazon Private DNS Names in Config files</h4>
<p>So far I have found that it&#8217;s best to use the Amazon Private DNS names in the hadoop and hbase config files. It looks like HBase uses the system hostname to determine various things at runtime, and this is always the Private DNS name. It also means that it&#8217;s difficult to use the Web GUI interfaces to HBase from outside of the Amazon Cloud. I set up a &#8220;desktop&#8221; version of Ubuntu running in the Amazon Cloud that I VNC (or NX) into and use its browser to view the Web Interface.</p>
<p>In any case, Amazon instances normally have limited TCP/UDP access to the outside world due to the default security group settings. You would have to add the various ports used by HBase and Hadoop to the security group to allow outside access.</p>
<p>If you do use the Amazon Public DNS names in the config files, there will be startup errors like the following for each instance that is assigned to the zookeeper quorum (there may be other errors as well, but these are the most obvious):</p>
<pre>ec2-75-101-104-121.compute-1.amazonaws.com: java.io.IOException: Could not find my address: domU-12-31-39-06-9D-51.compute-1.internal in list of ZooKeeper quorum servers
ec2-75-101-104-121.compute-1.amazonaws.com:     at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.writeMyID(HQuorumPeer.java:128)
ec2-75-101-104-121.compute-1.amazonaws.com:     at org.apache.hadoop.hbase.zookeeper.HQuorumPeer.main(HQuorumPeer.java:67)</pre>
<h3>Configuring Hadoop</h3>
<p>Now you have to configure the hadoop on master in /mnt/hadoop/conf:</p>
<h4>hadoop-env.sh:</h4>
<p>The minimal things to change are:</p>
<p>Set your JAVA_HOME to where the java package is installed. On Ubuntu:</p>
<pre>export JAVA_HOME=/usr/lib/jvm/java-6-sun</pre>
<p>Add the hbase path to the HADOOP_CLASSPATH:</p>
<pre>export HADOOP_CLASSPATH=/mnt/hbase/hbase-0.20.0.jar:/mnt/hbase/hbase-0.20.0-test.jar:/mnt/hbase/conf</pre>
<h4>core-site.xml:</h4>
<p>Here is what we used. Primarily setting where the hadoop files are and the nameserver path and port:</p>
<pre>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
   &lt;property&gt;
     &lt;name&gt;hadoop.tmp.dir&lt;/name&gt;
     &lt;value&gt;/mnt/hadoop&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;fs.default.name&lt;/name&gt;
     &lt;value&gt;hdfs://domU-12-31-39-06-9D-51.compute-1.internal:50001&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;tasktracker.http.threads&lt;/name&gt;
     &lt;value&gt;80&lt;/value&gt;
   &lt;/property&gt;
&lt;/configuration&gt;</pre>
<h4>mapred-site.xml:</h4>
<p>Even though we are not currently using Map/Reduce, this is a basic config:</p>
<pre>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
   &lt;property&gt;
     &lt;name&gt;mapred.job.tracker&lt;/name&gt;
     &lt;value&gt;domU-12-31-39-06-9D-51.compute-1.internal:50002&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;mapred.tasktracker.map.tasks.maximum&lt;/name&gt;
     &lt;value&gt;4&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;mapred.tasktracker.reduce.tasks.maximum&lt;/name&gt;
     &lt;value&gt;4&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;mapred.output.compress&lt;/name&gt;
     &lt;value&gt;true&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;mapred.output.compression.type&lt;/name&gt;
     &lt;value&gt;BLOCK&lt;/value&gt;
   &lt;/property&gt;
&lt;/configuration&gt;</pre>
<h4>hdfs-site.xml:</h4>
<p>The main thing to change based on your config is dfs.replication. It should be no greater than the total number of data-nodes / region-servers.</p>
<pre>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
   &lt;property&gt;
     &lt;name&gt;dfs.client.block.write.retries&lt;/name&gt;
     &lt;value&gt;3&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;dfs.replication&lt;/name&gt;
     &lt;value&gt;3&lt;/value&gt;
   &lt;/property&gt;
&lt;/configuration&gt;</pre>
<p>Put the fully qualified domain name of your master in the file <em>masters</em> and the names of the data-nodes in the file <em>slaves</em>.</p>
<h4>masters:</h4>
<pre>domU-12-31-39-06-9D-51.compute-1.internal</pre>
<h4>slaves:</h4>
<pre>domU-12-31-39-06-9D-C1.compute-1.internal
domU-12-31-39-06-9D-51.compute-1.internal</pre>
<p>We did not change any of the other files so far.</p>
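<p>One pitfall worth automating: if dfs.replication exceeds the number of data-nodes in <em>slaves</em>, HDFS can never fully replicate a block. A small cross-check sketch (not from the original post; it assumes the hdfs-site.xml layout shown above):</p>

```shell
#!/bin/sh
# Sanity-check sketch (not from the original post): dfs.replication must
# not exceed the number of data-nodes, or blocks can never be fully
# replicated. Assumes <name> and <value> on adjacent lines, as in the
# hdfs-site.xml shown above.
check_replication() {
  # $1 = hdfs-site.xml, $2 = slaves file
  repl=$(grep -A1 '<name>dfs.replication</name>' "$1" |
         sed -n 's:.*<value>\([0-9][0-9]*\)</value>.*:\1:p')
  nodes=$(grep -c . "$2")   # count non-empty lines
  [ "$repl" -le "$nodes" ]
}

# Demo against canned copies of the files from this post:
cat > /tmp/hdfs-site.demo.xml <<'EOF'
   <property>
     <name>dfs.replication</name>
     <value>3</value>
   </property>
EOF
printf '%s\n' node1 node2 node3 > /tmp/slaves.demo
if check_replication /tmp/hdfs-site.demo.xml /tmp/slaves.demo; then
  echo "replication OK"
else
  echo "replication exceeds data-node count"
fi
```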
<p>Now copy these files to the data-nodes:</p>
<pre>for host in `cat slaves`
do
  echo $host
  scp slaves masters hdfs-site.xml hadoop-env.sh core-site.xml ${host}:/mnt/hadoop/conf
done</pre>
<p>Also format HDFS on the master:</p>
<pre>/mnt/hadoop/bin/hadoop namenode -format</pre>
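<p>Note that namenode -format erases any existing HDFS metadata, so it belongs in first-time setup only. A guard sketch (not from the original post; it assumes the default name directory of ${hadoop.tmp.dir}/dfs/name, i.e. /mnt/hadoop/dfs/name with the core-site.xml above):</p>

```shell
#!/bin/sh
# Guard sketch (not from the original post): refuse to re-format a
# namenode directory that already holds metadata. With hadoop.tmp.dir
# set to /mnt/hadoop as above, the namenode image lives under
# /mnt/hadoop/dfs/name by default.
should_format() {
  # $1 = dfs.name.dir; a "current" subdirectory means it was formatted
  [ ! -d "$1/current" ]
}

NAME_DIR=/tmp/name.demo   # stand-in for /mnt/hadoop/dfs/name
if should_format "$NAME_DIR"; then
  echo "safe to run: /mnt/hadoop/bin/hadoop namenode -format"
else
  echo "refusing to format: $NAME_DIR already holds metadata"
fi
```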
<h3>Configuring HBase</h3>
<h4>hbase-env.sh:</h4>
<p>Similar to the hadoop-env.sh, you must set the JAVA_HOME:</p>
<pre>export JAVA_HOME=/usr/lib/jvm/java-6-sun</pre>
<p>and add the hadoop conf directory to the HBASE_CLASSPATH:</p>
<pre>export HBASE_CLASSPATH=/mnt/hadoop/conf</pre>
<p>And for the master you will want to set:</p>
<pre>export HBASE_MANAGES_ZK=true</pre>
<h4>hbase-site.xml:</h4>
<p>You mainly need to define the HBase master, hbase.rootdir and the list of ZooKeeper hosts. We also had to bump hbase.zookeeper.property.maxClientCnxns up from the default of 30 to 300.</p>
<pre>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;
&lt;configuration&gt;
   &lt;property&gt;
     &lt;name&gt;hbase.master&lt;/name&gt;
     &lt;value&gt;domU-12-31-39-06-9D-51.compute-1.internal:60000&lt;/value&gt;
   &lt;/property&gt;

   &lt;property&gt;
     &lt;name&gt;hbase.rootdir&lt;/name&gt;
     &lt;value&gt;hdfs://domU-12-31-39-06-9D-51.compute-1.internal:50001/hbase&lt;/value&gt;
   &lt;/property&gt;
   &lt;property&gt;
     &lt;name&gt;hbase.zookeeper.quorum&lt;/name&gt;
     &lt;value&gt;domU-12-31-39-06-9D-51.compute-1.internal,domU-12-31-39-06-9D-C1.compute-1.internal&lt;/value&gt;
   &lt;/property&gt;
   &lt;property&gt;
     &lt;name&gt;hbase.cluster.distributed&lt;/name&gt;
     &lt;value&gt;true&lt;/value&gt;
   &lt;/property&gt;
   &lt;property&gt;
     &lt;name&gt;hbase.zookeeper.property.maxClientCnxns&lt;/name&gt;
     &lt;value&gt;300&lt;/value&gt;
   &lt;/property&gt;
&lt;/configuration&gt;</pre>
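<p>One consistency rule hidden in these files: hbase.rootdir must use the same hdfs://host:port authority that fs.default.name declares in core-site.xml, or HBase will try to write to the wrong (or a nonexistent) filesystem. A check sketch (not from the original post; the helper and hostnames are hypothetical, and the file layout is assumed to match the configs shown above):</p>

```shell
#!/bin/sh
# Sketch (not from the original post): confirm hbase.rootdir points at
# the same hdfs://host:port as fs.default.name in core-site.xml.
get_value() {
  # $1 = config file, $2 = property name; assumes <name>/<value> on
  # adjacent lines, as in the configs shown above.
  grep -A1 "<name>$2</name>" "$1" |
    sed -n 's:.*<value>\(.*\)</value>.*:\1:p'
}

# Demo with canned stand-ins for the two configs from this post:
cat > /tmp/core-site.demo.xml <<'EOF'
   <property>
     <name>fs.default.name</name>
     <value>hdfs://namenode.internal:50001</value>
   </property>
EOF
cat > /tmp/hbase-site2.demo.xml <<'EOF'
   <property>
     <name>hbase.rootdir</name>
     <value>hdfs://namenode.internal:50001/hbase</value>
   </property>
EOF

fs=$(get_value /tmp/core-site.demo.xml fs.default.name)
rootdir=$(get_value /tmp/hbase-site2.demo.xml hbase.rootdir)
case "$rootdir" in
  "$fs"/*) echo "hbase.rootdir matches fs.default.name" ;;
  *)       echo "MISMATCH: $rootdir vs $fs" ;;
esac
```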
<p>You will also need a file called <em>regionservers</em>. Normally it contains the same hostnames as the Hadoop slaves file:</p>
<h4>regionservers:</h4>
<pre>domU-12-31-39-06-9D-C1.compute-1.internal
domU-12-31-39-06-9D-51.compute-1.internal</pre>
<p>Copy the files to the region-servers:</p>
<pre>for host in `cat regionservers`
do
  echo $host
  scp hbase-env.sh hbase-site.xml regionservers ${host}:/mnt/hbase/conf
done</pre>
<h3>Starting Hadoop and HBase</h3>
<p>On the master:</p>
<p>(This just starts the Hadoop File System services, not Map/Reduce services)</p>
<pre>/mnt/hadoop/bin/start-dfs.sh</pre>
<p>Then start hbase:</p>
<pre>/mnt/hbase/bin/start-hbase.sh</pre>
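<p>Once both scripts have run, jps on the master should list the NameNode, HMaster and HQuorumPeer processes (the slaves run DataNode and HRegionServer). A check sketch over jps-style output, shown here with canned sample text so it runs anywhere; on a live master you would capture real output with jps_output=$(jps):</p>

```shell
#!/bin/sh
# Sketch (not from the original post): verify the expected master-side
# daemons appear in `jps`-style output. The canned text below stands in
# for a real `jps` run on the master.
jps_output="4242 NameNode
4243 HMaster
4244 HQuorumPeer
4245 Jps"

missing=""
for proc in NameNode HMaster HQuorumPeer; do
  # each jps line looks like "<pid> <ClassName>"
  echo "$jps_output" | grep -q " $proc\$" || missing="$missing $proc"
done

if [ -z "$missing" ]; then
  echo "all master daemons running"
else
  echo "missing:$missing"
fi
```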
<p>You can shut things down by doing the reverse:</p>
<pre>/mnt/hbase/bin/stop-hbase.sh
/mnt/hadoop/bin/stop-dfs.sh</pre>
<p>It is advisable to set up init scripts. This is described in the <em>Ubuntu /etc/init.d style startup scripts</em> section of the earlier blog post:<a href="http://blog2.ibd.com/scalable-deployment/hadoop-hdfs-and-hbase-on-ubuntu/" target="_blank">Hadoop, HDFS and Hbase on Ubuntu &amp; Macintosh Leopard</a></p><p>The post <a href="https://www.ibd.com/howto/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/">Experience installing Hbase 0.20.0 Cluster on Ubuntu 9.04 and EC2</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/feed/</wfw:commentRss>
			<slash:comments>10</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">237</post-id>	</item>
		<item>
		<title>Want to work at a Startup with Cool Tech? (HBase, Clojure, Chef, Swarms, Javascript, Ruby &#038; Rails)</title>
		<link>https://www.ibd.com/macintosh/want-to-work-at-a-startup-with-cool-tech-hbase-clojure-chef-swarms-javascript-ruby-rails/</link>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Fri, 28 Aug 2009 18:15:01 +0000</pubDate>
				<category><![CDATA[Macintosh]]></category>
		<category><![CDATA[Opscode Chef]]></category>
		<category><![CDATA[Ruby / Rails]]></category>
		<category><![CDATA[Runa]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[AWS]]></category>
		<category><![CDATA[Git]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[rabbitmq]]></category>
		<category><![CDATA[tweekts]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=253</guid>

					<description><![CDATA[<p>Opportunity Knocks Runa.com, the startup where I am CTO, is looking for great developers to join our small agile team. We&#8217;re an early stage, pre-series-A startup (presently funded with strategic investments from two large corporations). Runa offers a SaaS to on-line merchants that allows them to offer dynamic product and consumer specific promotions embedded in their website. This will be&#8230;</p>
<p>The post <a href="https://www.ibd.com/macintosh/want-to-work-at-a-startup-with-cool-tech-hbase-clojure-chef-swarms-javascript-ruby-rails/">Want to work at a Startup with Cool Tech? (HBase, Clojure, Chef, Swarms, Javascript, Ruby & Rails)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>Opportunity Knocks</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">Runa.com, the startup where I am CTO, is looking for great developers to join our small agile team. We&#8217;re an early stage, pre-series-A startup (presently funded with strategic investments from two large corporations). Runa offers a SaaS to on-line merchants that allows them to offer dynamic product- and consumer-specific promotions embedded in their websites. This will be a very large positive disruption to the online retailing world.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><span style="text-decoration: underline;">Techie keywords:</span> <strong>clojure, hadoop, hbase, rabbitmq, erlang, chef, swarm computing, ruby, rails, javascript, amazon EC2, emacs, Macintosh, Linux, selenium, test/behavior driven development, agile, lean, XP, scalability</strong></p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">If you&#8217;re interested, email  <a href="mailto:jobs@runa.com">jobs@runa.com</a></p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">If you want to know more, read on!</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>What do we do</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">Runa aims to provide merchants from the top of the long tail through the middle of the top 500 online retailers with the kinds of tools/services that companies like amazon.com use or provide. These smaller guys can&#8217;t afford or don&#8217;t have the resources to do anything on that scale, but by using our SaaS services, they can make more money while providing customers with greater value.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">The first service we&#8217;re building is what we call Dynamic Sale Price.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">It&#8217;s a simple concept &#8211; it allows the online retailer to offer a sale price for each product on their site, personalized to the individual consumer who is browsing it. By using this service, merchants are able to &#8211;</p>
<ul>
<li>Increase conversion (get them to buy!) and</li>
<li>Offer consumers a special price which maximizes the merchant&#8217;s profit</li>
</ul>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">This is different from &#8220;dumb-discounting&#8221; where something is marked-down, and everyone sees the same price. This service is more like airline or hotel pricing which varies from day to day, but much more dynamic and real-time. Further, it is based on broad statistical factors AND individual consumer behavior. After all, if you lower prices enough, consumers will buy. Instead, we dynamically lower prices to a point where statistically, that consumer is most likely to buy.</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>How we do it</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">Runa does this by performing statistical analysis and pattern recognition of what consumers are doing on the merchant sites. This includes browsing products on various pages, adding and removing items from carts, and purchasing or abandoning the carts. We track consumers as they browse, and collect vast quantities of this click-stream data. By mining this data and applying algorithms to determine a price point per consumer based on their behavior, we&#8217;re able to  maximize both conversion (getting the consumer to buy) AND merchant profit.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We also offer the merchant comprehensive reports based on analysis of the mountains of data we collect. Since the data tracks consumer activity down to the individual product SKU level (for each individual consumer), we can provide very rich analytics.  This is a tool that merchants need today, but don&#8217;t have the resources to build for themselves.</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>The business model</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">For reference, it is useful to understand the affiliate marketing space. Small-to-medium merchants (our target audience) pay affiliates up to 40% of a sale price. Yes, 40%. The average is in the 20% range.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We charge our merchants around 10% of the sales that Runa delivers. Our merchants are happy to pay it because it is performance-based, lower than what they pay affiliates, and there is zero up-front cost to the service. In fact, the above-mentioned analytics reports are free.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We&#8217;re targeting e-commerce PLATFORMS (as opposed to individual merchants); in this way, we&#8217;re able to scale up merchant-acquisition. We have 10 early-customer merchants right now, with about 100 more planned to go live in the next 2-3 months. By the end of next year, we&#8217;re targeting about 1,000 merchants and 10,000 merchants the following year. Our channel deployment model makes these goals achievable.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">At something like a 5 to 10% service charge, and a typical merchant doing between $500K and $1M in sales per year, this is a VERY profitable business model. That is, of course, if we&#8217;re successful&#8230; but we&#8217;re seeing very positive signs so far.</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>Technology</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">Most of our front-end stuff (like the merchant-dashboard, reports, campaign management) is built with Ruby on Rails. Our merchant integration requires browser-side Javascript magic. All our analytics (batch-processing) and real-time pricing services are written in Clojure. We use RabbitMQ for all our messaging needs. We store data in HBase. We&#8217;re deployed on Amazon&#8217;s EC2.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">Here are a few blog postings about what we&#8217;ve been up to &#8211;</p>
<p><a href="http://s-expressions.com/2009/05/02/startup-logbook-distributed-clojure-system-in-production-v02/" target="_blank">Distributed Clojure system in production</a><br />
<a href="http://s-expressions.com/2009/04/12/using-messaging-for-scalability/" target="_blank">Using messaging for scalability</a><br />
<a href="http://s-expressions.com/2009/03/31/capjure-a-simple-hbase-persistence-layer/" target="_blank">Capjure: a simple HBase persistence layer</a><br />
<a href="http://s-expressions.com/2009/01/28/startup-logbook-clojure-in-production-release-v01/" target="_blank">Clojure in production</a><br />
<a href="http://blog2.ibd.com/scalable-deployment/experience-installing-hbase-0-20-0-cluster-on-ubuntu-9-04-and-ec2/" target="_blank">Experience installing Hbase 0.20.0 Cluster on Ubuntu 9.04 and EC2</a></p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We&#8217;ve also open-sourced a few of our projects &#8211;</p>
<p><a href="http://github.com/amitrathore/swarmiji/tree/master" target="_blank">swarmiji</a> &#8211; A distributed computing system to write and run Clojure code in parallel, across CPUs<br />
<a href="http://github.com/amitrathore/capjure/tree/master" target="_blank">capjure</a> &#8211; Clojure persistence for HBase</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>Culture at Runa</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We&#8217;re a small team, very passionate about what we do. We&#8217;re focused on delivering a ground-breaking, disruptive service that will allow merchants to really change the way they sell online. We work start-up hours, but we&#8217;re flexible and laid-back about it. We know that a healthy personal life is important for a good professional life. We work with each other to support it.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We use an agile process with a lot of influences from the <a href="http://en.wikipedia.org/wiki/Lean_software_development">Lean</a> and <a href="http://leansoftwareengineering.com/2007/08/29/kanban-systems-for-software-development/">Kanban</a> world. We use <a href="http://studios.thoughtworks.com/mingle-agile-project-management">Mingle</a> to run our development process. Everything, OK mostly everything <img src="https://s.w.org/images/core/emoji/14.0.0/72x72/1f642.png" alt="🙂" class="wp-smiley" style="height: 1em; max-height: 1em;" /> is covered by automated tests, so we can change things as needed.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We&#8217;re all Apple in the office &#8211; developers get a MacPro with a nice 30&#8243; screen, and a nice 17&#8243; MacBook Pro.  We deploy on Ubuntu servers.  Aeron chairs are cliché, yes, but very comfy.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">The environment is chilled out&#8230; you can wear shorts and sandals to work&#8230;  Very flat organization, very non-bureaucratic&#8230; nice open spaces (no cubes!). Lunch is brought in on most days! Beer and snacks are always in the fridge.</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We&#8217;re walking distance to the San Antonio Caltrain station (biking distance from the Mountain View Caltrain/VTA lightrail station).</p>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>What&#8217;s in it for you</strong></h1>
<ul>
<li>Competitive salaries, and lots of stock-options</li>
<li>Cutting edge technology stack</li>
<li>Fantastic business opportunity, and early-stage (= great time to join!)</li>
<li>Developer #5 &#8211; means plenty of influence on foundational architecture and design</li>
<li>Smart, full bandwidth, fun people to work with</li>
<li>Very comfortable, nice office environment</li>
<li>We have a &#8220;No Assholes&#8221; policy</li>
</ul>
<h1 style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;"><strong>OK!</strong></h1>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">So, if you&#8217;re interested, email us at <a href="mailto:jobs@runa.com">jobs@runa.com</a></p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">No recruiters please!</p>
<p style="margin: 0.0px 0.0px 0.0px 0.0px; line-height: 14.0px; font: 12.0px Verdana;">We would prefer folks who are already in the Bay Area (but if you&#8217;re not local and are really great, let&#8217;s talk!)</p>
<p>The post <a href="https://www.ibd.com/macintosh/want-to-work-at-a-startup-with-cool-tech-hbase-clojure-chef-swarms-javascript-ruby-rails/">Want to work at a Startup with Cool Tech? (HBase, Clojure, Chef, Swarms, Javascript, Ruby & Rails)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">253</post-id>	</item>
	</channel>
</rss>
