<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Mac OS X - Cognizant Transmutation</title>
	<atom:link href="https://www.ibd.com/tag/mac-os-x/feed/" rel="self" type="application/rss+xml" />
	<link>https://www.ibd.com</link>
	<description>Internet Bandwidth Development: Composting the Internet for over Two Decades</description>
	<lastBuildDate>Thu, 05 Aug 2021 05:42:34 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.1</generator>

<image>
	<url>https://i0.wp.com/www.ibd.com/wp-content/uploads/2019/01/fullsizeoutput_7ae8.jpeg?fit=32%2C32&#038;ssl=1</url>
	<title>Mac OS X - Cognizant Transmutation</title>
	<link>https://www.ibd.com</link>
	<width>32</width>
	<height>32</height>
</image> 
<atom:link rel="hub" href="https://pubsubhubbub.appspot.com"/><atom:link rel="hub" href="https://pubsubhubbub.superfeedr.com"/><atom:link rel="hub" href="https://websubhub.com/hub"/><site xmlns="com-wordpress:feed-additions:1">156814061</site>	<item>
		<title>Your Mac Won&#8217;t Reboot when Installing Mac OS X Lion &#8211; Reset Your PRAM</title>
		<link>https://www.ibd.com/howto/your-mac-wont-reboot-when-installing-mac-os-x-lion-reset-your-pram/</link>
					<comments>https://www.ibd.com/howto/your-mac-wont-reboot-when-installing-mac-os-x-lion-reset-your-pram/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Thu, 28 Jul 2011 06:27:14 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Macintosh]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Mac OS X]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=782</guid>

					<description><![CDATA[<p>Overseeing the Mac OS X Lion upgrade of all the Macs at work, I&#8217;ve seen the Lion installs generally be the easiest OS X upgrade ever. But I wasted almost two days upgrading one of our Mac Pros. We have several Mac Pros with the same configuration: 2008-vintage dual quad cores with Software RAID 1 drives. The first ones&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/your-mac-wont-reboot-when-installing-mac-os-x-lion-reset-your-pram/">Your Mac Won’t Reboot when Installing Mac OS X Lion – Reset Your PRAM</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p style="text-align: left;"><img decoding="async" loading="lazy" class="alignleft wp-image-786 size-medium" title="Mac OS X Lion" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/mac-os-x-lion-logo-300x86.jpg?resize=300%2C86" alt="Mac OS X Lion" width="300" height="86" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/mac-os-x-lion-logo.jpg?resize=300%2C86&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/mac-os-x-lion-logo.jpg?resize=150%2C43&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/mac-os-x-lion-logo.jpg?resize=400%2C115&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/mac-os-x-lion-logo.jpg?w=450&amp;ssl=1 450w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" />Overseeing the Mac OS X Lion upgrade of all the Macs at work, I&#8217;ve seen the Lion installs generally be the easiest OS X upgrade ever. But I wasted almost two days upgrading one of our Mac Pros. We have several Mac Pros with the same configuration: 2008-vintage dual quad cores with Software RAID 1 drives.</p>
<p>The first ones we did had no problem at all. But one of them gave the dreaded multi-lingual kernel panic display on the initial reboot after the user ran the Lion Updater.<img decoding="async" loading="lazy" class="alignleft wp-image-784 size-medium" title="Multi-Lingual Kernel Panic Screen of Death" src="https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/kernel-panic-300x162.jpg?resize=300%2C162" alt="Multi-Lingual Kernel Panic Screen of Death" width="300" height="162" srcset="https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/kernel-panic.jpg?resize=300%2C162&amp;ssl=1 300w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/kernel-panic.jpg?resize=150%2C81&amp;ssl=1 150w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/kernel-panic.jpg?resize=400%2C216&amp;ssl=1 400w, https://i0.wp.com/www.ibd.com/wp-content/uploads/2011/07/kernel-panic.jpg?w=472&amp;ssl=1 472w" sizes="(max-width: 300px) 100vw, 300px" data-recalc-dims="1" /></p>
<p>At first I suspected the Software RAID, since initial Googling showed that many folks had similar issues with it. When I booted from the Lion boot DVD I had created from the Lion installer download and ran Disk Utility, that reinforced the assumption that there was some issue between Lion and the RAID, as Disk Utility wouldn&#8217;t do anything with the disks (the various First Aid buttons were greyed out). But then I had similar problems with the Snow Leopard DVD&#8217;s Disk Utility, so I assumed that the disk[s] had gotten corrupted. I spent most of the next day using Disk Warrior to recover the disks and trying various combinations of Disk Utility and the command line diskutil to make the RAID work with Lion.</p>
<p>Of course the iterations were long, since each one required a reboot from DVD, etc. Eventually I got some new drives and did a totally fresh install, and would still just end up with the grey screen showing a circle with a line through it instead of the Apple symbol on boot.</p>
<p>That was when I realized it probably wasn&#8217;t a RAID issue (I also found plenty of folks, including ourselves, for whom Lion and Software RAID did work). So some different Googling found one <a href="https://discussions.apple.com/message/15749666#15749666" target="_blank" rel="noopener">article that mentioned resetting the PRAM</a>. And of course, that worked. I had previously done the &#8220;hold down the power button till the power LED blinks rapidly and you hear the reset tone&#8221; procedure, assuming that reset everything that needed to be reset on modern Macs.</p>
<p>I hadn&#8217;t had to <a href="http://support.apple.com/kb/ht1379" target="_blank" rel="noopener">reset the PRAM with the CMD-OPTION-P-R keyboard chord</a> in ages. I had thought it was no longer a thing on modern Macs and that the push-and-hold of the power button had replaced it. But lo and behold, what was needed was the old-fashioned PRAM reset.</p>
<p>So if you have a problem where your Mac won&#8217;t reboot after the initial install of Lion, give the old PRAM reset key chord a try.</p><p>The post <a href="https://www.ibd.com/howto/your-mac-wont-reboot-when-installing-mac-os-x-lion-reset-your-pram/">Your Mac Won’t Reboot when Installing Mac OS X Lion – Reset Your PRAM</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/your-mac-wont-reboot-when-installing-mac-os-x-lion-reset-your-pram/feed/</wfw:commentRss>
			<slash:comments>2</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">782</post-id>	</item>
		<item>
		<title>Bonjour / AVAHI &#038; Netatalk to share files files between Ubuntu 10.4 &#038; Mac OS X</title>
		<link>https://www.ibd.com/howto/bonjour-avahi-netatalk-to-share-files-files-between-ubuntu-10-4-mac-os-x/</link>
					<comments>https://www.ibd.com/howto/bonjour-avahi-netatalk-to-share-files-files-between-ubuntu-10-4-mac-os-x/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Sun, 09 May 2010 05:57:08 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Mac OS X]]></category>
		<category><![CDATA[Macintosh]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=574</guid>

					<description><![CDATA[<p>It used to be somewhat difficult to have filesystems on an Ubuntu system show up in the Mac Finder the same way that other Mac filesystems would show up. There has been the open source Unix implementation of the Apple Filing Protocol (AFP), but for a long time the Ubuntu packages were not properly configured to work transparently with modern&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/bonjour-avahi-netatalk-to-share-files-files-between-ubuntu-10-4-mac-os-x/">Bonjour / AVAHI & Netatalk to share files files between Ubuntu 10.4 & Mac OS X</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>It used to be somewhat difficult to have filesystems on an Ubuntu system show up in the Mac Finder the same way that other Mac filesystems do. There has long been an open source Unix implementation of the Apple Filing Protocol (AFP), but the Ubuntu packages were not properly configured to work transparently with modern (Snow Leopard) Mac OS X.</p>
<p>One blog post, <a href="http://www.kremalicious.com/2008/06/ubuntu-as-mac-file-server-and-time-machine-volume/" target="_blank">HowTo: Make Ubuntu A Perfect Mac File Server And Time Machine Volume</a>, did a great job going through all the steps needed to build Netatalk from source and configure it to work transparently with past Ubuntu releases. But as of the Ubuntu 10.4 Lucid release, the Netatalk in the Ubuntu repository is built and configured to support transparent Apple Filing Protocol based file sharing.</p>
<p>But there are still a few configuration steps, mainly for the Unix implementation of the Bonjour resource discovery protocol, needed so that you can see your Ubuntu filesystems in your Mac&#8217;s Finder like other Macintosh instances. We&#8217;ll also see how to make the Ubuntu instance show up as an ssh server.</p>
<h2>Installing Packages</h2>
<p>You will need to install the following packages onto your Ubuntu 10.4 instance. This assumes that you already did a clean install of Ubuntu 10.4 and used the update manager to bring it up to date. If you have already installed some of these, it should not be a problem.</p>
<h3>Install ssh server</h3>
<p>I can&#8217;t believe that Ubuntu doesn&#8217;t install an ssh server by default, but in any case it&#8217;s pretty easy to add. This is not needed to use Netatalk, but I wanted both ssh and Netatalk to work and be advertised via Bonjour.</p>
<pre><code>sudo apt-get install openssh-server</code></pre>
<p>Then you&#8217;ll need to set up your authorized keys on the Ubuntu server. In your home directory, do the following:</p>
<pre><code>mkdir -p .ssh
# Copy your public key[s] to .ssh/authorized_keys (not shown here)
# Set the permissions to only allow your user to access the .ssh directory and files in there
chmod -R og-rwx .ssh</code></pre>
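<p>If you want to sanity-check the result, the following sketch shows what <code>chmod -R og-rwx</code> produces, using a scratch directory as an illustrative stand-in for your real .ssh:</p>

```shell
# Demonstrate the effect of "chmod -R og-rwx" on a scratch directory
# (an illustrative stand-in for ~/.ssh).
d="$(mktemp -d)/dotssh"
mkdir -p "$d"
chmod -R og-rwx "$d"
ls -ld "$d" | cut -c1-10    # prints drwx------
```

<p>This matters because sshd (with the default StrictModes setting) will ignore your keys if the .ssh directory or the files in it are writable by group or other.</p>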
<h3>Install Netatalk</h3>
<pre><code>sudo apt-get install netatalk</code></pre>
<h4>Configure Netatalk</h4>
<p>You don&#8217;t need to change any of the configuration files for Netatalk. The defaults will enable the sharing of your home directory. If you want to share any additional filesystems from your Ubuntu instance to your Macs, you can add them to <em>/etc/netatalk/AppleVolumes.default</em>. That file has explanations of all the options.</p>
<p>You may want to change the default last item in /etc/netatalk/AppleVolumes.default from:</p>
<pre>~/			"Home Directory"</pre>
<p>to something like:</p>
<pre>~/ "$h_$u Home Directory" options:upriv,usedots</pre>
<p>This will change the name that shows up in listings to &#8220;<em>hostname_username Home Directory</em>&#8221; and will use Unix privileges. Most importantly, the usedots option tells Netatalk not to do hex translation of dot files. If you don&#8217;t set it, you&#8217;ll see things like<br />
<code>:2e_somefilename</code> instead of <code>.somefilename</code> where filenames start with &#8220;dot&#8221;.</p>
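<p>That <code>:2e</code> prefix is just a hex escape of the character itself: 2e is the ASCII code for a dot, which you can confirm from any POSIX shell:</p>

```shell
# The POSIX printf leading-quote trick converts a character to its
# numeric value; 0x2e is the ASCII code for ".".
printf '%x\n' "'."    # prints 2e
```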
<h3>Configure AVAHI</h3>
<p>AVAHI is probably already installed if you did a standard installation.</p>
<p>Copy the avahi ssh service configuration into <em>/etc/avahi/services</em></p>
<pre><code>sudo cp /usr/share/doc/avahi-daemon/examples/ssh.service /etc/avahi/services/</code></pre>
<p>Create an avahi afpd service configuration by creating a file <em>/etc/avahi/services/afpd.service</em> with the following content:</p>
<pre><code>&lt;?xml version="1.0" standalone='no'?&gt;&lt;!--*-nxml-*--&gt;
&lt;!DOCTYPE service-group SYSTEM "avahi-service.dtd"&gt;
&lt;service-group&gt;
  &lt;name replace-wildcards="yes"&gt;%h&lt;/name&gt;
  &lt;service&gt;
    &lt;type&gt;_afpovertcp._tcp&lt;/type&gt;
    &lt;port&gt;548&lt;/port&gt;
  &lt;/service&gt;
  &lt;service&gt;
    &lt;type&gt;_device-info._tcp&lt;/type&gt;
    &lt;port&gt;0&lt;/port&gt;
    &lt;txt-record&gt;model=Xserve&lt;/txt-record&gt;
  &lt;/service&gt;
&lt;/service-group&gt;
</code></pre>
<p>You should now be able to see the Ubuntu host in your Finder under the SHARED section on the left side of the Finder. You should also see your Ubuntu host in the &#8220;New Remote Connection&#8221; window of the Mac Terminal app (CMD-SHIFT-K) if you select the &#8220;Secure Shell (ssh)&#8221; Service.</p>
<p>If you don&#8217;t see the Ubuntu hostname in the Finder or in the Terminal New Remote Connection service, restart the avahi-daemon service:</p>
<pre><code>sudo restart avahi-daemon</code></pre>
<h2>TimeMachine Support</h2>
<p>The new Ubuntu Netatalk package is supposed to also support TimeMachine storage. You can enable this in <em>/etc/netatalk/AppleVolumes.default</em> by adding <em>tm</em> as an option to the filesystems published in that file. I have not tried this, and many sources consider this a risky way to store Time Machine backups.</p>
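<p>For example, the home directory line shown earlier would become something like the following (a sketch only; as noted, I have not tested the <em>tm</em> option myself):</p>
<pre>~/ "$h_$u Home Directory" options:upriv,usedots,tm</pre>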
<h2>Troubleshooting</h2>
<p>You should make sure that there is at least one afpd process running on the Ubuntu instance. You can see the log info in <em>/var/log/daemon.log</em>.</p>
<p>That&#8217;s it!</p><p>The post <a href="https://www.ibd.com/howto/bonjour-avahi-netatalk-to-share-files-files-between-ubuntu-10-4-mac-os-x/">Bonjour / AVAHI & Netatalk to share files files between Ubuntu 10.4 & Mac OS X</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/bonjour-avahi-netatalk-to-share-files-files-between-ubuntu-10-4-mac-os-x/feed/</wfw:commentRss>
			<slash:comments>25</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">574</post-id>	</item>
		<item>
		<title>HBase/Hadoop on Mac OS X (Pseudo-Distributed)</title>
		<link>https://www.ibd.com/howto/hbase-hadoop-on-mac-ox-x/</link>
					<comments>https://www.ibd.com/howto/hbase-hadoop-on-mac-ox-x/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Mon, 03 May 2010 03:50:13 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Macintosh]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[Hadoop]]></category>
		<category><![CDATA[HBase]]></category>
		<category><![CDATA[Mac OS X]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=565</guid>

					<description><![CDATA[<p>I wanted to do some experimenting with various tools for doing Hadoop and HBase activities and didn&#8217;t want to have to bother making it work with our Cluster in the Cloud. I just wanted a simple experimental environment on my MacBook Pro running Snow Leopard Mac OS X. So I thought it was time to revisit installing Hadoop and HBase&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/hbase-hadoop-on-mac-ox-x/">HBase/Hadoop on Mac OS X (Pseudo-Distributed)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>I wanted to do some experimenting with various tools for doing Hadoop and HBase activities and didn&#8217;t want to have to bother making them work with our Cluster in the Cloud. I just wanted a simple experimental environment on my MacBook Pro running Snow Leopard Mac OS X.</p>
<p>So I thought it was time to revisit installing Hadoop and HBase on the Mac using the latest versions of everything. This will be deployed in Pseudo-Distributed mode, native to Mac OS X. Some folks actually create a set of Linux VMs with a full Hadoop/HBase stack and run that on the Mac, but that is a bit of overkill for now.</p>
<p>These instructions mainly follow the standard instructions for <a href="http://hadoop.apache.org/common/docs/current/quickstart.html" target="_blank">Apache Hadoop</a> and <a href="http://hadoop.apache.org/hbase/docs/current/api/overview-summary.html#pseudo-distrib" target="_blank">Apache HBase</a>.</p>
<h2>Prerequisites</h2>
<p>You will need the Mac OS X Xcode developer tools, which include Java 1.6.x. You can get them for free from the <a href="https://developer.apple.com/mac/" target="_blank">Apple Mac Dev Center</a>. You have to become a member, but a free membership is available.</p>
<h2>Download and Unpack Latest Distros</h2>
<p>You can get a link to a mirror for Hadoop via the <a href="http://www.apache.org/dyn/closer.cgi/hadoop/core/" target="_blank">Hadoop Apache Mirror link</a> and for HBase at the <a href="http://www.apache.org/dyn/closer.cgi/hadoop/hbase/" target="_blank">HBase Apache Mirror link</a>. Each of those pages will suggest a mirror. Once you click on the suggested mirror, you can click on the <em>stable</em> link, which will bring you to a directory with the latest stable Hadoop (as of this writing: hadoop-0.20.2.tar.gz) or HBase (as of this writing: hbase-0.20.3.tar.gz). Click on those tar.gz files to download them.</p>
<p>I am going to keep the distros in ~/work/pkgs. I usually create a directory ~/work/pkgs, unpack the tar files there as numbered versions, and then create symbolic links to them in ~/work. But you can do this all in any directory that you control:</p>
<pre><code>cd ~/work
mkdir -p pkgs
cd pkgs
tar xvzf hadoop-0.20.2.tar.gz
tar xvzf hbase-0.20.3.tar.gz
cd ..
ln -s pkgs/hadoop-0.20.2 hadoop
ln -s pkgs/hbase-0.20.3 hbase
mkdir -p hadoop/logs
mkdir -p hbase/logs</code></pre>
<p>Now your tools can all access ~/work/hadoop or ~/work/hbase and not care what version it is. You can update to a later version just by downloading and untarring the new distro and then changing the symbolic links.</p>
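<p>An upgrade would then look something like this sketch (run here in a scratch directory so it is safe to try; the 0.21.0 version number is hypothetical):</p>

```shell
# Symlink-swap upgrade sketch. Runs in a scratch directory; in real use
# you would be in ~/work with actual untarred distros. hadoop-0.21.0 is
# a hypothetical future version used only for illustration.
cd "$(mktemp -d)"
mkdir -p pkgs/hadoop-0.20.2 pkgs/hadoop-0.21.0
ln -s pkgs/hadoop-0.20.2 hadoop      # initial link, as above
ln -sfn pkgs/hadoop-0.21.0 hadoop    # -fn repoints the link in place
readlink hadoop                      # prints pkgs/hadoop-0.21.0
```

<p>The <code>-n</code> flag matters: without it, <code>ln -sf</code> would follow the existing link and create the new link <em>inside</em> the old version&#8217;s directory.</p>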
<h2>Configure Hadoop</h2>
<p>All the configuration files mentioned here will be in <em>~/work/hadoop/conf</em>. In this example we are assuming that the Hadoop servers will only be accessed from <em>localhost</em>. If you need to make them accessible from other hosts or VMs on your LAN that support Bonjour, you could use the Bonjour name (i.e., the name of your Mac followed by .local, such as <em>mymac.local</em>) instead of <em>localhost</em> in the following Hadoop and HBase configurations.</p>
<h3>hadoop-env.sh</h3>
<p>You mainly need to tell Hadoop where your JAVA_HOME is.</p>
<p>Add the following line below the commented-out JAVA_HOME line in hadoop-env.sh:</p>
<pre><code>export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home</code></pre>
<h3>core-site.xml</h3>
<pre><code>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;fs.default.name&lt;/name&gt;
    &lt;value&gt;hdfs://localhost:9000&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;</code></pre>
<h3>hdfs-site.xml</h3>
<pre><code>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;dfs.replication&lt;/name&gt;
    &lt;value&gt;1&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;</code></pre>
<h3>mapred-site.xml</h3>
<pre><code>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;

&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;mapred.job.tracker&lt;/name&gt;
    &lt;value&gt;localhost:9001&lt;/value&gt;
  &lt;/property&gt;
&lt;/configuration&gt;</code></pre>
<h3>Make sure you can ssh without a password to the hostname used in the configs</h3>
<p>The Hadoop and HBase start/stop scripts use ssh to access the various servers. Since we are running in Pseudo-Distributed mode, everything runs on <em>localhost</em>, but we still need to allow the scripts to ssh to the localhost.</p>
<h4>Check that you can ssh to the <em>localhost</em> (or whatever hostname you used in the above configs)</h4>
<p>We&#8217;re assuming that we&#8217;ll be running the Hadoop/HBase servers as the same user as our login. You can set things up to run as a hadoop user, but it&#8217;s kind of complicated on Mac OS X. See the section <em>File System Layout</em> in an earlier post, <em><a href="http://blog2.ibd.com/scalable-deployment/hadoop-hdfs-and-hbase-on-ubuntu/" target="_blank">Hadoop, HDFS and Hbase on Ubuntu &amp; Macintosh Leopard</a></em>. That section and a few other points through that post describe how to create and use a hadoop user to run the Hadoop and HBase servers.</p>
<p>Back to just doing this as our own user. Test that you can ssh to the <em>localhost</em> without a password:</p>
<pre>ssh localhost</pre>
<p>If you see something like the following, ending with a password prompt, then you need to add a key to your ssh setup that does not need a password (you may need to answer yes if you are asked whether you want to continue connecting).</p>
<pre>The authenticity of host 'localhost (::1)' can't be established.
RSA key fingerprint is 3c:5d:6a:39:64:78:02:9d:a3:c9:69:68:50:23:71:eb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (RSA) to the list of known hosts.
Password:</pre>
<p>To create a passwordless key and add it to the set of authorized keys that can access your host, do the following (as yourself, not as root; the id_dsa file name can be arbitrary):</p>
<pre>ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa_for_hadoop
cat ~/.ssh/id_dsa_for_hadoop.pub &gt;&gt; ~/.ssh/authorized_keys</pre>
<p>If you have strong alternative opinions on how to set up your own keys to accomplish the same thing, please do it your own way. This is just the most basic way of doing passwordless ssh. You may want to use a key you already have lying around or some other mechanism.</p>
<h3>Start Hadoop</h3>
<h4>One-time format of the Hadoop File System</h4>
<p>Only once, before the first time you use Hadoop, you have to create a formatted Hadoop File System. Don&#8217;t do this again once you have data in your Hadoop file system, as it will erase anything you might have saved there. You may have to run this command again if you somehow corrupt your file system, but it&#8217;s not something to do lightly a second time.</p>
<pre>~/work/hadoop/bin/hadoop namenode -format</pre>
<p>If all goes well, you should see something like:</p>
<pre>10/05/02 18:45:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = Psion.local/192.168.50.16
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 0.20.2
STARTUP_MSG:   build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-0.20 -r 911707; compiled by 'chrisdo' on Fri Feb 19 08:07:34 UTC 2010
************************************************************/
10/05/02 18:45:04 INFO namenode.FSNamesystem: fsOwner=rberger,rberger,admin,com.apple.access_screensharing,_developer,_lpoperator,_lpadmin,_appserveradm,_appserverusr,localaccounts,everyone,com.apple.sharepoint.group.2,com.apple.sharepoint.group.3,dev,com.apple.sharepoint.group.1,workgroup
10/05/02 18:45:04 INFO namenode.FSNamesystem: supergroup=supergroup
10/05/02 18:45:04 INFO namenode.FSNamesystem: isPermissionEnabled=true
10/05/02 18:45:04 INFO common.Storage: Image file of size 97 saved in 0 seconds.
10/05/02 18:45:04 INFO common.Storage: Storage directory /tmp/hadoop-rberger/dfs/name has been successfully formatted.
10/05/02 18:45:04 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at Psion.local/192.168.50.16
************************************************************/</pre>
<h4>Starting and stopping Hadoop</h4>
<p>Now you can start Hadoop. You will use this command to start Hadoop in general:</p>
<pre>~/work/hadoop/bin/start-all.sh</pre>
<p>You can stop Hadoop with the command</p>
<pre>~/work/hadoop/bin/stop-all.sh</pre>
<p>But remember if you are running HBase, stop that first, then stop Hadoop.</p>
<h3>Making sure Hadoop is working</h3>
<p>You can see the Hadoop logs in ~/work/hadoop/logs</p>
<p>You should be able to see the Hadoop Namenode web interface at <a href="http://localhost:50070/" target="_blank">http://localhost:50070/</a> and the JobTracker web interface at <a href="http://localhost:50030/" target="_blank">http://localhost:50030/</a>. If not, check that you have 5 java processes running, where each has one of the following as its last command line (as seen from a <code>ps ax | grep hadoop</code> command):</p>
<pre>org.apache.hadoop.mapred.JobTracker
org.apache.hadoop.hdfs.server.namenode.NameNode
org.apache.hadoop.mapred.TaskTracker
org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
org.apache.hadoop.hdfs.server.datanode.DataNode</pre>
<p>If you do not see these 5 processes, check the logs in ~/work/hadoop/logs/*.{out,log} for messages that might give you a hint as to what went wrong.</p>
<h4>Run some example map/reduce jobs</h4>
<p>The Hadoop distro comes with some example/test map/reduce jobs. Here we&#8217;ll run them and make sure things are working end to end.</p>
<pre><code>cd ~/work/hadoop
# Copy the input files into the distributed filesystem
# (there will be no output visible from the command):
bin/hadoop fs -put conf input
# Run some of the examples provided:
# (there will be a large amount of INFO statements as output)
bin/hadoop jar hadoop-*-examples.jar grep input output 'dfs[a-z.]+'
# Examine the output files:
bin/hadoop fs -cat output/part-00000
</code></pre>
<p>The resulting output should be something like:</p>
<pre>3	dfs.class
2	dfs.period
1	dfs.file
1	dfs.replication
1	dfs.servers
1	dfsadmin
1	dfsmetrics.log</pre>
<h2>Configuring HBase</h2>
<p>The following config files all reside in <em>~/work/hbase/conf</em>. As mentioned earlier, use an FQDN or a Bonjour name instead of localhost if you need remote clients to access HBase. But if you don&#8217;t use localhost here, make sure you make the same change in the Hadoop config.</p>
<h3>hbase-env.sh</h3>
<p>Add the following line below the commented-out JAVA_HOME line in hbase-env.sh:</p>
<pre><code>export JAVA_HOME=/System/Library/Frameworks/JavaVM.framework/Versions/CurrentJDK/Home</code></pre>
<p>Add the following line below the commented-out HBASE_CLASSPATH= line:</p>
<pre><code>export HBASE_CLASSPATH=${HOME}/work/hadoop/conf</code></pre>
<h3>hbase-site.xml</h3>
<pre><code>&lt;?xml version="1.0"?&gt;
&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;
&lt;?xml version="1.0"?&gt;&lt;?xml-stylesheet type="text/xsl" href="configuration.xsl"?&gt;
&lt;configuration&gt;
  &lt;property&gt;
    &lt;name&gt;hbase.rootdir&lt;/name&gt;
    &lt;value&gt;hdfs://localhost:9000/hbase&lt;/value&gt;
    &lt;description&gt;The directory shared by region servers.
    &lt;/description&gt;
  &lt;/property&gt;
&lt;/configuration&gt;
</code></pre>
<h3>Starting HBase</h3>
<p>With Hadoop running, start HBase:</p>
<pre>~/work/hbase/bin/start-hbase.sh</pre>
<h3>Making Sure HBase is Working</h3>
<p>If you do a <code>ps ax | grep hbase</code> you should see two java processes. One should end with:<br />
<code>org.apache.hadoop.hbase.zookeeper.HQuorumPeer start</code><br />
And the other should end with:<br />
<code>org.apache.hadoop.hbase.master.HMaster start</code><br />
Since we are running in Pseudo-Distributed mode, there will not be any explicit regionservers running. If you have problems, check the logs in ~/work/hbase/logs/*.{out,log}.</p>
<h3>Testing HBase using the HBase Shell</h3>
<p>From the Unix prompt, give the following command:</p>
<pre>~/work/hbase/bin/hbase shell</pre>
<p>Here are some example commands from the Apache HBase installation instructions:</p>
<pre>base&gt; # Type "help" to see shell help screen
hbase&gt; help
hbase&gt; # To create a table named "mylittletable" with a column family of "mylittlecolumnfamily", type
hbase&gt; create "mylittletable", "mylittlecolumnfamily"
hbase&gt; # To see the schema for the "mylittletable" table you just created and its single "mylittlecolumnfamily", type
hbase&gt; describe "mylittletable"
hbase&gt; # To add a row whose id is "myrow", to the column "mylittlecolumnfamily:x" with a value of 'v', do
hbase&gt; put "mylittletable", "myrow", "mylittlecolumnfamily:x", "v"
hbase&gt; # To get the cell just added, do
hbase&gt; get "mylittletable", "myrow"
hbase&gt; # To scan your new table, do
hbase&gt; scan "mylittletable"</pre>
<p>You can stop HBase with the command:</p>
<pre>~/work/hbase/bin/stop-hbase.sh</pre>
<p>Once that has stopped, you can stop Hadoop:</p>
<pre>~/work/hadoop/bin/stop-all.sh</pre>
<h2>Conclusion</h2>
<p>You should now have a fully working Pseudo-Distributed Hadoop/HBase setup on your Mac. This is not suitable for any kind of large-data or production project. In fact it will probably fail if you try to do anything with lots of data or high volumes of I/O. HBase does not seem to work well until you have 4&#8211;5 regionservers.</p>
<p>But this Pseudo-Distributed version should be fine for doing experiments with tools and small data sets.</p>
<p>Now I can get on with playing with <a href="http://github.com/clj-sys/cascading-clojure" target="_blank">Cascading-Clojure</a> and <a href="http://nathanmarz.com/blog/introducing-cascalog/" target="_blank">Cascalog</a>!</p><p>The post <a href="https://www.ibd.com/howto/hbase-hadoop-on-mac-ox-x/">HBase/Hadoop on Mac OS X (Pseudo-Distributed)</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/hbase-hadoop-on-mac-ox-x/feed/</wfw:commentRss>
			<slash:comments>25</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">565</post-id>	</item>
		<item>
		<title>Installing Apache Thrift on Ubuntu and Leopard</title>
		<link>https://www.ibd.com/howto/installing-apache-thrift-on-ubuntu-and-leopard/</link>
					<comments>https://www.ibd.com/howto/installing-apache-thrift-on-ubuntu-and-leopard/#comments</comments>
		
		<dc:creator><![CDATA[Robert J Berger]]></dc:creator>
		<pubDate>Fri, 06 Mar 2009 00:50:59 +0000</pubDate>
				<category><![CDATA[HowTo]]></category>
		<category><![CDATA[Macintosh]]></category>
		<category><![CDATA[Scalable Deployment]]></category>
		<category><![CDATA[Sysadmin]]></category>
		<category><![CDATA[APIs]]></category>
		<category><![CDATA[Leopard]]></category>
		<category><![CDATA[Mac OS X]]></category>
		<category><![CDATA[REST]]></category>
		<category><![CDATA[Thrift]]></category>
		<category><![CDATA[ubuntu]]></category>
		<guid isPermaLink="false">http://blog2.ibd.com/?p=172</guid>

					<description><![CDATA[<p>The instructions for installing the Apache Thrift on the Wiki missed a few key things in terms of installing on Ubuntu (8.04 in my case) and Macintosh OS X Leopard (10.5.6). Gitting the latest source For instance they show you how to get the latest via SVN or a snapshot via wget. But the wget actually gets it from a&#8230;</p>
<p>The post <a href="https://www.ibd.com/howto/installing-apache-thrift-on-ubuntu-and-leopard/">Installing Apache Thrift on Ubuntu and Leopard</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></description>
										<content:encoded><![CDATA[<p>The instructions for installing <a href="http://wiki.apache.org/thrift/ThriftInstallation">Apache Thrift on the wiki</a> miss a few key things about installing on Ubuntu (8.04 in my case) and Mac OS X Leopard (10.5.6).</p>
<h2>Gitting the latest source</h2>
<p>For instance, they show you how to get the latest source via SVN or a snapshot via wget. But that wget actually pulls from a git repository, and they don&#8217;t tell you how to clone it directly! The command is:</p>
<pre>git clone git://git.thrift-rpc.org/thrift.git</pre>
<p>That will create a source distribution of Thrift in a directory called thrift.</p>
<p>The git repository is where the developers are really working according to the <a href="http://wiki.apache.org/thrift/GitRepository">Developers Wiki on the GitRepository</a>. There is also a <a href="http://github.com/dreiss/thrift">copy on github</a>.</p>
<h2>Requirements</h2>
<p>The relevant requirements as stated by the wiki are:</p>
<blockquote><p>GNU build tools: autoconf 2.59+ (2.60+ recommended), automake 1.9+, libtool 1.5.24+<br />
boost 1.34.0+<br />
g++ 3.3.5+<br />
pkgconfig (Use MacPorts for Mac OS X)<br />
lex and yacc (developed primarily with flex and bison)</p></blockquote>
<p>Well, for Ubuntu it wasn&#8217;t quite clear what was really required. The <a href="http://wiki.apache.org/thrift/GettingUbuntuPackages#preview">GettingUbuntuPackages wiki page</a> listed only a few of the required packages. <a href="http://lueb.be/2009/02/27/installing-apache-thrift-on-ubuntu-804/" target="_blank">Max Luebbe has a blog page</a> that has a more in depth list:</p>
<pre>apt-get install libboost-dev libevent-dev python-dev automake pkg-config libtool flex bison sun-java5-jdk</pre>
<p>We already had Sun Java 6 installed and that worked fine, so I didn&#8217;t include sun-java5-jdk. But we didn&#8217;t have g++ installed, so also do:</p>
<pre>apt-get install g++</pre>
<p>Confusingly, ./configure did not fail by saying there was no g++; it failed by saying there was no boost. It took a while to figure out that boost wasn&#8217;t being found because configure could not compile the little test program it uses to detect whether boost is installed!</p>
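<p>When configure fails in a misleading way like this, a general autoconf tip (not from the Thrift wiki) is to read config.log in the source directory, which records the exact test program and compiler error behind each check:</p>
<pre>grep -n -B 2 -A 10 'error' config.log | less</pre>
<p>In this case it would have shown the boost test program failing simply because there was no C++ compiler, rather than because boost was missing.</p>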
<p>So the actual apt-get used on our Ubuntu 8.04 server was:</p>
<pre>sudo apt-get install g++ libboost-dev libevent-dev python-dev automake pkg-config libtool flex bison</pre>
<p>On the Mac you can use MacPorts to install the required packages. Max also has a good page on <a href="http://lueb.be/2009/02/23/installing-apache-thrift-on-mac-os-x-105-leopard/" target="_blank">Installing Apache Thrift on Mac OS X 10.5 Leopard</a> that doesn&#8217;t require MacPorts.</p>
<pre>sudo port selfupdate
sudo port install boost
sudo port install pkgconfig</pre>
<h2>The pkg.m4 workaround</h2>
<p>As noted in the <a href="http://wiki.apache.org/thrift/FAQ" target="_blank">Thrift Wiki FAQ</a>, the ./configure command may generate an error like:</p>
<pre>./configure: line 21183: syntax error near unexpected token `MONO,'
./configure: line 21183: `  PKG_CHECK_MODULES(MONO, mono &gt;= 1.2.6, have_mono=yes, have_mono=no)'</pre>
<p>This will happen if there is no pkg.m4 file in the aclocal directory of the thrift source tree. On the Macintosh, install pkgconfig via MacPorts and copy pkg.m4 from /opt/local/share/aclocal into aclocal (assuming you are at the top of the thrift source tree):</p>
<pre>cp /opt/local/share/aclocal/pkg.m4 aclocal</pre>
<p>This is not necessary on Ubuntu if you have installed pkg-config there.</p>
<h2>Actual Build and Installation</h2>
<p>In the Thrift directory run:</p>
<pre>./bootstrap.sh</pre>
<p>On the Mac, if boost was installed with MacPorts, use the following (if you manually installed boost elsewhere, use that path instead):</p>
<pre>./configure --with-boost=/opt/local</pre>
<p>On Ubuntu you can just run:</p>
<pre>./configure</pre>
<p>On both Mac and Ubuntu:</p>
<pre>make
sudo make install</pre>
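<p>To sanity-check the install (a step I&#8217;m adding here; it assumes /usr/local/bin is on your PATH), ask the newly installed compiler for its version:</p>
<pre>thrift -version</pre>
<p>Depending on the vintage of your checkout the flag may be -version or --version; either way it should report the Thrift version that was just built.</p>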
<p>If you want any of the bindings for different languages, cd into lib, where there is a directory for each language. It&#8217;s not always clear what to do to build them. For the Ruby bindings, what I ended up doing was:</p>
<pre>cd lib/rb
sudo ruby setup.rb</pre>
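<p>As a starting point for trying out the compiler (my own sketch, not from the wiki; the file name and types are made up for illustration), you can define a tiny .thrift IDL file:</p>
<pre># example.thrift
struct Point {
  1: i32 x,
  2: i32 y
}

service PointStore {
  void add(1: Point p)
}</pre>
<p>Then generate Ruby stubs from it (recent checkouts use the --gen syntax; older ones used a per-language flag like -rb). The generated code lands in a gen-rb directory:</p>
<pre>thrift --gen rb example.thrift</pre>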
<h2>Next step</h2>
<p>Figure out how to test and use Thrift!</p><p>The post <a href="https://www.ibd.com/howto/installing-apache-thrift-on-ubuntu-and-leopard/">Installing Apache Thrift on Ubuntu and Leopard</a> first appeared on <a href="https://www.ibd.com">Cognizant Transmutation</a>.</p>]]></content:encoded>
					
					<wfw:commentRss>https://www.ibd.com/howto/installing-apache-thrift-on-ubuntu-and-leopard/feed/</wfw:commentRss>
			<slash:comments>10</slash:comments>
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">172</post-id>	</item>
	</channel>
</rss>
