Parallel Distributed Computing Example

You may have seen the article Hadoop Example, AccessLogCountByHourOfDay. It is a distributed computing solution using Hadoop. The purpose of this article is to dive into the theory behind it.

To understand the power of distributed computing, we need to step back and understand the problem. First we'll look at a command-line Java program that processes each HTTP log file, one file at a time, one line at a time, until done. To speed up the job, we'll then look at another approach: multi-threaded. We should be able to get the job done faster if we break it up into a set of subtasks and run them in parallel. Then we'll come to Hadoop and distributed computing: the same concept of breaking the job into subtasks, but rather than running on one server, we'll run on multiple servers in parallel.
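To make the first two approaches concrete, here is a sketch of counting hits per hour of the day, first single-threaded and then with a thread pool running one subtask per file. The class name, log-line format, and regex are illustrative assumptions, not the article's actual code:

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLongArray;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class HourOfDayCounter {
    // Matches the timestamp in a common-log-format line, e.g. [20/Jul/2009:14:03:12 -0700]
    private static final Pattern HOUR = Pattern.compile("\\[\\d{2}/\\w{3}/\\d{4}:(\\d{2}):");

    // Extract the hour of day (0-23) from one log line, or -1 if it doesn't match.
    static int hourOf(String line) {
        Matcher m = HOUR.matcher(line);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    // Single-threaded: walk every file, one line at a time, accumulating counts.
    static long[] countSingleThreaded(List<Path> files) throws IOException {
        long[] counts = new long[24];
        for (Path file : files) {
            try (BufferedReader r = Files.newBufferedReader(file)) {
                for (String line; (line = r.readLine()) != null; ) {
                    int h = hourOf(line);
                    if (h >= 0) counts[h]++;
                }
            }
        }
        return counts;
    }

    // Multi-threaded: one subtask per file, all tallying into a shared atomic array.
    static long[] countMultiThreaded(List<Path> files, int threads)
            throws InterruptedException, ExecutionException {
        AtomicLongArray counts = new AtomicLongArray(24);
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        List<Future<?>> futures = new ArrayList<>();
        for (Path file : files) {
            futures.add(pool.submit(() -> {
                try (BufferedReader r = Files.newBufferedReader(file)) {
                    for (String line; (line = r.readLine()) != null; ) {
                        int h = hourOf(line);
                        if (h >= 0) counts.incrementAndGet(h);
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }));
        }
        for (Future<?> f : futures) f.get(); // wait for all subtasks, propagate failures
        pool.shutdown();
        long[] out = new long[24];
        for (int i = 0; i < 24; i++) out[i] = counts.get(i);
        return out;
    }
}
```

The multi-threaded version only helps while the machine has spare cores and disk bandwidth, which is exactly the limitation discussed below.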

At first you'd think that Hadoop would be the fastest, but in our basic example you'll see that Hadoop isn't significantly faster. Why? The Hadoop overhead of scheduling the job and tracking the tasks is slowing us down. In order to see the power of Hadoop, we need much larger data sets. Think about our single-server approach for a minute. As we ramp up the size and/or number of files to process, there is going to be a point where the server hits resource limitations (CPU, RAM, disk). If we have 4 threads making effective use of 4 CPU cores, we may be able to do a job 4 times faster than single-threaded. But if we have a terabyte of data to process and it takes, say, 100 seconds per GB, it's going to take 100,000 seconds to finish (that's more than a day). With Hadoop, we can scale out horizontally. What if we had a 1000-node Hadoop cluster? Suddenly the overhead of scheduling the job and tracking the tasks is minuscule in comparison to the whole job. The whole job may complete in 100 seconds or less! We went from over a day to less than 2 minutes. Wow.
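The back-of-the-envelope numbers above can be checked directly (all figures are the illustrative ones from the text, assuming perfectly linear scaling):

```java
public class ScaleMath {
    public static void main(String[] args) {
        double secondsPerGB = 100;   // assumed processing rate from the text
        double gigabytes = 1000;     // ~1 TB of logs
        double singleNode = gigabytes * secondsPerGB;  // one server, one pass
        double cluster = singleNode / 1000;            // 1000-node cluster, ideal scaling
        System.out.printf("single node: %.0f s (%.1f hours)%n", singleNode, singleNode / 3600);
        System.out.printf("1000 nodes:  %.0f s%n", cluster);
    }
}
```

Real clusters never scale perfectly, but even with substantial scheduling overhead the gap between a day and a couple of minutes holds up.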

Please note: the single-threaded and multi-threaded examples in this article are not using the Map/Reduce algorithm. This is intentional. I'm trying to demonstrate the evolution of thought. When we think about how to solve the problem, the first thing that comes to mind is to walk through the files, one line at a time, and accumulate the result. Then we realize we could split the job up into threads and gain some speed. The last evolution is the Map/Reduce algorithm across a distributed computing platform.
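To make that last step concrete, here is the hour-of-day count expressed in Map/Reduce shape, in plain Java without Hadoop. This is only a sketch of the algorithm's structure, with an assumed log-line format: the map step turns each line into an intermediate key (the hour), and the reduce step sums the occurrences per key.

```java
import java.util.List;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class MapReduceSketch {
    // Matches the hour in a timestamp like 20/Jul/2009:14:03:12
    private static final Pattern HOUR = Pattern.compile(":(\\d{2}):\\d{2}:\\d{2}");

    // "Map": turn one log line into an intermediate key (the hour), or -1 if unparseable.
    static int mapToHour(String line) {
        Matcher m = HOUR.matcher(line);
        return m.find() ? Integer.parseInt(m.group(1)) : -1;
    }

    // "Reduce": group the intermediate keys and sum the count per key.
    static Map<Integer, Long> reduce(List<String> lines) {
        return lines.stream()
                .map(MapReduceSketch::mapToHour)
                .filter(h -> h >= 0)
                .collect(Collectors.groupingBy(h -> h, Collectors.counting()));
    }
}
```

On a Hadoop cluster, the map calls run on whichever nodes hold the input blocks and the framework does the grouping (shuffle) before the reduce; the shape of the two functions stays the same.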

Let’s dive in….


Hadoop Example, AccessLogCountByHourOfDay

Inspired by an article written by Tom White, AWS author and developer:
Running Hadoop MapReduce on Amazon EC2 and Amazon S3

Instead of minute of the week, this one counts requests by hour of the day; I just find that more interesting than which minute of the week is most popular. The output is:


The main reason for writing this, however, is to provide a working example that will compile. I found a number of problems in the original post.


Inspiring MapReduce lectures by Google

I watched a set of 3 lectures given at Google by Aaron Kimball on MapReduce, and they were inspiring. I feel like I have a much more solid grasp on MapReduce after watching them. I really liked how the series started out with some basic functional programming and socket theory, then moved into how MapReduce builds on the basic principles of map and fold. Well worth the 3 hours!
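The map-and-fold building blocks the lectures start from can be shown in a few lines of Java (a toy example of mine, not taken from the lectures): map transforms each element independently, and fold collapses the results with a binary operation and an identity value.

```java
import java.util.List;

public class MapFold {
    public static void main(String[] args) {
        List<String> words = List.of("hadoop", "map", "reduce", "fold");
        int totalChars = words.stream()
                .map(String::length)       // map: word -> its length
                .reduce(0, Integer::sum);  // fold: combine lengths with + starting at 0
        System.out.println(totalChars);    // prints 19
    }
}
```

Because each map call is independent and the fold's operation is associative, both steps can be split across machines, which is exactly the leap MapReduce makes.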

Thank you Google and Aaron Kimball!

hadoop-0.18.3 Could not create the Java virtual machine

Installed hadoop on a VM, and needed to set the Java heap size lower than the default of 1000 MB (-Xmx1000m) to get it to work.  I set the HADOOP_HEAPSIZE var in conf/hadoop-env.sh to a lower value, but hadoop continued to spit out this error:

# hadoop -help
Could not create the Java virtual machine.
Exception in thread "main" java.lang.NoClassDefFoundError: Could_not_reserve_enough_space_for_object_heap
Caused by: java.lang.ClassNotFoundException: Could_not_reserve_enough_space_for_object_heap
        at java.security.AccessController.doPrivileged(Native Method)
        at java.lang.ClassLoader.loadClass(ClassLoader.java)
        at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java)
        at java.lang.ClassLoader.loadClass(ClassLoader.java)
        at java.lang.ClassLoader.loadClassInternal(ClassLoader.java)
Could not find the main class: Could_not_reserve_enough_space_for_object_heap.  Program will exit.

Didn't matter what I set HADOOP_HEAPSIZE to; the problem persisted. I never did find the answer online, so I figured I'd do the world a favor today and make a note about how to fix it. Maybe I'll save someone else the 2 hours it took me to figure this out!
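For reference, the HADOOP_HEAPSIZE variable lives in conf/hadoop-env.sh; a lowered setting looks like this (512 is an illustrative value, not necessarily the fix):

```shell
# conf/hadoop-env.sh
# The maximum amount of heap to use, in MB. Default is 1000.
export HADOOP_HEAPSIZE=512
```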



Fuse mounting HDFS on CentOS 5

The first step is to get fuse installed.  It’s not as simple as “yum install fuse” – it doesn’t ship with RHEL5/CentOS5.

# download RPM-GPG-KEY.dag.txt from the rpmforge (DAG) repository site first
rpm --import RPM-GPG-KEY.dag.txt
rm RPM-GPG-KEY.dag.txt
yum install yum-priorities
# likewise, download the rpmforge-release RPM first
rpm -Uhv rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rm rpmforge-release-0.3.6-1.el5.rf.i386.rpm
vim /etc/yum.repos.d/CentOS-Base.repo # set priority to 1 on all, but 2 on centosplus
vim /etc/yum.repos.d/rpmforge.repo # set priority to 10
yum install fuse fuse-libs fuse-devel

Ok, now we have fuse :)

Next, get hdfs-fuse (wget the hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz tarball) and install it:

tar -xzf hdfs-fuse-0.2.linux2.6-gcc4.1-x86.tar.gz
mv hdfs-fuse /usr/local/
cd /usr/local/hdfs-fuse
vim conf/hdfs-fuse.conf  # edit to your HDFS
echo '#!/bin/sh
export JAVA_HOME=/usr/java/jdk1.6.0_13
export HADOOP_HOME=/usr/local/hadoop
export FUSE_HOME=/usr/local
export HDFS_FUSE_HOME=/usr/local/hdfs-fuse
export HDFS_FUSE_CONF=/usr/local/hdfs-fuse/conf
./hdfs-mount /hdfs' > bin/hdfs-mount.sh  # wrapper script name assumed
chmod 755 bin/hdfs-mount.sh
mkdir /hdfs
cd bin