The end of the Internet is near

Well, maybe not the end of the Internet, but the end of the IP address space as we know it is near. You know, IP addresses are the four blocks of numbers separated by periods, e.g. 10.0.0.1; well, there are only about 4 billion of them. Sometime next year, 2012, the Internet Assigned Numbers Authority (IANA) will have no more IP addresses to give to the Regional Internet Registries (RIRs). About a year after that, the RIRs will have no more to give out to Internet Service Providers (ISPs), such as Cox Communications and Go Daddy.
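Where does that 4 billion figure come from? An IPv4 address is 32 bits wide, so the address space tops out at 2^32. A quick sanity check (a hypothetical snippet, just for the arithmetic):

```java
public class AddressSpace {
    public static void main(String[] args) {
        // IPv4 addresses are 32 bits wide, so there are 2^32 of them in total.
        long total = 1L << 32;
        System.out.println(total); // 4294967296, i.e. roughly 4.3 billion
    }
}
```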

Read more: The end of the Internet is near

Making bp-events work on BuddyPress 1.2 (with invites working)

There is a lot of chatter asking the plugin creator to update bp-events to work with BuddyPress version 1.2, and so far it hasn’t happened. However, this guy has kindly ported it for us:

http://codewarrior.getpaidfrom.us/2010/04/21/buddypress-event-plugin-for-1-2/

Read more: Making bp-events work on BuddyPress 1.2 (with invites working)

Hack to make vipers-video-quicktags detect iPad and use HTML5 Video Tag

I use this WordPress plugin, vipers-video-quicktags, on a couple of sites I maintain. I use it to display FLV video. I’ve found that if I create .mp4 videos with the H.264 video and AAC audio codecs, they’ll play in the Flash player, as well as on iPhone, iPad and iPod. This is neat: a single video format … Read more: Hack to make vipers-video-quicktags detect iPad and use HTML5 Video Tag
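To give a flavor of what the hack does, here is a hedged sketch of the detection idea. The plugin itself is PHP inside WordPress; this Java version, with made-up class and file names, only illustrates the logic: sniff the user agent for Apple mobile devices and emit an HTML5 `<video>` tag instead of a Flash embed, pointed at the same .mp4 file.

```java
// Illustrative sketch only (the real plugin is PHP): pick HTML5 <video>
// markup for Apple mobile devices, and Flash embed markup otherwise.
public class VideoMarkupChooser {
    static boolean isAppleMobile(String userAgent) {
        return userAgent != null && userAgent.matches(".*(iPad|iPhone|iPod).*");
    }

    static String markupFor(String userAgent, String mp4Url) {
        if (isAppleMobile(userAgent)) {
            // iOS has no Flash, but plays H.264/AAC .mp4 natively via <video>.
            return "<video src=\"" + mp4Url + "\" controls></video>";
        }
        // Desktop browsers get the usual Flash player pointed at the same file.
        return "<object type=\"application/x-shockwave-flash\" data=\"player.swf\">"
             + "<param name=\"flashvars\" value=\"file=" + mp4Url + "\" /></object>";
    }

    public static void main(String[] args) {
        String ipadUa = "Mozilla/5.0 (iPad; CPU OS 5_1 like Mac OS X)";
        System.out.println(markupFor(ipadUa, "video/clip.mp4"));
        System.out.println(markupFor("Mozilla/5.0 (Windows NT 6.1)", "video/clip.mp4"));
    }
}
```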

Load Balancing Techniques

Load balancing is a term that describes a method of distributing incoming socket connections across different servers. It’s not distributed computing, where jobs are broken up into a series of sub-jobs so that each server does a fraction of the overall work. Rather, incoming socket connections are spread out across different servers. Each incoming connection communicates with the node it was delegated to, and the entire interaction occurs there. No node is aware of the other nodes’ existence.
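As a minimal sketch of that delegation idea (hypothetical class name and addresses, not any particular product): hand each new connection to the next back-end in round-robin order, and let the whole conversation stay on that server.

```java
import java.util.List;
import java.util.concurrent.atomic.AtomicInteger;

// Minimal round-robin delegation sketch: each incoming connection is
// assigned to one back-end server and stays there for its whole lifetime.
public class RoundRobinBalancer {
    private final List<String> backends;
    private final AtomicInteger next = new AtomicInteger();

    public RoundRobinBalancer(List<String> backends) {
        this.backends = backends;
    }

    public String pickBackend() {
        // Rotate through the back-ends; Math.floorMod keeps the index positive
        // even after the counter overflows.
        int i = Math.floorMod(next.getAndIncrement(), backends.size());
        return backends.get(i);
    }

    public static void main(String[] args) {
        RoundRobinBalancer lb = new RoundRobinBalancer(
                List.of("10.0.0.11", "10.0.0.12", "10.0.0.13"));
        for (int conn = 0; conn < 6; conn++) {
            System.out.println("connection " + conn + " -> " + lb.pickBackend());
        }
    }
}
```

Real load balancers layer health checks, weighting, and session affinity on top of something like this, but the core delegation step is that simple.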

Why do you need load balancing?
Simple answer: Scalability and Redundancy.

Scalability

If your application becomes busy, resources such as bandwidth, CPU, memory, disk space, and disk I/O may reach their limits. To remedy this, you have two options: scale up or scale out. Load balancing is a scale-out technique. Rather than increasing the resources of a single server, you add cost-effective commodity servers, creating a “cluster” of servers that perform the same task. Scaling out is more cost effective because commodity-level hardware provides the most bang for the buck. High-end hardware comes at a premium, and can be avoided in many cases.

Redundancy

Servers crash; this is the rule, not the exception. Your architecture should be designed to reduce or eliminate single points of failure (SPOFs). Load balancing a cluster of servers that perform the same role makes it possible to take a server out manually for maintenance without taking down the system, and to withstand a server crashing. This is called High Availability, or HA for short. Load balancing is a tactic that assists with High Availability, but it is not High Availability by itself. To achieve high availability, you need automated monitoring that checks the status of the applications in your cluster and automatically takes servers out of rotation when a failure is detected. These tools are often bundled into load balancing software and appliances, but sometimes need to be programmed independently.
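To make that last point concrete, here is a hedged sketch (hypothetical class name and timeout, not a specific monitoring tool) of a health check that probes each back-end’s TCP port and keeps only responsive servers in the rotation:

```java
import java.net.InetSocketAddress;
import java.net.Socket;
import java.util.List;
import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical health-check loop: probe each back-end and keep only the
// servers that answer in the active rotation.
public class HealthChecker {
    private final List<InetSocketAddress> backends;
    private final Set<InetSocketAddress> healthy = ConcurrentHashMap.newKeySet();

    public HealthChecker(List<InetSocketAddress> backends) {
        this.backends = backends;
    }

    public void runOnce() {
        for (InetSocketAddress backend : backends) {
            try (Socket s = new Socket()) {
                s.connect(backend, 1000);   // 1-second connect timeout
                healthy.add(backend);       // reachable: keep in rotation
            } catch (Exception e) {
                healthy.remove(backend);    // unreachable: take out of rotation
            }
        }
    }

    public Set<InetSocketAddress> inRotation() {
        return healthy;
    }
}
```

In practice you would run this on a schedule and only probe the application port (or an application-level status URL), but the remove-from-rotation idea is the same.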

How to perform load balancing?

There are three well-known ways (a small DNS-based sketch follows the list):

  1. DNS based
  2. Hardware based
  3. Software based
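As a small illustration of the DNS-based approach (the hostname below is just a placeholder): a name published with several A records spreads clients across the addresses it returns.

```java
import java.net.InetAddress;

// Resolve a hostname and print every A record it returns. A DNS-based
// balancer publishes several addresses for one name, and clients end up
// spread across them. The hostname here is a placeholder.
public class DnsRoundRobinLookup {
    public static void main(String[] args) throws Exception {
        String host = args.length > 0 ? args[0] : "www.example.com";
        for (InetAddress address : InetAddress.getAllByName(host)) {
            System.out.println(host + " -> " + address.getHostAddress());
        }
    }
}
```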

Read more: Load Balancing Techniques

Parallel Distributed Computing Example

You may have seen the article Hadoop Example: AccessLogCountByHourOfDay. That is a distributed computing solution using Hadoop. The purpose of this article is to dive into the theory behind it.

To understand the power of distributed computing, we need to step back and understand the problem. First we’ll look at a command-line Java program that processes each HTTP log file, one file at a time, one line at a time, until done. To speed up the job, we’ll then look at another approach: multi-threaded. We should be able to get the job done faster if we break the job up into a set of sub-tasks and run them in parallel. Then we’ll come to Hadoop and distributed computing: the same concept of breaking the job up into a set of sub-tasks, but rather than running on one server, we’ll run on multiple servers in parallel.
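For a taste of the multi-threaded step, here is a stripped-down sketch in the spirit of AccessLogCountByHourOfDay (hypothetical class name, not the original article’s code): one task per log file, each tallying hits by hour of day into a shared array of counters.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicLongArray;

// Hypothetical multi-threaded sketch: one task per access-log file, each
// tallying hits by hour of day into a shared, thread-safe counter array.
public class CountByHourThreaded {
    private static final AtomicLongArray counts = new AtomicLongArray(24);

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (String file : args) {
            pool.submit(() -> countFile(Path.of(file)));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);

        for (int hour = 0; hour < 24; hour++) {
            System.out.printf("%02d:00  %d%n", hour, counts.get(hour));
        }
    }

    private static void countFile(Path file) {
        // Common log format timestamps look like [10/Oct/2010:13:55:36 -0700];
        // the hour is the two digits after the first ':'.
        try (BufferedReader reader = Files.newBufferedReader(file)) {
            String line;
            while ((line = reader.readLine()) != null) {
                int colon = line.indexOf(':');
                if (colon > 0 && colon + 3 <= line.length()) {
                    try {
                        int hour = Integer.parseInt(line.substring(colon + 1, colon + 3));
                        if (hour >= 0 && hour < 24) {
                            counts.incrementAndGet(hour);
                        }
                    } catch (NumberFormatException ignored) {
                        // malformed line: skip it
                    }
                }
            }
        } catch (IOException e) {
            System.err.println("failed to read " + file + ": " + e.getMessage());
        }
    }
}
```

The single-threaded version is the same loop without the thread pool; the Hadoop version expresses the same tally as a Map/Reduce job spread across many machines.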

At first you’d think that Hadoop would be the fastest, but in our basic example you’ll see that Hadoop isn’t significantly faster. Why? The Hadoop overhead of scheduling the job and tracking the tasks slows us down. In order to see the power of Hadoop, we need much larger data sets. Think about our single-server approach for a minute. As we ramp up the size and/or number of files to process, there is going to be a point where the server hits resource limitations (CPU, RAM, disk). If we have 4 threads making effective use of 4 CPU cores, we may be able to do a job 4 times faster than single-threaded. But if we have a terabyte of data to process and it takes, say, 100 seconds per GB, it’s going to take 100,000 seconds to finish (that’s more than a day). With Hadoop, we can scale out horizontally. What if we had a 1,000-node Hadoop cluster? Suddenly the overhead of scheduling the job and tracking the tasks is minuscule in comparison to the whole job. The whole job may complete in around 100 seconds! We went from over a day to less than two minutes. Wow.

Please note: the single-threaded and multi-threaded examples in this article are not using the Map/Reduce algorithm. This is intentional. I’m trying to demonstrate the evolution of thought. When we think about how to solve the problem, the first thing that comes to mind is to walk through the files, one line at a time, and accumulate the result. Then we realize we could split the job up into threads and gain some speed. The last evolution is the Map/Reduce algorithm running across a distributed computing platform.

Let’s dive in….

Read more: Parallel Distributed Computing Example