openldap-2.3 on CentOS5 Tutorial

I had to get a dummy openldap setup that had “mail” as one of its attributes for the records. I specifically needed all the records to live directly under the root, meaning no Organizational Units, just the base DN and then all the records. Like this:

dn: cn=1,dc=example,dc=com
cn: 1
objectClass: top
objectClass: dkuser
mail: someemail1@somedomain1.com
mailHost: somesmtphostname1:25

dn: cn=2,dc=example,dc=com
cn: 2
objectClass: top
objectClass: dkuser
mail: someemail2@somedomain2.com
mailHost: somesmtphostname2:25

…. and so on.
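
One gotcha before the commands: dkuser is not a stock objectClass, so slapd needs a schema definition for it before it will accept entries like the ones above. Here is a minimal sketch, assuming the mail attribute comes from the stock cosine.schema and mailHost from misc.schema (both ship with openldap-servers); the dkuser name and the 1.1.x OID are placeholders, so adapt them to your own setup:

# /etc/openldap/schema/dkuser.schema -- hypothetical custom schema
# 1.1.x is the OpenLDAP Admin Guide's example OID arc; use your own arc in production
objectclass ( 1.1.2.1.1 NAME 'dkuser'
    DESC 'Minimal entry carrying a mail address and an SMTP routing host'
    SUP top STRUCTURAL
    MUST ( cn $ mail $ mailHost ) )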

It was hard to find a step-by-step instruction set, so in this tutorial I'll give you command-by-command steps to install, configure, and load openldap on a CentOS5 OS.

First, install the packages with Yum:

yum install openldap openldap-clients openldap-servers nss_ldap python-ldap

Next, set ldap to run at system startup time:

/sbin/chkconfig ldap on

Next, get your password for slapd.conf:

cd /etc/openldap/
/usr/sbin/slappasswd

…. it'll prompt you for a new password; type it twice. All it does is spit out a password hash that you can copy and paste into slapd.conf. It looks like this:

New password:
Re-enter new password:
{SSHA}zskkuz1hd90SyXA4y+zN4AA0FBQorVEd
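
That hash goes into slapd.conf as the rootpw. Below is a minimal sketch of the relevant directives, assuming the dc=example,dc=com suffix used above, the CentOS5 default paths, the stock cn=Manager rootdn, and the hypothetical dkuser.schema sketched earlier; treat it as a starting point, not a hardened config:

# /etc/openldap/slapd.conf (relevant lines only)
include         /etc/openldap/schema/core.schema
include         /etc/openldap/schema/cosine.schema
include         /etc/openldap/schema/inetorgperson.schema
include         /etc/openldap/schema/nis.schema
include         /etc/openldap/schema/misc.schema
include         /etc/openldap/schema/dkuser.schema

database        bdb
suffix          "dc=example,dc=com"
rootdn          "cn=Manager,dc=example,dc=com"
rootpw          {SSHA}zskkuz1hd90SyXA4y+zN4AA0FBQorVEd
directory       /var/lib/ldap

With that in place, start the daemon and load your entries. The base dc=example,dc=com entry has to exist before its children, and records.ldif here is just a hypothetical file holding entries like the ones at the top:

/sbin/service ldap start
ldapadd -x -D "cn=Manager,dc=example,dc=com" -W -f records.ldif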

Load Balancing Techniques

Load balancing is a term that describes a method of distributing incoming socket connections across different servers. It's not distributed computing, where a job is broken up into a series of sub-jobs so that each server does a fraction of the overall work. Rather, incoming socket connections are spread out to different servers: each connection communicates with the node it was delegated to, the entire interaction occurs there, and no node is aware of the other nodes' existence.

Why do you need load balancing?
Simple answer: Scalability and Redundancy.

Scalability

If your application becomes busy, resources such as bandwidth, CPU, memory, disk space, and disk I/O may reach their limits. To remedy the problem, you have two options: scale up or scale out. Load balancing is a scale-out technique. Rather than increasing the resources of a single server, you add cost-effective, commodity servers, creating a “cluster” of servers that perform the same task. Scaling out is more cost effective because commodity-level hardware provides the most bang for the buck; high-end hardware comes at a premium, and can be avoided in many cases.

Redundancy

Servers crash; this is the rule, not the exception. Your architecture should be devised in a way that reduces or eliminates single points of failure (SPOF). Load balancing a cluster of servers that perform the same role lets a server be taken out manually for maintenance without taking down the system, and also lets you withstand a server crashing. This is called High Availability, or HA for short. Load balancing is a tactic that assists with High Availability, but it is not High Availability by itself. To achieve high availability, you need automated monitoring that checks the status of the applications in your cluster and takes failed servers out of rotation when a failure is detected. These tools are often bundled into load balancing software and appliances, but sometimes need to be set up independently.
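
To give a concrete flavor of that automation, here is a hedged sketch of one software load balancer (HAProxy, as just one example) doing health checks that pull a dead server out of rotation; the names, addresses, and /health URL are made up:

# haproxy.cfg fragment -- hypothetical backends and check URL
listen webfarm
    bind 0.0.0.0:80
    mode http
    balance roundrobin
    option httpchk GET /health
    # "check" polls every 2 seconds; 3 failures mark a server down, 2 successes bring it back
    server web1 192.0.2.11:80 check inter 2000 fall 3 rise 2
    server web2 192.0.2.12:80 check inter 2000 fall 3 rise 2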

How do you perform load balancing?

There are three well-known ways:

  1. DNS based (a zone file sketch follows this list)
  2. Hardware based
  3. Software based
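
To give a feel for the first one, round-robin DNS is nothing more than publishing several A records for the same name and letting resolvers rotate through them. A hypothetical zone file fragment (addresses from the 192.0.2.0/24 documentation range):

; example.com zone fragment -- round-robin DNS
www     300     IN      A       192.0.2.11
www     300     IN      A       192.0.2.12
www     300     IN      A       192.0.2.13

The short TTL matters: DNS only stops handing out a dead server's address as fast as caches expire, which is a big part of why the hardware and software approaches exist.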
