Fix Home Directory Permissions for SSH Logins on AWS Instance

What happens when you accidentally change the permissions on the ec2-user’s home directory in an AWS instance?  You get locked out – no more SSH access!

Before you panic (like I did at first), there is a solution.

  1. Shut down the affected instance
  2. Make a note of the root (/) volume name and detach it from the instance (hopefully you chose an EBS-backed volume for your root partition). This is most likely attached as /dev/sda1 (this is VERY important later)
  3. Attach the volume to another instance (if you don’t have another one, just launch one) and name it something or accept the name chosen for you. In this case, we’ll assume /dev/sdd (you can add a 1 to the end of it if you’re picky, but I just stuck with the default – it DOES matter in the next step)
  4. Mount the volume you just attached:
    1. As root on the other instance, type:
      1. “mkdir /aws-root” (to create the mount point)
      2. “mount /dev/xvdd /aws-root” (Linux renames the device by changing the ‘sd’ to ‘xvd’ – most likely a Xen thing. NOTE: the device has to have the same name you gave it when you attached it – taking into consideration the Xen device name change. In other words, /dev/sdd becomes /dev/xvdd, /dev/sde becomes /dev/xvde, and so on.)
    2. Now you have the volume mounted and can change the home directory permissions. Using our mount point, the ec2-user’s home directory is located at /aws-root/home/ec2-user:
      1. “chmod 700 /aws-root/home/ec2-user”
    3. Unmount the volume (make sure you’re not in the directory by doing a quick pwd first):
      1. “umount /aws-root”
    4. Detach the volume from the instance
    5. Attach the volume to the original instance. This time, BE SURE to name it /dev/sda1 <– the 1 is IMPORTANT – otherwise your instance won’t boot because the kernel won’t be able to find your root partition!!

Start your instance up again and you should now be able to login again with SSH.
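
If you’d rather script the volume shuffle than click through the console, the same steps can be driven from the AWS CLI. Here is a rough sketch – the instance and volume IDs below are placeholders, so substitute your own:

# stop the affected instance and detach its root volume
aws ec2 stop-instances --instance-ids i-0aaa1111
aws ec2 wait instance-stopped --instance-ids i-0aaa1111
aws ec2 detach-volume --volume-id vol-0bbb2222

# attach the volume to a helper instance as /dev/sdd
aws ec2 attach-volume --volume-id vol-0bbb2222 --instance-id i-0ccc3333 --device /dev/sdd
# ...log in to the helper instance and do the mount/chmod/umount steps above, then detach...
aws ec2 detach-volume --volume-id vol-0bbb2222

# re-attach the volume to the original instance as /dev/sda1 and start it back up
aws ec2 attach-volume --volume-id vol-0bbb2222 --instance-id i-0aaa1111 --device /dev/sda1
aws ec2 start-instances --instance-ids i-0aaa1111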

How to Capture Traffic on Cisco ASA / PIX (sniffer)

To capture traffic on a Cisco ASA or PIX firewall, use the capture command.

Example: Capturing traffic on ASA/PIX

You want to capture traffic from/to host 10.100.100.1 located behind the dmz interface.

The access-list is optional and is used to filter the capture to interesting traffic:
pix(config)# access-list interesting_traffic permit ip host 10.100.100.1 any
pix(config)# access-list interesting_traffic permit ip any host 10.100.100.1
pix(config)# capture cap1 access-list interesting_traffic interface dmz

pix(config)# show capture
capture cap1 access-list interesting_traffic interface dmz

Commands to show capturing results:
show capture cap1
show capture cap1 detail
show capture cap1 dump

Command to clear captured traffic:
clear capture cap1

Command to save results to tftp server:

copy capture:cap1 tftp://10.1.1.1/dmzhost.txt

To save the results in pcap format:
copy /pcap capture:cap1 tftp://10.1.1.1/dmzhost.pcap

Command to disable capturing:

pix(config)# no capture cap1
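
One more option worth knowing about: the default capture buffer is fairly small, so on most ASA/PIX releases you can set the buffer size (in bytes) and the captured packet length when you create the capture – the exact keywords may vary slightly by software version:

pix(config)# capture cap1 access-list interesting_traffic interface dmz buffer 1000000 packet-length 1522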

 

This can be very helpful in troubleshooting connectivity issues.  I most recently used this to troubleshoot VoIP issues for a customer.

Houston Airport System Selects Zenoss to Monitor Cisco Switches, Routers, and Firewalls

I recently implemented the Zenoss Enterprise appliance for the City of Houston Airport System to monitor over 250 Cisco network devices located at George Bush Intercontinental (IAH), William P. Hobby (HOU), and Ellington Airport (EFD).

Why Zenoss? According to the CTO of the Houston Airport System, Matt Hyde, the implementation of Zenoss “will give us greater visibility and control over our network devices and reduce our current monitoring costs”. He goes on to say that “within sixty days, we were able to make the switch to [Zenoss] and will pay for the cost of the new system with just two months of network monitoring savings. Total annual cost reductions are over 500%. Rarely do we ever get an ROI of that magnitude and we would not have achieved these savings without the help of Pate Consulting [implementing Zenoss].”
The appliance was nicely assembled in a 1U rack-mountable server. After plugging in the necessary keyboard, mouse, monitor, power, and network cables, I powered it on and watched it boot CentOS 5.3.

 

Aside from the necessary ‘yum update’, all I had to do was set the correct time and time zone, assign a valid IP address, and set up an SMTP server (if you want Zenoss alerts sent via email, of course). As usual, I recommend Postfix.

 

Install and configure Postfix:

[root@host]# yum install postfix

[root@host]# vi /etc/postfix/main.cf

 

Change myhostname to a valid hostname in your environment
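
For example (the hostname below is only a placeholder for your environment):

myhostname = zenoss.example.com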

Make Postfix start on boot:

 

[root@host]# chkconfig --level 2345 postfix on

[root@host]# service postfix start

 

Make sure that Postfix and Zenoss are running:

 

[root@host]# service postfix status

[root@host]# service zenoss status

 

Once we do that, it’s on to the easy part – the Zenoss web interface.

 

After adding all 5 networks, Zenoss automatically scanned them flawlessly.

 

As should be the case, the Cisco SNMP community string was not the Zenoss default ‘public’. This would cause problems if not corrected.

 

To solve this problem, I edited the SNMP community string in the /Network/Router/Cisco and /Network/Cisco class templates so that any device added to these classes would automatically inherit the correct community string.

 

Click on “Devices” in the left-side menu, then select the device class Network. Now select the device class Router, then Cisco. At this point, the breadcrumb navigation (found just below the Zenoss logo) should read “/Devices/Network/Router/Cisco”. Click on the zProperties tab for this class and change zSnmpCommunity to the new value. Just for good measure, go ahead and put your new community string in the zSnmpCommunities text area as well and save changes. From now on, every device you add to the /Network/Router/Cisco device class will inherit the new SNMP community string. This may seem like a lot of work for a few devices, but if you’re adding 250 devices this “feature” is a real time-saver!
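
Before relying on the class setting, it is worth confirming the community string actually works from the Zenoss appliance itself. A quick check with snmpwalk (net-snmp-utils provides it; the community string and device address below are placeholders):

[root@host]# snmpwalk -v 2c -c MyC0mmun1ty 10.0.0.1 1.3.6.1.2.1.1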

 

Overall, I was impressed with the Zenoss appliance and the City of Houston benefited from the power and flexibility of Zenoss.

How to Perform a Cisco Router Password Recovery Without Losing Your Configuration

Forgot your Cisco router password?  Did you know you can reset it without losing your configuration?  In this brief how-to, I will walk you through it. In order to perform a password recovery, you will need to reboot the router a couple of times.  This means downtime, but it is a sacrifice worth making in order to get your passwords reset.

First, hook up the DB9 end of the standard light blue serial cable to your serial port.  The other end of the cable should plug into the port labeled “Console” on the back of the Cisco router.  If you do not have a serial port, then you’ll need to go purchase a USB-to-serial adapter cable and install it on your computer.
Now that your hardware is connected, establish a serial connection with the router.

The settings you need are:

Baud: 9600
Data bits: 8
Parity: No
Stop bits: 1
Flow Control: None

On Windows, I use PuTTY for this connection.  Yes, PuTTY can be used to make serial connections as well as Telnet/SSH.  HyperTerminal works great as well.  On Linux, I use minicom, and on FreeBSD/OpenBSD, I use cu (cu -s 9600 -l /dev/cuad0).

Reboot the router and press the Break key to interrupt the boot sequence.

For break key sequences, refer to this Cisco link: http://www.cisco.com/en/US/products/hw/routers/ps133/products_tech_note0…

Type confreg 0x2142 at the rommon prompt.  This tells the router to bypass NVRAM during bootup.  In other words, your existing configuration won’t be loaded.  The good news is that it won’t be deleted either.

Type reset to reboot the router.  Answer No when prompted to run setup.

Type copy start run.  This loads your startup configuration into memory.  Now, if you type show running-config (show run for short), you’ll see the router configuration.  Also, you should notice that your router name is now in the prompt instead of the default “Router”.

Enter configuration mode with configure terminal, then change the enable secret – “enable secret new_password”

Change the register back to 0x2102:
config-register 0x2102

When the router reboots it will load the old configuration with the new password.

Save the configuration so the new password persists across reboots by typing copy run start

Reboot the router by typing reload at the enable prompt.
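
Putting those last few steps together, the console session looks roughly like this (MyRouter and new_password are just placeholders):

Router# copy start run
MyRouter# configure terminal
MyRouter(config)# enable secret new_password
MyRouter(config)# config-register 0x2102
MyRouter(config)# end
MyRouter# copy run start
MyRouter# reload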

Now, keep that password in a nice safe place – in your head does not count.  I keep mine saved in a safe place for future retrieval and I make sure my customers have a copy as well.  Remember, passwords are nice until you forget them.

CentOS Cluster-DRBD Setup

As part of a MySQL Cluster setup, I recently set up a 2-node web cluster using CentOS’s native cluster software suite, with a twist: the web root was mounted on a DRBD partition in place of periodic file sync’ing. This article focuses on the cluster setup and does not cover the DRBD setup/configuration. Let’s go:

Install “Cluster Storage” group using yum:

[root@host]# yum groupinstall "Cluster Storage"

Edit /etc/cluster/cluster.conf: ======================================================
<?xml version="1.0"?>
<cluster name="drbd_srv" config_version="1">
<cman two_node="1" expected_votes="1">
</cman>
<clusternodes>
<clusternode name="WEB_Node1" votes="1" nodeid="1">
<fence>
<method name="single">
<device name="human" ipaddr="10.255.255.225"/>
</method>
</fence>
</clusternode>
<clusternode name="WEB_Node2" votes="1" nodeid="2">
<fence>
<method name="single">
<device name="human" ipaddr="10.255.255.226"/>
</method>
</fence>
</clusternode>
</clusternodes>
<fencedevices>
<fencedevice name="human" agent="fence_manual"/>
</fencedevices>
</cluster>
======================================================
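
The file needs to be identical on both nodes, so copy it over before starting anything. It is also cheap to confirm the XML is at least well-formed first – xmllint ships with libxml2 on CentOS, and adjust the node name in the scp command to match your second node:

[root@host]# xmllint --noout /etc/cluster/cluster.conf
[root@host]# scp /etc/cluster/cluster.conf WEB_Node2:/etc/cluster/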

Start the cluster:

[root@host]# service cman start (on both nodes)

To verify proper startup:

[root@host]# cman_tool nodes

Should show:

Node  Sts   Inc   Joined               Name
1   M     16   2009-08-11 22:13:27  WEB_Node1
2   M     24   2009-08-11 22:13:34  WEB_Node2

Status ‘M’ means normal, ‘X’ would mean there is a problem

Edit /etc/lvm/lvm.conf

change:

locking_type = 1

to:

locking_type = 3

and change:

filter = [ "a/.*/" ]

to:

filter = [ "a|drbd.*|", "r|.*|" ]

Start clvmd:

[root@host]# service clvmd start (on both nodes)

Set cman and clvmd to start on bootup on both nodes:

[root@host]# chkconfig --level 345 cman on
[root@host]# chkconfig --level 345 clvmd on

Run vgscan:

[root@host]# vgscan

Create a new PV (physical volume) using the drbd block device

[root@host]# pvcreate /dev/drbd1

Create a new VG (volume group) using the drbd block device

[root@host]# vgcreate VolGroup01 /dev/drbd1 (name it whatever you like – VolGroup01 is what the rest of this article uses)

Now when you run vgdisplay, you should see:

--- Volume group ---
VG Name             VolGroup01
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  1
VG Access             read/write
VG Status             resizable
Clustered             yes
Shared                no
MAX LV                0
Cur LV                0
Open LV               0
Max PV                0
Cur PV                1
Act PV                1
VG Size               114.81 GB
PE Size               4.00 MB
Total PE              29391
Alloc PE / Size       0 / 0
Free  PE / Size       29391 / 114.81 GB
VG UUID               k9TBBF-xdg7-as4a-2F0c-XGTv-M2Wh-CabVXZ

Notice the line that reads “Clustered yes”

Create an LV (logical volume) in the VG (volume group) that we just created.
**Make sure your drbd nodes are both in primary roles before doing this.

If there is a “Split-Brain detected, dropping connection!” entry in /var/log/messages, then a manual split-brain recovery is necessary.

To manually recover from a split-brain scenario (on the split-brain machine):

[root@host]# drbdadm secondary r0
[root@host]# drbdadm -- --discard-my-data connect r0
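
On the other (surviving) node, the resource will usually sit in WFConnection and reconnect on its own once the victim reconnects; if it does not, asking it to connect again is normally all that is needed (assuming the resource is named r0, as above):

[root@host]# drbdadm connect r0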

Once you have verified that both nodes are in the primary role:

On node2:

[root@host]# service clvmd restart

On node1:

[root@host]# lvcreate -l 100%FREE -n gfs VolGroup01

Format the LV with gfs:

[root@host]# mkfs.gfs -p lock_dlm -t drbd_srv:www -j 2 /dev/VolGroup01/gfs (drbd_srv is the cluster name, www is a locking table name)

Start the gfs service (on both nodes):

[root@host]# service gfs start

Set the gfs service to start automatically (on both nodes):

[root@host]# chkconfig --level 345 gfs on

Mount the filesystem (on both nodes):

[root@host]# mount -t gfs /dev/VolGroup01/gfs /srv
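
For the gfs init script to mount this automatically at boot, the filesystem generally needs an entry in /etc/fstab on both nodes – something along these lines (the mount options are just a sensible starting point):

/dev/VolGroup01/gfs     /srv     gfs     defaults,noatime     0 0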

Modify the startup sequence as follows:

1) network
2) drbd  (S15)
3) cman  (S21)
4) clvmd (S24)
5) gfs   (S26)

Remove the soft link for shutting down openais (K20openais in runlevel 3 for our install) because shutting down cman does the same.

Modify the shutdown sequence as follows (in reverse order as startup sequence):

1) gfs   (K21)
2) clvmd (K22)
3) cman  (K23)
4) drbd  (K24)
5) network
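
The ordering above is controlled by the S##/K## symlinks under /etc/rc<runlevel>.d, so adjusting it is a matter of reviewing and renaming those links. The exact link names below are only examples and will likely differ on your install:

[root@host]# ls /etc/rc3.d /etc/rc6.d | egrep 'network|drbd|cman|clvmd|gfs|openais'
[root@host]# mv /etc/rc3.d/S70drbd /etc/rc3.d/S15drbd   (example: make drbd start at S15)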

Reboot each server to test availability of gfs filesystem during reboot.

Troubleshooting:

If one node’s drbd status is showing:

version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17

1: cs:StandAlone st:Primary/Unknown ds:UpToDate/DUnknown   r---
ns:0 nr:0 dw:40 dr:845 al:1 bm:3 lo:0 pe:0 ua:0 ap:0 oos:12288

and the other node is showing:

version: 8.2.6 (api:88/proto:86-88)
GIT-hash: 3e69822d3bb4920a8c1bfdf7d647169eba7d2eb4 build by buildsvn@c5-x8664-build, 2008-10-03 11:30:17

1: cs:WFConnection st:Primary/Unknown ds:UpToDate/DUnknown C r---
ns:0 nr:0 dw:56 dr:645 al:3 bm:2 lo:0 pe:0 ua:0 ap:0 oos:8200

Then there is a drbd sync issue (note that the node showing cs:WFConnection should also be the one that still has the gfs filesystem mounted). To solve this, just restart the cluster services on the node with “cs:StandAlone” in the status field, in this order:

1) service clvmd stop
2) service cman stop
3) service drbd restart

Verify you see Primary/Primary in the st: section of the drbd status (cat /proc/drbd)

4) service cman start
5) service clvmd start
6) service gfs start

Combine Multiple RSS Feeds Using PHP

Need to track and combine multiple RSS feeds? This script will allow you to pull RSS feeds from multiple sites and integrate them into your site dynamically. Feel free to use and modify the code below for your own website. Need help with the script? We offer PHP and web development services.

<?php
// simple timeout in seconds
$timeout = 2;

// variables to be used later
$xml_string = "";
// initialize this so the timeout check further down works even if the connection fails
$info = array('timed_out' => false);

// let's open the socket to the RSS feed
$fp = @fsockopen("feeds.warhammeronline.com", 80, $errno, $errstr, $timeout);

// check to see if the fsockopen returned a valid resource
if ($fp) {
    // now that we have a valid resource, let's request the data that we need
    fwrite($fp, "GET /warherald/RSSFeed.war?type=current HTTP/1.0\r\n");
    fwrite($fp, "Host: feeds.warhammeronline.com\r\n");
    fwrite($fp, "Connection: Close\r\n\r\n");

    // let's wait until data becomes available within the stream
    stream_set_blocking($fp, TRUE);

    // but let's only wait for a specific amount of time
    stream_set_timeout($fp, $timeout);

    // get the header/meta data; currently we are just using this to see if we timed out or not
    $info = stream_get_meta_data($fp);

    // while there is still data in the stream and we haven't timed out, get the next bit of data
    while ((!feof($fp)) && (!$info['timed_out'])) {

        // get the next line in the stream
        $xml_string .= fgets($fp, 4096);

        // refresh the header/meta data to see if we have timed out yet
        $info = stream_get_meta_data($fp);

        // this isn't required, but I like to send the data to the browser as soon as possible
        ob_flush();
        flush();
    }

    // we are done with the stream, so get rid of it
    fclose($fp);
}

// we timed out, so let the user know, if you want
if ($info['timed_out']) {
    echo "Warhammer News feed - Connection timed out";
} else {
    // strip off the initial part of the stream (the HTTP headers) that we don't need;
    // we just want the XML from the RSS feed
    $xml_string = substr($xml_string, strpos($xml_string, '<'));

    // we are using the built-in php function to parse the xml
    // and convert it into an object we can use
    $xml = simplexml_load_string($xml_string);

    // check to see if the object was created from the RSS feed
    if (!is_object($xml)) {
        echo "Warhammer News feed - Error converting xml";
    } else {
        // this section is a bit hardcoded for this particular RSS feed,
        // but basically, you just walk the XML tree as an object

        // here we are looping through all the children of this particular parent
        foreach ($xml->channel->item as $key) {
            // variables to use later
            $temp_title = $key->title;
            $link = $key->link;

            // I don't always have enough room on the page to show the
            // entire title of the RSS feed, so I am just limiting it here
            // and then adding 3 dots (...) to show that the title is longer than displayed
            $title = str_replace('-', '', substr($key->title, 0, 25)) . "...";

            // display it on the page; even though the text of the link is
            // limited to a specific number of characters,
            // when you roll over it with your mouse, it will show the full title
            // in the little popup that is built into the href tag
            echo "<li><a href='$link' target='_blank' title='$temp_title'>$title</a></li>";
        }
    }

    // we are done, so get rid of it
    $xml = NULL;
}

Adding Pseudo-TTY Support in CentOS

Recently, I had a customer in the banking industry that needed legacy pseudo-tty support on their servers. This article describes the process of adding pseudo-tty support to the CentOS kernel. In addition, I describe how to create a custom kernel RPM for easy distribution to other servers.

Let’s get started:
Download source kernel RPM from http://mirror.centos.org/centos/5/updates/SRPMS
Install the source RPM:

[root@host]# rpm -Uvh kernel-.el5.src.rpm

Unpack sources:

[root@host]# cd /usr/src/redhat/SPECS

For kernels < 2.6.18-164: rpmbuild -bp --target=`uname -m` kernel-2.6.spec
For kernels >= 2.6.18-164: rpmbuild -bp --target=`uname -m` --without fips kernel-2.6.spec

Copy current kernel config to source tree

[root@host]# cd /usr/src/redhat/BUILD/kernel-2.6.18/linux-2.6.18.i686
[root@host]# cp /boot/config-`uname -r` .config
[root@host]# make oldconfig

edit the .config file and add:

CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256

after the line:

CONFIG_UNIX98_PTYS=y

**IMPORTANT**
Delete the line:
# CONFIG_LEGACY_PTYS is not set

Otherwise, the kernel won’t be compiled with PTY support!!

Also, add a line at the very top of the file reflecting your architecture:

# i386

or

# x86_64

Save changes
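
After those edits, the relevant parts of the .config should look something like this (i386 shown here; use # x86_64 on a 64-bit build):

# i386
# ...rest of the file unchanged until the PTY options...
CONFIG_UNIX98_PTYS=y
CONFIG_LEGACY_PTYS=y
CONFIG_LEGACY_PTY_COUNT=256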

Copy the new .config file to the source directory:

[root@host]# cp .config /usr/src/redhat/SOURCES/kernel-2.6.18-i686-PAE.config (or whatever the architecture/configuration is)

Modify the /usr/src/redhat/SPECS/kernel-2.6.spec file to customize the new kernel by uncommenting and editing this line (add .pty to it):

%define buildid .pty

Save changes

Build the new kernel RPM:

[root@host]# cd /usr/src/redhat/SPECS
For kernels < 2.6.18-164: [root@host]# rpmbuild -bb --target=`uname -m` kernel-2.6.spec
For kernels >= 2.6.18-164: [root@host]# rpmbuild -bb --target=`uname -m` --without fips kernel-2.6.spec
==AND==
Add the following switches to rpmbuild

for PAE only:

--without up --without xen --without debug --without debuginfo

for Base only:

--with baseonly --without debug --without debuginfo

For more info, visit wiki.centos.org/HowTos/Custom_Kernel

Once finished, install the new kernel:

[root@host]# rpm -ivh /usr/src/redhat/RPMS/`uname -m`/name-of-new-kernel-rpm.rpm
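
Once you have rebooted into the new kernel, a quick sanity check confirms that legacy PTY support made it in – both CONFIG_LEGACY_PTYS=y and CONFIG_LEGACY_PTY_COUNT=256 should come back:

[root@host]# grep LEGACY_PTY /boot/config-`uname -r`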