Posts

Showing posts from 2013

PowerShell and .Net > 2.0

Recently I started working on a Windows machine to configure it as a build agent for Jenkins. One of the requirements was to make PowerShell work with .Net 4.5. Windows 7 ships with PowerShell 2.0 by default (don't go by the folder structure, which still says v1.0). You can get the real details by typing $PSVersionTable in the PS console. CLRVersion is the .Net version loaded by the shell; most often it is 2.0, since PowerShell loads the .Net 2.0 runtime by default. If you get errors about loading assemblies even after loading them in your code, you are most likely not on the .Net version you expect. A typical failure:

ERROR: Unable to find type [System.IO.Compression.CompressionLevel]: make sure that the assembly containing this type is loaded.
At C:\jenkins\workspace\SSENext-Core-Win\scripts\win\finalize-core-package.ps1:9 char:56
+ $Compression = [System.IO.Compression.CompressionLevel]

To load .Net versions greater than 2.0, the following settings need to be made.
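The setting itself is cut off in this excerpt. The commonly used fix (an assumption here, not taken from the post) is a powershell.exe.config file next to the PowerShell executable, asking the loader to prefer the .NET 4 runtime:

```xml
<?xml version="1.0"?>
<!-- Hypothetical powershell.exe.config, typically placed at
     C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe.config.
     It asks the CLR loader to prefer .NET 4, so types from 4.x
     assemblies such as System.IO.Compression.CompressionLevel resolve. -->
<configuration>
  <startup useLegacyV2RuntimeActivationPolicy="true">
    <supportedRuntime version="v4.0.30319" />
    <supportedRuntime version="v2.0.50727" />
  </startup>
</configuration>
```

After restarting the console, $PSVersionTable should report a CLRVersion of 4.x.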

Convert an Instance-Store AMI to EBS-Backed AMI

1. Launch an instance from an instance store-backed AMI. Make a note of the ramdisk ID and kernel ID your running instance is using.
2. Create a 10 GiB Amazon EBS volume in the same Availability Zone as your newly-launched instance, e.g. /dev/sdf.
3. Attach the volume to the running instance using either the AWS Management Console or the command line interface.
4. Make sure all services, like mysqld and nginx, are turned off.
5. Format the volume with a file system. Newer kernels use the /dev/xvdf convention: mkfs.ext3 /dev/xvdf
6. Create a directory /mnt/ebs and mount the volume on it: mkdir /mnt/ebs; mount /dev/xvdf /mnt/ebs
7. Copy the data on the root storage device to the newly-attached volume: rsync -avx --exclude /mnt/ebs / /mnt/ebs
8. [SPECIFIC TO UBUNTU] Edit /mnt/ebs/etc/fstab, /mnt/ebs/boot/grub/menu.lst and /mnt/ebs/boot/grub/grub.cfg and replace uses of LAB
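The format/mount/copy steps can be sketched as a small script. The device name /dev/xvdf and mount point /mnt/ebs follow the post; the DRY_RUN guard is my addition so the sketch prints each command instead of running it:

```shell
#!/bin/sh
# Sketch of the format/mount/copy steps. DEV and MNT follow the post;
# with DRY_RUN=1 (the default here) each command is only printed.
DEV=${DEV:-/dev/xvdf}
MNT=${MNT:-/mnt/ebs}
DRY_RUN=${DRY_RUN:-1}

run() {
  if [ "$DRY_RUN" = "1" ]; then echo "$*"; else "$@"; fi
}

run mkfs.ext3 "$DEV"                     # format the EBS volume
run mkdir -p "$MNT"                      # create the mount point
run mount "$DEV" "$MNT"                  # mount it
run rsync -avx --exclude "$MNT" / "$MNT" # copy root fs, skipping the mount
```

Run it with DRY_RUN=0 on the instance once you have confirmed the device name with lsblk.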

Mongorestore using knife

This post is a continuation of my post Taking a Mongodump using Knife. Keeping the assumptions the same as the previous post, this post outlines a knife plugin to restore your mongo database from a dump taken as above.

module KnifePlugins
  class MongoDumpApply < Chef::Knife
    banner 'knife mongo dump apply'

    option :db_env,
      :long => '--db-env DB_ENV',
      :description => 'Environment to apply the dump to'

    option :dump_dir,
      :long => '--dump-dir DUMP_DIR',
      :description => 'Full path to dump location'

    deps do
      require 'chef/search/query'
    end

    def run
      @dump_dir = config[:dump_dir]
      @env = config[:db_env]
      if !valid_option(@env) || !valid_option(@dump_dir)
        ui.fatal 'Please provide environment name and dump location' +
          ' e.g. knife mongo dump apply --db-env ft --dump-dir "/opt/mongo_backup/mongodump_201304131122"
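Under the hood, a plugin like this ultimately shells out to mongorestore on the node. A minimal sketch of that command, assuming the auth setup from the previous post; host, user, password and dump path below are placeholders, not values from the post:

```shell
# Assembles the mongorestore invocation such a plugin would run.
# All four arguments are placeholders.
restore_cmd() {
  host=$1; user=$2; pass=$3; dump=$4
  echo "mongorestore --host $host -u $user -p $pass $dump"
}

restore_cmd x.x.x.x:27017 db_user secret /opt/mongo_backup/mongodump_201304131122
```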

Mongodump and upload to S3 using knife

Quite recently I had to write a knife plugin to take mongo dumps. This post contains the code for the same, with a few assumptions in place. Assumptions:
1. CentOS 6.3, mongo db version 2.4.1.
2. The chef environment has the db config as below:

override_attributes({
  database: {
    user: 'db_user',
    dump_dir: '/opt/mongodb-dump',
    auth: true,
    replica: {
      id: 'rsTest',
      mongo_primary: 'x.x.x.x:27017',
      members: ['x.x.x.x:27017'],
      arbiter: 'x.x.x.x:27017'
    }
  },
})

3. The database credentials are written in an encrypted data bag.
4. The data bag key is placed in /etc/chef/encrypted_data_bag_secret.
5. The data bag is called db_secret_data and each item has the same name as the environments in chef.
6. Each item looks as below:

{
  "id": "
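The dump directory in the previous post's example (mongodump_201304131122) is a minute-resolution timestamp. A sketch of generating that name, taking the dump and uploading it; the aws CLI, the bucket name and the DB_PASS variable are my assumptions, and the commands are echoed rather than executed:

```shell
# Names the dump directory mongodump_YYYYMMDDHHMM, as in the post's
# example, then prints the dump and upload commands. Bucket name and
# credential handling are placeholders.
dump_dir_name() { echo "mongodump_$(date +%Y%m%d%H%M)"; }

take_dump() {
  dir="/opt/mongodb-dump/$(dump_dir_name)"
  echo "mongodump -u db_user -p \$DB_PASS -o $dir"
  echo "aws s3 cp --recursive $dir s3://my-backup-bucket/$(basename "$dir")"
}

take_dump
```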

Load Balancer with SSL offloading - nginx + HAProxy

HAProxy and nginx can be configured together to work as an SSL offloader and a load balancer. Listed below are the steps to achieve this on a CentOS instance. Assume 192.168.1.1 and 192.168.1.2 are running web servers on port 80, and 192.168.1.3 is running haproxy on port 8181. Starting with the HAProxy set up.
1. Install haproxy:
yum install -y haproxy
2. Edit the haproxy configuration to update the backend web servers and keep it at the basic log level:

global
  log             127.0.0.1 local2
  chroot          /var/lib/haproxy
  pidfile         /var/run/haproxy.pid
  maxconn         4096
  user            haproxy
  group           haproxy
  daemon

defaults
  mode            http
  log             global
  option          httplog
  option          dontlognull
  option          http-server-close
  option          forwardfor except 127.0.0.0/8
  option          redispatch
  retries         3
  timeout         http-request 20s
  timeout
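The excerpt cuts off in the defaults section. For the addresses assumed above, the frontend/backend sections could look like the sketch below; the section names web1/web2/webservers are placeholders of mine, not from the post:

```
frontend www
  bind 192.168.1.3:8181
  default_backend webservers

backend webservers
  balance roundrobin
  server web1 192.168.1.1:80 check
  server web2 192.168.1.2:80 check
```

Validate the file with haproxy -c -f /etc/haproxy/haproxy.cfg before restarting the service.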

Nginx Crumbs - Rewrite vs Return

More often than not, your initial thought process changes mid-way through your production-ready application and you decide to change your application url. It could be your scheme (from a non-www to a www or vice versa), or it could be your protocol (say, from http to https). There are two ways of implementing this change in nginx.

## Redirect from non-www to www
server {
  server_name example.com;
  # Option 1
  return 301 $scheme://www.$host$request_uri;
  # Option 2
  rewrite ^ http://www.$host$request_uri? permanent;
}

## Redirect from http to https
server {
  server_name example.com;
  # Option 1
  return 301 https://$server_name$request_uri;
  # Option 2
  rewrite ^ https://$server_name$request_uri? permanent;
}

REWRITE
Only the part of the original url that matches the regex is rewritten. Slower than a return. With an absolute url as the replacement it returns HTTP 302 (Moved Temporarily) by default; the permanent flag makes it return HTTP 301 (Moved Permanently), and the redirect flag forces a 302. The redirect flag is suitable for temporary url changes.

RETURN
The en
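If the change really is temporary, return can also send a 302 directly; a small sketch:

```
server {
  server_name example.com;
  # temporary redirect while the move is being tested;
  # switch the code to 301 once the new url is final
  return 302 https://$server_name$request_uri;
}
```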

AWS VPC - Basic Set Up

This blog is a collection of notes taken while setting up a virtual private cloud in AWS. The scenario presented here is described below:
- A chef server for configuration management.
- A Jenkins CI server.
- Knife plugins to create and deploy application artifacts.
- Load balanced web servers.
- DB servers.
The AWS VPC scenario is Scenario 2. Most of the basic setup is mentioned in the article above. The basic parts of this setup are as follows.

PUBLIC NETWORK
This network is exposed to the internet. Each node set up in the public network can talk to the internet (outbound). For inbound traffic, each node needs to be associated with an Elastic IP. Each node should also have the ephemeral ports open, since ACLs are stateless. Ephemeral ports are ports opened on the client end when the client machine tries to connect to a server machine on a specific port.

PRIVATE NETWORK
This network is a protected network, mostly used to host back-end servers like database se
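Opening the ephemeral range in a network ACL can be done from the aws CLI. The ACL id and rule number below are placeholders, and the command is echoed rather than executed in this sketch:

```shell
# Hypothetical ingress rule allowing return traffic on the ephemeral
# port range. acl-xxxxxxxx and rule number 120 are placeholders; the
# echo keeps the sketch from actually calling AWS.
acl_cmd() {
  echo "aws ec2 create-network-acl-entry --network-acl-id $1 --ingress --rule-number 120 --protocol tcp --port-range From=1024,To=65535 --cidr-block 0.0.0.0/0 --rule-action allow"
}

acl_cmd acl-xxxxxxxx
```

Drop the echo (or run the printed command) against your real ACL id to apply the rule.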

Deploying an RoR app using Jenkins and Knife

The simplest way to deploy a Ruby on Rails app is to package the deployable code into a tar and un-tar it at the server location. This can be achieved using rake, but rake doesn't provide ssh ability. Another way is to use rake with Capistrano, but that would be useful when implementing a master-less puppet or chef-solo system for configuration management. Working on an RoR project, we recently discovered another approach to deploying an RoR app - using a custom knife plugin. There is a chef-deploy resource provided by chef, but it requires git repository access, and standards suggest that there should not be any development tools, such as git, installed on the application server. A series of discussions led us to decide on a chef-based approach, as we already had a chef server managing the configuration on the nodes. Place the following knife plugin in the code repository's .chef/plugins/knife directory. Knife will automatically load the plugin. Configure jenkins as a knife
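The packaging half of this approach is plain tar. A runnable sketch, with the app directory and artifact name as placeholder choices of mine:

```shell
# Packages a (placeholder) app directory into a deployable tarball,
# excluding the git metadata that should not reach the server.
APP_DIR=${APP_DIR:-./myapp}
ARTIFACT=${ARTIFACT:-myapp.tar.gz}

mkdir -p "$APP_DIR"   # stand-in app directory for the demo
tar -czf "$ARTIFACT" --exclude='.git' -C "$APP_DIR" .

# On the server, the deploy step would effectively run:
#   tar -xzf myapp.tar.gz -C /var/www/myapp
```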