AWS VPC - Basic Set Up
This blog is a collection of notes taken while setting up a Virtual Private Cloud (VPC) in AWS. The scenario presented here is described below.
The AWS VPC scenario is Scenario 2 (a VPC with public and private subnets).
Most of the basic setup is covered in the article above. The basic parts of this setup are as follows:
- A chef server for configuration management.
- Jenkins CI server.
- Knife plugins to create and deploy application artifacts.
- Load Balanced Web servers.
- DB servers.
PUBLIC NETWORK
- This network is exposed to the internet.
- Each node set up in the public network can talk to the internet (outbound).
- For inbound traffic, each node needs to be associated with an Elastic IP.
- Each node should have the ephemeral ports open, since network ACLs are stateless (see the sketch after this list).
- Ephemeral ports are the high-numbered ports opened on the client end when a client machine connects to a server machine on a specific port; return traffic comes back to them.
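As a minimal sketch, the inbound return-traffic rule could be added to the public subnet's network ACL like this, using the AWS CLI (the ACL ID and rule number are placeholders; 1024-65535 is the commonly used ephemeral range):

```
# Allow inbound return traffic to ephemeral ports on the public subnet's
# network ACL (acl-0a1b2c3d and rule number 120 are placeholders).
aws ec2 create-network-acl-entry \
  --network-acl-id acl-0a1b2c3d \
  --ingress \
  --rule-number 120 \
  --protocol tcp \
  --port-range From=1024,To=65535 \
  --rule-action allow \
  --cidr-block 0.0.0.0/0
```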
PRIVATE NETWORK
- This network is a protected network.
- Mostly used to host back-end servers such as database servers.
- In our setup we are using MongoDB with replica sets, along with an arbiter.
- All the nodes in the private network connect to the internet (outbound) via a NAT server.
- The NAT server is set up automatically while setting up the VPC using the VPC wizard.
- Masquerading is configured by default on the NAT instance (see the sketch after this list).
- Elastic IP is associated by default on the NAT instance.
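For reference, the masquerading that the NAT AMI sets up is essentially the following (a sketch, assuming the Scenario 2 default VPC CIDR of 10.0.0.0/16):

```
# Enable IP forwarding so the NAT instance routes private-subnet traffic.
echo 1 > /proc/sys/net/ipv4/ip_forward

# Rewrite the source address of outbound VPC traffic to the NAT
# instance's own address (masquerading).
iptables -t nat -A POSTROUTING -o eth0 -s 10.0.0.0/16 -j MASQUERADE
```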
LOAD BALANCER
- Elastic Load Balancing can be configured using the ELBs provided by AWS.
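A minimal sketch of setting one up with the AWS CLI (the load balancer name, subnet, and security group IDs are placeholders, and the health check target assumes the application serves a /health page, as discussed in the notes further below):

```
# Create a classic ELB in the public subnet (IDs are placeholders).
aws elb create-load-balancer \
  --load-balancer-name web-lb \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
  --subnets subnet-0a1b2c3d \
  --security-groups sg-0a1b2c3d

# Point the health check at the application's health page.
aws elb configure-health-check \
  --load-balancer-name web-lb \
  --health-check Target=HTTP:80/health,Interval=30,Timeout=5,UnhealthyThreshold=2,HealthyThreshold=2
```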
CHEF SERVER
- Configuration management server to help in automated configuration of the nodes created.
- All nodes are bootstrapped to the chef-server and roles are applied to configure them as DB servers or web servers (see the knife sketch after the DEPLOYER section below).
- An arbiter is used to take part in elections in the MongoDB replica set.
DEPLOYER
- This node is created to deploy the application tarball.
- Since we are using custom knife plugins for deployment, we need to set up this node as an admin client of the chef-server.
- This node would run the knife commands and query the chef-server.
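As a sketch, bootstrapping a new private-subnet node from the deployer and applying a role could look like this (the IP address, SSH user, key path, node name, and role name are placeholders):

```
# Bootstrap a freshly launched node against the chef-server and apply
# the role that configures it (all values are placeholders).
knife bootstrap 10.0.1.12 \
  -x ubuntu \
  -i ~/.ssh/vpc-key.pem \
  --sudo \
  -N db01 \
  -r 'role[mongodb]'

# Or adjust the run list of an already bootstrapped node.
knife node run_list add db01 'role[mongodb]'
```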
Here are some notes to make the above setup more coherent and workable:
- The CHEF-SERVER is placed in the EC2 network outside the VPC. One would argue that this chef-server can be placed within the VPC as well. Yes, it can.
- The DEPLOYER node queries the chef-server and is used for automated bootstrapping of new nodes.
- The VPC networks are configured in a way that the public nodes act as jump boxes for the private nodes. No external access to the private nodes is provided, to safeguard the database servers.
- The DB servers can only be SSH'ed into via the deployer/web server nodes.
- Once the deployer has bootstrapped a new node to the chef-server, an appropriate role can be applied and the node is deployment ready.
- For the chef-server, make sure that the NAT security group has an exception allowing traffic to and from port 4000, where the chef-server (Chef Server 10) listens, since the DB nodes need to reach the server on that port (see the security group sketch after this list).
- The public network needs to talk to the private network via port 22 (SSH) and the DB default port (27017 for Mongo).
- In the private network, the nodes can SSH into each other, but it is good practice not to configure outbound SSH into the public network.
- The web servers in the public network talk to the DB servers only via the port on which the database service is listening in the private network.
- The DEPLOYER nodes have both inbound and outbound SSH configured to be able to deploy to the public and private networks and to talk to the chef-server using the knife bootstrap/ssh plugins.
- After the production deployment, this node should be disconnected by revoking SSH access into the public and private networks, to secure the VPC.
- A load balancer is set up in front of the web servers in case of applications with heavy traffic. Elastic Load Balancers are provided by AWS. However, make sure that your application has proper health check pages for the load balancer to check the state of the applications on the web servers (as in the ELB sketch above).
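A minimal sketch of the security group rules described in the notes above, using the AWS CLI (the group IDs are placeholders: sg-public for the web/deployer nodes, sg-private for the DB nodes, and sg-nat for the NAT instance):

```
# Allow SSH and MongoDB traffic from the public subnet's group into
# the private subnet's group (group IDs are placeholders).
aws ec2 authorize-security-group-ingress \
  --group-id sg-private --protocol tcp --port 22 --source-group sg-public
aws ec2 authorize-security-group-ingress \
  --group-id sg-private --protocol tcp --port 27017 --source-group sg-public

# Allow chef-server traffic (port 4000, Chef Server 10) from the DB
# nodes to pass through the NAT instance's group.
aws ec2 authorize-security-group-ingress \
  --group-id sg-nat --protocol tcp --port 4000 --source-group sg-private
```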
COMMENTS
Hi Pratima,
This is very interesting. Is the DEPLOYER just a machine with knife installed on it? It seems like that's the "trick" to making all this work: to have knife inside the VPC, while Chef is still outside the VPC. Do you also have to take some manual steps to assign an EIP or private IP, or can knife handle that? Do you use the --ssh-gateway flag at all?
thanks,
Sam
Hi Sam,
Apologies about the delayed response; I was caught up relocating. As for your question, you are right: the "trick" is to set up the deployer node to have knife inside the VPC. This node should be in the public subnet of the VPC. After configuring the security groups properly you won't need the --ssh-gateway flag, as this node would be added to the CI server as an agent as well. That said, I do want to clarify that this node should have an Elastic IP attached and should be started only when a deployment is scheduled, in the case of the production environment. If you follow true continuous deployment, where every commit goes into production after thorough automated testing, then the security groups and ACLs "must" be robust enough to not allow any outside access other than the CI, since this node is in essence the jumpbox to the environment's secured VPC.