
Friday, 9 May 2014

Docker - Lightweight Linux Container



Docker: It's a tool that helps you pack, ship and run any application as a lightweight Linux container. More on https://www.docker.io/

Docker works best on Linux kernel 3.8 or later. Ubuntu 12.04 (Precise) ships with kernel 3.2, which needs to be upgraded.

To install Docker on Ubuntu 12.04, first upgrade the kernel:

sudo apt-get update
sudo apt-get install linux-image-generic-lts-raring linux-headers-generic-lts-raring

sudo reboot

After rebooting, install the Docker package itself (at the time of writing, lxc-docker from Docker's apt repository).

To check docker version:
sudo docker version

Client version: 0.11.1
Client API version: 1.11
Go version (client): go1.2.1
Git commit (client): fb99f99
Server version: 0.11.1
Server API version: 1.11
Git commit (server): fb99f99
Go version (server): go1.2.1
Last stable version: 0.11.1

To check info about the Docker installation:
sudo docker info

Containers: 1
Images: 9
Storage Driver: aufs
 Root Dir: /var/lib/docker/aufs
 Dirs: 11
Execution Driver: native-0.2
Kernel Version: 3.11.0-20-generic
WARNING: No swap limit support

To pull an existing docker image:
sudo docker pull <imagename>

sudo docker pull busybox

HelloWorld in docker:
sudo docker run busybox echo HelloWorld

Search for an existing image in the index:
docker search <image-name>
sudo docker search stackbrew/ubuntu
NAME                       DESCRIPTION                                     STARS     OFFICIAL   TRUSTED
stackbrew/ubuntu           Barebone ubuntu images                          36                   
jprjr/stackbrew-node       A stackbrew/ubuntu-based image for Docker,...   2                    [OK]
hcvst/erlang               Erlang R14B04 based on stackbrew/ubuntu         0                    [OK]
stackbrew/ubuntu-upstart                                                   0                    


Pull an existing image:
sudo docker pull ubuntu

Pulling repository ubuntu
a7cf8ae4e998: Pulling dependent layers 
3db9c44f4520: Downloading [=================>                                 ] 22.18 MB/63.51 MB 2m19s
74fe38d11401: Pulling dependent layers 
316b678ddf48: Pulling dependent layers 
99ec81b80c55: Pulling dependent layers 
5e019ab7bf6d: Pulling dependent layers 
511136ea3c5a: Download complete 
6cfa4d1f33fb: Download complete 


To check the available images:
sudo docker images
REPOSITORY          TAG                 IMAGE ID            CREATED             VIRTUAL SIZE
ubuntu              13.10               5e019ab7bf6d        2 weeks ago         180 MB
ubuntu              saucy               5e019ab7bf6d        2 weeks ago         180 MB
ubuntu              12.04               74fe38d11401        2 weeks ago         209.6 MB
ubuntu              precise             74fe38d11401        2 weeks ago         209.6 MB
ubuntu              12.10               a7cf8ae4e998        2 weeks ago         171.3 MB
ubuntu              quantal             a7cf8ae4e998        2 weeks ago         171.3 MB
ubuntu              14.04               99ec81b80c55        2 weeks ago         266 MB
ubuntu              latest              99ec81b80c55        2 weeks ago         266 MB
ubuntu              trusty              99ec81b80c55        2 weeks ago         266 MB
ubuntu              raring              316b678ddf48        2 weeks ago         169.4 MB
ubuntu              13.04               316b678ddf48        2 weeks ago         169.4 MB
busybox             latest              2d8e5b282c81        2 weeks ago         2.489 MB
ubuntu              10.04               3db9c44f4520        2 weeks ago         183 MB
ubuntu              lucid               3db9c44f4520        2 weeks ago         183 MB

To run a command within an image:
docker run <image> <command>

sudo docker run ubuntu echo HelloWorld
HelloWorld

To install something on an ubuntu image:
sudo docker run ubuntu apt-get install -y <package>

To find the ID of the last container:
sudo docker ps -l
CONTAINER ID        IMAGE               COMMAND             CREATED             STATUS                      PORTS               NAMES
0dac167b178d        ubuntu:14.04        ps aux              12 minutes ago      Exited (0) 12 minutes ago                       goofy_bell

Committing changes made to a container as a new image:
sudo docker commit 0da firstcommit
723aa6ead77a14ff05cd2c640163345ec5a36fa9a4c757a6872a1ec919ab9345

To get the logs of a container:
sudo docker logs 0dac167b178d
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
root         1  0.0  0.0   7132   644 ?        Rs   09:10   0:00 ps aux

To inspect the details of an image:
sudo docker inspect <id: the first 3-4 characters of the ID will work too>
sudo docker inspect 0da

<json output>

To push a container image to the index:
sudo docker push ubuntu


Creating a Dockerfile:

All instructions in a Dockerfile are in the form of 
INSTRUCTION arguments

Instructions are not case-sensitive, but CAPS are recommended. The first instruction in any Dockerfile is the FROM instruction. The syntax is:

FROM <image>
FROM ubuntu

This will look for the image in the Docker index. You can also search the Docker index with the docker search command.

Next is the RUN instruction. The RUN instruction will execute any commands on the current image. After executing, it will also commit the changes. The committed image can be used for the next instructions from the Dockerfile. This way the committed changes form a layer of changes just like any other source code control system. Syntax of RUN command:
RUN <command>
RUN apt-get install -y apache2

Here the RUN instruction is equivalent to docker run image command followed by docker commit container_id, where image is automatically replaced with the current image and container_id is the result of the previous commit.

Once you have created your Dockerfile you can use docker build to create your image from it. You can use the command in the following ways.

Create a Dockerfile with the content
FROM ubuntu
RUN apt-get install -y memcached

Save and close the file. If the file is in your present directory:
docker build .
If the file is in some other location:
docker build path/to/file
If passing through STDIN
docker build - < Dockerfile
If passing a GitHub URL:
docker build github.com/roshan4074

You can check the resulting image with the command:
sudo docker images

To apply a tag to an image, use the docker tag command:
sudo docker tag <image_id> <repository>:<tag>

To write a comment, use the "#" symbol followed by the text.

To specify the contact info of the Maintainer of the Dockerfile:
MAINTAINER Name contact@email

To trigger a command as soon as a container starts, use the ENTRYPOINT instruction:
ENTRYPOINT echo "Hello, Container Started"

Another format for the ENTRYPOINT instruction is 
ENTRYPOINT ["echo", "Hello, Container Started"]
This is the preferred format.

e.g
ENTRYPOINT ["wc", "-l"]

To execute a certain command as a particular user, use the USER instruction:
USER roshan

To open a particular port for a process, use the EXPOSE instruction:
EXPOSE 8080
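Putting the instructions above together, a minimal Dockerfile for a memcached image might look like this (the maintainer details and entrypoint arguments are illustrative placeholders):

```
# Dockerfile: build a simple memcached image
FROM ubuntu
MAINTAINER Roshan contact@email
# install memcached on top of the ubuntu base image
RUN apt-get install -y memcached
# memcached listens on port 11211 by default
EXPOSE 11211
# start memcached in the foreground when the container starts
ENTRYPOINT ["memcached", "-u", "daemon", "-v"]
```

Build it with sudo docker build . and run the resulting image with sudo docker run <image_id>.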

Saturday, 26 April 2014

Learning Chef - Part - I

The information here has been collected from Nathan Harvey's video tutorials on Chef's website and from Chef's official documentation. Before starting with the tutorial, I thought it would be better to understand the common jargon used in Chef.
Three primary entities: workstation, chef-server, node
  • Chef-Work Station : System from where the configuration management professional / devops / sys admin will be working.
  • Chef-Server : System/server where all the infrastructure as code will be stored. The Chef-Server also has many other features that we will see later.
  • Nodes : Servers in your infrastructure that will be managed by Chef. They may represent a physical server or a virtual server, hardware you own or compute instances in a public or private cloud. Each node belongs to one organization, and other organizations may not have access to it. Each node belongs to one environment, e.g. staging or production. Each node can have zero or more roles. An application called chef-client runs on each node. The chef-client gathers the current system configuration, downloads the desired system configuration from the Chef-Server, and configures the node so that it adheres to the policy defined.
  • Knife : Command-line utility that acts as an interface between the local chef-repo (on the workstation) and the server. Knife lets you manage nodes, cookbooks, recipes, roles, stores of JSON data (including encrypted data), environments and cloud resources (including provisioning); it also handles the installation of Chef on management workstations and the searching of indexed data on the Chef server. You can even extend knife with plugins for managing cloud resources, e.g. the knife-ec2, knife-rackspace and knife-vcloud plugins.
  • Cookbooks : A cookbook is a container to describe or to contain our configuration data. It contains recipes, and can also include templates, files, custom resources, etc.
  • Recipes : A Recipe is a configuration file that describes resources and their desired state. A recipe can install and configure software components, Manage files, Deploy Applications, Execute other recipes, etc.
    • Sample Recipe
package "apache2" // 1st resource is package and chef knows that it should be installed on the server. If the package doesn’t exist it will install it.

template "/etc/apache2/apache.conf" do //Next resource is a template. The template will manage a file at /etc/apache2/apache.conf
source "apache2.conf.erb"
owner "root"
group "root"
mode "0644"
variable(:allow_override => "All")
notifies :reload, "service[apache2] //state of the apache2 if apache2.conf exist it knows that it doesn’t need to create that file. However chef needs to make sure that the file has proper contents. So it will generate a temporary file. It will use the source that we specified above apache2.conf.erb and then it will also use any variable content that we specified, i.e AllowOverride All. Once the temporary file is created, Chef will then compare the two files. If they are the same, chef will discard the temporary file and then move on to the next resource, then notifies line will be ignored. However if the two files are different. The chef-client will discard the version on the disk and place the temporary file into the proper location, i.e overwrite existing file. Whenever the overwrite happens a notification will be sent. Then it will tell Apache to reload with new configs.
end

service "apache2" do //service should be enabled and start automatically
action [:enable,:start]
supports :reload => true
end
  • Roles : A way of identifying different types of servers, e.g. load balancer, app server, DB cache, DB, monitoring, etc. Roles may include a list of configurations to be applied, called a run-list, and may include data attributes for configuring the infrastructure, e.g. ports to listen on or a list of apps to be deployed.
  • Data bags : Stores of JSON data.
  • Attributes : Attributes are mentioned in cookbooks/recipes. An attribute gives detail about the node: its state before the chef-client run, its present state, and its state after the chef-client run.
  • Resources : Items that we sysadmins manipulate to manage complexity, i.e. networking, files, directories, symlinks, mounts, registry keys, scripts, users, groups, packages, services, file-systems, etc. A resource represents a piece of the system and its desired state, e.g. a package to be installed, a service to be running, a file to be generated, a cron job to be configured, a user to be managed, etc.
  • Ohai : Ohai is a tool used to detect the attributes on the node. These attributes are then passed to the chef client at the beginning of the chef-client run. Ohai is installed on a node as a part of chef-client installation. Ohai has the following types of attributes: Platform details, Network usage, Memory usage, Processor usage, Kernel data, Hostnames, FQDNs, etc. So, Ohai is a utility that will give you all the information of your system level data.
  • Shef : Chef-Shell was earlier known as Shef. It's a recipe debugging tool that allows breakpoints within recipes. It runs as an irb session.
  • Environments : Environments can be development, test, staging and production. They may contain data attributes specific to an environment, e.g. different names/URLs for payment services, the location of the package repository, the version of the Chef configs, etc. You start with a single environment, e.g. the default one.
  • Run List : The joining of a node to a set of policies is called a run-list, e.g. recipe[ntp::client], recipe[users], role[webserver]. A run-list is a collection of policies that a node should follow. The chef-client obtains the run-list from the Chef-Server, downloads all the necessary components that make up the run-list, and ensures the node complies with the policy in it.
  • Search : You can search for nodes with roles and find topology data, i.e. IP addresses, hostnames, FQDNs; it is a searchable index of your infrastructure. E.g. a load balancer needs to know which application servers it should be sending requests to. The chef-client can ask the Chef-Server which application servers are available, and in return the Chef-Server can send a list of nodes, from which the load balancer can pick one based on hostname, IP address or fully qualified domain name.
  • Organization : Everyone has their own infrastructure and won't manage anyone else's. Organizations are independent tenants on Enterprise Chef, so these could be different companies, business units or departments.
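To make roles and run-lists concrete, a role on the Chef server is essentially a JSON document. A hypothetical webserver role might look like this (all names and attributes here are made up for illustration):

```
{
  "name": "webserver",
  "description": "Policy for all web servers",
  "run_list": [
    "recipe[apache2]",
    "recipe[users]"
  ],
  "default_attributes": {
    "apache": {
      "listen_ports": ["80", "443"]
    }
  }
}
```

Any node whose run-list includes role[webserver] would apply these recipes and attributes on its next chef-client run.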

Wednesday, 23 April 2014

Automation for VMware vCloud Director using Chef's knife-vcloud - Part-II

Version 1.2.0



For some reason, with the previous plugin I could not see the list of all vApps; only some of them (a mixture of Chef nodes and non-Chef nodes) were shown. So I went ahead with another version of the knife-vcloud plugin, which solved my problem to a large extent.
Plugin is available at https://github.com/astratto/knife-vcloud

Configuration used:
  • CentOS 6.5
  • Chef 11.8.2
  • knife-vcloud 1.2.0
Following steps were used to complete the automation process:
Installation is fairly simple
gem install knife-vcloud
gem list | grep vcloud
- After entering the above command, see if the gem knife-vcloud is listed. If yes, the setup was successful; if not, something went wrong.

cd ~/.chef
vim knife.rb
Configuration is almost automated:
knife vc configure

You will be prompted for vcloud_url, login and password. After entering the details check that the details you entered are reflected in the knife.rb file.

knife[:vcloud_url] = 'https://vcloud.server.org'
knife[:vcloud_org_login] = 'vcloud_organization'
knife[:vcloud_user_login] = 'vcloud_user'
knife[:vcloud_password] =

Note: The organization was not updated for me, and it kept giving authorization failures for quite some time. If you see that the organization is not updated automatically, please update it manually in the knife.rb file.

The subsequent commands would also change for the detailed listing. Although the documentation in many instances says that the name of the VM or vApp should suffice to pull up the required details, note that you will often be required to enter the ID and not just the name.

To see the list of catalog items

[root@chefworkstation ~]# knife vc catalog show All_ISOs
Description: All ISO Dumps
Name                                           ID                                          
CentOS-6.3                                          WhAtEvEr-Id-tO-bE-SeEn1       
CentOS-6.4_x64                                   WhAtEvEr-Id-tO-bE-SeEn2        
Ubuntu-copy                                           WhAtEvEr-Id-tO-bE-SeEn3        

To see details of the organization

[root@chefworkstation ~]# knife vc org show MYORG
CATALOGS                                                                 
Name                                  ID                                 
All_ISOs                                  WhAtEvEr-Id-tO-bE-SeEn4
Master Catalog                        WhAtEvEr-Id-tO-bE-SeEn5
                                                                         
VDCs                                                                     
Name                                  ID                                 
MyorgVDC-Tier1     WhAtEvEr-Id-tO-bE-SeEn6
MyorgVDC-Tier2        WhAtEvEr-Id-tO-bE-SeEn7
MyorgVDC-Tier3        WhAtEvEr-Id-tO-bE-SeEn8

NETWORKS                                                                 
Name                                  ID                                 
MyorgNet-Router                   WhAtEvEr-Id-tO-bE-SeEn9

TASKLISTS                                                                
Name                                  ID                                 
                        WhAtEvEr-Id-tO-bE-SeEn10
To create a new vApp:

[root@chefworkstation ~]# knife vc vapp create MyorgVDC-Tier1 chefnode2 "Just Created node2" WhAtEvEr-Id-tO-bE-SeEn
vApp creation...
Summary: Status: error - time elapsed: 52.012 seconds
WARNING: ATTENTION: Error code 400 - The following IP/MAC addresses have already been used by running virtual machines: MAC addresses: 10:20:30:40:50:0f IP addresses: 192.168.0.20 Use the Fence vApp option to use same MAC/IP. Fencing allows identical virtual machines in different vApps to be powered on without conflict, by isolating the MAC and IP addresses of the virtual machines.
vApp created with ID: WhAtEvEr-Id-tO-bE-SeEn

Note that there were certain problems, which were corrected later.
To show the details of the created vApp:
[root@chefworkstation ~]# knife vc vapp show WhAtEvEr-Id-tO-bE-SeEn1
Note: --vdc not specified, assuming VAPP is an ID
Name: chefnode2
Description: Just Created node2
Status: stopped
IP: 192.168.0.12
Networks
MyorgNet-Router
   Gateway      Netmask        Fence Mode  Parent Network       Retain Network
      192.168.0.1  255.255.255.0  bridged     MyorgNet-Router  false        
      VMs
      Name    Status   IPs           ID                                    Scoped ID                          
      centos  stopped  192.168.0.12  WhAtEvEr-Id-tO-bE-SeEn  WhAtEvEr-Id-tO-bE-SeEn

To show the vm specific details:

[root@chefworkstation ~]# knife vc vm show WhAtEvEr-Id-tO-bE-SeEn --vapp MyvApp_Chef
Note: --vapp and --vdc not specified, assuming VM is an ID
VM Name: centos
OS Name: CentOS 4/5/6 (64-bit)
Status: stopped
Cpu                                          
Number of Virtual CPUs  1 virtual CPU(s)     

Memory                                       
Memory Size             2048 MB of memory    

Disks                                        
Hard disk 1             16384 MB             
Hard disk 2             16384 MB             

Networks                                     
MyorgNet-Router                          
Index                 0                    
Ip                    192.168.0.12         
External ip                                
Is connected          true                 
Mac address           10:20:30:40:50:0f    
Ip allocation mode    MANUAL               

Guest Customizations                         
Enabled                 false                
Admin passwd enabled    true                 
Admin passwd auto       false                
Admin passwd                                 
Reset passwd required   false                
Computer name           centos
  

To set new info to the vm:

[root@chefworkstation ~]# knife vc vm set info --name ChefNewNode WhAtEvEr-Id-tO-bE-SeEn --vapp MyvApp_Chef centos
Note: --vapp and --vdc not specified, assuming VM is an ID
Renaming VM from centos to ChefNewNode
Summary: Status: success - time elapsed: 7.09 seconds

To update other info:


[root@chefworkstation ~]# knife vc vm set info --ram 512 WhAtEvEr-Id-tO-bE-SeEn --vapp MyvApp_Chef
Note: --vapp and --vdc not specified, assuming VM is an ID
VM setting RAM info...
Summary: Status: success - time elapsed: 9.843 seconds

To edit network info:


[root@chefworkstation ~]# knife vc vm network edit WhAtEvEr-Id-tO-bE-SeEn MyorgNet-Router --net-ip 192.168.0.117 --ip-allocation-mode MANUAL
Note: --vapp and --vdc not specified, assuming VM is an ID
Forcing parent network to itself
VM network configuration...
Guest customizations must be applied to a stopped VM, but it's running. Can I STOP it? (Y/N) Y
Stopping VM...
Summary: Status: success - time elapsed: 7.092 seconds
VM network configuration for MyorgNet-Router...
Summary: Status: success - time elapsed: 6.783 seconds
Forcing Guest Customization to apply changes...
Summary: Status: success - time elapsed: 22.639 seconds

To show the changes made:

[root@chefworkstation ~]# knife vc vm show WhAtEvEr-Id-tO-bE-SeEn
Note: --vapp and --vdc not specified, assuming VM is an ID
VM Name: ChefNewNode
OS Name: CentOS 4/5/6 (64-bit)
Status: running

Cpu                                          
Number of Virtual CPUs  1 virtual CPU(s)     

Memory                                       
Memory Size             512 MB of memory     

Disks                                        
Hard disk 1             16384 MB             
Hard disk 2             16384 MB             

Networks                                     
MyorgNet-Router                          

Index                 0                    
Ip                    192.168.0.117        
External ip                                
Is connected          true                 
Mac address           10:20:30:40:50:0f    
Ip allocation mode    MANUAL               

Guest Customizations                         
Enabled                 true                 
Admin passwd enabled    true                 
Admin passwd auto       false                
Admin passwd                                 
Reset passwd required   false                

Computer name           centos


Monday, 17 March 2014

Automation for VMware vCloud Director using Chef's knife-vcloud



Plugin is available at https://github.com/opscode/knife-vcloud

Configuration used:
  • CentOS 6.5
  • Chef 11.8.2
  • knife-vcloud 1.0.0
Following steps were used to complete the automation process:

cd ~
git clone https://github.com/opscode/knife-vcloud.git
cd knife-vcloud/
bundle install
gem build knife-vcloud.gemspec
gem install knife-vcloud-1.0.0.gem
gem list | grep vcloud
- After entering the above command, see if the gem knife-vcloud is listed. If yes, the setup was successful; if not, something went wrong.

cd ~/.chef
vim knife.rb

- add the following details to the last line of this file (note: the username is supplied as username@orgname, i.e. with the organisation name):
knife[:vcloud_username] = "username@orgname"
knife[:vcloud_password] = "##########"
knife[:vcloud_host] = "xxx.xxxxxxxxxxxxx.com"

[ESC]:wq

knife vcloud server list
- Should list all the existing servers

You can also create your own server using "knife vcloud server create" with additional parameters; use with caution.

e.g

knife vcloud server create --vcpus 2 -m 1024 -I TestServer -A 'roshan' -K "MyPassword" -r 'role[webserver]' --network myNetwork-id

Good Luck!!

Tuesday, 4 March 2014

Public, Private and Hybrid Cloud

A lot has been said, heard and read about the cloud. There are so many ways that the cloud gets classified further. In my previous blog we discussed SaaS, PaaS and IaaS. More and more companies are looking to the cloud as the solution for their business needs. We shall further discuss three important types of cloud.
  1. Public Cloud
  2. Private Cloud
  3. Hybrid Cloud

Public Cloud : The public cloud is considered the standard cloud computing model, where there is direct interaction with the users of the cloud. It is also called a 'shared cloud'. All applications, infrastructure or storage are made directly available to the users. It could be a pay-as-you-go service or free as well. Types of public clouds include all SaaS, PaaS and IaaS platforms. The primary benefit is that it is accessible from anywhere, anytime. The public cloud is the ultimate choice when you have lots of users for your application, e.g. an email application like Gmail or a social network like Facebook. When collaboration is needed among developers over a PaaS, or when employees need to work remotely, the public cloud is the best choice. This cloud may or may not be managed by the providers, but usually it is. Also, it can be scaled very easily as per our needs.
E.g. with an IaaS-based service like Dropbox, you can add and remove space dynamically as per your choice.



Private Cloud : Here the services and applications are not exposed to the general public and are instead kept private. The highest level of security and control is maintained in this kind of architecture. These services often run behind a firewall and are also called 'enterprise clouds'. The advantages are security and the ability to share resources among groups. With private clouds there is a choice to make for hardware and software, and the capability depends greatly on what is being used. It is used mostly by companies dealing with highly confidential data.
Many companies are now opting for the enterprise cloud.



Hybrid Cloud : Even though many organizations use a private as well as a public cloud as per their needs, there could be vendors looking for the functionality of both a private and a public cloud. This is achieved with a hybrid cloud. At times there are companies that want their data to be secure but are still required to communicate with their customers over the network; many such companies choose a hybrid cloud. Here you can basically set access permissions for which applications need to be publicly accessible and which of them should not be and need to stay in the private cloud.

(To be contd..)

Monday, 24 February 2014

Virtualization with Vagrant




While going ahead with Chef configuration management tutorials I came across "Vagrant". I had heard this term many a time. For quite some time I thought it was just another VirtualBox-like application, until I finally started using it.


What is Vagrant?

An open source application for creating and configuring virtual dev environments. Vagrant manages VMs hosted in VirtualBox. Basically it's a command-line utility that allows you to communicate with VirtualBox (or other virtualization software) through an easy set of commands. Many describe it as a wrapper around the virtualization software too.


How is it used?

First of all, download Vagrant for your OS from this link and install it. As mentioned, it's a command-line utility, so you need the command line to access it.
  1. You will need to initialize the vagrant box using the init command. This will initialize a vagrant environment in the present directory. The second argument sets the name for your box and the third sets the URL to access it in the Vagrantfile.
    • vagrant init [box-name] [box-url]
  2. Next you need to create and configure the vagrant box as per your Vagrantfile. Use the up command for this. This command will be used frequently, as this is how you start your machine as well.
    • vagrant up
  3. Now that the machine has started, you still have not logged in to it. You can use ssh to log in. You don't need the traditional long ssh command to log in to your box; a simple vagrant ssh suffices.
    • vagrant ssh
    • Note: Please check the documentation on the Vagrant website, as there is a list of optional parameters that you may need in case you run into any errors. Fatal errors can be expected.
  4. To check valid configurations to ssh into a running vagrant box use vagrant ssh-config
    • vagrant ssh-config
  5. Now that you know how to set up and start using your vagrant box, you also need to know how to shut it down. This is simple too: just a "vagrant halt", similar to the halt command on a Linux machine.
    • vagrant halt
  6. These were a few basics of vagrant. You can do more with vagrant as well. The vagrant box command gives you other alternatives that you could try out.
    • vagrant box add [box-name] [url-path]
      • This adds a box with the specified name using the local file path or url specified to access it.
    • vagrant box list
      • Lists all the boxes installed and available
    • vagrant box remove [box-name] [provider]
      • Removes the box with the specified box name for the specific provider. Providers are VirtualBox or VMWare or any other utility.
    • vagrant box repackage [box-name] [provider]
      • Repackages the given box and puts it in the present directory for redistribution purposes. When a box is added, vagrant unpacks and stores it internally, and the original box is not preserved.
  7. Restarting a vagrant machine can be done using vagrant reload. It's equivalent to a vagrant halt followed by a vagrant up.
    • vagrant reload
  8. To check the current state of the machine, i.e. to verify whether it is running, stopped, not created, etc., the status command helps.
    • vagrant status
  9. To save the state of the machine and suspend it, so that you can resume it later instead of completely shutting it down, use the suspend command.
    • vagrant suspend
  10. To resume a suspended machine use vagrant resume
    • vagrant resume
  11. Finally, to stop and delete/destroy an existing machine, use the destroy command. All the resources allocated are destroyed, as if the machine was never there. This command asks for confirmation before destroying.
    • vagrant destroy
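The init command in step 1 generates a Vagrantfile in the current directory. A minimal one might look roughly like this (the box name and URL below are just examples):

```
# Vagrantfile created by "vagrant init precise32 http://files.vagrantup.com/precise32.box"
Vagrant.configure("2") do |config|
  # name of the box to boot, and where to fetch it if not already added
  config.vm.box = "precise32"
  config.vm.box_url = "http://files.vagrantup.com/precise32.box"
end
```

vagrant up then reads this file to create and boot the VM.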

Friday, 7 February 2014

Checking Open Ports on a Remote Computer using PortQry


 Today for one of the projects the SFTP connection kept failing for some reason. The user-id and password used for connecting to the host were correct, and the hostname was correct as well. There was no way to find out what went wrong. Thankfully the command line gives a good log to verify what goes wrong.

I tried connecting to the SFTP host with various tools like FileZilla and WinSCP, but could not get good enough logs. Finally I tried connecting to the server using ssh on the command line from my Ubuntu machine. The connection kept timing out; that is what I saw in the logs as well. I assumed that the SFTP port, number 22, was probably closed on the host.

I googled to see if I could find a tool to check whether a particular port on a machine is accessible or not. I finally found something called PortQry that could be used on a Windows machine from the command line.

It's a very small (140 KB) command-line software tool that you can use to check if a port on some machine is accessible or not.

After using this tool I got to know that the machine had firewall-like protection which wasn't allowing me to access the SFTP port on it.
Here's how you use PortQry on Windows:
  • Download the software using the link : http://www.microsoft.com/en-in/download/details.aspx?id=17148
  • Double-click and unzip the files to any location, say C:/
  • Hit Windows+R and in the Run box enter "cmd"
  • Go to the directory where the PortQry was extracted.
  • Execute the program PortQry by entering PortQry<enter>
  • This will display a list of help information and the correct usage of the command

The following is the syntax to check the port status :
portqry -n myhostname.net -e 80

PortQry can report the status of a port as "Listening", "Not Listening", or "Filtered":
Listening : there is some service active on that port
Not Listening : the port is closed
Filtered : no response; presumably it is behind some kind of firewall

Syntax
portqry -n name_to_query [-p protocol] [-e || -r || -o endpoint(s)]

Common command line switches:
-n : IP address or name of system to query
-p : TCP or UDP or BOTH (default is TCP)
-e : single port to query (valid range: 1-65535)
-r : range of ports to query (start:end)

For single port use
portqry -n 127.0.0.1 -e 80

For a Range of ports, use the -r switch:
portqry -n 127.0.0.1 -r 80:85

Note:
- PortQry also displays extended information for known services such as SMTP, POP3, IMAP4 and FTP, and is capable of performing LDAP queries.
- A GUI-based alternative, PortQryUI, is also available.

Sample Output:
C:\PortQryV2>portqry -n 127.0.0.1 -e 22
Querying target system called:
 127.0.0.1
Attempting to resolve IP address to a name...
IP address resolved to xx.xx.xx.xx
querying...
TCP port 22 (ssh service): FILTERED


C:\PortQryV2>portqry -n 127.0.0.1 -e 80
Querying target system called:
 127.0.0.1
Attempting to resolve IP address to a name...
IP address resolved to xx.xx.xx.xx
querying...
TCP port 80 (http service): LISTENING


C:\PortQryV2>portqry -n 127.0.0.1 -e 22
Querying target system called:
 127.0.0.1
Attempting to resolve IP address to a name...
IP address resolved to xx.xx.xx.xx
querying...
TCP port 22 (ssh service): NOT LISTENING

Data Recovery with TestDisk


You buy a portable hard disk drive to keep a backup of all your important data, or you are carrying an extremely important document on a USB drive, or you have some data on your internal hard disk drive. You try to get at the data and... boom! No data! You have absolutely no clue how to get back the important data on the drive. Finally you consider it lost, curse your fate, and carry on.

This happened to me too. 1 TB of important data on my external hard disk drive got corrupted in a second and was lost. I tried every OS I could, from Mac to Windows to Linux, but had no luck. I was about to give up when I came across data recovery software on the Internet.

Data recovery software helps you read corrupt, lost, or deleted data from your drives. There are many such tools out there. I started trying my luck with all the ones I could find, on a Windows machine (I should not have done that, but I had no option!). Unfortunately, most of them just give a list of deleted items on the disk, which I never needed. One or two of them did find my data, but since they were trial versions, I could only view the data and not copy it. Anyway, this gave me hope that the data was still there on the disk.

Finally I searched for open-source alternatives, and there I got my answer: TestDisk!

What is TestDisk?
As per their website wiki: TestDisk is a free data recovery software designed to help recover lost partitions and/or make non-booting disks bootable again when these symptoms are caused by faulty software, certain types of viruses, or human error. It can also be used to repair some filesystem errors.

Installation:
You don't need to install it. It is a very small (about 1.5 MB) piece of software; you just run the executable. It is a really easy-to-use command-line tool, with most of the information self-explanatory.
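On Linux, getting it running is a package install and a single command; the /log switch and the device path below are illustrative, so substitute your own disk (on Windows you just unzip and run the bundled .exe):

```shell
# Install and launch TestDisk on a Debian/Ubuntu machine.
sudo apt-get install testdisk
# /log appends session output to testdisk.log in the current directory;
# /dev/sdb is a placeholder for the affected disk.
sudo testdisk /log /dev/sdb
```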

Steps to Use:
  1. Log creation : Like many other Linux-based tools, this one keeps a log of every session. When you execute testdisk.exe, you are first asked whether you would like to Create a new log, Append to the existing log, or proceed with No Log.
  2. Media/disk detection : Next you select the disk you want to recover the lost data from. It shows a list of all volumes connected to your computer, and you can select one using the keyboard arrow keys.
  3. Disk/partition table type selection : The next screen prompts you to select the partition table type. In most cases it detects the type by itself; otherwise it leaves it at None for you to select. I ran this program on a Windows machine with quite a few volumes, and every time it detected Intel, so I assume Windows users can select Intel if they are unsure of this option. Selecting "None" is not recommended, as it is very rare for a drive to be unpartitioned.
  4. Options screen : The next screen gives the following options:
Analyse : Analyse current partition structure and search for lost partitions to restore them
Advanced : Filesystem utilities:
  - FAT: Boot and FAT repair
  - NTFS: Boot and MFT repair
  - ext2/ext3: Find backup superblock
  - FAT / NTFS / ext2 file undelete
  - Image creation
Geometry : Change disk geometry
Options : Modify options
MBR Code : Write TestDisk MBR code to first sector
Delete : Delete all data in the partition table

Analysing the disk searches for lost partitions; this takes a long time, depending on the size of the disk. On Windows, if you get an error such as "The type of the file system is RAW." or "The disk in drive D is not formatted. Do you want to format it now?", it usually means the boot sector is damaged. You can go to the Advanced filesystem utilities and use "FAT: Boot and FAT repair" or "NTFS: Boot and MFT repair", depending on the type of partition you have. This was the problem with my disk too, and it got corrected in no time.



To be continued...

Thursday, 2 January 2014

Monitoring in Linux/Unix Environment using TOP


Top

Top is the Linux performance monitoring program; for Windows users, it is analogous to the Task Manager. It displays the active processes in real time and keeps the list updated, along with other system details such as CPU usage, memory usage, swap memory, cache size, buffer size, process PID, user, command and much more.
The 1st line of the output mentions the following:
  • current time in hh:mm:ss format (the seconds keep updating)
  • uptime of the machine, i.e. how long the machine has been running
  • number of users logged in with running sessions
  • average load on the system; the 3 values are the load over the last 1, 5 and 15 minutes
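The load averages in that first line come straight from the kernel; you can read the same numbers without top:

```shell
# /proc/loadavg holds the 1-, 5- and 15-minute load averages that top's
# first line displays, plus running/total task counts and the last PID used.
cat /proc/loadavg
```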
The 2nd line mentions the following :
  • Total number of processes running
  • Present number of running processes
  • Total sleeping processes
  • Total stopped processes
  • Total zombie processes (finished, but waiting for the parent process to read their exit status)
The 3rd line mentions the following :
  • % of CPU for user processes
  • % of CPU for system/kernel processes
  • % of CPU for processes with an adjusted priority (nice value)
  • % of CPU idle (not used)
  • % of CPU waiting on I/O operations
  • % of CPU servicing hardware interrupts
  • % of CPU servicing software interrupts
  • % of CPU stolen by the hypervisor (steal time); this is zero when not running in a virtual machine
The 4th and 5th lines mention the following:
  • The use of physical memory
  • The use of swap memory
  • The free amount of each, along with buffers and cached memory
The process list then gives the following details for each process:
  • PID: Process ID
  • USER: The owner user of the process
  • PR: Priority of the process
  • NI: Nice value of the process
  • VIRT: Amount of Virtual Memory used by the process
  • RES: Amount of physical memory used by the process
  • SHR: Shared memory of the process
  • S: Status of the process (S = sleeping, R = running, Z = zombie)
  • %CPU: % of CPU used
  • %MEM: % of RAM used
  • COMMAND: name of the process

By default the list is sorted by CPU usage; you can change the sorting to suit your needs.
Changing the sorting:
Press Shift+O. A list of all the fields you can sort by is displayed, each with a corresponding letter. Press the letter for your sort criterion and hit Enter to see the newly sorted list.
Display Processes for a specific User:
top -u username
This command will show the details of all the processes under the specific username mentioned in the command.
Highlight running processes:
Press 'z' after running top to highlight the running processes in colour so they are easy to identify.
Show the absolute path of the process:
To see the full command and path each process was invoked with, press 'c' after running top.
Change screen refresh interval:
To change the screen refresh interval of the processes running press 'd' and enter any number in seconds to set the time interval for refresh.
Kill a running process:
To kill a running process, press 'k' and enter the process ID of the process to be killed. You will then be asked for the signal to send; the default, 15 (SIGTERM), asks the process to terminate gracefully.
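Under the hood, top's 'k' simply sends the chosen signal to the PID, the same as the kill command does; a quick sketch with a throwaway process:

```shell
# Start a throwaway background process, then terminate it with signal 15
# (SIGTERM), which is the default signal top offers when you press 'k'.
sleep 60 &
pid=$!
kill -15 "$pid"
```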
To sort by CPU utilization: press Shift+P
Save top's settings: press Shift+W (this writes the current configuration to ~/.toprc, so your next top session starts with it)
For help: press 'h'
Exit top after a specific number of iterations: top -n <number>
Manual Page for top : man top
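To save top's output itself (rather than its settings), batch mode works well; the filename below is just an example:

```shell
# Batch mode (-b) with one iteration (-n 1) writes a single snapshot,
# suitable for redirecting to a file or piping to other tools.
top -b -n 1 > top_snapshot.txt
head -5 top_snapshot.txt   # the summary lines: uptime, tasks, CPU, memory
```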