Install the AWS CLI Using the Bundled Installer on Mac OS X El Capitan (10.11.2)

Follow these steps from the command line to install the AWS CLI using the bundled installer.

To install the AWS CLI using the bundled installer

  1. Download the AWS CLI Bundled Installer.
    $ curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
  2. Unzip the package.
    $ unzip awscli-bundle.zip

    Note

    If you don’t have unzip, install it with your operating system’s package manager (it ships with OS X; on Linux, use your distribution’s package manager).

  3. Run the install executable.
    $ sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws

The steps above, as described in the article http://docs.aws.amazon.com/cli/latest/userguide/awscli-install-bundle.html, worked for me. However, when I instead tried “pip install awscli --upgrade --user” on my MacBook, the installation completed without errors, yet I was unable to use the “aws” command afterwards.
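Since the pip install itself succeeded, the usual suspect is PATH: “pip install --user” puts the aws script into the Python user base’s bin directory, which is often not on the shell’s search path. A quick way to check (the directory name varies by OS and Python version; python3 is used here — use whichever interpreter pip ran under):

```shell
# Print the per-user scripts directory that "pip install --user" installs into.
userbin="$(python3 -m site --userbase)/bin"
echo "$userbin"

# If that directory is not already on PATH, add it for the current shell:
#   export PATH="$userbin:$PATH"
```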


CENTOS 7 ROOT PASSWORD RESET

1 – At the GRUB boot menu, select your kernel entry and press “e” to edit it.

2 – Move the cursor to the word “ro” on the kernel (linux16) line.

3 – Change “ro” to “rw init=/sysroot/bin/sh”.
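For orientation, the edited kernel line ends up looking roughly like this — the kernel version, root device, and other parameters will differ on your system; only the “rw init=/sysroot/bin/sh” part is the change:

```
linux16 /vmlinuz-3.10.0.el7.x86_64 root=/dev/mapper/centos-root rw init=/sysroot/bin/sh
```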

4 – Press Ctrl+X to boot into single-user mode.

5 – Access the installed system with chroot:

chroot /sysroot

6 – Reset the password.

passwd root

7 – Flag the filesystem so SELinux relabels it on the next boot:

touch /.autorelabel

8 – Exit chroot

exit

9 – Reboot your system

reboot

Setup HA-Proxy with Keepalived


We’ll use two Ubuntu 14.04 servers (1 GB RAM each) with the following hostnames and IP addresses:

    haproxy1 (192.168.1.30)
    haproxy2 (192.168.1.31)

We’ll also need to allocate a third IP address to use as the virtual IP address (VIP).  
We’ll use 192.168.1.32.

First, allow services to bind to an IP address that is not yet assigned locally — this is needed so the backup node can bind the VIP:

# vi /etc/sysctl.conf
net.ipv4.ip_nonlocal_bind=1

Then we run the following command to make this take effect without rebooting:
# sysctl -p
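To confirm the setting is active, read the live kernel value (1 means non-local binds are allowed):

```shell
# Read the live kernel value; 1 means processes may bind the VIP
# even while it is held by the other node.
cat /proc/sys/net/ipv4/ip_nonlocal_bind
```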

# apt-get update && apt-get install keepalived haproxy -y

# vi /etc/keepalived/keepalived.conf

global_defs {
  router_id haproxy1
}
vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 101
  state MASTER
  interface eth0
  virtual_ipaddress {
192.168.1.32 dev eth0
  }
  track_script {
    haproxy
  }
}

=====================================================================

Note:
Set router_id to the node’s hostname, and
the VIP to 192.168.1.32.
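On the second node (haproxy2), the keepalived.conf is the same apart from the identity fields. A typical counterpart — the lower priority and BACKUP state are conventional choices here, assumed rather than prescribed — looks like:

```
global_defs {
  router_id haproxy2
}
vrrp_script haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}
vrrp_instance 50 {
  virtual_router_id 50
  advert_int 1
  priority 100
  state BACKUP
  interface eth0
  virtual_ipaddress {
    192.168.1.32 dev eth0
  }
  track_script {
    haproxy
  }
}
```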

# vi /etc/haproxy/haproxy.cfg

global
    chroot /var/lib/haproxy
    user haproxy
    group haproxy
    daemon
    log 192.168.1.30 local0
    stats socket /var/lib/haproxy/stats
    maxconn 4000

defaults
    log    global
    mode    http
    option    httplog
    option    dontlognull
        contimeout 5000
        clitimeout 50000
        srvtimeout 50000
    errorfile 400 /etc/haproxy/errors/400.http
    errorfile 403 /etc/haproxy/errors/403.http
    errorfile 408 /etc/haproxy/errors/408.http
    errorfile 500 /etc/haproxy/errors/500.http
    errorfile 502 /etc/haproxy/errors/502.http
    errorfile 503 /etc/haproxy/errors/503.http
    errorfile 504 /etc/haproxy/errors/504.http

listen stats 192.168.1.30:80
        mode http
        stats enable
        stats uri /stats
        stats realm HAProxy\ Statistics
        stats auth admin:password

=====================================================================

Note:
The local IP address is used in two places in this file:
in the global section as the log destination, and
in the stats listener.

When you set up the second node, make sure to use its IP address.
Also note the username and password on the “stats auth” line; set these to whatever you want.
You will then be able to access the stats page in your browser.

Edit the file /etc/default/haproxy and change ENABLED from 0 to 1
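This edit can also be scripted with sed. Here is a sketch run against a temporary copy so it is safe to try anywhere; on the real node you would target /etc/default/haproxy instead:

```shell
# Demonstrate the one-line change on a temporary copy of the file.
tmpfile=$(mktemp)
echo 'ENABLED=0' > "$tmpfile"
sed 's/^ENABLED=0$/ENABLED=1/' "$tmpfile" > "$tmpfile.new"
mv "$tmpfile.new" "$tmpfile"
cat "$tmpfile"   # prints ENABLED=1
rm -f "$tmpfile"
```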

Now we can restart the services:
service keepalived restart
service haproxy restart

Once you’ve completed all of these steps on both nodes,
you should have a highly available load-balancer pair.
At this point the VIP should be active on one node
(assuming you built node 1 first, it will be active there, since node 1 holds the higher priority).

To confirm, we can use the ip command:

# ip a | grep eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    inet 192.168.1.30/24 brd 192.168.1.255 scope global eth0
    inet 192.168.1.32/32 scope global eth0

Notice that both the local IP and the VIP are shown.
If we now shutdown node 1, node 2 will quickly pick up the VIP.

To further confirm the failover, run
# ping 192.168.1.32
from any machine on the same network, then bring down node 1; the pings should continue as node 2 takes over the VIP.

Final Note: Don’t forget to repeat all of these steps on the second node as well.