AWS Server Environment Setup

Technical Document
2016/02/09

Services Used

  • VPC
  • EC2: web, api, mongo, spark
  • Route53
  • Elasticache: redis
  • Kinesis
  • CloudWatch

VPC

http://docs.aws.amazon.com/ja_jp/AmazonVPC/latest/UserGuide/GetStarted.html

VPC

VPC ID: vpc-7a31751f | xg-vpc
Network ACL: acl-91045cf4
State: available
Tenancy: Default
VPC CIDR: 10.0.0.0/16
DNS resolution: yes
DHCP options set: dopt-bd76a6d8
DNS hostnames: yes
Route table: rtb-b3ba9dd6
ClassicLink DNS Support: no

Subnets

Subnet ID: subnet-78b4950f | xg-vpc-sn-1
Availability Zone: ap-northeast-1a
CIDR: 10.0.1.0/24
Route table: rtb-b3ba9dd6
State: available
Network ACL: acl-91045cf4
VPC: vpc-7a31751f (10.0.0.0/16) | xg-vpc
Default subnet: no
Available IPs: 249
Auto-assign Public IP: yes

Subnet ID: subnet-079de45e | xg-vpc-sn-2
Availability Zone: ap-northeast-1c
CIDR: 10.0.0.0/24
Route table: rtb-b3ba9dd6
State: available
Network ACL: acl-91045cf4
VPC: vpc-7a31751f (10.0.0.0/16) | xg-vpc
Default subnet: no
Available IPs: 250
Auto-assign Public IP: yes
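The "Available IPs" figures follow from AWS reserving five addresses in every subnet (the network address, three addresses for the VPC router, DNS, and future use, and the broadcast address). The arithmetic for a /24 can be checked directly; the counts of 249/250 above additionally reflect addresses already consumed by running instances and ENIs:

```shell
# Each AWS subnet loses 5 addresses: network, VPC router, DNS, reserved, broadcast.
prefix=24
total=$((1 << (32 - prefix)))   # 256 raw addresses in a /24
usable=$((total - 5))           # 251 before any instance or ENI is attached
echo "/$prefix subnet: $usable usable addresses"
```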

Route Tables

Route Table ID: rtb-b3ba9dd6 | xg-rt
Main: yes
Explicitly Associated With: 1 Subnet
VPC: vpc-7a31751f (10.0.0.0/16) | xg-vpc

Internet Gateways

ID: igw-2d49fb48 | xg-gw
Attached VPC ID: vpc-7a31751f (10.0.0.0/16) | xg-vpc
State: attached
Attachment state: available

EC2

Security Groups

xg-ssh-sg: ssh - MyIP
xg-web-sg: 80,443,3000,3001 ~ 10.0.0.0/16
xg-api-sg: 80,443 ~ 10.0.0.0/16
xg-jmeter-sg: 1099,30000 - 60000 ~ 10.0.0.0/16
xg-lb-sg: 80,443 ~ all
xg-mongo-sg: 27017 ~ 10.0.0.0/16
xg-redis-sg: 6379 ~ 10.0.0.0/16

Load Balancers

http://docs.aws.amazon.com/ja_jp/ElasticLoadBalancing/latest/DeveloperGuide/elb-create-https-ssl-load-balancer.html

DNS Name:    
xg-lb-1800740380.ap-northeast-1.elb.amazonaws.com (A Record)
Note: Because the set of IP addresses associated with a LoadBalancer can change over time, you should never create an "A" record with any specific IP address. If you want to use a friendly DNS name for your load balancer instead of the name generated by the Elastic Load Balancing service, you should create a CNAME record for the LoadBalancer DNS name, or use Amazon Route 53 to create a hosted zone. For more information, see Using Domain Names With Elastic Load Balancing.
Scheme:    internet-facing
Status:    
0 of 1 instances in service
Port Configuration:    
80 (HTTP) forwarding to 80 (HTTP)
Stickiness: Disabled(Edit)
443 (HTTPS, Certificate: xg-cert) forwarding to 80 (HTTP)
Stickiness: Disabled(Edit)
Availability Zones:    subnet-079de45e - ap-northeast-1c,
subnet-78b4950f - ap-northeast-1a
Cross-Zone Load Balancing:    Enabled (Edit)
Source Security Group:    
903572815946/xg-lb
Owner Alias: 903572815946
Group Name: xg-lb
Hosted Zone ID:    Z2YN17T5R711GT
VPC ID:    vpc-7a31751f
Access Logs:
Disabled(Edit)
Connection Settings:
Idle Timeout: 60 seconds(Edit)

Web

Instances

Create AWS instance: xg-web

Initial Setup

Time Zone Configuration

[ec2-user@ip-10-0-1-218 ~]$ sudo cp /usr/share/zoneinfo/Japan /etc/localtime
[ec2-user@ip-10-0-1-218 ~]$ date
2016年  3月 11日 金曜日 13:25:16 JST
[ec2-user@ip-10-0-1-218 ~]$ sudo vi /etc/sysconfig/clock
ZONE="Japan" 
UTC=false

Reboot and confirm the time zone change is applied:
[ec2-user@ip-10-0-1-218 ~]$ date
2016年  3月 11日 金曜日 15:28:48 JST
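The same zone can also be exercised per-process with the TZ variable, which gives a quick cross-check of the zoneinfo data without rebooting (assumes tzdata is installed, as it is on Amazon Linux):

```shell
# Asia/Tokyo (the same zone as /usr/share/zoneinfo/Japan) should report JST.
tz_abbrev=$(TZ=Asia/Tokyo date +%Z)
echo "$tz_abbrev"
```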

Disable Automatic Security Updates, Set Character Encoding

[ec2-user@ip-10-0-1-218 ~]$ sudo vi /etc/cloud/cloud.cfg
repo_upgrade: none
locale: ja_JP.UTF-8

Reboot and confirm the locale is applied:
[ec2-user@ip-10-0-1-218 ~]$ env
LANG=ja_JP.UTF-8

Install git

[ec2-user@ip-10-0-1-218 ~]$ sudo yum install git

Install git flow

[ec2-user@ip-10-0-1-218 www]$ sudo su -
[root@ip-10-0-1-218 ~]# cd /usr/local/src/
[root@ip-10-0-1-218 src]# git clone --recursive git://github.com/nvie/gitflow.git
[root@ip-10-0-1-218 src]# cd gitflow/
[root@ip-10-0-1-218 gitflow]# make install

Install nvm

[ec2-user@ip-10-0-1-218 ~]$ git clone https://github.com/creationix/nvm.git ~/.nvm
[ec2-user@ip-10-0-1-218 ~]$ source ~/.nvm/nvm.sh
[ec2-user@ip-10-0-1-218 ~]$ nvm install v0.12.7
[ec2-user@ip-10-0-1-218 ~]$ npm -v
2.11.3
[ec2-user@ip-10-0-1-218 ~]$ node -v
v0.12.7
[ec2-user@ip-10-0-1-218 ~]$ vi ~/.bashrc 
if [[ -s ~/.nvm/nvm.sh ]]; then
 source ~/.nvm/nvm.sh
 nvm use "v0.12.7" > /dev/null 2>&1
fi

Build the Application

[ec2-user@ip-10-0-1-218 ~]$ sudo mkdir /var/www
[ec2-user@ip-10-0-1-218 ~]$ sudo chmod a+w /var/www
[ec2-user@ip-10-0-1-218 ~]$ cd /var/www
[ec2-user@ip-10-0-1-218 www]$ git clone ssh://admin@49.212.24.211/opt/git/x-generation.git
[ec2-user@ip-10-0-1-218 www]$ cd /var/www/x-generation/
[ec2-user@ip-10-0-1-218 x-generation]$ git flow init -d
[ec2-user@ip-10-0-1-218 x-generation]$ git checkout master
[ec2-user@ip-10-0-1-218 x-generation]$ git branch
  develop
* master

Install gcc and g++

[ec2-user@ip-10-0-1-218 x-generation]$ sudo yum install gcc
[ec2-user@ip-10-0-1-218 x-generation]$ sudo yum install gcc-c++

Install node modules

[ec2-user@ip-10-0-1-218 x-generation]$ npm install

Install pm2 & Enable Auto-start

[ec2-user@ip-10-0-1-218 ~]$ npm install pm2 -g
[ec2-user@ip-10-0-1-218 ~]$ sudo su -c "env PATH=$PATH:/home/ec2-user/.nvm/versions/node/v0.12.7/bin pm2 startup centos -u ec2-user --hp /home/ec2-user" 
[ec2-user@ip-10-0-1-218 ~]$ sudo su -c "chmod +x /etc/init.d/pm2-init.sh; chkconfig --add pm2-init.sh" 

Start the Application

[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start bin/www
[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start bin/wwws
[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start cron/sync.js
[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start cron/aggregation.js
[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start cron/postback.js
[ec2-user@ip-10-0-1-218 x-generation]$ NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start cron/log_sync.js
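The six invocations differ only in their entry point; as a sketch, the shared shape can be expressed as a loop (shown here as a dry run that prints each command rather than executing it, since pm2 and the checkout only exist on the instance):

```shell
# Dry run: print the pm2 command for each entry point instead of executing it.
entries="bin/www bin/wwws cron/sync.js cron/aggregation.js cron/postback.js cron/log_sync.js"
count=0
for entry in $entries; do
  echo "NODE_ENV=aws NODE_PATH=./config:./app/controllers pm2 start $entry"
  count=$((count + 1))
done
```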

Restart pm2 and confirm the saved process list is restored

[ec2-user@ip-10-0-1-218 x-generation]$ pm2 save
[PM2] Dumping processes
[ec2-user@ip-10-0-1-218 x-generation]$ sudo /etc/init.d/pm2-init.sh restart
Restarting pm2
Stopping pm2
[PM2] Deleting all process
[PM2] deleteProcessId process id 0
[PM2] deleteProcessId process id 1
[PM2] deleteProcessId process id 2
[PM2] deleteProcessId process id 3
[PM2] deleteProcessId process id 4
[PM2] deleteProcessId process id 5
┌──────────┬────┬──────┬─────┬────────┬─────────┬────────┬────────┬──────────┐
│ App name │ id │ mode │ pid │ status │ restart │ uptime │ memory │ watching │
└──────────┴────┴──────┴─────┴────────┴─────────┴────────┴────────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app
[PM2] Stopping PM2...
[PM2][WARN] No process found
[PM2] All processes have been stopped and deleted
[PM2] PM2 stopped
Starting pm2
[PM2] Spawning PM2 daemon
[PM2] PM2 Successfully daemonized
[PM2] Resurrecting
Process /var/www/x-generation/bin/www launched
Process /var/www/x-generation/bin/wwws launched
Process /var/www/x-generation/cron/sync.js launched
Process /var/www/x-generation/cron/aggregation.js launched
Process /var/www/x-generation/cron/postback.js launched
Process /var/www/x-generation/cron/log_sync.js launched
┌─────────────┬────┬──────┬───────┬────────┬─────────┬────────┬─────────────┬──────────┐
│ App name    │ id │ mode │ pid   │ status │ restart │ uptime │ memory      │ watching │
├─────────────┼────┼──────┼───────┼────────┼─────────┼────────┼─────────────┼──────────┤
│ www         │ 0  │ fork │ 11295 │ online │ 0       │ 0s     │ 23.922 MB   │ disabled │
│ wwws        │ 1  │ fork │ 11298 │ online │ 0       │ 0s     │ 23.203 MB   │ disabled │
│ sync        │ 2  │ fork │ 11301 │ online │ 0       │ 0s     │ 22.684 MB   │ disabled │
│ aggregation │ 3  │ fork │ 11309 │ online │ 0       │ 0s     │ 19.688 MB   │ disabled │
│ postback    │ 4  │ fork │ 11319 │ online │ 0       │ 0s     │ 18.605 MB   │ disabled │
│ log_sync    │ 5  │ fork │ 11327 │ online │ 0       │ 0s     │ 17.094 MB   │ disabled │
└─────────────┴────┴──────┴───────┴────────┴─────────┴────────┴─────────────┴──────────┘
 Use `pm2 show <id|name>` to get more details about an app

Install nginx

[ec2-user@ip-10-0-1-218 x-generation]$ sudo rpm -ivh http://nginx.org/packages/centos/6/noarch/RPMS/nginx-release-centos-6-0.el6.ngx.noarch.rpm
[ec2-user@ip-10-0-1-218 x-generation]$ sudo yum install nginx
[ec2-user@ip-10-0-1-218 ~]$ sudo cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.bk
[ec2-user@ip-10-0-1-218 ~]$ sudo vi /etc/nginx/nginx.conf

user  nginx;
worker_processes  1;

error_log  /var/log/nginx/error.log warn;
pid        /var/run/nginx.pid;

events {
    worker_connections  1024;
}

http {
    include       /etc/nginx/mime.types;
    default_type  application/octet-stream;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile        on;
    #tcp_nopush     on;

    keepalive_timeout  65;

    #gzip  on;

    include /etc/nginx/conf.d/*.conf;
}
[ec2-user@ip-10-0-1-218 ~]$ sudo vi /etc/nginx/conf.d/node-app.conf 
server {
    listen 80;
    server_name localhost;

    access_log /var/log/nginx/access.log  main;

    error_page 404              /404.html;
    error_page 500 502 503 504  /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    proxy_redirect                          off;
    proxy_set_header Host                   $host;
    proxy_set_header X-Real-IP              $remote_addr;
    proxy_set_header X-Forwarded-Host       $host;
    proxy_set_header X-Forwarded-Server     $host;
    proxy_set_header X-Forwarded-For        $proxy_add_x_forwarded_for;

    location / {
        proxy_pass http://localhost:3000/;
    }
}
[ec2-user@ip-10-0-1-218 x-generation]$ sudo vi /etc/nginx/conf.d/ssl.conf
# HTTPS server

server {
  listen       443 ssl;
  server_name  i-joji.com;

  ssl_certificate      /etc/nginx/conf.d/ssl/i-joji.com.crt;
  ssl_certificate_key  /etc/nginx/conf.d/ssl/i-joji.com.key;

  ssl_session_cache shared:SSL:1m;
  ssl_session_timeout  5m;

  ssl_ciphers  HIGH:!aNULL:!MD5;
  ssl_prefer_server_ciphers   on;

  proxy_redirect off;
  proxy_set_header Host $host;
  proxy_set_header X-Real-IP $remote_addr;
  proxy_set_header X-Forwarded-Host $host;
  proxy_set_header X-Forwarded-Server $host;
  proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;

  location / {
    proxy_pass https://localhost:3001/;
  }
}
[ec2-user@ip-10-0-1-218 x-generation]$ sudo mkdir /etc/nginx/conf.d/ssl
[ec2-user@ip-10-0-1-218 x-generation]$ sudo cp /var/www/x-generation/config/ssl/i-joji.com.* /etc/nginx/conf.d/ssl/
[ec2-user@ip-10-0-1-218 x-generation]$ sudo /etc/init.d/nginx start
[ec2-user@ip-10-0-1-218 x-generation]$ sudo chkconfig nginx on
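Before relying on the HTTPS listener, it is worth confirming that the certificate and key actually pair up; a common check is comparing their RSA moduli. The sketch below generates a throwaway self-signed pair (standing in for i-joji.com.crt/.key, which are not reproduced here):

```shell
# Compare the modulus of a certificate and a private key; equal => they match.
dir=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=example" \
  -keyout "$dir/t.key" -out "$dir/t.crt" 2>/dev/null
cert_mod=$(openssl x509 -noout -modulus -in "$dir/t.crt" | openssl md5)
key_mod=$(openssl rsa -noout -modulus -in "$dir/t.key" | openssl md5)
result=fail
[ "$cert_mod" = "$key_mod" ] && result=match
echo "$result"
rm -rf "$dir"
```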

Create AMI Image

xg-web-img

Load Balancer

xg-web-lb
Health check: TCP port 3000

Verify operation:
http://xg-lb-1800740380.ap-northeast-1.elb.amazonaws.com/

API

Instances

Create AWS instance: xg-api

Initial Setup

Time Zone Configuration

[ec2-user@ip-10-0-1-118 ~]$ date
2016年  3月 16日 水曜日 09:34:49 UTC
[ec2-user@ip-10-0-1-118 ~]$ sudo cp /usr/share/zoneinfo/Japan /etc/localtime
[ec2-user@ip-10-0-1-118 ~]$ date
2016年  3月 16日 水曜日 18:34:53 JST
[ec2-user@ip-10-0-1-118 ~]$ sudo vi /etc/sysconfig/clock
ZONE="Japan" 
UTC=false

Reboot and confirm the time zone change is applied.

Disable Automatic Security Updates, Set Character Encoding

[ec2-user@ip-10-0-1-118 ~]$ sudo vi /etc/cloud/cloud.cfg
repo_upgrade: none
locale: ja_JP.UTF-8

Reboot and confirm the locale is applied:
[ec2-user@ip-10-0-1-118 ~]$ env | grep LANG
LANG=ja_JP.UTF-8

Install git

[ec2-user@ip-10-0-1-118 ~]$ sudo yum install git

Install git flow

[ec2-user@ip-10-0-1-118 ~]$ sudo su -
[root@ip-10-0-1-118 ~]# cd /usr/local/src/
[root@ip-10-0-1-118 src]# git clone --recursive git://github.com/nvie/gitflow.git
[root@ip-10-0-1-118 src]# cd gitflow/
[root@ip-10-0-1-118 gitflow]# make install

Install OpenResty

[root@ip-10-0-1-118 gitflow]# cd /usr/local/src
[root@ip-10-0-1-118 src]# sudo yum install readline-devel pcre-devel openssl-devel gcc gcc-c++
[root@ip-10-0-1-118 src]# wget https://openresty.org/download/ngx_openresty-1.7.10.2.tar.gz
[root@ip-10-0-1-118 src]# tar zxvf ngx_openresty-1.7.10.2.tar.gz 
[root@ip-10-0-1-118 src]# cd ngx_openresty-1.7.10.2
[root@ip-10-0-1-118 ngx_openresty-1.7.10.2]# gmake
[root@ip-10-0-1-118 ngx_openresty-1.7.10.2]# gmake install
[root@ip-10-0-1-118 ngx_openresty-1.7.10.2]# cd ..
[root@ip-10-0-1-118 src]# git clone https://github.com/bungle/lua-resty-template
[root@ip-10-0-1-118 src]# cp lua-resty-template/lib/resty/template.lua /usr/local/openresty/lualib/resty/

Build the Application

[root@ip-10-0-1-118 src]# mkdir -p /var/www/
[root@ip-10-0-1-118 src]# chmod a+w /var/www/
[root@ip-10-0-1-118 src]# exit
[ec2-user@ip-10-0-1-118 ~]$ cd /var/www/
[ec2-user@ip-10-0-1-118 www]$ git clone ssh://admin@49.212.24.211/opt/git/x-generation-api.git
[ec2-user@ip-10-0-1-118 www]$ cd x-generation-api/
[ec2-user@ip-10-0-1-118 x-generation-api]$ git flow init -d
[ec2-user@ip-10-0-1-118 x-generation-api]$ git checkout master

Create Symbolic Links

[ec2-user@ip-10-0-1-118 x-generation-api]$ cd /usr/local/openresty/nginx
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/lua lua
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/html/templates html/templates
[ec2-user@ip-10-0-1-118 nginx]$ sudo mv conf/nginx.conf conf/nginx.conf.bkup
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/conf/nginx.conf conf/nginx.conf
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/conf/nginx.conf_production conf/nginx.conf_production
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/conf/nginx.conf_aws conf/nginx.conf_aws
[ec2-user@ip-10-0-1-118 nginx]$ sudo mkdir -p conf/ssl
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/ssl/ad.i-joji.com.crt conf/ssl/
[ec2-user@ip-10-0-1-118 nginx]$ sudo ln -s /var/www/x-generation-api/ssl/ad.i-joji.com.key conf/ssl/

nginx Auto-start

[ec2-user@ip-10-0-1-118 nginx]$ sudo vi /etc/init.d/nginx
#!/bin/sh
#
# chkconfig: 2345 55 25
# Description: Nginx init.d script, put in /etc/init.d, chmod +x /etc/init.d/nginx
#              For Debian, run: update-rc.d -f nginx defaults
#              For CentOS, run: chkconfig --add nginx
#
### BEGIN INIT INFO
# Provides:          nginx
# Required-Start:    $all
# Required-Stop:     $all
# Default-Start:     2 3 4 5
# Default-Stop:      0 1 6
# Short-Description: nginx init.d script
# Description:       OpenResty (aka. ngx_openresty) is a full-fledged web application server by bundling the standard Nginx core, lots of 3rd-party Nginx modules, as well as most of their external dependencies.
### END INIT INFO
#

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
DESC="Nginx Daemon" 
NAME=nginx
PREFIX=/usr/local/openresty/nginx
DAEMON=$PREFIX/sbin/$NAME
CONF=$PREFIX/conf/$NAME.conf_aws
PID=$PREFIX/logs/$NAME.pid
SCRIPT=/etc/init.d/$NAME

if [ ! -x "$DAEMON" ] || [ ! -f "$CONF" ]; then
    echo -e "\033[33m $DAEMON has no permission to run. \033[0m" 
    echo -e "\033[33m Or $CONF doesn't exist. \033[0m" 
    sleep 1
    exit 1
fi

do_start() {
    if [ -f $PID ]; then
        echo -e "\033[33m $PID already exists. \033[0m" 
        echo -e "\033[33m $DESC is already running or crashed. \033[0m" 
        echo -e "\033[32m $DESC Reopening $CONF ... \033[0m" 
        kill `cat ${PID}`
        rm -f $PID
        $DAEMON -c $CONF
        sleep 1
        echo -e "\033[36m $DESC reopened. \033[0m" 
    else
        echo -e "\033[32m $DESC Starting $CONF ... \033[0m" 
        $DAEMON -c $CONF
        sleep 1
        echo -e "\033[36m $DESC started. \033[0m" 
    fi
}

do_stop() {
    if [ ! -f $PID ]; then
        echo -e "\033[33m $PID doesn't exist. \033[0m" 
        echo -e "\033[33m $DESC isn't running. \033[0m" 
    else
        echo -e "\033[32m $DESC Stopping $CONF ... \033[0m" 
        $DAEMON -s stop -c $CONF
        sleep 1
        echo -e "\033[36m $DESC stopped. \033[0m" 
    fi
}

do_reload() {
    if [ ! -f $PID ]; then
        echo -e "\033[33m $PID doesn't exist. \033[0m" 
        echo -e "\033[33m $DESC isn't running. \033[0m" 
        echo -e "\033[32m $DESC Starting $CONF ... \033[0m" 
        $DAEMON -c $CONF
        sleep 1
        echo -e "\033[36m $DESC started. \033[0m" 
    else
        echo -e "\033[32m $DESC Reloading $CONF ... \033[0m" 
        $DAEMON -s reload -c $CONF
        sleep 1
        echo -e "\033[36m $DESC reloaded. \033[0m" 
    fi
}

do_quit() {
    if [ ! -f $PID ]; then
        echo -e "\033[33m $PID doesn't exist. \033[0m" 
        echo -e "\033[33m $DESC isn't running. \033[0m" 
    else
        echo -e "\033[32m $DESC Quitting $CONF ... \033[0m" 
        $DAEMON -s quit -c $CONF
        sleep 1
        echo -e "\033[36m $DESC quit. \033[0m" 
    fi
}

do_test() {
    echo -e "\033[32m $DESC Testing $CONF ... \033[0m" 
    $DAEMON -t -c $CONF
}

do_info() {
    $DAEMON -V
}

case "$1" in
 start)
 do_start
 ;;
 stop)
 do_stop
 ;;
 reload)
 do_reload
 ;;
 restart)
 do_stop
 do_start
 ;;
 quit)
 do_quit
 ;;
 test)
 do_test
 ;;
 info)
 do_info
 ;;
 *)
 echo "Usage: $SCRIPT {start|stop|reload|restart|quit|test|info}" 
 exit 2
 ;;
esac

exit 0
[ec2-user@ip-10-0-1-118 nginx]$ sudo chmod 755 /etc/init.d/nginx 
[ec2-user@ip-10-0-1-118 nginx]$ sudo /etc/init.d/nginx start
[ec2-user@ip-10-0-1-118 nginx]$ sudo chkconfig nginx on
[ec2-user@ip-10-0-1-118 nginx]$ chkconfig --list | grep nginx
nginx              0:off    1:off    2:on    3:on    4:on    5:on    6:off

nginx Log Rotation

[ec2-user@ip-10-0-1-118 x-generation-api]$ sudo vi /etc/logrotate.d/nginx
/usr/local/openresty/nginx/logs/*.log {
    missingok
    notifempty
    daily
    rotate 7
    compress
    delaycompress
    sharedscripts
    postrotate
        [ ! -f /usr/local/openresty/nginx/logs/nginx.pid ] || kill -USR1 `cat /usr/local/openresty/nginx/logs/nginx.pid`
    endscript
}
[ec2-user@ip-10-0-1-118 x-generation-api]$ sudo logrotate -df /etc/logrotate.d/nginx

File Descriptor Limit Settings

[ec2-user@ip-10-0-1-118 ~]$ cat /proc/sys/fs/file-max
815941

Check ulimit

[ec2-user@ip-10-0-1-118 ~]$ ps ax | grep worker
 3228 ?        S      0:00 nginx: worker process                                                                  
[ec2-user@ip-10-0-1-118 ~]$ cat /proc/3228/limits 
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             31877                31877                processes 
Max open files            100000               100000               files     
Max locked memory         65536                65536                bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       31877                31877                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us   

Reboot and confirm the values are applied.
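The same "Max open files" row can be pulled out programmatically for any PID; a minimal sketch against the current shell (Linux /proc assumed):

```shell
# Read the soft open-files limit of the current process from /proc.
nofile=$(awk '/Max open files/ {print $4}' /proc/self/limits)
echo "soft open-files limit: $nofile"
```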

Install fluentd

[ec2-user@ip-10-0-1-118 ~]$ sudo curl -L http://toolbelt.treasuredata.com/sh/install-redhat-td-agent2.sh | sh
[ec2-user@ip-10-0-1-118 ~]$ rpm -q td-agent
td-agent-2.3.1-0.el2015.x86_64

Install the Kinesis Plugin

[ec2-user@ip-10-0-1-118 ~]$ sudo td-agent-gem install fluent-plugin-kinesis

Configure fluentd

[ec2-user@ip-10-0-1-118 ~]$ sudo cp /etc/td-agent/td-agent.conf /etc/td-agent/td-agent.conf.bk
[ec2-user@ip-10-0-1-118 ~]$ sudo vi /etc/td-agent/td-agent.conf

<source>
  type tail
  path /usr/local/openresty/nginx/logs/access.log
  tag nginx.access
  pos_file /var/log/td-agent/nginx.pos
  format /^(?<remote>[^ ]*) (?<host>[^ ]*) (?<user>[^ ]*) \[(?<time>[^\]]*)\] "(?<method>\S+)(?: +(?<path>[^\"]*) +\S*)?" (?<code>[^ ]*) (?<size>[^ ]*)(?: "(?<referer>[^\"]*)" "(?<agent>[^\"]*)" "(?<app_params>[^\"]*)")?$/
  time_format %d/%b/%Y:%H:%M:%S %z
</source>
<match nginx.access>
  type kinesis

  stream_name x-generation

  aws_key_id YOUR_AWS_ACCESS_KEY
  aws_sec_key YOUR_SECRET_KEY

  region ap-northeast-1

  #partition_key xg
  random_partition_key true

  flush_interval 1s
</match>
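The field layout the regex expects can be sanity-checked against a sample line in the same format (the values below are illustrative, not taken from a live server):

```shell
# Sample access-log line in the format the tail regex above targets.
line='36.12.42.38 - - [17/Mar/2016:02:27:55 +0900] "GET / HTTP/1.1" 200 35 "-" "curl/7.40" "-"'
# Fields 6-8 are the quoted request, so status and size are fields 9 and 10.
remote=$(echo "$line" | awk '{print $1}')
code=$(echo "$line" | awk '{print $9}')
size=$(echo "$line" | awk '{print $10}')
echo "remote=$remote code=$code size=$size"
```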

Temporarily open the security group to all sources and verify in a browser:
xg-api-sg: 80 - 0.0.0.0/0

http://ec2-52-196-12-9.ap-northeast-1.compute.amazonaws.com

Check the Access Log

[ec2-user@ip-10-0-1-118 ~]$ tail -f /usr/local/openresty/nginx/logs/access.log 
36.12.42.38 - - [17/Mar/2016:02:27:55 +0900] "GET / HTTP/1.1" 200 35 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36" "-" 

Configure the AWS CLI

[ec2-user@ip-10-0-1-118 ~]$ aws configure
[ec2-user@ip-10-0-1-118 ~]$ aws configure list
      Name                    Value             Type    Location
      ----                    -----             ----    --------
   profile                <not set>             None    None
access_key     ****************ZITA shared-credentials-file    
secret_key     ****************mlUB shared-credentials-file    
    region           ap-northeast-1      config-file    ~/.aws/config

Confirm records are being forwarded to Kinesis

[ec2-user@ip-10-0-1-118 ~]$ aws kinesis describe-stream --stream-name x-generation
{
    "StreamDescription": {
        "RetentionPeriodHours": 24, 
        "StreamStatus": "ACTIVE", 
        "StreamName": "x-generation", 
        "StreamARN": "arn:aws:kinesis:ap-northeast-1:903572815946:stream/x-generation", 
        "Shards": [
            {
                "ShardId": "shardId-000000000000", 
                "HashKeyRange": {
                    "EndingHashKey": "113427455640312821154458202477256070484", 
                    "StartingHashKey": "0" 
                }, 
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49560141242973645396710414004178987145404113746877480962" 
                }
            }, 
            {
                "ShardId": "shardId-000000000001", 
                "HashKeyRange": {
                    "EndingHashKey": "226854911280625642308916404954512140969", 
                    "StartingHashKey": "113427455640312821154458202477256070485" 
                }, 
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49560141242995946141908944627320522863676762108383461394" 
                }
            }, 
            {
                "ShardId": "shardId-000000000002", 
                "HashKeyRange": {
                    "EndingHashKey": "340282366920938463463374607431768211455", 
                    "StartingHashKey": "226854911280625642308916404954512140970" 
                }, 
                "SequenceNumberRange": {
                    "StartingSequenceNumber": "49560141243018246887107475250462058581949410469889441826" 
                }
            }
        ]
    }
}

[ec2-user@ip-10-0-1-118 ~]$ aws kinesis get-shard-iterator --shard-id shardId-000000000000 --shard-iterator-type TRIM_HORIZON --stream-name x-generation
{
    "ShardIterator": "AAAAAAAAAAHjv9hoYE4oKBt7V+7XqdhnwHMs5xzvmqd2ZT3W2CXNL8pcRalkeduHkN5GbSvgIk7XejDYfYEAYBS7yKgcqyeueW9dxO0VMqH4pFBHL15/kMlYUmixFOxQ5OwPc0h8ba0NJXA3qZopZE1eyiGcrLuiikVTqdS2U33dp0t/UpHCKEmhUCHOrOrqDjZaZrCX26jnT2HOPEg6OMAy/165AI0p" 
}
[ec2-user@ip-10-0-1-118 ~]$ aws kinesis get-records --shard-iterator AAAAAAAAAAHjv9hoYE4oKBt7V+7XqdhnwHMs5xzvmqd2ZT3W2CXNL8pcRalkeduHkN5GbSvgIk7XejDYfYEAYBS7yKgcqyeueW9dxO0VMqH4pFBHL15/kMlYUmixFOxQ5OwPc0h8ba0NJXA3qZopZE1eyiGcrLuiikVTqdS2U33dp0t/UpHCKEmhUCHOrOrqDjZaZrCX26jnT2HOPEg6OMAy/165AI0p
{
    "Records": [
        {
            "Data": "eyJyZW1vdGUiOiIzNi4xMi40Mi4zOCIsImhvc3QiOiItIiwidXNlciI6Ii0iLCJtZXRob2QiOiJHRVQiLCJwYXRoIjoiLyIsImNvZGUiOiIyMDAiLCJzaXplIjoiMzUiLCJyZWZlcmVyIjoiLSIsImFnZW50IjoiTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTBfMTBfNSkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzQ5LjAuMjYyMy44NyBTYWZhcmkvNTM3LjM2IiwiYXBwX3BhcmFtcyI6Ii0iLCJ0aW1lIjoiMjAxNi0wMy0xNlQxNzoyNzo1MloiLCJ0YWciOiJuZ2lueC5hY2Nlc3MifQ==", 
            "PartitionKey": "8de0c422-9452-41c6-afd1-314e651fd1a0", 
            "ApproximateArrivalTimestamp": 1458149273.237, 
            "SequenceNumber": "49560141242973645396710414004609364737187135656803958786" 
        }, 
        {
            "Data": "eyJyZW1vdGUiOiIzNi4xMi40Mi4zOCIsImhvc3QiOiItIiwidXNlciI6Ii0iLCJtZXRob2QiOiJHRVQiLCJwYXRoIjoiLyIsImNvZGUiOiIyMDAiLCJzaXplIjoiMzUiLCJyZWZlcmVyIjoiLSIsImFnZW50IjoiTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTBfMTBfNSkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzQ5LjAuMjYyMy44NyBTYWZhcmkvNTM3LjM2IiwiYXBwX3BhcmFtcyI6Ii0iLCJ0aW1lIjoiMjAxNi0wMy0xNlQxNzoyNzo1NVoiLCJ0YWciOiJuZ2lueC5hY2Nlc3MifQ==", 
            "PartitionKey": "364ca673-2676-40e1-86fd-38b3c544adbc", 
            "ApproximateArrivalTimestamp": 1458149275.969, 
            "SequenceNumber": "49560141242973645396710414004610573663006750423417618434" 
        }, 
        {
            "Data": "eyJyZW1vdGUiOiIzNi4xMi40Mi4zOCIsImhvc3QiOiItIiwidXNlciI6Ii0iLCJtZXRob2QiOiJHRVQiLCJwYXRoIjoiLyIsImNvZGUiOiIyMDAiLCJzaXplIjoiMzUiLCJyZWZlcmVyIjoiLSIsImFnZW50IjoiTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTBfMTBfNSkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzQ5LjAuMjYyMy44NyBTYWZhcmkvNTM3LjM2IiwiYXBwX3BhcmFtcyI6Ii0iLCJ0aW1lIjoiMjAxNi0wMy0xNlQxNzozMzoyN1oiLCJ0YWciOiJuZ2lueC5hY2Nlc3MifQ==", 
            "PartitionKey": "b2a9ac28-8274-4bb5-b038-7ab80ee0bab0", 
            "ApproximateArrivalTimestamp": 1458149608.751, 
            "SequenceNumber": "49560141242973645396710414004612991514646002496633307138" 
        }
    ], 
    "NextShardIterator": "AAAAAAAAAAF5L+Rz6/5rKnNfxwDj1ZLebwBXnWnYd515ENtLePVtziSX9gMLwKVeTyqvexKF0xhXyl0DXwQtrgFmwaaHOYPFC66onWtyuTD0arwk8bx1fZL1Cqh6XpKWEBA5Dqk8rnL95gE6ECUVPmYmEWPcnz04GUU6OIaPE+6K3t0o7861ooarcv9WBDoov4VvF94qzZZR30xwffEgKRfDmGkzUEMw", 
    "MillisBehindLatest": 0
}
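ApproximateArrivalTimestamp is Unix epoch seconds; GNU date can render it in UTC for comparison against the time field inside each record's Data payload:

```shell
# Convert the first record's arrival timestamp (1458149273.237) to UTC.
arrival=$(date -u -d @1458149273 +"%Y-%m-%dT%H:%M:%SZ")
echo "$arrival"
```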
[ec2-user@ip-10-0-1-118 ~]$ echo "eyJyZW1vdGUiOiIzNi4xMi40Mi4zOCIsImhvc3QiOiItIiwidXNlciI6Ii0iLCJtZXRob2QiOiJHRVQiLCJwYXRoIjoiLyIsImNvZGUiOiIyMDAiLCJzaXplIjoiMzUiLCJyZWZlcmVyIjoiLSIsImFnZW50IjoiTW96aWxsYS81LjAgKE1hY2ludG9zaDsgSW50ZWwgTWFjIE9TIFggMTBfMTBfNSkgQXBwbGVXZWJLaXQvNTM3LjM2IChLSFRNTCwgbGlrZSBHZWNrbykgQ2hyb21lLzQ5LjAuMjYyMy44NyBTYWZhcmkvNTM3LjM2IiwiYXBwX3BhcmFtcyI6Ii0iLCJ0aW1lIjoiMjAxNi0wMy0xNlQxNzoyNzo1MloiLCJ0YWciOiJuZ2lueC5hY2Nlc3MifQ==" | base64 -d
{"remote":"36.12.42.38","host":"-","user":"-","method":"GET","path":"/","code":"200","size":"35","referer":"-","agent":"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_10_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/49.0.2623.87 Safari/537.36","app_params":"-","time":"2016-03-16T17:27:52Z","tag":"nginx.access"}

Restore the security group:
xg-api-sg: 80 - 10.0.0.0/16

Create AMI Image

xg-api-img

Load Balancer

xg-api-lb
Health check: TCP port 80

Verify operation:
http://xg-api-lb-1846323725.ap-northeast-1.elb.amazonaws.com

Auto Scaling

https://docs.aws.amazon.com/ja_jp/AutoScaling/latest/DeveloperGuide/GettingStartedTutorial.html

Route53

https://docs.aws.amazon.com/ja_jp/ElasticLoadBalancing/latest/DeveloperGuide/using-domain-names-with-elb.html?icmpid=docs_elb_console

Elasticache

MongoDB

Initial Setup

Time Zone Configuration

[ec2-user@ip-10-0-1-62 ~]$ sudo cp /usr/share/zoneinfo/Japan /etc/localtime
[ec2-user@ip-10-0-1-62 ~]$ date
2016年  3月 11日 金曜日 13:25:16 JST
[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/sysconfig/clock
ZONE="Japan" 
UTC=false

Reboot and confirm the time zone change is applied:
[ec2-user@ip-10-0-1-62 ~]$ date
2016年  3月 11日 金曜日 15:28:48 JST

Disable Automatic Security Updates, Set Character Encoding

[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/cloud/cloud.cfg
repo_upgrade: none
locale: ja_JP.UTF-8

Reboot and confirm the locale is applied:
[ec2-user@ip-10-0-1-62 ~]$ env | grep LANG
LANG=ja_JP.UTF-8

ulimit Settings

[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/security/limits.d/99-mongodb-nproc.conf
mongod - fsize unlimited
mongod - cpu unlimited
mongod - as unlimited
mongod - nofile 64000
mongod - rss unlimited
mongod - nproc 64000
mongod - memlock unlimited
[ec2-user@ip-10-0-1-62 ~]$ sudo service mongod restart
[ec2-user@ip-10-0-1-62 ~]$ ps ax | grep mongo
 2623 ?        Sl     0:01 /usr/bin/mongod -f /etc/mongod.conf
 2771 pts/1    S+     0:00 grep --color=auto mongo
[ec2-user@ip-10-0-1-62 ~]$ cat /proc/2623/limits 
Limit                     Soft Limit           Hard Limit           Units     
Max cpu time              unlimited            unlimited            seconds   
Max file size             unlimited            unlimited            bytes     
Max data size             unlimited            unlimited            bytes     
Max stack size            8388608              unlimited            bytes     
Max core file size        0                    unlimited            bytes     
Max resident set          unlimited            unlimited            bytes     
Max processes             64000                64000                processes 
Max open files            64000                64000                files     
Max locked memory         unlimited            unlimited            bytes     
Max address space         unlimited            unlimited            bytes     
Max file locks            unlimited            unlimited            locks     
Max pending signals       31877                31877                signals   
Max msgqueue size         819200               819200               bytes     
Max nice priority         0                    0                    
Max realtime priority     0                    0                    
Max realtime timeout      unlimited            unlimited            us       

Reboot and confirm the settings are applied.

Install MongoDB

Install mongod

[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/yum.repos.d/mongodb.repo 
[MongoDB]
name=MongoDB Repository
baseurl=https://repo.mongodb.org/yum/amazon/2013.03/mongodb-org/3.0/x86_64/
gpgcheck=0
enabled=1
[ec2-user@ip-10-0-1-62 ~]$ sudo yum install -y mongodb-org
[ec2-user@ip-10-0-1-62 ~]$ sudo service mongod start
Starting mongod:                                           [FAILED]
[ec2-user@ip-10-0-1-62 ~]$ sudo tail /var/log/mongodb/mongod.log
2016-02-15T10:42:57.985+0000 I STORAGE  [initandlisten] In File::open(), ::open for '/var/lib/mongo/journal/lsn' failed with errno:13 Permission denied

Change Permissions

[ec2-user@ip-10-0-1-62 ~]$ sudo chown -R mongod:mongod /var/lib/mongo
[ec2-user@ip-10-0-1-62 ~]$ sudo service mongod start
Starting mongod:                                           [  OK  ]
[ec2-user@ip-10-0-1-62 ~]$ sudo chkconfig mongod on

A WARNING appears:

[ec2-user@ip-10-0-1-62 ~]$ sudo tail /var/log/mongodb/mongod.log 
2016-03-15T20:07:51.386+0900 I CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2016-03-15T20:07:51.386+0900 I CONTROL  [initandlisten] **        We suggest setting it to 'never'

Disable transparent hugepages.
Reference:
https://docs.mongodb.org/manual/tutorial/transparent-huge-pages/

[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/init.d/disable-transparent-hugepages
(paste the init script from the MongoDB tutorial linked above)
[ec2-user@ip-10-0-1-62 ~]$ sudo chmod 755 /etc/init.d/disable-transparent-hugepages
[ec2-user@ip-10-0-1-62 ~]$ sudo chkconfig --add disable-transparent-hugepages
[ec2-user@ip-10-0-1-62 ~]$ sudo chkconfig disable-transparent-hugepages on

Reboot the server.

[ec2-user@ip-10-0-1-62 ~]$ cat /sys/kernel/mm/transparent_hugepage/defrag 
always madvise [never]
[ec2-user@ip-10-0-1-62 ~]$ sudo tail /var/log/mongodb/mongod.log 

Confirm the WARNING is gone.
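The sysfs file lists every mode and brackets the active one, so a scripted check just needs to extract the bracketed token (the sample value mirrors the cat output above):

```shell
# Pull the active mode (the bracketed token) out of a sysfs-style value.
val='always madvise [never]'
active=$(echo "$val" | sed -n 's/.*\[\(.*\)\].*/\1/p')
echo "$active"
```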

Security Key Configuration

[ec2-user@ip-10-0-1-62 ~]$ sudo service mongod stop
[ec2-user@ip-10-0-1-62 ~]$ sudo mkdir -p /usr/local/mongodb/conf
[ec2-user@ip-10-0-1-62 ~]$ sudo sh -c 'openssl rand -base64 741 > /usr/local/mongodb/conf/mongod.key'
[ec2-user@ip-10-0-1-62 ~]$ sudo chmod 600 /usr/local/mongodb/conf/mongod.key
[ec2-user@ip-10-0-1-62 ~]$ sudo chown mongod:mongod /usr/local/mongodb/conf/mongod.key
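The keyfile's shape can be rehearsed safely in a temporary directory: mongod expects base64 content and 600 permissions, which the sketch below verifies on a throwaway key:

```shell
# Generate a throwaway keyfile and confirm the properties mongod checks for.
dir=$(mktemp -d)
openssl rand -base64 741 > "$dir/mongod.key"
chmod 600 "$dir/mongod.key"
perms=$(stat -c %a "$dir/mongod.key")   # must be 600 (no group/other access)
bytes=$(wc -c < "$dir/mongod.key")      # 741 random bytes => ~1000 base64 chars
echo "perms=$perms bytes=$bytes"
rm -rf "$dir"
```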

Replication Configuration

[ec2-user@ip-10-0-1-62 ~]$ sudo vi /etc/mongod.conf
# bind_ip left commented out so mongod accepts connections from the other replica members
#bind_ip=127.0.0.1

security:
  authorization: enabled
  javascriptEnabled: true
  keyFile: /usr/local/mongodb/conf/mongod.key

replication:
  replSetName: rs0
[ec2-user@ip-10-0-1-62 ~]$ sudo service mongod start
Starting mongod:                                           [  OK  ]
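
Piecing the fragments above together, the full /etc/mongod.conf at this point would look roughly like the following (YAML format used by MongoDB 3.0; the systemLog and storage paths are the yum-package defaults and are assumptions):

```yaml
systemLog:
  destination: file
  path: /var/log/mongodb/mongod.log
  logAppend: true
storage:
  dbPath: /var/lib/mongo
# net.bindIp is left commented out so mongod listens on all interfaces,
# matching the commented-out bind_ip line in the transcript
security:
  authorization: enabled
  javascriptEnabled: true
  keyFile: /usr/local/mongodb/conf/mongod.key
replication:
  replSetName: rs0
```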

Create an AMI image

Build the mongo replica set

xg-mongo-1 / primary / AZ1
xg-mongo-2 / secondary / AZ2
xg-mongo-hidden / hidden / AZ1

For the optimal replica set configuration, see:
http://qiita.com/y13i/items/78c9f45acd07a9478d2d

Setup manual reference:
https://docs.mongodb.org/v3.0/tutorial/enable-internal-authentication/

Run the following on the primary

[ec2-user@primary]$ mongo
> use admin
> db.createUser( {
    user: "admin",
    pwd: "xgn8020",
    roles: [ { role: "userAdminAnyDatabase", db: "admin" } ]
  });
> db.createUser( {
    user: "root",
    pwd: "xgn8020",
    roles: [ { role: "root", db: "admin" } ]
  });
> rs.initiate();
{
    "info2" : "no configuration explicitly specified -- making one",
    "me" : "ip-10-0-0-165:27017",
    "ok" : 1
}
rs0:OTHER> rs.status();
{
    "set" : "rs0",
    "date" : ISODate("2016-02-15T11:56:09.281Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "ip-10-0-0-165:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 197,
            "optime" : Timestamp(1455537328, 1),
            "optimeDate" : ISODate("2016-02-15T11:55:28Z"),
            "electionTime" : Timestamp(1455537328, 2),
            "electionDate" : ISODate("2016-02-15T11:55:28Z"),
            "configVersion" : 1,
            "self" : true
        }
    ],
    "ok" : 1
}
rs0:PRIMARY> rs.conf();
{
    "_id" : "rs0",
    "version" : 1,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-10-0-0-165:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}

rs0:PRIMARY> rs.add({host: "ip-10-0-0-49.ap-northeast-1.compute.internal:27017"});
{ "ok" : 1 }
rs0:PRIMARY> rs.add({host: "ip-10-0-0-102.ap-northeast-1.compute.internal:27017", priority: 0, hidden: true});
{ "ok" : 1 }
rs0:PRIMARY> rs.conf();
{
    "_id" : "rs0",
    "version" : 3,
    "members" : [
        {
            "_id" : 0,
            "host" : "ip-10-0-0-165:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 1,
            "host" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : false,
            "priority" : 1,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        },
        {
            "_id" : 2,
            "host" : "ip-10-0-0-102.ap-northeast-1.compute.internal:27017",
            "arbiterOnly" : false,
            "buildIndexes" : true,
            "hidden" : true,
            "priority" : 0,
            "tags" : {

            },
            "slaveDelay" : 0,
            "votes" : 1
        }
    ],
    "settings" : {
        "chainingAllowed" : true,
        "heartbeatTimeoutSecs" : 10,
        "getLastErrorModes" : {

        },
        "getLastErrorDefaults" : {
            "w" : 1,
            "wtimeout" : 0
        }
    }
}
rs0:PRIMARY> rs.status();
{
    "set" : "rs0",
    "date" : ISODate("2016-02-15T12:01:59.975Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "ip-10-0-0-165:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 547,
            "optime" : Timestamp(1455537700, 1),
            "optimeDate" : ISODate("2016-02-15T12:01:40Z"),
            "electionTime" : Timestamp(1455537328, 2),
            "electionDate" : ISODate("2016-02-15T11:55:28Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 30,
            "optime" : Timestamp(1455537700, 1),
            "optimeDate" : ISODate("2016-02-15T12:01:40Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:01:58.684Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:01:57.971Z"),
            "pingMs" : 0,
            "syncingTo" : "ip-10-0-0-165:27017",
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "ip-10-0-0-102.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 19,
            "optime" : Timestamp(1455537700, 1),
            "optimeDate" : ISODate("2016-02-15T12:01:40Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:01:58.707Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:01:58.762Z"),
            "pingMs" : 3,
            "configVersion" : 3
        }
    ],
    "ok" : 1
}

Verify operation

Primary

[ec2-user@primary]$ mongo
rs0:PRIMARY> use mydb;
switched to db mydb
rs0:PRIMARY> for(var i=0; i<10000; i++) db.logs.insert(
...     { "uid":i, "value":Math.floor(Math.random()*10000+1) } )
WriteResult({ "nInserted" : 1 })
rs0:PRIMARY> db.logs.count()
10000

Secondary

[ec2-user@secondary]$ mongo
rs0:SECONDARY> db.getMongo().setSlaveOk()
rs0:SECONDARY> use admin
switched to db admin
rs0:SECONDARY> db.auth("root", "xgn8020");
1
rs0:SECONDARY> use mydb;
switched to db mydb
rs0:SECONDARY> db.logs.count()
10000

Failover test

[ec2-user@primary]$ ps ax | grep mongo
 8295 ?        Sl     0:07 /usr/bin/mongod -f /etc/mongod.conf
 8592 pts/1    S+     0:00 grep --color=auto mongo
[ec2-user@primary]$ sudo kill -9 8295
[ec2-user@primary]$ ps ax | grep mongo
 8602 pts/1    S+     0:00 grep --color=auto mongo

[ec2-user@secondary]$ mongo
rs0:SECONDARY> use admin;
rs0:SECONDARY> db.auth("root", "xgn8020");
rs0:SECONDARY> rs.status();
{
    "set" : "rs0",
    "date" : ISODate("2016-02-15T12:24:24.870Z"),
    "myState" : 1,
    "members" : [
        {
            "_id" : 0,
            "name" : "ip-10-0-0-165:27017",
            "health" : 0,
            "state" : 8,
            "stateStr" : "(not reachable/healthy)",
            "uptime" : 0,
            "optime" : Timestamp(0, 0),
            "optimeDate" : ISODate("1970-01-01T00:00:00Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:24:23.413Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:24:15.389Z"),
            "pingMs" : 0,
            "lastHeartbeatMessage" : "Failed attempt to connect to ip-10-0-0-165:27017; couldn't connect to server ip-10-0-0-165:27017 (10.0.0.165), connection attempt failed",
            "configVersion" : -1
        },
        {
            "_id" : 1,
            "name" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 1836,
            "optime" : Timestamp(1455538518, 1579),
            "optimeDate" : ISODate("2016-02-15T12:15:18Z"),
            "electionTime" : Timestamp(1455539060, 1),
            "electionDate" : ISODate("2016-02-15T12:24:20Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 2,
            "name" : "ip-10-0-0-102.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 1362,
            "optime" : Timestamp(1455538518, 1579),
            "optimeDate" : ISODate("2016-02-15T12:15:18Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:24:23.407Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:24:24.311Z"),
            "pingMs" : 0,
            "syncingTo" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "configVersion" : 3
        }
    ],
    "ok" : 1
}
rs0:PRIMARY>

Start the old primary server

[ec2-user@primary]$ sudo service mongod start
Starting mongod:                                           [  OK  ]
[ec2-user@primary]$ mongo
MongoDB shell version: 3.0.9
connecting to: test
rs0:SECONDARY> use admin;
switched to db admin
rs0:SECONDARY> db.auth("root", "xgn8020");
1
rs0:SECONDARY> rs.status();
{
    "set" : "rs0",
    "date" : ISODate("2016-02-15T12:26:02.276Z"),
    "myState" : 2,
    "members" : [
        {
            "_id" : 0,
            "name" : "ip-10-0-0-165:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 32,
            "optime" : Timestamp(1455538518, 1579),
            "optimeDate" : ISODate("2016-02-15T12:15:18Z"),
            "configVersion" : 3,
            "self" : true
        },
        {
            "_id" : 1,
            "name" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 1,
            "stateStr" : "PRIMARY",
            "uptime" : 30,
            "optime" : Timestamp(1455538518, 1579),
            "optimeDate" : ISODate("2016-02-15T12:15:18Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:26:01.901Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:26:01.568Z"),
            "pingMs" : 25,
            "electionTime" : Timestamp(1455539060, 1),
            "electionDate" : ISODate("2016-02-15T12:24:20Z"),
            "configVersion" : 3
        },
        {
            "_id" : 2,
            "name" : "ip-10-0-0-102.ap-northeast-1.compute.internal:27017",
            "health" : 1,
            "state" : 2,
            "stateStr" : "SECONDARY",
            "uptime" : 30,
            "optime" : Timestamp(1455538518, 1579),
            "optimeDate" : ISODate("2016-02-15T12:15:18Z"),
            "lastHeartbeat" : ISODate("2016-02-15T12:26:01.901Z"),
            "lastHeartbeatRecv" : ISODate("2016-02-15T12:26:00.495Z"),
            "pingMs" : 25,
            "syncingTo" : "ip-10-0-0-49.ap-northeast-1.compute.internal:27017",
            "configVersion" : 3
        }
    ],
    "ok" : 1
}

Confirm that the primary and secondary have swapped roles

Add an account for the SDK

rs0:PRIMARY> db.auth("root", "xgn8020");
rs0:PRIMARY> use sdk
rs0:PRIMARY> db.createUser( {
    user: "x-generation",
    pwd: "xgn8020",
    roles: [ { role: "readWrite", db: "sdk" } ]
  });

SDK data store migration

[root@ip-10-0-0-152 db]# mongodump --host 153.126.205.80 --username x-generation --password xgn8020 --db sdk --out ./mongo.dump
[root@ip-10-0-0-152 db]# mongorestore --host rs0/10.0.1.62:27017,10.0.0.15:27017,10.0.1.182:27017 --username x-generation --password xgn8020 --db sdk --drop ./mongo.dump/sdk

References

Replication
http://gihyo.jp/dev/serial/01/mongodb/0004?page=1

mongo3 on aws
https://docs.mongodb.org/v3.0/tutorial/install-mongodb-on-amazon/

mongo3 auth
https://docs.mongodb.org/manual/tutorial/enable-authentication/

Building high-performance MongoDB on AWS
http://d.hatena.ne.jp/hiroppon/20151222/1450755422

mongodb 3.0.x configuration file template
http://qiita.com/crumbjp/items/135b76b0f9bca0c912f9

http://www.cyberagent.co.jp/technology/ca_tech/report/8987184.html

spark

http://spark.apache.org/docs/latest/ec2-scripts.html

Instance: xg-spark
Instance type: m4.large
Note: with too little memory the cluster fails to start, so at least about 4 GB is required

Copy the key file

[~]$ scp -i project/aws/xg.pem project/aws/xg.pem ec2-user@52.196.70.239:/home/ec2-user/

Set the AWS access keys

[ec2-user@ip-10-0-1-9 ~]$ vi ~/.bash_profile 
export AWS_ACCESS_KEY_ID=AKIAIYA7QQOGFGVKZITA
export AWS_SECRET_ACCESS_KEY=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
[ec2-user@ip-10-0-1-9 ~]$ source ~/.bash_profile

Download Spark

[ec2-user@ip-10-0-1-9 ~]$ wget http://ftp.tsukuba.wide.ad.jp/software/apache/spark/spark-1.6.1/spark-1.6.1.tgz
[ec2-user@ip-10-0-1-9 ~]$ tar zxvf spark-1.6.1.tgz 
[ec2-user@ip-10-0-1-9 ~]$ cd spark-1.6.1/ec2/

Launch the Spark cluster

[ec2-user@ip-10-0-1-9 ec2]$ ./spark-ec2 -k xg -i /home/ec2-user/xg.pem -s 1 -r ap-northeast-1 -z ap-northeast-1a -t c3.xlarge --vpc-id=vpc-7a31751f --subnet-id=subnet-78b4950f launch xg

Warning: SSH connection error. (This could be temporary.)
Host: ec2-52-196-75-32.ap-northeast-1.compute.amazonaws.com
SSH return code: 255
SSH output: ssh: connect to host ec2-52-196-75-32.ap-northeast-1.compute.amazonaws.com port 22: Connection refused

→ For some reason the instance launch fails on the first attempt, so resume the launch

[ec2-user@ip-10-0-1-36 ec2]$ ./spark-ec2 -k xg -i /home/ec2-user/xg.pem -s 1 -r ap-northeast-1 -z ap-northeast-1a -t c3.xlarge --vpc-id=vpc-7a31751f --subnet-id=subnet-78b4950f launch --resume xg

To destroy the Spark cluster:

[ec2-user@ip-10-0-1-9 ec2]$ ./spark-ec2 -r ap-northeast-1 -z ap-northeast-1a destroy xg

Log in to the master

[ec2-user@ip-10-0-1-36 ec2]$ ./spark-ec2 -k xg -i /home/ec2-user/xg.pem -r ap-northeast-1 -z ap-northeast-1a login xg

ampcamp

http://ampcamp.berkeley.edu/3/exercises/launching-a-bdas-cluster-on-ec2.html

Create a key pair in the us-east region

key pair: amplab-training
key file: amplab-training.pem

[~]$ chmod 600 project/aws/amplab-training.pem
[~]$ scp -i project/aws/xg.pem project/aws/amplab-training.pem :/home/ec2-user/

[~]$ ssh -i project/aws/xg.pem
[ec2-user@ip-10-0-1-36 ~]$ sudo yum install git
[ec2-user@ip-10-0-1-36 ~]$ git clone git://github.com/amplab/training-scripts.git
[ec2-user@ip-10-0-1-36 ~]$ cd training-scripts/
[ec2-user@ip-10-0-1-36 training-scripts]$ ./spark-ec2 -k amplab-training -i /home/ec2-user/amplab-training.pem --copy launch amplab-training

subprocess.CalledProcessError: Command 'ssh -t -o StrictHostKeyChecking=no -i /home/ec2-user/amplab-training.pem 'mkdir -p ~/.ssh'' returned non-zero exit status 255

An error occurs, but the launch itself succeeds

[ec2-user@ip-10-0-1-36 training-scripts]$ ./spark-ec2 -k xg -i /home/ec2-user/xg.pem get-master amplab-training

Log in from the local machine
[~]$ ssh -i project/aws/amplab-training.pem

Kinesis sample test

[root@ip-10-0-1-27 kinesis-test]$ sbt package
[info] Loading project definition from /root/kinesis-test/project
[info] Set current project to x-generation-aggregator (in build file:/root/kinesis-test/)
[info] Compiling 3 Scala sources to /root/kinesis-test/target/scala-2.10/classes...
[error] /root/kinesis-test/src/main/scala/KinesisWordCountASL.scala:33: object internal is not a member of package org.apache.spark
[error] import org.apache.spark.internal.Logging
[error]                         ^
[error] /root/kinesis-test/src/main/scala/KinesisWordCountASL.scala:77: not found: type Logging
[error] object KinesisWordCountASL extends Logging {
[error]                                    ^
[error] /root/kinesis-test/src/main/scala/KinesisWordCountASL.scala:264: not found: type Logging
[error] private[streaming] object StreamingExamples extends Logging {
[error]                                                     ^
[error] /root/kinesis-test/src/main/scala/KinesisWordCountASL.scala:271: not found: value logInfo
[error]       logInfo("Setting log level to [WARN] for streaming example." +
[error]       ^
[error] four errors found
[error] (compile:compileIncremental) Compilation failed
[error] Total time: 4 s, completed 2016/03/28 12:49:01
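
All four compile errors trace back to the first one: `org.apache.spark.internal.Logging` only exists from Spark 2.0 onward, while the cluster launched above runs Spark 1.6, where the trait lives at `org.apache.spark.Logging`. A minimal sketch of the fix, shown here against a here-document standing in for the source file:

```shell
# Rewrite the Spark 2.x-style import to the Spark 1.6 package layout;
# the here-document stands in for the offending line of the source file.
sed 's/org\.apache\.spark\.internal\.Logging/org.apache.spark.Logging/' <<'EOF'
import org.apache.spark.internal.Logging
EOF
```

Against the real tree this would be applied in place, e.g. `sed -i '…' src/main/scala/KinesisWordCountASL.scala`, after which `sbt package` should get past these errors.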

File