
Deploying with Rubber on EC2

Make sure the asset pipeline will compile by adding the following to your Gemfile:

gem 'therubyracer', :group => :assets

In your application, install Rubber:

bundle install
gem install rubber

Install the Rubber template that ships with all the configuration for the dependencies your application needs, in my case: Unicorn, Nginx, and PostgreSQL:

rubber vulcanize complete_unicorn_nginx_postgresql

During installation, Rubber creates the configuration files for each dependency. Now let's configure Rubber.

Configuring the access key

Generate the public half of your EC2 key:

mkdir ~/.ec2
cd ~/.ec2

cp chave.pem chave
chmod 600 chave
ssh-keygen -y -f chave.pem > chave.pub

Note: the key pair you generate the public key for must be in the same region where Rubber will create the instance; when you create a key pair on AWS, it is registered only in the region where it was created, e.g. (sa-east-1).
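Before pointing Rubber at the key, it's worth confirming that the .pub file really matches the private key, since a mismatch is a common cause of SSH failures during deploy. A minimal sketch using a throwaway key (substitute your own chave.pem / chave.pub):

```shell
# Create a throwaway key pair just for the demo (use your chave.pem in practice)
ssh-keygen -t rsa -b 2048 -N '' -q -f /tmp/demo_key

# Derive the public key from the private key, as done above
ssh-keygen -y -f /tmp/demo_key > /tmp/demo_key.derived.pub

# Compare only the key material (type + base64 blob), ignoring the comment field
awk '{print $1, $2}' /tmp/demo_key.pub > /tmp/expected
awk '{print $1, $2}' /tmp/demo_key.derived.pub > /tmp/derived
diff /tmp/expected /tmp/derived && echo "public key matches private key"
```

If `diff` reports a difference, regenerate the .pub file from the correct .pem before continuing.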

Configuring Rubber

Edit config/rubber/rubber.yml; these are basically the keys you need to change:

  • app_name
  • admin_email
  • web_tools_user
  • web_tools_password
  • timezone
  • domain
  • cloud_providers → aws → region
  • cloud_providers → aws → access_key
  • cloud_providers → aws → secret_access_key
  • cloud_providers → aws → account
  • cloud_providers → aws → key_name
  • cloud_providers → aws → key_file
  • cloud_providers → aws → image_type
  • cloud_providers → aws → image_id

Since I am using PostgreSQL (in production), you also need to edit config/rubber/rubber-postgresql.yml with your database password. The app's production database configuration (config/database.yml) will be created automatically by Rubber.

  • db_pass

Watch out if you use different databases in development and production: if you are using SQLite for development, don't forget to add gem 'pg' to your Gemfile so no errors occur during the deploy.

(As of Rails 4.1) Edit your config/secrets.yml, setting a secret for production. You can generate one with the command below; copy the generated value into production → secret_key_base:

rake secret
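In Rails 4.1, rake secret just asks SecureRandom for a long hex string, so if you want to see what such a value looks like without a Rails app handy, an equivalent can be produced with OpenSSL (a sketch, not part of the Rubber setup itself):

```shell
# Equivalent of `rake secret`: 64 random bytes rendered as 128 hex characters
SECRET=$(openssl rand -hex 64)
echo "${#SECRET} characters: $SECRET"
```

Paste the whole string into secret_key_base; there is no benefit to truncating it.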

My entire configured file is below:

# REQUIRED: The name of your application
app_name: app

# REQUIRED: The system user to run your app servers as
app_user: app

# REQUIRED: Notification emails (e.g. monit) get sent to this address
#
admin_email: "root@#{full_host}"

# OPTIONAL: If not set, you won't be able to access web_tools
# server (graphite, graylog, monit status, haproxy status, etc)
web_tools_user: admin
web_tools_password: sekret

# REQUIRED: The timezone the server should be in
timezone: America/Fortaleza

# REQUIRED: the domain all the instances should be associated with
#
domain: app.com

# OPTIONAL: See rubber-dns.yml for dns configuration
# This lets rubber update a dynamic dns service with the instance alias
# and ip when they are created. It also allows setting up arbitrary
# dns records (CNAME, MX, Round Robin DNS, etc)

# OPTIONAL: Additional rubber file to pull config from if it exists. This file will
# also be pushed to remote host at Rubber.root/config/rubber/rubber-secret.yml
#
# rubber_secret: "#{File.expand_path('~') + '/.ec2' + (Rubber.env == 'production' ? '' : '_dev') + '/rubber-secret.yml' rescue 'rubber-secret.yml'}"

# OPTIONAL: Encryption key that was used to obfuscate the contents of rubber-secret.yml with "rubber util:obfuscation" 
# Not that much better when stored in here, but you could use a ruby snippet in here to fetch it from a key server or something
#
# rubber_secret_key: "XXXyyy=="

# REQUIRED All known cloud providers with the settings needed to configure them
# There's only one working cloud provider right now - Amazon Web Services
# To implement another, clone lib/rubber/cloud/aws.rb or make the fog provider 
# work in a generic fashion
#
cloud_providers:
  aws:
    # REQUIRED The AWS region that you want to use.
    #
    # Options include
    #   ap-northeast-1 # Asia Pacific (Tokyo) Region
    #   ap-southeast-1 # Asia Pacific (Singapore) Region
    #   ap-southeast-2 # Asia Pacific (Sydney) Region
    #   eu-west-1      # EU (Ireland) Region
    #   sa-east-1      # South America (Sao Paulo) Region
    #   us-east-1      # US East (Northern Virginia) Region
    #   us-west-1      # US West (Northern California) Region
    #   us-west-2      # US West (Oregon) Region
    #
    region: sa-east-1

    # REQUIRED The amazon keys and account ID (digits only, no dashes) used to access the AWS API
    #
    access_key: AAAAAAAAAAAAAAAAAAAA # -> https://console.aws.amazon.com/iam/home#security_credential -> Access Keys (Access Key ID and Secret Access Key)
    secret_access_key: bbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbbb # -> https://console.aws.amazon.com/iam/home#security_credential -> Access Keys (Access Key ID and Secret Access Key)
    account: '000000000000' # -> https://console.aws.amazon.com/billing/home?#/account

    # REQUIRED: The name of the amazon keypair and location of its private key
    #
    # NOTE: for some reason Capistrano requires you to have both the public and
    # the private key in the same folder, the public key should have the
    # extension ".pub". The easiest way to get your hand on this is to create the
    # public key from the private key: ssh-keygen -y -f gsg-keypair > gsg-keypair.pub
    #
    key_name: chave
    key_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ssh/*' + cloud_providers.aws.key_name].first}" # On Windows I put my .pem access key in the ~/.ssh folder, i.e. C:\Users\your_user\.ssh

    # OPTIONAL: Needed for bundling a running instance using rubber:bundle
    #
    # pk_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/pk-*'].first}"
    # cert_file: "#{Dir[(File.expand_path('~') rescue '/root') + '/.ec2/cert-*'].first}"
    # image_bucket: "#{app_name}-images"

    # OPTIONAL: Needed for backing up database to s3
    # backup_bucket: "#{app_name}-backups"

    # REQUIRED: the ami and instance type for creating instances
    # The Ubuntu images at http://alestic.com/ work well
    # Ubuntu 14.04.1 Trusty instance-store 64-bit: ami-92f569fa
    #
    # m1.small or m1.large or m1.xlarge
    image_type: t2.small
    image_id: ami-69d26774 # Ubuntu Server 14.04 LTS (HVM), SSD Volume Type, 64-bit

    # OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
    # Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
    # guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
    # source code for the option names.
    # fog_options:
    #   # EBS I/O optimized instance
    #   # EBS-optimized instances deliver dedicated throughput between Amazon EC2 and Amazon EBS, with options
    #   # between 500 Mbps and 1000 Mbps depending on the instance type used.
    #   # Read more and make sure that your image_type supports ebs_optimized function at: http://aws.amazon.com/ec2/instance-types/
    #   ebs_optimized: false

    # OPTIONAL: EC2 spot instance request support.
    #
    # Enables the creation of spot instance requests. Rubber will wait synchronously until the request is fulfilled,
    # at which point it will begin initializing the instance, unless spot_instance_request_timeout is set.
    # spot_instance: true
    #
    # The maximum price you would like to pay for your spot instance.
    # spot_price: "0.085"
    #
    # If a spot instance request can't be fulfilled in 3 minutes, fallback to on-demand instance creation. If not set,
    # the default is infinite.
    # spot_instance_request_timeout: 180

  digital_ocean:
    # REQUIRED: The Digital Ocean region that you want to use.
    #
    # Options include
    #   New York 1
    #   Amsterdam 1
    #   San Francisco 1
    #   New York 2
    #   Amsterdam 2
    #   Singapore 1
    #
    # These change often. Check https://www.digitalocean.com/droplets/new for the most up to date options.
    # Default to New York 2 since this is the only region that currently supports private networking
    region: New York 2

    # REQUIRED: The image name and type for creating instances.
    image_id: Ubuntu 14.04 x64
    image_type: 512MB

    # Optionally enable private networking for your instances.
    # This is currently only supported in New York 2.
    private_networking: true

  # Use an alternate cloud provider supported by fog. This doesn't fully work
  # yet due to differences in providers within fog, but gives you a starting
  # point for contributing a new provider to rubber. See rubber/lib/rubber/cloud(.rb)
  fog:
    credentials:
      provider: rackspace
      rackspace_api_key: 'XXX'
      rackspace_username: 'YYY'
    image_type: 123
    image_id: 123

# REQUIRED the cloud provider to use
#
cloud_provider: aws

# OPTIONAL: Where to store instance data.
#
# Allowed forms are:
# filesystem: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"
# cloud storage (s3): "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}.yml"
# cloud table (simpledb): "table:RubberInstances_#{app_name}_#{Rubber.env}"
#
# If you need to port between forms, load the rails console then:
# Rubber.instances.save(location)
# where location is one of the allowed forms for this variable
#
# instance_storage: "file:#{Rubber.root}/config/rubber/instance-#{Rubber.env}.yml"

# OPTIONAL: Where to store a backup of the instance data
#
# This is most useful when using a remote store in case you end up
# wiping the single copy of your instance data. When using the file
# store, the instance file is typically under version control with
# your project code, so that provides some safety.
#
# instance_storage_backup: "storage:#{cloud_providers.aws.backup_bucket}/RubberInstances_#{app_name}/instance-#{Rubber.env}-#{Time.now.strftime('%Y%m%d-%H%M%S')}.yml"

# OPTIONAL: Default ports for security groups
web_port: 80
web_ssl_port: 443
web_tools_port: 8080
web_tools_ssl_port: 8443

# OPTIONAL: Define security groups
# Each security group is a name associated with a sequence of maps where the
# keys are the parameters to the ec2 AuthorizeSecurityGroupIngress API
# source_security_group_name, source_security_group_owner_id
# ip_protocol, from_port, to_port, cidr_ip
# If you want to use a source_group outside of this project, add "external_group: true"
# to prevent group_isolation from mangling its name, e.g. to give access to graphite
# server to other projects
#
# security_groups:
# graphite_server:
# description: The graphite_server security group to allow projects to send graphite data
# rules:
# - source_group_name: yourappname_production_collectd
# source_group_account: 123456
# external_group: true
# protocol: tcp
# from_port: "#{graphite_server_port}"
# to_port: "#{graphite_server_port}"
#
security_groups:
  default:
    description: The default security group
    rules:
      - source_group_name: default
        source_group_account: "#{cloud_providers.aws.account}"
      - protocol: tcp
        from_port: 22
        to_port: 22
        source_ips: [0.0.0.0/0]
  web:
    description: "To open up port #{web_port}/#{web_ssl_port} for http server on web role"
    rules:
      - protocol: tcp
        from_port: "#{web_port}"
        to_port: "#{web_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_ssl_port}"
        to_port: "#{web_ssl_port}"
        source_ips: [0.0.0.0/0]
  web_tools:
    description: "To open up port #{web_tools_port}/#{web_tools_ssl_port} for internal/tools http server"
    rules:
      - protocol: tcp
        from_port: "#{web_tools_port}"
        to_port: "#{web_tools_port}"
        source_ips: [0.0.0.0/0]
      - protocol: tcp
        from_port: "#{web_tools_ssl_port}"
        to_port: "#{web_tools_ssl_port}"
        source_ips: [0.0.0.0/0]

# OPTIONAL: The default security groups to create instances with
assigned_security_groups: [default]
roles:
  web:
    assigned_security_groups: [web]
  web_tools:
    assigned_security_groups: [web_tools]

# OPTIONAL: Automatically create security groups for each host and role
# EC2 Classic doesn't allow one to change what groups an instance belongs to after
# creation, so it's good to have some empty ones predefined. EC2 with VPC, however,
# does allow changing security groups after instance creation and allows far fewer
# security groups per instance, so you shouldn't enable this setting if using VPC.
auto_security_groups: false

# OPTIONAL: Automatically isolate security groups for each appname/environment
# by mangling their names to be appname_env_groupname
# This makes it safer to have staging and production coexist on the same EC2
# account, or even multiple apps. NB: due to the security group limits per instance
# in EC2 with VPCs, this option should only be enabled if you're using EC2 Classic.
isolate_security_groups: false

# OPTIONAL: Prompts one to sync security group rules when the ones in amazon
# differ from those in rubber
prompt_for_security_group_sync: true

# OPTIONAL: A list of CIDR address blocks that represent private networks for your cluster.
# Set this to open up wide access to hosts in your network. Consequently, setting the CIDR block
# to anything other than a private, unroutable block would be a massive security hole.
private_networks: [10.0.0.0/8]

# OPTIONAL: The packages to install on all instances
# You can install a specific version of a package by using a sub-array of pkg, version
# For example, packages: [[rake, 0.7.1], irb]
packages: [postfix, build-essential, git-core, libxslt-dev, ntp]

# OPTIONAL: The package manager mirror to use for installation of primary packages (i.e., those not explicitly
# sourced from a different repository). If not specified, whatever mirror configured by your server image
# will be used.
#
# Note that Ubuntu has a special URL that can be used to auto-select the mirror based upon geoip. To use
# it, specify 'mirror://mirrors.ubuntu.com/mirrors.txt' as the value.
# package_manager_mirror: 'mirror://mirrors.ubuntu.com/mirrors.txt'

# OPTIONAL: The command used to identify your particular OS version. This will be used for configurations
# in Rubber templates that are parameterized by OS version (e.g., package lists). If not specified, Ubuntu
# will be assumed.
os_version_cmd: 'lsb_release -sr'

# OPTIONAL: gem sources to setup for rubygems
# gemsources: ["https://rubygems.org"]

# OPTIONAL: The gems to install on all instances
# You can install a specific version of a gem by using a sub-array of gem, version
# For example, gem: [[rails, 2.2.2], open4, aws-s3]
gems: [open4, aws-s3, bundler, [rubber, "#{Rubber.version}"]]

# OPTIONAL: A string prepended to shell command strings that cause multi
# statement shell commands to fail fast. You may need to comment this out
# on some platforms, but it works for me on linux/osx with a bash shell
#
stop_on_error_cmd: "function error_exit { exit 99; }; trap error_exit ERR"

# OPTIONAL: The default set of roles to use when creating a staging instance
# with "cap rubber:create_staging". By default this uses all the known roles,
# excluding slave roles, but this is not always desired for staging, so you can
# specify a different set here
#
# staging_roles: "web,app,db:primary=true"
# Auto detect staging roles
staging_roles: "#{known_roles.reject {|r| r =~ /slave/ || r =~ /^db$/ }.join(',')}"

# OPTIONAL: Lets one assign amazon elastic IPs (static IPs) to your instances
# You should typically set this on the role/host level rather than
# globally , unless you really do want all instances to have a
# static IP
#
# use_static_ip: true

# OPTIONAL: Specifies an instance to be created in the given availability zone
# Availability zones are specified by amazon to be somewhat isolated
# from each other so that hardware failures in one zone shouldn't
# affect instances in another. As such, it is good to specify these
# for instances that need to be redundant to reduce your chance of
# downtime. You should typically set this on the role/host level
# rather than globally. Use cap rubber:describe_zones to see the list
# of zones
# availability_zone: us-east-1a

# OPTIONAL: If you want to use Elastic Block Store (EBS) persistent
# volumes, add them to host specific overrides and they will get created
# and assigned to the instance. On initial creation, the volume will get
# attached _and_ formatted, but if your host disappears and you recreate
# it, the volume will only get remounted thereby preserving your data
#
# hosts:
# my_host:
# availability_zone: us-east-1a
# volumes:
# - size: 100 # size of vol in GBs
# zone: us-east-1a # zone to create volume in, needs to match host's zone
# device: /dev/sdh # OS device to attach volume to
# mount: /mnt/mysql # The directory to mount this volume to
# filesystem: ext4 # the filesystem to create on volume
#
# # OPTIONAL: Provide fog-specific options directly. This should only be used if you need a special setting that
# # Rubber does not directly expose. Since these settings will be passed directly through to fog, we can't make any
# # guarantee about how they work (if fog renames an attribute, e.g., your config will break). Please see the fog
# # source code for the option names.
# fog_options:
# type: gp2 # type of volume, standard (EBS magnetic), io1 (provisioned IOPS - SSD), or gp2 (general purpose - SSD).
# iops: 500 # The number of I/O operations per second (IOPS) that the volume supports.
# # Required when the volume type is io1; not used with non-provisioned IOPS volumes.
# - size: 10
# zone: us-east-1a
# device: /dev/sdi
# mount: /mnt/logs
# filesystem: ext4
# fog_options:
# type: io1
# iops: 500
#
# # volumes without mount/filesystem can be used in raid arrays
#
# - size: 50
# zone: us-east-1a
# device: /dev/sdx
# fog_options:
# type: gp2
# iops: 500
# - size: 50
# zone: us-east-1a
# device: /dev/sdy
# fog_options:
# type: gp2
# iops: 500
#
# # Use some ephemeral volumes for raid array
# local_volumes:
# - partition_device: /dev/sdb
# zero: false # zeros out disk for improved performance
# - partition_device: /dev/sdc
# zero: false # zeros out disk for improved performance
#
# # for raid array, you'll need to add mdadm to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# #
# packages: [xfsprogs, mdadm]
# raid_volumes:
# - device: /dev/md0 # OS device to create raid array on
# mount: /mnt/fast # The directory to mount this array to
# mount_opts: 'nobootwait' # Recent Ubuntu versions require this flag or SSH will not start on reboot
# filesystem: xfs # the filesystem to create on array
# filesystem_opts: -f # the filesystem opts in mkfs
# raid_level: 0 # the raid level to use for the array
# # if you're using Ubuntu 11.x or later (Natty, Oneiric, Precise, etc)
# # you will want to specify the source devices in their /dev/xvd format
# # see https://bugs.launchpad.net/ubuntu/+source/linux/+bug/684875 for
# # more information.
# # NOTE: Only make this change for raid source_devices, NOT generic
# # volume commands above.
# source_devices: [/dev/sdx, /dev/sdy] # the source EBS devices we are creating raid array from (Ubuntu Lucid or older)
# source_devices: [/dev/xvdx, /dev/xvdy] # the source EBS devices we are creating raid array from (Ubuntu Natty or newer)
#
# # for LVM volumes, you'll need to add lvm2 to packages. Likewise,
# # xfsprogs is needed for xfs filesystem support
# packages: [xfsprogs, lvm2]
# lvm_volume_groups:
# - name: vg # The volume group name
# physical_volumes: [/dev/sdx, /dev/sdy] # Devices used for LVM group (you can use just one, but then you can't stripe)
# extent_size: 32 # Size of the volume extent in MB
# volumes:
# - name: lv # Name of the logical volume
# size: 999.9 # Size of volume in GB (slightly less than sum of all physical volumes because LVM reserves some space)
# stripes: 2 # Count of stripes for volume
# filesystem: xfs # The filesystem to create on the logical volume
# filesystem_opts: -f # the filesystem opts in mkfs
# mount: /mnt/large_work_dir # The directory to mount this LVM volume to

# OPTIONAL: You can also define your own variables here for use when
# transforming config files, and they will be available in your config
# templates as <%%= rubber_env.var_name %>
#
# var_name: var_value

# All variables can also be overridden on the role, environment and/or host level by creating
# a sub level to the config under roles, environments and hosts. The precedence is host, environment, role
# e.g. to install mysql only on db role, and awstats only on web01:

# OPTIONAL: Role specific overrides
# roles:
# somerole:
# packages: []
# somerole2:
# myconfig: someval

# OPTIONAL: Environment specific overrides
# environments:
# staging:
# myconfig: otherval
# production:
# myconfig: val

# OPTIONAL: Host specific overrides
# hosts:
# somehost:
# packages: []

After configuring, let's create the instance. During setup, Rubber will create a t2.small Ubuntu instance in the São Paulo region, then install all the application's dependencies (database, HTTP server, Ruby, gems) and finally configure the deploy of your app, which is done via SFTP.

After the app is pushed for the first time, database access, Nginx, Monit, and Graphite are configured:

cap rubber:create_staging

Errors that may occur

If you hit a Capistrano problem like "Undefined method `instance' for Capistrano::Configuration:Class", just uninstall the capistrano gem and run bundle install in your app:

gem uninstall capistrano
bundle install

Once the whole instance is configured, whenever you want to deploy your application again, just run:

cap deploy

If the bundle step of the deploy fails on the server because of a missing gem, the solution is to delete Gemfile.lock, run "bundle install" in your app, and repeat the deploy. =]

If you get this error: "Capistrano::ConnectionError, connection failed for: production.app.com (Timeout::Error: execution expired)", Capistrano is probably trying to connect to an instance that doesn't exist; check that the instance IP for the deploy is correct in config/deploy/production.rb

# Extended Server Syntax
# ======================
# This can be used to drop a more detailed server definition into the
# server list. The second argument is a, or duck-types, Hash and is
# used to set extended properties on the server.

# This is used in the Nginx VirtualHost to specify which domains
# the app should appear on. If you don't yet have DNS setup, you'll
# need to create entries in your local Hosts file for testing.

set :server_name, "55.666.7.888"

set :stage, :production
set :branch, "master"

# Used in case we're deploying multiple versions of the same
# app side by side. Also provides quick sanity checks when looking
# at filepaths
set :full_app_name, "#{fetch(:application)}_#{fetch(:stage)}"

server '55.666.7.888', user: 'root', roles: %w{web app}, ssh_options: {
 user: 'root',
 password: 'XXXXXXX',
 forward_agent: true,
 port: 22,
 verbose: :debug
}

Any questions, just leave a comment! =]

Sources: http://railscasts.com/episodes/347-rubber-and-amazon-ec2?view=asciicast

https://github.com/rubber/rubber/wiki/Quick-Start

http://liggat.org/2014/12/13/a-full-AWS-rails-stack-provision-and-deployment-with-the-rubber-gem/


Zend Framework 1.11.x on EC2

1° Install the LAMP stack: http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/install-LAMP.html

2° Upload your application to /var/www/html. If you do it via FTP, use FileZilla; this tutorial shows how to connect to EC2: https://www.youtube.com/watch?v=e9BDvg42-JI

3° Configure /etc/httpd/conf/httpd.conf with the path to your project folder and the settings Zend requires. Change these lines:

293: 
DocumentRoot "/var/www/html/sua_app"

318:
<Directory "/var/www/html/sua_app">
  Options Indexes MultiViews FollowSymLinks
  AllowOverride All
  Order allow,deny
  Allow from all
</Directory>

4° Restart httpd:

sudo service httpd restart

If you want to access your database with a tool such as HeidiSQL, remember to open port 3306 in your EC2 instance's Security Group. You can also open it after the instance has been created.

A tutorial on connecting HeidiSQL to EC2: http://vtvlab.wordpress.com/2011/12/13/connect-to-mysql-server-from-heidisql-with-ssh/

 

Done! =] ... Any questions, leave a comment.


Setting up an Nginx web server for static websites on AWS

We had demand for static sites and went to try Amazon S3, and found some advantages and disadvantages compared to EC2:

Advantages

  • S3 has a very friendly file-management interface, with no need for server-administration knowledge to upload files over SFTP;
  • It is very easy to manage your domain with S3 and Route 53 (you can follow this complete tutorial: http://chadthompson.me/2013/05/static-web-hosting-with-amazon-s3/);

Disadvantages

  • S3 does not let you configure the HTTP server, so you cannot enable GZIP compression, caching, and other features that are very important for performance and can cut a site's load time by around 50% on average;
  • It does not allow customizing the server, unlike EC2, which gives you full autonomy to configure it;

Since we always chase performance, we will keep our sites on EC2 even though they are static and simple; it takes a bit more effort, but we guarantee better performance for our work.

I assume you already have an EC2 instance created. For this example I am using Ubuntu 14.04.

So let's begin:

1° Install Nginx

sudo apt-get install nginx

2° Create the folders where the site files will be stored. Since we are using the ubuntu user, we will always grant it permission.

sudo mkdir /var/www/seusite
sudo chown ubuntu:ubuntu /var/www/seusite
sudo chmod 774 /var/www/seusite
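Mode 774 grants read/write/execute to the owner and group (ubuntu) and read-only access to everyone else. You can double-check the result with stat, shown here on a temporary directory:

```shell
# Create a stand-in for /var/www/seusite and apply the same mode
mkdir -p /tmp/seusite
chmod 774 /tmp/seusite

# Print the octal mode; should show 774 (rwxrwxr--)
stat -c '%a' /tmp/seusite
```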

3° Let's edit the Nginx configuration file:

sudo vim /etc/nginx/nginx.conf

Inside the file, configure it as follows:

user ubuntu ubuntu;
worker_processes 4;
pid /run/nginx.pid;

events {
 worker_connections 1024;
 # multi_accept on;
}

http {

 ##
 # Basic Settings
 ##

 sendfile on;
 tcp_nopush on;
 tcp_nodelay on;
 keepalive_timeout 60;
 keepalive_requests 100000;
 types_hash_max_size 2048;
 # server_tokens off;
 ignore_invalid_headers on;
 send_timeout 60;

 access_log off;

 server_names_hash_bucket_size 128;
 # server_name_in_redirect off;

 include /etc/nginx/mime.types;
 default_type application/octet-stream;

 ##
 # Logging Settings
 ##

 access_log /var/log/nginx/access.log;
 error_log /var/log/nginx/error.log;

 ##
 # Gzip Settings
 ##

 gzip on;
 gzip_disable "msie6";

 # gzip_vary on;
 gzip_proxied any;
 gzip_comp_level 5;
 # gzip_buffers 16 8k;
 gzip_http_version 1.1;
 gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;

 ##
 # nginx-naxsi config
 ##
 # Uncomment it if you installed nginx-naxsi
 ##

 #include /etc/nginx/naxsi_core.rules;

 ##
 # nginx-passenger config
 ##
 # Uncomment it if you installed nginx-passenger
 ##

 #passenger_root /usr;
 #passenger_ruby /usr/bin/ruby;

 ##
 # Virtual Host Configs
 ##

 include /etc/nginx/conf.d/*.conf;
 include /etc/nginx/sites-enabled/*;

 open_file_cache max=200000 inactive=20s;
 open_file_cache_valid 30s;
 open_file_cache_min_uses 2;
 open_file_cache_errors on;
}

This configuration already includes several performance tips, which you can also follow here: http://dak1n1.com/blog/12-nginx-performance-tuning
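To get a feel for how much the gzip settings save, you can compress a repetitive HTML-like sample locally with the gzip CLI at the same level as gzip_comp_level above (5); nginx achieves similar ratios on text assets:

```shell
# Build a ~40 KB sample of repetitive markup (highly compressible, like real HTML/CSS)
for i in $(seq 1000); do echo '<div class="row"><p>hello world</p></div>'; done > /tmp/sample.html

# Compress at level 5, matching gzip_comp_level in the config above
gzip -5 -c /tmp/sample.html > /tmp/sample.html.gz

# Compare sizes: the .gz file should be a small fraction of the original
stat -c '%n %s bytes' /tmp/sample.html /tmp/sample.html.gz
```

Real pages compress less than this synthetic sample, but 60-80% savings on text is typical, which is exactly what S3 static hosting couldn't give us.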

5° Let's make sure our user has access to the configuration files:

sudo chown ubuntu:ubuntu /var/log/nginx/error.log
sudo chown ubuntu:ubuntu /etc/nginx/nginx.conf

6° Finally, test that the nginx configuration syntax is OK:

nginx -t

Some permission errors may appear, but don't worry; what matters is that the syntax is correct.

nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:1
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: [emerg] open() "/run/nginx.pid" failed (13: Permission denied)
nginx: configuration file /etc/nginx/nginx.conf test failed

7° Now let's create our virtual host:

cd /etc/nginx/conf.d/
sudo vim virtual.conf

In virtual.conf, add the configuration pointing to your site's root folder:

server {
 listen 80;
 server_name seusite.com www.seusite.com;
 root /var/www/seusite;
}

Note that this will only work if your instance is set up with Route 53, with your domain's DNS also managed by Amazon.

Setting up an Nginx server on Amazon is very simple. Later I may write a bash script to automate this process.

References:

http://wbotelhos.com/ruby-unicorn-e-nginx-na-amazon-ec2

http://charles.lescampeurs.org/2008/11/14/fix-nginx-increase-server_names_hash_bucket_size

http://www.tuicool.com/articles/jQFvma

http://omakoleg.blogspot.com.br/2012/04/install-nginx-on-amazom-ec-2-yum.html

http://dak1n1.com/blog/12-nginx-performance-tuning

http://chadthompson.me/2013/05/static-web-hosting-with-amazon-s3/


RailsConf 2014 – Chicago

All the RailsConf presentations and videos. =]

Slides

Concerns, Decorators, Presenters, Service Objects, Helpers, Help Me Decide! – Justin Gordon
https://www.icloud.com/iw/#keynote/BALu9Dy-Dcbu1PvWluyB_G-jq5C6URGmij2F/RailsConf-2014-Concerns-Decorators-Presenters-Service-Objects-Helpers-Help-Me-Decide-April-22-2014

GitHub repo for on boarding juniors devs
https://github.com/heddle317/onboarding

An Ode to 17 Databases in 33 minutes
http://slides.com/tobyhede/an-ode-to-17-database-in-33-minutes-railsconf-2014

Front-End: Fun, Not Frustration
http://roy.io/railsconf

Service Oriented Authentication – Jeremy Green
http://jagthedrummer.github.io/service_oriented_authentication/#/

Videos

Web applications with Ruby (not Rails)


All the videos: http://www.justin.tv/confreaks/b/523059070

Configuring Route 53 to use Google email

1° Log in to your AWS account, go to Route 53, and select your domain.


2° Create a new Record Set of type MX and enter Google's values.


3° Test that it worked. You can look up your mail servers through this site: http://www.kloth.net/services/nslookup.php . Enter the domain, choose the record type you want to find (MX in our case), and check that the result matches Google's servers.


Done =]

Permission denied (publickey) on Bitbucket

Having trouble pushing to your repository? Here are some tips to solve it (on Windows):

1° Find your SSH id_rsa.pub. On Windows it is usually in the C:\Users\your-user\.ssh folder.
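If you don't have an id_rsa / id_rsa.pub pair yet, you can generate one with ssh-keygen. The sketch below runs non-interactively with a temporary path for demonstration; in practice just run ssh-keygen -t rsa and accept the default ~/.ssh/id_rsa location:

```shell
# Generate a 4096-bit RSA key pair; -N '' sets an empty passphrase (demo only)
ssh-keygen -t rsa -b 4096 -C "you@example.com" -N '' -f /tmp/demo_id_rsa -q

# The .pub half is what you paste into Bitbucket
cat /tmp/demo_id_rsa.pub
```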

2° Copy the key's contents, which will look something like this:

ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAwrUAM8H5EsDBtaMy2/3w2sK9fIR2OLR2HyM/QHwX/KbKSq1B0t+3JMwVrihGOlQYV5hI8frPUR56ACbIwCHkYmORPnM/HZaonhWY2DMYLMohYPOAh6tFwNByV9v7R++15y1PvyXJv3DMnpaqorlautL3ffpZ2UwZH6oo9OqxbeZv4okSQb1LX45/PMMkQJ6NaLSDoYP0fT66qVeNyEpxdxz5HOGZJdR4fc7PlC72IOwcsDYuyXMUiqyS2e+NwyT6+NcpRE4xPOPbZ39o4x2uq5FuOvsbalWQlY0/nV37EYyLjjj/pD4+AbC5irV8E45Lagtp0vrKVO4aoxOQitu62Q== xxxxxxxxxxxxxxxxxxxxxxxxx

3° Log in to your Bitbucket account: Manage Account > SSH Keys

4° Click Add Key, give it a title, and paste the contents of your id_rsa.pub into the Key field. Then save.

5° Test the connection from a terminal (I use Git Bash):

ssh -T git@bitbucket.org

You should see this result:

logged in as seu-usuario

Done, you can now push normally =] ...


Improving poor rendering of web fonts and icons

I really like using Google Web Fonts and Icomoon icons in my projects, but oddly, in Chrome they render badly (jagged/pixelated), as in the image below:

(image: poorly rendered fonts)

One of the solutions I found was to add this line to the body rule in my CSS:

body{
  -webkit-text-stroke: 0.2px;
}

It applies a slight blur to the text outlines, making the curves smoother and less jagged.

For the icons, I added these lines at the end of the file that imports the icon fonts:

@media screen and (-webkit-min-device-pixel-ratio:0) {
  @font-face {
    font-family: 'icomoon';
    src: url('fonts/icomoon.svg#icomoon') format('svg');
  }
}

See the final result:

(image: optimized rendering)

 

Source: http://www.dev-metal.com/fix-ugly-font-rendering-google-chrome/