I spend a lot of time doing freelance / consulting work for a variety of businesses. One thing that I see more often than not is git repositories that are littered with passwords, access keys, secret credentials, etc. It’s bad practice from a security standpoint, so it’s usually one of the first things I try to address. There are a number of ways this can be done:
- Set environment variables directly on the server.
- On AWS Elastic Beanstalk, set your environment variables in the Beanstalk console.
- On an EC2 instance, write some custom code to grab them from the user data and convert them to local environment variables on the server.
- Pass them on the command line when starting the application.
- And so on, and so on…
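As a quick sketch of the command-line option: variables set inline apply only to that one process, so nothing is written to disk or committed to the repo. The variable names and the inline `sh -c` stand-in for your real start script are placeholders:

```shell
# Inline variables are scoped to this single process invocation;
# replace the sh -c stand-in with your actual application start command.
DB_USERNAME="admin" DB_PASSWORD="s3cret" sh -c 'echo "connecting as $DB_USERNAME"'
```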
All the above options will work, and you’ll find a ton of different posts on the internet that will give even more. Personally, I like to store them in s3 with encryption and then grab them as needed. I use the s3cmd package to make working with s3 easy. On Ubuntu, it’s easy to get set up:
```shell
sudo apt-get install s3cmd
sudo s3cmd --configure
```
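Once s3cmd is configured, pushing an env file up to the bucket with server-side encryption is a single command. The bucket and file names here are the placeholders used throughout this post; this can't run without live AWS credentials, so treat it as a sketch:

```shell
# Upload the env file with S3 server-side encryption (SSE-S3) enabled,
# so it is encrypted at rest in the bucket.
s3cmd put --server-side-encryption terraform.env s3://your_env_bucket/terraform.env
```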
Terraform Example
Terraform, which is a great tool for managing your infrastructure, requires AWS access keys in order to talk to AWS. In your terraform configuration, you might also need various passwords for doing some post-deploy server setup.
Here’s how I tackle it. First, I don’t want anyone to just run the terraform command directly, because it won’t have the environment variables that we need. So, I rename the terraform binary to something else, like exec_terraform, or whatever you want to call it.
I then create a terraform.env file, and add it to my .gitignore. The file contents look something like this:
```shell
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXX
export TF_VAR_SPECIAL_USERNAME=XXXXXXXXXX
export TF_VAR_SPECIAL_PASSWORD=XXXXXXXXXX
```
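Terraform automatically picks up any environment variable prefixed with TF_VAR_ as an input variable, so TF_VAR_SPECIAL_USERNAME and TF_VAR_SPECIAL_PASSWORD map to variable declarations in your configuration. A sketch, using the names from the example file:

```hcl
# Terraform reads TF_VAR_SPECIAL_USERNAME and TF_VAR_SPECIAL_PASSWORD
# from the environment and binds them to these input variables.
variable "SPECIAL_USERNAME" {}
variable "SPECIAL_PASSWORD" {}
```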
Then I create the terraform command that we actually want to run. Simple little bash script:
```shell
#!/bin/bash
#
# Grab env variables from s3
# source env variables
# execute terraform with the renamed command
# remove env variable file
#
s3cmd -q --force get s3://your_env_bucket/terraform.env
. terraform.env
exec_terraform "$@"
rm terraform.env
```
Docker Example
Say I have a docker-compose.yml file for a laravel setup, which has a bunch of environment variables for the php container and the mysql container. For these, the env_file: directive comes into play:
```yaml
version: '3'
services:
  nginx:
    build: ./nginx
    container_name: nginx
    ports:
      - "4000:80"
    links:
      - php
    volumes:
      - ./php/laravel:/var/www/html
      - /var/www/html/vendor
    working_dir: /var/www/html
  php:
    build: ./php
    container_name: php
    ports:
      - "4001:9000"
    links:
      - db
    volumes:
      - ./php/laravel:/var/www/html
      - /var/www/html/vendor
    working_dir: /var/www/html
    env_file:
      - admin_web.env
  db:
    image: mysql
    restart: always
    container_name: db
    ports:
      - "4306:3306"
    env_file:
      - admin_mysql.env
    volumes:
      - db_data:/var/lib/mysql
volumes:
  db_data:
```
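For reference, the admin_mysql.env file might look something like this. The variable names are the ones the official mysql image reads at startup; the values are placeholders:

```shell
# Read by the official mysql image on first startup
MYSQL_ROOT_PASSWORD=XXXXXXXXXX
MYSQL_DATABASE=XXXXXXXXXX
MYSQL_USER=XXXXXXXXXX
MYSQL_PASSWORD=XXXXXXXXXX
```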
You’ll notice we have two env_file directives, referencing admin_web.env and admin_mysql.env. Those files are not added to the git repo; instead, we store them in our s3 bucket for environment variable files and retrieve them before running our docker-compose up -d:
```shell
#!/bin/bash
s3cmd --force get s3://your_env_bucket/admin_mysql.env .
s3cmd --force get s3://your_env_bucket/admin_web.env .
```
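Following the same pattern as the Terraform wrapper, you could extend this into a small launch script that fetches the env files, brings the stack up, and then deletes the local copies so the secrets don’t linger on disk. A sketch using the same placeholder bucket and file names (it needs live AWS credentials to actually run):

```shell
#!/bin/bash
# Fetch the env files, start the containers, then clean up
# the local copies of the secrets.
s3cmd -q --force get s3://your_env_bucket/admin_mysql.env .
s3cmd -q --force get s3://your_env_bucket/admin_web.env .
docker-compose up -d
rm admin_mysql.env admin_web.env
```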
Wrapup
These are just a couple of options for getting your credentials and passwords out of your git repos and into environment variables. As I mentioned, utilizing s3 is just one of the many options available to you. It may take some time depending on how many of these you have scattered over your applications, but from a security standpoint, it is certainly worth that time investment.