Easy solutions and ideas found after long googling or hard coding
#!/usr/bin/env bash
####Usage
# ./vault.sh encrypt
# ./vault.sh decrypt
# ./vault.sh encrypt /full/path/to/file.yml
######
set -euo pipefail
cd "$(dirname "$0")"
# ${PASSWORD-} avoids an unbound-variable error under set -u
if [ -z "${PASSWORD-}" ]; then
  read -r -s -p "Enter Password: " PASSWORD
  echo
fi
VAULT_FILE=vault_key
echo "${PASSWORD}" > "${VAULT_FILE}"
# Default action is decrypt; "${1-...}" avoids an unbound-variable error under set -u
ACTION="${1-decrypt}"
FILES=(group_vars/prod/*.yml)
if [ -n "${2-}" ]; then
  FILES=("$2")
fi
for FILE in "${FILES[@]}"; do
  if [ "${ACTION}" = "encrypt" ]; then
    echo "Encrypting ${FILE}"
    ansible-vault encrypt "${FILE}.decrypted" --output="${FILE}" --vault-password-file "${VAULT_FILE}"
  else
    echo "Decrypting ${FILE}"
    ansible-vault decrypt "${FILE}" --output="${FILE}.decrypted" --vault-password-file "${VAULT_FILE}"
  fi
done
rm -f "${VAULT_FILE}"
Working example here
Recently I came across a big MongoDB database that needed pagination. But it was surprising how getting to page 2000 ended up in a timeout. Quick research led me to the MongoDB documentation:
The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.
So skip() clearly won’t work for a big collection, which led to the sort-and-$gt solution.
Sorting and $gt: simply sort by a created_at field, then continue from the last known created_at value by querying for the next batch.
// Alternative shown here: pre-fetch all _ids once, then page with $in.
// Note this still pulls every _id into memory, so it only helps when
// the id list fits comfortably in RAM.
ids = db.c.find({}, { _id: 1 }).map(function (item) { return item._id; });
docs = db.c.find({ _id: { $in: ids.slice(2000, 2050) } });
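The sort-and-$gt variant can be sketched as a tiny query builder. `nextPageQuery` is a hypothetical helper name (not a MongoDB API); `db.c` and `created_at` follow the snippet above:

```javascript
// Keyset ("seek") pagination: instead of skip(), remember the last
// created_at value the previous page returned and ask for everything
// after it. The server can then seek straight to it via the index.
function nextPageQuery(lastSeen, pageSize) {
  return {
    filter: lastSeen === null ? {} : { created_at: { $gt: lastSeen } },
    sort: { created_at: 1 },
    limit: pageSize,
  };
}

// In the mongo shell each page then becomes:
//   var q = nextPageQuery(lastSeen, 50);
//   db.c.find(q.filter).sort(q.sort).limit(q.limit);
```

An index on created_at is assumed; unlike the $in/slice trick above, nothing besides the current page is ever loaded.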
Development teams need to keep track of builds on Jenkins, and sometimes an email alert is not flashy enough. Using this simple JavaScript on your team dashboard you can easily track failed builds and urge developers to fix them.
The script uses the Jenkins JSON API with JSONP to request the latest failed builds, along with the author name and the last commit that caused the failure. You will need to add the domain of the page making the request to the "Domains from which to allow requests" list under Jenkins "Configure Global Security". The request also needs Jenkins credentials: either log in to Jenkins in the same browser before running the script, or proxy the request and authenticate with Basic Authorization using your API token.
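A minimal sketch of the JSONP request (helper and callback names are mine, not from the post; the tree/jsonp query parameters and the jobs/color/lastBuild/culprits fields are standard Jenkins remote-API features):

```javascript
// Build a Jenkins JSONP URL asking for each job's name, status color,
// and the last build's result plus the people whose commits went into it.
function jenkinsFailedBuildsUrl(base, callbackName) {
  var tree = "jobs[name,color,lastBuild[result,culprits[fullName]]]";
  return base.replace(/\/+$/, "") + "/api/json" +
    "?tree=" + encodeURIComponent(tree) +
    "&jsonp=" + encodeURIComponent(callbackName);
}

// On the dashboard page, JSONP means injecting a <script> tag; Jenkins
// wraps the JSON response in the named callback:
//   window.showFailures = function (data) {
//     var failed = data.jobs.filter(function (j) { return j.color === "red"; });
//     // render failed[i].name and failed[i].lastBuild.culprits here ...
//   };
//   var s = document.createElement("script");
//   s.src = jenkinsFailedBuildsUrl("https://jenkins.example.com", "showFailures");
//   document.body.appendChild(s);
```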
Service Discovery is a simple PHP command that collects AWS information, such as EC2 and RDS instances in the current region, and saves it along with credentials into an encrypted JSON file on S3. The script then notifies each service via SSH and executes the service-discovery client on each instance. Each client downloads the JSON file and uses it to configure different applications. It can easily be automated through Rundeck or Jenkins to run after each deploy.
Service Discovery is part of AWS PHP Commands.
Usage:
> php console.php aws:services:discover -h
Usage:
  aws:services:discover [options]

Options:
  -f, --forceNotify[=FORCENOTIFY]           Force Notify [default: false]
  -e, --notifyOnly[=NOTIFYONLY]             Notify only one of dev,prod [default: false]
  -c, --continueOnError[=CONTINUEONERROR]   Continue to next EC2 on client failure [default: false]
  -h, --help                                Display this help message
  -q, --quiet                               Do not output any message
  -V, --version                             Display this application version
      --ansi                                Force ANSI output
      --no-ansi                             Disable ANSI output
  -n, --no-interaction                      Do not ask any interactive question
  -v|vv|vvv, --verbose                      Increase the verbosity of messages: 1 for normal output, 2 for more verbose output and 3 for debug

Help:
  Discovers services information and credentials.
#!/usr/bin/env bash
set -e
mkdir -p /root/tmp
# migrate "original db params" "new db params" "new db name"
# $1 and $2 are deliberately left unquoted below: each holds several
# mysql connection flags that must be word-split into separate arguments.
function migrate(){
  mysqldump --skip-lock-tables --single-transaction --add-drop-table $1 > "/root/tmp/${3}.sql"
  echo "CREATE DATABASE IF NOT EXISTS ${3};" | mysql $2
  mysql --max_allowed_packet=1000M $2 "$3" < "/root/tmp/${3}.sql"
  rm -f "/root/tmp/${3}.sql"
}
migrate "-uolduser -poldpassword -h oldhost olddbname" "-unewuser -pnewpassword -h newhost" "newdbname"
#more migrates here ...
Fork here
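If local disk is tight, the same migration can also be streamed without the intermediate dump file. A sketch under the same assumptions about hosts and credentials (the migrate_stream name is mine, not from the repo):

```shell
# Hypothetical streaming variant: pipe the dump straight into the target
# server, skipping the temp file. Same conventions as migrate above:
# $1 = source connection flags, $2 = target connection flags, $3 = target db.
set -o pipefail  # fail the pipeline if mysqldump itself fails
migrate_stream(){
  echo "CREATE DATABASE IF NOT EXISTS ${3};" | mysql $2
  mysqldump --skip-lock-tables --single-transaction --add-drop-table $1 \
    | mysql --max_allowed_packet=1000M $2 "$3"
}
# migrate_stream "-uolduser -poldpassword -h oldhost olddbname" "-unewuser -pnewpassword -h newhost" "newdbname"
```

The trade-off: streaming cannot be retried from the dump file if the import half fails, so keep the file-based version for flaky networks.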