Topics

Setup PostgreSQL RDS using Ansible

Setting up PostgreSQL on RDS using Ansible is a bit tricky because the master user on RDS is not a SUPERUSER and role membership is not granted automatically, so errors like “ERROR: must be member of role ..” are common. Here is a working solution:
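A minimal sketch of that workaround, assuming the community.postgresql collection; the endpoint, role names, database name, and vaulted passwords below are placeholders. The key step is granting the new role to the RDS master user before creating anything owned by it:

- hosts: localhost
  gather_facts: false
  vars:
    rds_host: mydb.abc123.eu-west-1.rds.amazonaws.com   # placeholder endpoint
    master_user: master                                 # the RDS master user
    master_password: "{{ vault_master_password }}"
  tasks:
    - name: Create the application role
      community.postgresql.postgresql_user:
        name: app_user
        password: "{{ vault_app_password }}"
        login_db: postgres
        login_host: "{{ rds_host }}"
        login_user: "{{ master_user }}"
        login_password: "{{ master_password }}"

    - name: Grant app_user to the master user to avoid "must be member of role"
      community.postgresql.postgresql_membership:
        groups: app_user
        target_roles: "{{ master_user }}"
        state: present
        login_db: postgres
        login_host: "{{ rds_host }}"
        login_user: "{{ master_user }}"
        login_password: "{{ master_password }}"

    - name: Create the database owned by app_user
      community.postgresql.postgresql_db:
        name: app_db
        owner: app_user
        login_host: "{{ rds_host }}"
        login_user: "{{ master_user }}"
        login_password: "{{ master_password }}"

Once the master user is a member of app_user, it is allowed to create objects owned by app_user, which is exactly what the “must be member of role” error was blocking.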

Cachita is a Golang file and memory cache library

Cachita

  • Simple caching with auto type assertion included.
  • In memory file cache index to avoid unneeded I/O.
  • Msgpack-based binary serialization using the msgpack library for file caching.

API docs: https://godoc.org/github.com/gadelkareem/cachita.

Examples: https://godoc.org/github.com/gadelkareem/cachita#pkg-examples.

Installation

Install:

go get -u github.com/gadelkareem/cachita
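A minimal usage sketch following the GoDoc examples linked above; the key, value, and TTL here are illustrative:

package main

import (
	"fmt"
	"time"

	"github.com/gadelkareem/cachita"
)

func main() {
	cache := cachita.Memory() // in-memory cache

	// Store a value with a one-minute TTL.
	if err := cache.Put("cache_key", "some data", 1*time.Minute); err != nil {
		panic(err)
	}

	// Get asserts the cached value back into the holder's type.
	var holder string
	err := cache.Get("cache_key", &holder)
	if err != nil && err != cachita.ErrNotFound {
		panic(err)
	}
	fmt.Println(holder) // prints "some data"

	// Drop the entry once it is no longer needed.
	if err := cache.Invalidate("cache_key"); err != nil {
		panic(err)
	}
}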

Ansible vault encrypt/decrypt shell script

#!/usr/bin/env bash

#### Usage
# ./vault.sh encrypt
# ./vault.sh decrypt
# ./vault.sh encrypt /full/path/to/file.yml
######

set -euo pipefail

cd "$(dirname "$0")"

# Prompt for the vault password if it is not already set in the environment
if [ -z "${PASSWORD-}" ]; then
    read -r -s -p "Enter Password: " PASSWORD
    echo
fi

# Write the password to a temporary vault-password file, removed at the end
VAULT_FILE=vault_key
echo "${PASSWORD}" > "${VAULT_FILE}"

# Default action is decrypt; pass "encrypt" as the first argument to encrypt
ACTION=decrypt
if [ -n "${1-}" ]; then
    ACTION="$1"
fi

# Default to all prod group_vars files unless a file is given as the second argument
FILES=(group_vars/prod/*.yml)
if [ -n "${2-}" ]; then
    FILES=("$2")
fi


for FILE in "${FILES[@]}"
do
    if [ "${ACTION}" = "encrypt" ]; then
        echo "Encrypting ${FILE}"
        ansible-vault encrypt "${FILE}.decrypted" --output="${FILE}" --vault-password-file "${VAULT_FILE}"
    else
        echo "Decrypting ${FILE}"
        ansible-vault decrypt "${FILE}" --output="${FILE}.decrypted" --vault-password-file "${VAULT_FILE}"
    fi
done

rm -f "${VAULT_FILE}"

Workaround for slow pagination using skip in MongoDB

Recently I came across a big MongoDB database that needed pagination, and it was surprising that fetching page number 2000 ended in a timeout. A quick search led me to the MongoDB documentation:

The cursor.skip() method is often expensive because it requires the server to walk from the beginning of the collection or index to get the offset or skip position before beginning to return results. As the offset (e.g. pageNumber above) increases, cursor.skip() will become slower and more CPU intensive. With larger collections, cursor.skip() may become IO bound.

So skip clearly won’t work for a large collection, which led me to the sort-and-$gt solution.

Sorting and $gt:

Simply sorting by a created_at field and then fetching the next set from the last created_at value seen, as sketched below, solves the problem.
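A minimal sketch in the mongo shell, assuming an indexed created_at field (the collection and field names are illustrative):

var pageSize = 50;

// First page: plain sort + limit.
var page = db.c.find().sort({ created_at: 1 }).limit(pageSize).toArray();

// Next page: start after the last created_at seen instead of skipping.
var last = page[page.length - 1];
var next = db.c.find({ created_at: { $gt: last.created_at } })
               .sort({ created_at: 1 })
               .limit(pageSize)
               .toArray();

If created_at is not unique, documents sharing the boundary timestamp can be skipped; sorting on an additional unique field such as _id and including it in the range filter avoids that.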

Alternatively, prefetch the sorted list of ids once and page with $in:

ids = db.c.find({}, { _id: 1 }).sort({ _id: 1 }).map(function (item) { return item._id; });
docs = db.c.find({ _id: { $in: ids.slice(2000, 2050) } });