Developing on Staxmanade

Slightly modified “CD” Command for PowerShell: Even better dot.dot.dot.dot...


There's this little "CD" utility that I iterate on every once in a while, and it has become one of my favorite PowerShell tools on Windows.

Not because it's really that great, but because there are navigation habits I've acquired on a Mac in a ZSH terminal that are hard to live without in a Windows PowerShell terminal, and each time I iterate on it, my workflows on Mac and Windows get a little more in sync.

In my previous update I added support for typing cd ...N where cd .. will go up 1 directory, so cd .... will go up 3 directories.

Well, today I found out that I can declare a PowerShell function with the name .. - WHO KNEW?

For example, if you pasted the following into your PowerShell terminal: function ..() { echo "HELLO"; }; ..

This would define the function .. as well as run it and print out HELLO.

This was a fantastic stumble on my part because on my Mac I often go up 1-n directories by typing .. at the terminal, or ..... <-- however many levels I want to go up.

So today I updated the Change-Directory.ps1 with the following shortcuts:

function ..() { cd .. }
function ...() { cd ... }
function ....() { cd .... }
function .....() { cd ..... }
function ......() { cd ...... }
function .......() { cd ....... }
function ........() { cd ........ }

If you're interested in the evolution of this CD tool:

SUPER SWEET!

Happy CD'ing!


Better Compact JIRA Board U.I.


We use JIRA's kanban board for daily workflow of tasks. I started having an issue with the default JIRA board layout where the cards do not show enough of a task's title, especially when you pseudo-group tickets by prefixing their title with some context.

Given that, I decided to hack the CSS of JIRA's board to improve this. Take a look at an example before/after of the tweaks below, then I'll walk through how you can get it if you so desire.

Before

Jira card before css hack

After CSS Hack

Jira card after css hack

How did you do that?

  1. Install a CSS style plugin into your browser. This post uses Stylebot for the Chrome web browser.
  • Just install the plugin from the Chrome store by clicking this link and selecting the [+ ADD TO CHROME] button on the upper right of the page.
  2. Once the plugin is installed, close all tabs and re-open a tab to the JIRA kanban board.

  3. Click the new Stylebot plugin CSS button and the Open Stylebot... option in your Chrome browser toolbar, which will open up a U.I. that allows you to mess around with the page's style.

  4. At the bottom of the Stylebot panel, click Edit CSS, which will give you a blank text box you can write custom CSS into.

  5. Paste in the following CSS and hit Save.

.ghx-band-1 .ghx-issue .ghx-avatar {
    right: auto;
    top: -3px;
    left: 100px;
}

.ghx-issue .ghx-flags {
    left: 20px;
    top: 5px;
}

.ghx-issue .ghx-type {
    left: 5px;
    top: 5px;
}

.ghx-issue-content {
    padding: 5px;
    font-size: 12px;
    margin-top: 3px;
}

.ghx-issue-fields .ghx-key {
    margin-left: 30px;
}

.ghx-avatar-img {
    width: 15px;
    height: 15px;
}

.ghx-band-3 .ghx-issue .ghx-avatar {
    right: auto;
    top: 0px;
    left: 145px;
}

.ghx-issue.ghx-has-avatar .ghx-issue-fields, .ghx-issue.ghx-has-corner .ghx-issue-fields {
    padding-right: 0px;
}

Now you should see a little more data on the page and be able to avoid hovering over titles to get enough context about a ticket to immediately know what it is.

Happy (as best you can) JIRA'ing


How to Update a Single Running docker-compose Container


As a newbie to the tooling, docker-compose is great for getting started. A simple docker-compose up starts all of the service containers at once. However, what if you want to replace an existing container without tearing down the entire suite of containers?

For example: I have a docker-compose project that has the following containers.

  1. Node JS App
  2. CouchDB
  3. Redis Cache

I had a small configuration change within the CouchDB container that I wanted to update and re-start to get going but wasn't sure how to do that.

Here's how I did it with little down time.

I'm hoping there are better ways to go about this (I'm still learning), but the following steps are what I used to replace a running docker container with the latest build.

  1. Make the necessary change to the container (in my case update the couchdb config).
  2. Run docker-compose build couchdb (docker-compose build <service_name>, where service_name is the name of the docker container defined in your docker-compose.yml file).

Once the change has been made and container re-built, we need to get that new container running (without affecting the other containers that were started by docker-compose).

  1. docker-compose stop <service_name> <-- If you want to live on the edge and have the shut-down go faster, try docker-compose kill <service_name>
  2. docker-compose up -d --no-deps <service_name> <-- this brings up the service using the newly built container.

The -d is Detached mode: Run containers in the background, print new container names.

The --no-deps will not start linked services.

That's it... at least for me, it's worked to update my running containers with the latest version without tearing down the entire docker-compose set of services.

Again, if you know of a faster/better way to go about this, I'd love to hear it. Or if you know of any down-sides to this approach, I'd love to hear about it before I have to learn the hard way on a production environment.

UPDATE:

Thanks to Vladimir in the comments below - you can skip several steps above and do it with a single command

docker-compose up -d --no-deps --build <service_name>

I tested this and was able to avoid the build, kill, and up commands with this one-liner.

Happy Container Updating!


Run Multiple Docker Environments (qa, stage, prod) from the Same docker-compose File.


So, I'm playing around with some personal projects and looking to deploy some simple things with Docker to DigitalOcean. This personal project is a small site, and I'd like to set myself up with a repeatable deployment solution (that may be automated as much as possible) so I don't trip over myself with server hosting as I build out the web application.

I'm not really strong with server infrastructure and some of this is "figure it out as I go", while more of it is asking for help from a good friend @icecreammatt who has been a HUGE help as I stumble through this.

But at the end of this tutorial our goal is to satisfy the following requirements.

High level requirements:

Below are some core requirements this walk-through should help address. There is likely room for improvement, and I'd love to hear any feedback you have along the way to simplify things or make them more secure. But hopefully you find this useful.

I want there to be some semblance of a release process with various deployment environments. Push changes to qa regularly, push semi-frequently to stage and when things are good, ship a version to production.

  1. Have access to environments through various domain names. EX: if prod was my-docker-test-site.com then I would also have qa.my-docker-test-site.com and stage.my-docker-test-site.com
  2. Ability to run multiple "environments": qa, stage, prod in the same system. (prob not that many environments - but you get the picture)
  3. Deploy to various environments without affecting other environments. (Ship updates to qa)
  4. It'd be great if I can figure out a mostly zero-downtime deployment. (Not looking for perfect, but the less downtime the better)
  5. Keep costs low. For a small site - running all environments on say a small DigitalOcean droplet. (Is this even possible? We'll see...)
  6. Build various environments, test them out locally and then deploy them to the cloud

While I'd like the ability to run as many environments as I listed above, I will likely use just qa and prod for my small site, but I think the pattern is such that we could easily set up whatever environments we need.

Structure of post:

What we want to do is essentially walk through how I'm thinking about accomplishing the above high level requirements using a simple node js hello world application. This app is a basic node app that renders some environment information just to prove that we can correctly configure and deploy various docker containers for environments such as qa, stage or prod.

In the end, we should end up with something that I like to imagine looks a bit like this diagram:

diagram of nginx-proxy in front of web app containers on a DigitalOcean droplet

I'm going to use DigitalOcean as the cloud provider in this case, but I think this pattern could be applied to other docker hosting environments.

Example Web App Structure

Below is a basic view of the file structure of our site. If you're following along, go ahead and create this structure with empty files, we can fill them in as we go...

.
|____app
| |____Dockerfile
| |____server.js
|____docker-compose.yml

Let's start with the ./app/* files:

Simple NodeJS Hello World app

This is a simple nodejs server that we can use to show that deployment environment variables are passed through and that we are running the correct environment, as well as to show a functioning web server.

File: ./app/server.js

var http = require('http');

var server = http.createServer(function(req, res){
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end(`
Hello World!

VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}

`.split('\n').join('<br>'));
});

server.listen(80);

The goal of this file is to run a nodejs web server that will return a text document with Hello World! along with the environment variables that the current container is running under, such as qa.

This could easily be replaced with a Python, PHP, Ruby, or whatever web server. Just keep in mind the rest of the article may assume it's a node environment (like the Dockerfile up next), so adjust accordingly.

The Dockerfile

Below is pretty basic and says, load up and run our nodejs server.js web app on port 80.

File: ./app/Dockerfile

# Start from a standard nodejs image
FROM node

# Copy in the node app to the container
COPY ./server.js /app/server.js
WORKDIR /app

# Allow http connections to the server
EXPOSE 80

# Start the node server
CMD ["node", "server.js"]

How to get the qa, prod domain mappings

So now that we have a basic application defined, we can't host multiple versions of the app all using port 80 without issue. One approach we can take would be to place an Nginx proxy in front of our containers to allow translation of incoming domain name requests to our various web app containers which we'll use docker to host on different ports.

The power here is we don't have to change the port within the docker container (as shown in the Dockerfile above) but we can use the port mapping feature when starting up the docker container to specify different ports for different environments.

For example I'd like my-docker-test-site.com to map to the production container, qa.my-docker-test-site.com the qa container of my site, etc... I'd rather not access my-docker-test-site.com:7893 or some port for qa, stage, etc...

To accomplish this we are going to use jwilder/nginx-proxy. Check out his introductory blog post on the project. We'll be using the pre-built container directly.

To spin this up on our local system let's issue the following command:

docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

This project is great: as we add or remove containers, they will automatically be added to or removed from the proxy, and we should be able to access their web servers through a VIRTUAL_HOST. (More on how, specifically, below.)

Networking

Before we get too far into the container environment of our app, we need to consider how the containers will be talking to each other.

We can do this using the docker network commands. So we're going to create a new network and then allow the nginx-proxy to communicate via this network.

First we'll create a new network and give it a name of service-tier:

docker network create service-tier

Next we'll configure our nginx-proxy container to have access to this network:

docker network connect service-tier nginx-proxy

Now when we spin up new containers we need to be sure they are also connected to this network or the proxy will not be able to identify them as they come online. This is done in a docker-compose file as seen below.

Put the two together

Now that we've defined our application with the server.js and Dockerfile and we have a nginx-proxy ready to proxy to our environment-specific docker http servers, we're going to use docker-compose to help build our container and glue the parts together as well as pass environment variables through to create multiple deployment environments.

Save this file as docker-compose.yml

version: '2'

services:
  web:
    build: ./app/
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - VIRTUAL_PORT=${PORT}
    ports:
      - "127.0.0.1:${PORT}:80"

networks:
  default:
    external:
      name: service-tier

This file is all about:

  1. The build: ./app/ is the directory where our Dockerfile build is.
  2. The list of environment variables are important. The VIRTUAL_HOST and VIRTUAL_PORT are used by the nginx-proxy to know what port to proxy requests for and at what host/domain name. (We'll show an example later) You can see an earlier exploratory post I wrote explaining more about environment vars.
  3. The ports example is also important. We don't want to access the container by going my-docker-test-site.com:8001 or whatever port we're actually running the container on because we want to use the VIRTUAL_HOST feature of nginx-proxy to allow us to say qa.my-docker-test-site.com. This configuration sets it up to only listen on the loopback network so the nginx-proxy can proxy to these containers but they aren't accessible from the inter-webs.
  4. Lastly the networks: we define a default network for the web app to use the service-tier that we setup earlier. This allows the nginx-proxy and our running instances of the web container to correctly talk to each other. (I actually have no idea what I'm saying here - but it is simple enough to setup and I think it's all good - so I'm going with it for now...)

Now what?

So with all of these pieces in place, all we need to do now is run some docker-compose commands to spin up our necessary environments.

Below is an example script that can be used to spin up qa, and prod environments.

BASE_SITE=my-docker-test-site.com

# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d


# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

This script sets some environment variables that are then used by the docker-compose command, and we're also setting a unique project name with -p ${VIRTUAL_HOST}.

Project Name

What we said just before this headline is a key part of this. What enables us to run essentially the same project (docker-compose/Dockerfile) with different environment variables that define things like qa vs prod is that when we run docker-compose we're also passing in a -p or --project-name parameter. This allows us to create multiple container instances with different environment variables that all run on different ports and, in theory, isolate themselves from the other environments.

The thinking here is you could have a single docker-compose.yml file that has multiple server definitions like say a nodejs web, couchdb, and redis database all running isolated within their environment. You can then use the environment variables to drive various items such as feature-toggling newly developed features in a qa environment, but are not necessarily ready to run in a production environment.

Running/testing this out locally

You probably want to play with this idea and test it out locally before trying to push it to a remote system.

One easy way to do this is to modify your /etc/hosts file (on *nix) or follow this on Windows to map the specific domain names you have set up for your environments to the actual service running docker. This will allow the nginx-proxy to do its magic.

I'm currently still using docker-machine to run my docker environment in a VirtualBox VM so my /etc/hosts file looks like this.

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost

192.168.99.100 qa.my-docker-test-site.com
192.168.99.100 stage.my-docker-test-site.com
192.168.99.100 my-docker-test-site.com

If you have the docker containers running that we've worked through so far (for all environments) we should be able to visit qa.my-docker-test-site.com in the browser and hopefully get this:

Hello World!

VIRTUAL_HOST: qa.my-docker-test-site.com
NODE_ENV: qa
PORT: 8001

Also try out the production environment at my-docker-test-site.com to verify it is working as expected.

THIS IS AWESOME :) I was actually quite happy to have traveled this far in this exploration. But now let's try to take it up a notch and deploy what we just built locally to DigitalOcean in the cloud.

Deploy to the Cloud!

Now how do we get this locally running multi-environment system up to a server in the cloud?

Just tonight while researching options I found this simple set of steps to get it going on DigitalOcean. I say simple because you should have seen the original steps I was going to try and use to deploy this... sheesh.

These are the steps we're going to walk through.

  1. Get an Account @ DigitalOcean
  2. Create a Docker Droplet (this was way-cool)
  3. Build and Deploy our nginx-proxy.
  4. Build and Deploy our App
  5. Configure our DNS (domain name)
  6. Profit!

Tonight I discovered this blog post on docker that describes using docker-machine with the digitalocean driver to do basically everything we did above - but IN THE CLOUD - kind of blew me away actually.

Get an Account

First make sure you've signed up with a DigitalOcean account and are signed in.

Create a Docker Droplet (this was way-cool)

Next we're going to use a cool feature of docker-machine where we can leverage the DigitalOcean driver to help us create and manage our docker images.

Complete Step 1 and Step 2 in the following post DigitalOcean example to acquire a DigitalOcean personal access token.

Now that you have your DigitalOcean API token (you do, right?), either pass it directly into the below command (in place of $DOTOKEN) or set a local var as demonstrated.

DOTOKEN=XXXX <-- your token there...
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment

NOTE: There was an issue where this used to work but stopped with the docker-machine DigitalOcean default image (reported here). To work around this, try using a different image name.

EX:

export DIGITALOCEAN_IMAGE="ubuntu-16-04-x64"
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment

If you refresh your DigitalOcean droplets page you should see a new droplet called docker-multi-environment.

We can now configure our local terminal to allow all docker commands to run against this remotely running docker environment on our newly created droplet.

eval $(docker-machine env docker-multi-environment)

If you run docker ps the list should be empty, but it is literally showing the containers that are running up at DigitalOcean in our droplet. How awesome is that?

Build and Deploy our nginx-proxy.

Now that we can just speak docker in the cloud, run the following commands - these all assume we're executing them against the DigitalOcean droplet in the cloud!

  1. Spin up our remote nginx-proxy on the remote droplet

    docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
    
  2. Create our network

    docker network create service-tier
    
  3. Tell nginx-proxy about this network

    docker network connect service-tier nginx-proxy
    

Build and Deploy our App

I know this post has gotten a bit long, but if you've made it this far we're almost there...

If you were to run the following script in our local project's folder where we have the docker-compose.yml file:

Be sure to update BASE_SITE=my-docker-test-site.com with your domain name or sub-domain like BASE_SITE=multi-env.my-docker-test-site.com

BASE_SITE=my-docker-test-site.com

# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d


# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

You should now be able to run docker ps or docker-compose ps and see 3 containers running: the nginx-proxy, your qa site, and the prod site.

All that's left is to make sure DNS is configured and pointing to our nginx-proxy front-end...

While playing with this, I kept tearing down droplets and re-building them as I worked through this tutorial, and I kept forgetting to adjust my DNS settings. However, right in the middle of writing this tutorial, DigitalOcean came out with Floating IPs, which wasn't perfect, but definitely made this easier to work with. I didn't have to keep updating the IP address of my droplet; instead I could just point the floating IP at the newly created droplet.

Configure our DNS (domain name)

I'm assuming you've already purchased a domain name that you can setup and configure on DigitalOcean. So I don't want to go too far into this process.

I also think DNS is out of scope for this post (as there are many others who can do a better job) but I used some great resources such as these while configuring my DigitalOcean setup.

Environment All Things

If you've made it this far, you hopefully have a DigitalOcean droplet that is now serving qa and prod http requests.

NICE!!!

Now the most important thing - how to seamlessly update an environment with a new build...

Let's make an update to QA.

Now that we've deployed our site, let's walk through this a little further: make a modification to our qa site and see if we can get it deployed without causing any downtime, especially to the prod site. Maybe we can also get an in-place deployment done and have little-to-no downtime in qa as well.

I wrote that paragraph above the other night near bedtime and as I'm learning some of this on the fly had no idea if this would be easy enough to accomplish, but to my surprise deploying an update to qa was a piece of cake.

For this test I made a simple change to my node web server code so I could easily see that the change was deployed (or not).

I turned Hello World! into Hello World! V2 below.

File: ./app/server.js

var http = require('http');

var server = http.createServer(function(req, res){
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end(`
Hello World! V2

VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}

`.split('\n').join('<br>'));
});

server.listen(80);

I then used docker-compose to bring up another "environment" but using the same qa VIRTUAL_HOST as before.

BASE_SITE=my-docker-test-site.com

export NODE_ENV=qa
export PORT=8004
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST}x2 up -d

NOTICE how we added x2 to the -p parameter (just to give it a different project name, since it's a different version).

This will bring up another docker container with our updated web application and to my surprise the nginx-proxy automatically chose this new container to send requests to.

So if you run docker ps you should see 4 containers running: 1 nginx-proxy, 1 prod container, and 2 qa containers (with different names).

You can think about leaving both containers running for the moment while you test out the new release.

One neat thing about this is that if there was something seriously wrong with the new qa release you could just stop the new container (docker stop <new_container_id>) and the proxy will start redirecting back to the old qa container. (That only works, of course, if your deployment was immutable - meaning the new container didn't run some one-way database migration script... but that's not something I want to think about or cover in this post.)

Once you're comfortable running the new version you can now bring down and cleanup the older version.

docker ps # to list the containers running
docker stop <old_qa_container_id>

docker images # to list the images we have on our instance
docker rmi <old_qa_image_id>

Now let's completely remove our test...

You probably don't want to run the sample node script from above forever as you'll be charged some money from DigitalOcean for this and I'd feel bad if you received a bill for this little test beyond a few pennies as you test it out...

The following command will completely remove the droplet from DigitalOcean.

docker-machine rm docker-multi-environment

Wrap Up and What's Next?

I feel like I've done enough learning and sharing in this post. But there is still more to do...

If you want to check out the snippets above combined into a sample github repository I've put it up here.

Future thinking...

I don't know if I'll blog about these, but I definitely want to figure them out. If you find a way to extend my sample above to include the following I'd love to hear about it...

  • SSL (consider cloudflare or letsencrypt?)
  • Easy way to secure the qa/stage environments?

Happy Docker Environment Building!


Easily simulate slow async calls using JavaScript async/await


Recently I wanted to manually and visually test some U.I. that I couldn't easily see because an async operation was happening too fast. (First world problems.)

I've really been enjoying the new async/await syntax that can be leveraged in recent JavaScript transpilers such as Babel and TypeScript, but when things happen so fast in a local development environment, some U.I. interactions can get tough to test. So how can we slow this down, or more appropriately, add some stall time?

As an example, let's say you have the following async javascript method


var doSomething = async () => {
  var data = await someRemoteOperation();
  await doSomethingElse(data);
}

If the first or second asynchronous method in the example above runs too fast (as they were in my case), I was pleased to see how easy it was to inject a bit of code to manually stall the operation.

The snippet below is short and sweet... it just delays continued execution of an async operation for 3 seconds.

  await new Promise(resolve => setTimeout(resolve, 3000));

The little baby staller helper

Give it a name and make it a bit more re-usable.

async function stall(stallTime = 3000) {
  await new Promise(resolve => setTimeout(resolve, stallTime));
}

The above helper can be used by just awaiting it (see below)


var doSomething = async () => {

  var data = await someRemoteOperation();

  await stall(); // stalls for the default 3 seconds
  await stall(500); // stalls for 1/2 a second

  await doSomethingElse(data);

}
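To convince yourself the helper really does pause, here's a quick self-contained sanity check you can run with Node (the 150ms duration is an arbitrary value chosen just for this demo):

```javascript
// The promise-based stall helper from above.
async function stall(stallTime = 3000) {
  await new Promise(resolve => setTimeout(resolve, stallTime));
}

async function demo() {
  const start = Date.now();
  await stall(150); // stall for 150ms
  const elapsed = Date.now() - start;
  // setTimeout waits at least (approximately) the requested delay
  console.log(`waited ~${elapsed}ms`);
}

demo();
```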

Now we have an async function that runs slower - isn't that what you always wanted?

Happy Stalling!


One Programmer's Letter to His Wife


One of my core natures is as a builder and creator. I'm strongly introverted so much of my time is spent in my own head. My best and sometimes worst times are regularly spent by myself. I know you already know many things about introverts, but as a refresher I'd like you to read this.

Caring for your introvert

As a creator, one of the happiest moments we can experience is getting into a state of "flow".

In positive psychology, flow, also known as the zone, is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. https://en.wikipedia.org/wiki/Flow_(psychology)

I've heard this flow state described as a process where the mind is so focused on the task at hand, so engulfed in the spirit of the process, that all other external processing of our environment and even our own bodily needs can be ignored. The brain puts so much energy and focus into the task that things like the need to eat, sleep, or sometimes even use the restroom get pushed aside (for as long as possible - waiting until my bladder is SCREAMING at me).

I'm quite happy when I'm making progress on my creation(s), as they can often invoke this flow state. The opposite of the "enjoyment" can certainly happen too: projects can frustrate the heck out of me sometimes, and if that ever bleeds into our relationship, I'm sorry for that.

I wish I could convey the highs I can experience while in "flow" as strongly as you've likely seen my frustrations about the lows. Sadly, without the lows, the struggle, the uphill battles, the cussing at the computer, I could possibly never really experience the feeling of success in overcoming that struggle and enjoy it as much as I do.

Between work, family time, children, shopping, housework, sleep and whatever else we fill our days with, it often times feels like I get to apply very little time to this thing that I am truly driven (maybe slightly addicted to) and excited about.

I know you try to give me time to work on these things. There are times you think you've given a Saturday morning or an evening for me to work on my thing. However, sadly for it to truly be a successful session, I need time and space with room to concentrate. An hour before bedtime makes me feel like I shouldn't even try, because it could take at least 30-40 min to get back into the project leaving so little time to be productive that it's not even worth starting. These are times when I decide to blow any amount of time I've been given and just waste it watching a show on Netflix. Not because I don't want to work on my thing, but because I know the amount of effort it will take to get into the flow state will take far too long to make it worth it. If I were to get into flow, I'm then going to want to stay there and likely push past my bed time (which is getting harder and harder to recover from).

I don't want this to sound like this creation/building thing is more important than my family. In fact it's not. If you look at my actions and track record, the amount of time I have pushed aside so I could help you with your endeavors by watching kids, taking on extra shopping trips, house duties as well as the financial obligation (and strain), and still finding time to spend with you in the evenings at the expense of this thing I want to do should prove that my commitment to the family (and you) is still a priority.

I don't know how to close this out and wrap it up, other than to say I love you. I love my children. I also love what I build. I would like to work with you to find a way to balance these items a little better.


Reusable React Component with Overridable Inline CSS Styles


React's Component model is great at allowing the component creator to define the interface that consumers of the component interact with. (What is exposed vs what is abstracted away).

If you're building a component and using any in-line styles, and you're not careful, you can lock the consumers of your component out of potential customizations they may require for their specific use-case (that you can't think of or foresee). Trying to build components to be reusable and a little more OCP (open/closed principle) can be challenging, especially with how difficult it can be to get CSS layouts the way you (or the consumer of your component) may want...

As an example, let's create a simple image component to illustrate the point.

Let's say we have the following image component.

import React from 'react';

export default class Image extends React.Component {

  render() {
    return (
      <div>
        <img src={this.props.src} />
      </div>
    );
  }

}

The above component is very simple and very specific.

Now let's say we allow our consumers to customize the height or width of the image. You may think, ok, simple we'll just allow the consumer to specify height and width as props to the component.

So the consumer could just go <Image height="20" width="20" src="someimage.png" />.

And you end up with something that could look like this.

import React from 'react';

export default class Image extends React.Component {

  render() {
    let imageStyle = {
      height: this.props.height,
      width: this.props.width
    };
    return (
      <div>
        <img src={this.props.src} style={imageStyle} />
      </div>
    );
  }
}

Now this works for a while; the consumers of your component are happy they can control the height and width, and everyone's humming along merrily.

Then someone comes to you and says they are having some layout issues and need to control something like float, or margin, or padding... This idea of extending the component with more props could become cumbersome if we have to do this for each and every potential layout option available.

How could we extend this generic pattern into something that allows the component to define a general set of happy defaults, while still giving the consumer complete control over layout?

One Possible Solution

We can use something like Object.assign() to easily accomplish this.

We can allow the consumers to pass in their own style={...} property and provide a set of sensible defaults for the component, but allow the consumer of our component to completely override a style if necessary.

We can update our:

    let imageStyle = {
      height: this.props.height,
      width: this.props.width
    };

to the following pattern:

    let imageStyle = Object.assign(
      {},                               // target (starting with)
      { ...sensible defaults... },  // some pre-defined default React inline-style for the component
      this.props.style              // allow consumers to override properties
    );

Now if the consumer calls the component with <Image style={{height: "21px", width: "21px"}} src="someImage.png" />, the consumer's values will override any defaults the component provides. And they can extend the style with anything else they may need.
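
To see the merging behavior in isolation, here's a minimal runnable sketch of the same pattern outside React (the mergeStyles helper and the default values are hypothetical, just for illustration):

```javascript
// Minimal sketch of the style-merging pattern, outside React.
// The helper name and default values here are hypothetical examples.
function mergeStyles(defaults, overrides) {
  // Object.assign copies left to right, so later sources win:
  // consumer overrides beat the component's defaults.
  return Object.assign({}, defaults, overrides);
}

const componentDefaults = { height: "20px", width: "20px", display: "block" };
const consumerStyle = { height: "21px", float: "left" };

const merged = mergeStyles(componentDefaults, consumerStyle);
console.log(merged);
// height comes from the consumer, width/display keep their defaults,
// and float is a brand-new property the component never anticipated.
```

Because this.props.style is passed last, the consumer always wins; Object.assign also safely ignores an undefined style prop, so consumers who pass nothing get pure defaults.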

Happy Componentization!

(Comments)

Strange error on docker-compose up: oci runtime error: exec format error

(Comments)

I ran into a non-intuitive error while mucking around with docker-compose recently on an example.

docker-compose up

Building some_server
Step 1 : FROM alpine
 ---> 13e1761bf172
Step 2 : ENV DEMO_VAR WAT
 ---> Using cache
 ---> 378dbaa4a048
Step 3 : COPY docker-entrypoint.sh /
 ---> e5962cef9382
Removing intermediate container 43fa24c31444
Step 4 : ENTRYPOINT /docker-entrypoint.sh
 ---> Running in 5a2e19bf7a45
 ---> 331d2648d969
Removing intermediate container 5a2e19bf7a45
Successfully built 331d2648d969
Recreating exampleworkingdockercomposeenvironmentvars_some_server_1

The Error

ERROR: for some_server  rpc error: code = 2 desc = "oci runtime error: exec format error"
Traceback (most recent call last):
  File "<string>", line 3, in <module>
  File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1

The Actual Problem and Solution:

I had a Dockerfile that used an entrypoint that looked like ENTRYPOINT ["/docker-entrypoint.sh"].

The real problem was that the docker-entrypoint.sh script was missing a shebang (#!) line.

So changing this

echo "ENV Var Passed in: $DEMO_VAR"

to this

#!/bin/sh
echo "ENV Var Passed in: $DEMO_VAR"

solved my issue!

Also note that the base image (FROM <some linux distro>) may change what the required shebang should be.

Whew!

(Comments)

How to Get Environment Variables Passed Through docker-compose to the Containers

(Comments)

I've been playing with a little toy that uses docker-compose to bring together a web app, CouchDB, and Redis container into an easy-ier-ish cohesive unit.

While working on it (and to make it a bit more generic), my next step was to find a way to pass the database admin user/pass (and other configuration options) into the containers as environment variables, which took me way longer to figure out than it should have...

Hopefully this post helps it click for you a little faster than it (didn't) for me :)

If you land here, you've likely already pored over the different parts of the documentation for docker, docker-compose, and environment variables.

Things like:

In case things drift in the product or docs, this post was written using docker-compose version 1.7.1, build 0a9ab35, so keep that in mind...

I think the difficult thing for me was piecing together the various ways you can get environment variables defined and the necessary mapping required within the docker-compose file.

Environment Variable Setup Stages.

For me it didn't click until I was able to think about the stages that needed to exist for an environment variable to go from the development computer -> to the -> docker container.

For now I'm thinking of using the following model...


 ------------------------       --------------------       ------------------
|   Env Source           |     | docker-compose.yml |     | Docker Container |
|                        |     |                    |     |                  |
|   A) .env file         | --> | map env vars using | --> | echo $DEMO_VAR   |
|   B) run-time terminal |     | interpolation      |     |                  |
|       env var          |     | in this file.      |     |                  |
 ------------------------       --------------------       ------------------

A working example.

If you want to see all of this in one place, check out this github example, which is outlined below.

The example above is laid out like so...

.
|____.env
|____docker-compose.yml
|____env-file-test
| |____docker-entrypoint.sh
| |____Dockerfile
|____README.md

The .env file:

This is where you place each of the environment variables you need.

DEMO_VAR=Test value from .env file!

As the docs say, you can use # for comments and blank lines in the file; all other lines must be in the format ENV_VAR=ENV_VALUE.
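
To illustrate those formatting rules (skip blank lines and # comments, treat everything else as ENV_VAR=ENV_VALUE), here's a tiny sketch of a parser. This is only an illustration of the format described above, not docker-compose's actual parser:

```javascript
// Illustrative sketch of the .env format rules described above
// (NOT docker-compose's actual parser).
function parseEnvFile(text) {
  const vars = {};
  for (const rawLine of text.split("\n")) {
    const line = rawLine.trim();
    if (line === "" || line.startsWith("#")) continue; // blanks and comments
    const eq = line.indexOf("=");
    if (eq === -1) continue; // every other line must be ENV_VAR=ENV_VALUE
    vars[line.slice(0, eq)] = line.slice(eq + 1);
  }
  return vars;
}

const parsed = parseEnvFile("# a comment\n\nDEMO_VAR=Test value from .env file!");
console.log(parsed); // { DEMO_VAR: 'Test value from .env file!' }
```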

warning Environment variables in your terminal's context will take precedence over the values in the .env file. warning

The docker-compose.yml:

version: "2"
services:
  some_server:
    build: ./env-file-test
    environment:
     - DEMO_VAR=${DEMO_VAR}

The above file is the part where I got tripped up, and once I added the environment: section it all clicked.

You likely don't want every one of your development or production server's environment variables to show up inside your container. This file acts a bit like the docker run -e ENV_VAR=FOO option and allows you to select specific environment variables that are to be passed into the container.

I like the declarative approach of this file, as it makes environment variable dependencies explicit.

The env-file-test/Dockerfile:

FROM alpine

ENV DEMO_VAR WAT

COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]

Pretty standard Dockerfile, but one thing I learned is you can set up default environment variables using the docker ENV directive. But these will be overridden by the .env file or variables in your terminal's environment.
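
That precedence (terminal environment over .env file over the Dockerfile's ENV default) can be sketched as a simple lookup chain. This is only an illustration of the described behavior, not how docker-compose is implemented, and the helper name is hypothetical:

```javascript
// Illustrative precedence chain for a variable's final value inside the
// container: terminal env wins over the .env file, which wins over the
// Dockerfile's ENV default (e.g. `ENV DEMO_VAR WAT`).
function resolveVar(name, { terminalEnv, dotEnvFile, dockerfileDefaults }) {
  if (name in terminalEnv) return terminalEnv[name];
  if (name in dotEnvFile) return dotEnvFile[name];
  return dockerfileDefaults[name];
}

const layers = {
  terminalEnv: {},                                        // nothing exported in the shell
  dotEnvFile: { DEMO_VAR: "Test value from .env file!" }, // from the .env file
  dockerfileDefaults: { DEMO_VAR: "WAT" },                // from the Dockerfile ENV
};

console.log(resolveVar("DEMO_VAR", layers)); // the .env file's value wins here
```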

The env-file-test/docker-entrypoint.sh

#!/bin/sh
echo "ENV Var Passed in: $DEMO_VAR"

This was just a sample script to print out the environment variable.

Some other things I learned

warning The docs say you can specify your own env-file, or even multiple files, however I could not get that working. It always wanted to choose the .env file.

warning Also note: if you have an environment variable specified in your terminal that also exists in your .env file, the terminal's environment takes precedence over the .env file. warning

Happy Environment Setup!

(Comments)

Configuring Git to Use Different Name and Email Depending on Folder Context

(Comments)

Thought I'd share how I'm configuring user.name and user.email for git on my work computer. This is really just a post so that when I forget how I did it in the future, I can google my own blog and be reminded...

I have always struggled with accidentally committing to an OSS project with my work name/email, or vice versa, committing to a work git repo with my personal name/email.

For most, user.name shouldn't change, unless your company ties your user.name to something specific to the company like a username. (Contrast: user.name = Jason Jarrett and user.name = jjarrett).

When I clone projects I always clone them into a folder structure that looks like

|____~/code
| |____personal/  <--- this is where I would put some OSS projects that I may be working on or contributing to.
| |____work/      <--- obviously work code goes in here

Thanks to this post, where I learned about direnv, I followed the last option and basically used these steps...

Setup

  1. Install direnv - brew install direnv (What about Windows? see this github issue and help make it work)

  2. Create .envrc file for each profile needing to be setup with the following content

    export GIT_AUTHOR_EMAIL=<your email>
    export GIT_AUTHOR_NAME=<your name>
    export GIT_COMMITTER_EMAIL=<your email>
    export GIT_COMMITTER_NAME=<your name>
    
  3. After installing direnv and creating the .envrc files, direnv will prompt you to trust each env file, which we accept by running direnv allow.

Now I should have the following structure

|____~/code
| |____personal/
|    |____.envrc   <-- env settings with personal git user/email
| |____work/
|    |____.envrc   <-- env settings with work git user/email

What did this do?

Each time we cd into either a personal/ or work/ folder, direnv will set up our shell with the environment variables contained in that folder's .envrc file. Git respects these env vars, so now we don't have to think about committing the wrong name/email to the wrong Git repositories.
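
The effect can be sketched like this: which identity applies is purely a function of the current directory. The paths and identities below are hypothetical examples, and the real lookup is done by direnv, not this code:

```javascript
// Illustrative sketch of direnv's effect here: pick the .envrc exports
// based on which folder tree we've cd'd into. Paths/identities are
// hypothetical; direnv does the real work.
const profiles = {
  "/Users/me/code/personal": { GIT_AUTHOR_NAME: "Jason Jarrett", GIT_AUTHOR_EMAIL: "personal@example.com" },
  "/Users/me/code/work":     { GIT_AUTHOR_NAME: "jjarrett",      GIT_AUTHOR_EMAIL: "work@example.com" },
};

function envFor(cwd) {
  for (const [dir, env] of Object.entries(profiles)) {
    if (cwd === dir || cwd.startsWith(dir + "/")) return env;
  }
  return {}; // outside both trees: no override, git falls back to its config
}

console.log(envFor("/Users/me/code/work/some-repo").GIT_AUTHOR_NAME); // "jjarrett"
```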

Happy Gitting!

(Comments)