Developing on Staxmanade

How to Access Two Mac Accounts at the Same Time


I recently found a neat little trick for accessing two unique accounts on a single Mac at the same time.

This allows you to log in to a second unique Mac user while already being logged into the first account. It can be done without having to log out and log in to each one individually (one at a time).

Why would I need to do this?

The reasons could vary but here are a couple examples:

  • You use one account for work and one for personal life to keep the contexts separate, but while at work you occasionally need to access a file or email from the personal account.
  • You'd like to access a separate iMessage account without it getting mixed into yours. Say you want to spy on the kid. (Not saying whether this is ethical or not - depends on your parenting style - just proposing a reason for using this tool).

Disclaimer

To accomplish this we're going to be turning on some services/features that have the potential to open security vulnerabilities so please use with caution and learn/know your risks.

Setup/Configuration

Your Mac needs to have the proper permissions and configuration in place to allow this to happen.

First we need to access the system preferences:

access mac system preferences

Then open the Sharing preferences:

mac sharing preferences

Then enable Screen Sharing, and don't forget to add the specific users whose screens you want to be able to share.

Note: I blocked out the specific username - but assume the blacked-out user is the Mac account I want to log into using the Screen Sharing application.

screen sharing preference

I had to enable Remote Login to allow the upcoming ssh command to run. Here is the configuration I used:

remote login preference

Start up an SSH Session

From the currently logged in session, open a Terminal and run the following command:

ssh -NL 5901:localhost:5900 localhost

ssh's man page has this to say about -L:

     -L [bind_address:]port:host:hostport
     -L [bind_address:]port:remote_socket
     -L local_socket:host:hostport
     -L local_socket:remote_socket
             Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the
             given host and port, or Unix socket, on the remote side.  This works by allocating a socket to listen to either a TCP port
             on the local side, optionally bound to the specified bind_address, or to a Unix socket.  Whenever a connection is made to
             the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host
             port hostport, or the Unix socket remote_socket, from the remote machine.

             Port forwardings can also be specified in the configuration file.  Only the superuser can forward privileged ports.  IPv6
             addresses can be specified by enclosing the address in square brackets.

             By default, the local port is bound in accordance with the GatewayPorts setting.  However, an explicit bind_address may be
             used to bind the connection to a specific address.  The bind_address of ``localhost'' indicates that the listening port be
             bound for local use only, while an empty address or `*' indicates that the port should be available from all interfaces.

For -N:

     -N      Do not execute a remote command.  This is useful for just forwarding ports.

Here's what it looks like when I ran it locally:

> ssh -NL 5901:localhost:5900 localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:ytfRv5WDPuTjGbBugJjmc8gOhsHga7ozGqNgjOXpdRM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Password: <I entered my account/admin password>

Use Screen Sharing to log in

Once that ssh command above is up and running, we're now ready to log into the other account using Screen Sharing.

Open the Screen Sharing Mac app located in: /System/Library/CoreServices/Screen Sharing.app. You can also use CMD+<Space> (Spotlight) and type Screen Sharing to open the app.

Then enter localhost:5901 to start the process.

It should look like this:

screen sharing app startup view
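As an aside, you can also launch Screen Sharing pointed at the tunnel straight from a terminal; macOS's open command understands vnc:// URLs (the port must match the local end of the ssh tunnel above):

open vnc://localhost:5901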

In the below screen, enter the username/password of the account you want to log in as. (Not the current account - the other one.)

screen sharing app login view

Now select that you want to log in as "yourself", where "yourself" is really the other account:

screen sharing app access user strategy view

...and boom, you should now be able to use two separate accounts in a single Mac session.

Happy Spying (wink wink)!


How to block git commit if webpack bundle has changed


I sometimes write little web app utilities that are often statically hosted. But using some of the new ES features means I need a build process that creates a bundle before the code gets checked in.

There are two parts to this.

  1. The raw source code. The ES6+7+next and all the nifty new js features I want to leverage today.
  2. The bundled es5 output.

In some of these projects it means I also check in the bundled code into git. A common use case is to be able to use github's pages feature to host this content (err bundled output) as well as the raw source.

The problem we can run into: you make a change to the source, commit and push - and nothing happens. Because the bundled code didn't get re-bundled, the GitHub-hosted page doesn't pick up the latest changes.

I'm a fan of using git pre-commit hooks to catch problems early in the development life cycle - things like test errors (or, in this case, a bundle issue).

So I came up with an example that lets me make code changes and stops me from committing when the raw source has changed but the bundle doesn't reflect it.

So what is this thing?

The gist is it's a .js script that rebuilds the bundle and compares the freshly built bundle.js against what's about to be committed, failing the commit if the two don't match. Meaning: when we run our build (webpack in this case), if bundle.js didn't change, we're safe to commit.

This ensures that whatever bundle.js is committed is tied to the original source code change, avoiding "fixing" something in the source and it not actually getting deployed because the bundle is out of date.

First get a pre-commit tool

There are some good options in the npm/node world for pre-commit hooks. Check out husky or pre-commit. However you get your precommit hook setup - great...

In my case I used husky and here are the relevant bits to my package.json.

{
  ...

  "devDependencies": {
+    "husky": "^0.13.4"
  },
  "scripts": {
+    "precommit": "node ./pre-commit-build.js"
  }
}

The pre-commit-build.js script I used

The below is the short, but complete pre-commit script I use to enforce this workflow.

var crypto = require('crypto');
var fs = require('fs');
// use execSync: the async `exec` would let this script race ahead
// before each git/webpack command actually finished
var execSync = require('child_process').execSync;

var bundleFileName = './dist/bundle.js';

var getShaOfBundle = function () {
    var distFile = fs.readFileSync(bundleFileName);
    var sha = crypto.createHash('sha1').update(distFile).digest("hex");
    return sha;
};

// make sure we only bundle/build what is staged to get a proper
// view of what will be committed
execSync('git stash --keep-index');

// Get a snapshot of the original bundle
var beforeSha = getShaOfBundle();

// run our build
execSync('./node_modules/.bin/webpack');

// snapshot the bundle after the build
var afterSha = getShaOfBundle();

// reset anything that was stashed
execSync('git stash pop');

if (beforeSha !== afterSha) {
    throw new Error("Need to bundle before committing");
}

Now whenever I make a change to the raw source code, this pre-commit script makes sure that the dist/bundle.js committed alongside it actually reflects that source.
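If the hook does reject a commit, the fix is just to rebuild and stage the fresh bundle before committing again (using the same paths as the script above):

./node_modules/.bin/webpack
git add ./dist/bundle.js
git commit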

Happy committing!


How to Download and Convert an Image to Base64 Data Url


Today I was playing around with a side-oss-project and had the desire to take an image from the web and base64 data encode it. While I'm sure there's a ton written out there, I didn't immediately find the specific steps in a snippet of code. (Also, I can't say I looked very hard.)

So what does any programmer do when he can't find what he's looking for? He writes his own. Now that I've put it together, I'm just going to post this here so I can find it again down the road when I need to. :)

Below is a utility function I threw together that takes an imageUrl parameter and returns, through a Promise, a base64 encoded image data string. You can see from the example usage below that we're just setting an HTML image's .src property to the result of this function, and it should just work.

NOTE: the code below may look a bit like "futuristic" JavaScript right now - async/await, arrow functions, fetch and all - but the cool thing is you should be able to just copy/paste it into a JSBin/Plnkr/Codepen without issue (in Chrome). No need for Babel/TypeScript/a transpiler if you're just prototyping something (as I was). Don't rely on this to work in all browsers yet, but by the time I'm googling for this snippet in the future it should work in most browsers, so I'll just leave this here for now.

async function getBase64ImageFromUrl(imageUrl) {
  var res = await fetch(imageUrl);
  var blob = await res.blob();

  return new Promise((resolve, reject) => {
    var reader = new FileReader();
    reader.addEventListener("load", function () {
        resolve(reader.result);
    }, false);

    // an arrow function's `this` is not the FileReader,
    // so reject with the reader's error instead
    reader.onerror = () => reject(reader.error);

    reader.readAsDataURL(blob);
  });
}

An example usage of the above:

getBase64ImageFromUrl('http://approvaltests.com/images/logo.png')
    .then(result => testImage.src = result)
    .catch(err => console.error(err));

As an alternative to the base64 encoded FileReader approach above, you can also use URL.createObjectURL and simply return URL.createObjectURL(blob);
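A minimal sketch of that alternative (the function name is mine):

async function getObjectUrlFromImageUrl(imageUrl) {
  var res = await fetch(imageUrl);
  var blob = await res.blob();

  // returns a short-lived blob: URL usable as an image src;
  // call URL.revokeObjectURL(url) when done to free the memory
  return URL.createObjectURL(blob);
}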

There's of course a number of online tools that do this for you already.

Hope this helps future me find the implementation a min or two faster some day.

Happy Encoding!


Porting a Large JSPM BabelJS project to Typescript


I've been working on a project for over a year now that was originally written in ES6 (with some async/await from ES7), using BabelJS to transpile, and I've really enjoyed the experience of using the latest JavaScript.

ES6/7 was great at the beginning, when I had 1, then 2... then 5 files. The project was small, I could pretty much load the entire application into my own head at once, reason, tweak, plow forward.

It didn't take too long before the project grew beyond a single load into memory. Now I'm mentally paging out components from different files as I work on various sections of the application.

Just like anything, once the project grew to this point, it became a little more difficult to maintain. Now, I consider myself a fairly solid developer, and that's likely how I made it this far without a compiler: my components were small, surface areas tight, and the interactions between them well managed. I also had a decent number of in-app unit and integration tests, because generally (but not always) I'm a test-first kind of guy.

However, that didn't stop me from breaking things, making mistakes or just out-right screwing up a javascript file here and there along the way.

While working on this project, it always niggled me that it kept growing without even the most basic of safety nets (a compiler). Almost a year ago I remember trying TypeScript, but using it with JSPM and without Visual Studio Code, it just never came together (or I just didn't try hard enough).

But this past week I gave it another go, and while I'm not totally there (or where I'd like to end up), I'm quite happy with the progress I've made so far and am quite happily working in a project that has been completely ported from ES6/7 with BabelJS to TypeScript.

First some idea about the project.

Now, when it comes to large software projects, I probably shouldn't be calling this project "large" the way the title of this post labels it... But for a system built only by me on some nights and weekends, it is the largest single app I've built alone, so that's how I'm defining "Large".

The project has just about a hundred javascript files/components/classes/modules and comes in just above 12,000 lines of code. That's not counting of course all the dependencies pulled in through JSPM (using both NPM and Github). In fact I really need to look at my dependencies and see where I can trim some fat, but that's not the subject of this post.

Porting from Babel to TypeScript

With some context about the project this is coming from out of the way, I thought it would be helpful to outline the steps (or stumbles) I took along the way to get my project up and running using TypeScript with JSPM.

Pre-migration steps

Below are the steps I took to get this thing going. I doubt they're perfect or even apply to your or anyone else's project, but here's hoping they're helpful.

  1. In a fresh temp folder, I used the jspm init command to set up a fresh new project and selected the TypeScript transpiler option.

This allowed me to inspect what a "fresh" project from JSPM would look like with TypeScript set up.

  2. The next thing I did was review the Angular getting started guide to see what TypeScript-specific configurations they used.

Now, my project isn't Angular (it's actually React based), but I thought I could learn a little something along the way. I don't know that I actually gleaned anything while doing this (I'm writing this post a while after I actually did the work), but as an FYI, you might learn something reading it.

What steps did I take to port the project?

Looking back at the series of commits during my port, here's basically what I did. In some cases order doesn't matter below, but I left this list in the order of my projects git commit log.

  1. Renamed each file with the .jsx extension to .tsx (TypeScript's variant of JSX). (Note: only rename code you wrote - don't touch anything in the jspm_packages or node_modules folders, etc.)
  2. jspm install ts <-- installing the Typescript jspm plugin
  3. Updated my jspm.config.js transpiler flag with the following:
-  transpiler: "plugin-babel",
+  transpiler: "Typescript",
+  TypescriptOptions: {
+    "tsconfig": true // indicates that a tsconfig exists that should be used
+  },

Then I updated my jspm.config.js's app section with the following.

   packages: {
-    "app": {
-      "defaultExtension": false,
-      "main": "bootstrap.jsx",
-       "meta": {
-        "*": {
-          "babelOptions": {
-            "plugins": [
-              "babel-plugin-transform-react-jsx",
-              "babel-plugin-transform-decorators-legacy"
-            ]
-          }
-        }
-      }
-    },
+    "app": { // all files within the app folder
+      "main": "bootstrap.tsx", // main file of the package (will be important later)
+      "format": "system", // module format
+      "defaultExtension": "ts", // default extension of all files
+      "meta": {
+        "*.ts": { // all ts files will be loaded with the ts loader
+          "loader": "ts"
+        },
+        "*.tsx": { // all ts files will be loaded with the ts loader
+          "loader": "ts"
+        },
+      }
+    },
  4. Created a tsconfig.json file
{
 "compilerOptions": {
    "target": "es5",                /* target of the compilation (es5) */
    "module": "system",             /* System.register([dependencies], function) (in JS)*/
    "moduleResolution": "node",     /* how modules get resolved (needed for Angular 2)*/
    "emitDecoratorMetadata": true,  /* needed for decorators */
    "experimentalDecorators": true, /* needed for decorators (@Injectable) */
    "noImplicitAny": false,         /* don't force every implicit `any` to be declared explicitly (yet) */
    "jsx": "react"
  },
  "exclude": [   /* since compiling these packages could take ages, we want to ignore them */
    "jspm_packages",
    "node_modules"
  ],
  "compileOnSave": false        /* by default the compiler would create js files on save */
}
  5. Renamed *.js files to *.ts. (Similar to step 1 above with jsx -> tsx, but now just the plain JavaScript files.)
  6. In all of my source code where I used to do this: import foo from './foo.js', I removed the .js extensions, like import foo from './foo'.
  7. I did NOT remove the .jsx extension in my import statements - but renamed them to tsx, so import foo from './foo.jsx' became import foo from './foo.tsx'.
  8. Next I added a file at the root of my client project called globalTypes.d.ts; this is where I could hack in some type definitions that I use globally in the project. (There's a sketch of what such a file can hold below.)
  9. Then I started adding my type definitions...

I used the typings tool to search for TypeScript type definitions, and if I found one, I would typically try to install it from npm.

For example: searching for react with typings search react shows me that there is a DefinitelyTyped version of the type defs, and I now know we can use NPM to install them by typing npm install --save @types/react.

So I installed a ton of external library typings.
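As for the globalTypes.d.ts file mentioned in step 8, what goes in it is entirely project-specific; here's a hedged sketch of the kind of declarations such a file can hold (these examples are mine, not from the project):

// globalTypes.d.ts - global type hacks/shims for the project

// let the compiler accept imports of non-code assets
declare module '*.css';
declare module '*.png';

// a hypothetical global flag injected at build/run time
declare var DEBUG: boolean;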

  10. Next, I started looking around my editor (Visual Studio Code) hoping to see a bunch of typing errors reported, and was surprised to see very few. No, not because I'm so good at JavaScript that my TypeScript was perfect. Far from it... The problem was that my tsconfig.json was not at the root of my project (it was at the root of my client site, nested several folders down from the project root), and for some reason the editor wasn't picking it up; until I opened the editor from the folder the tsconfig.json was rooted in, things didn't work.

Honestly, I don't know what the above was about - but it was something I ran into. I can't say for certain whether it's still an issue - I think I'm starting to see editor features load up regardless of what folder I open - so your mileage may vary.

  11. Once the TypeScript editor features started lighting up in VS Code, my next step was to start taking TypeScript's feedback and either implement typing work-arounds or fix actual bugs the compiler found.

THE END - ish

Where am I now?

The above steps were really all I went through to port this project over to TypeScript and it was relatively seamless. That's not to say it was simple or easy, but definitely do-able, and worth it.

It's been a few weeks since I ported the project to TypeScript and I'm really kicking myself for not doing it sooner.

The editor assist, with intellisense for function calls from internal and external modules and their usage/function signatures, saves time researching external documentation.

Other observations since the move.

  1. Builds seem to be a little faster with TypeScript than Babel. I can't prove this - I didn't run any actual tests - it's just a feeling I got after the migration.
  2. Sourcemaps seem to actually work. Whenever I used BabelJS, debugging and stepping through async/await just never seemed to line up right for me. This was likely user error or improper configuration of Babel on my part, so who knows... but having working source maps is AMAZING, especially with the async/await feature.
  3. One area of concern that I haven't yet worked through: the JSPM TypeScript loading in the browser - or running jspm bundle app at the command line - doesn't report any TypeScript errors or fail any builds. In a way I'm glad it doesn't, because something isn't quite right with my configuration and every import of a .tsx file reports an error. So for now I'm just relying on the red squigglies in my VS Code editor to help me catch typing errors.

If you go for this port in your own project, I hope this was helpful, and that your port goes well.

Happy TypeScripting!


TVJS TVHelpers DirectionalNavigation and Adapting/Hacking some WinJS Focus Management


So, Microsoft created what really turned out to be an amazing set of HTML/JS/CSS controls when they released the WinJS library. Not to go too much into the history, but honestly I hated it when I first had to use it. But let me clarify: it wasn't until this last year that I learned I didn't hate the WinJS controls themselves - I despised the way you declared their usage with the specialized win-* html attributes. It felt like a total hack to get an app up and running by littering semantic html with these attributes.

Then along comes a little toy project they created called react-winjs, and all of a sudden the WinJS "Components" made total sense. Looking at WinJS through the lens of ReactJS components was the first time I not only clicked with WinJS, but I actually fell in lov... (well, I won't go that far) - I was excited enough about them to pick them as the primary U.I. control suite while building out a little side-project.

Fast forward a year of development, and Microsoft essentially bailed on WinJS but at least they left it out in the open so I could hack on it and continue to depend on my own fork for the time being.

Then they announce a NEW & SHINY library that can be used to help develop UWP and TV/Xbox One apps, which is great. Except WinJS doesn't work with this new library out-of-the-box, and since Microsoft isn't adding new features to WinJS, they likely never will build in compatibility with the new & shiny library.

Guess that means we (I) have to figure it out on my own. And although I write this knowing that I'm probably the ONLY developer on the planet using this combination of libraries, I wanted to put out some of the hacks/code I've thrown together to get some WinJS controls to play nice with TVJS with regards to focus management.

What is focus management you say?

In the context of an Xbox app, the idea is to take your web-page-app, get rid of the ugly mouse-like cursor you'd see if you didn't do this, and replace it with a controller-navigable approach - so up/down/left/right on the controller moves the visible "focus" around the application, and the A button "presses enter" (or invokes) the control/button/etc.

What IS provided by TVJS

The TVJS library has a helper within it called DirectionalNavigation, and it's great in that it provides a focused and specific API to enable focus management while developing Xbox UWP JavaScript (& C#) apps.

Just dropping the library in is enough to get much of the basics to work with most web apps.
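"Dropping the library in" is really just a script include; the path below is a placeholder - point it at wherever you've copied the DirectionalNavigation script from the TVHelpers repo:

    <script src="/path/to/directionalnavigation.js"></script>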

However, the conflict between this and WinJS comes into play because WinJS also tries to implement some focus management of its own, and the mix of the two just doesn't quite cut it.

Get rid of mouse cursor

Well, this isn't really a hack:

If you're building a UWP JavaScript app for the Xbox and have tried to run your app on the Xbox (in dev mode), you may have noticed that it behaves almost like just another web page and doesn't default the cursor focus the way other Xbox apps work. Your app just has a mouse-like cursor.

The way to deal with this is by accessing the browser's gamepad API. Now, the Microsoft TVJS TVHelpers DirectionalNavigation library automatically does this for you, but for a better experience - if you don't want to wait for the browser to download that library - you can manually access the API to hide the mouse cursor by throwing this at the top of your start page, e.g. index.html:

    <script>
        // Hide the Xbox/Edge mouse cursor during load.
        try {
            navigator.getGamepads();
        } catch(err) {
            console && console.error('Error with navigator.getGamepads()', err);
        }
    </script>

Just by calling navigator.getGamepads(), you tell the browser/hosted web app that you are going to take control of the app's focus management, and it hides the mouse cursor.

Once you've done this and your app loads up with the TVJS DirectionalNavigation library and in my case some WinJS controls, focus management mostly works (sort-of).

Completely Remove XYFocus built-in to WinJS:

This is about as ugly as they get...

The below code is basically looking for the XYFocus handlers that WinJS tries to add to the document, and preventing them from being added.

This XYFocus handler really creates havoc once we add the XYFocus handler from TVJS DirectionalNavigation.

// HacktyHackHack
// The goal of this is to remove XYFocus management from WinJS
(function() {
  var totalRemovedHandlers = 0;
  var checkRemovedHandler = function() {
    totalRemovedHandlers++;
    if (totalRemovedHandlers > 2) {
      console.error("EEEK, removing more than 2 handlers... be sure to validate that we're removing the right ones...");
    }
  };
  var realAddEventListener = document.addEventListener;
  document.addEventListener = function(eventName, handler, c){
    if (handler.toString().indexOf('function _handleKeyEvent(e)') >= 0) {
      console.warn("Ignoring _handleKeyEvent...", eventName, handler, c);
      checkRemovedHandler();
      return;
    }
    if (handler.toString().indexOf('function _handleCaptureKeyEvent(e)') >= 0) {
      console.warn("Ignoring _handleCaptureKeyEvent...", eventName, handler, c);
      checkRemovedHandler();
      return;
    }
    // call through with the correct `this`, otherwise the browser
    // throws an "Illegal invocation" error
    return realAddEventListener.call(document, eventName, handler, c);
  };
}());

By not allowing WinJS to add its XYFocus handlers, we can avoid many of the issues that I worked through below...

Dealing with a WinJS Pivot control

For my app, the first control I ran into trouble with was the WinJS Pivot control. This control already does some focus management all by itself, and its own management style contradicts the way the DirectionalNavigation helper works. So we basically have to detect focus on it, turn off TVJS focus management, and handle navigation internally (until focus leaves the Pivot).

To work through that, I patched the Pivot's key handler and created the following helper function:


var Keys = WinJS.Utilities.Key; // key codes (including the Gamepad* ones) used below

WinJS.UI.Pivot.prototype._headersKeyDown = function (e) {
    if (this.locked) {
        return;
    }
    if (e.keyCode === Keys.leftArrow ||
        e.keyCode === Keys.pageUp ||
        e.keyCode === Keys.GamepadDPadLeft ||
        e.keyCode === Keys.GamepadLeftThumbstickLeft) {
        this._rtl ? this._goNext() : this._goPrevious();
        e.preventDefault();
    } else if (e.keyCode === Keys.rightArrow ||
               e.keyCode === Keys.pageDown ||
               e.keyCode === Keys.GamepadDPadRight ||
               e.keyCode === Keys.GamepadLeftThumbstickRight) {
        this._rtl ? this._goPrevious() : this._goNext();
        e.preventDefault();
    }
};

// assuming TVJS is available globally from the TVHelpers script
var DirectionalNavigation = TVJS.DirectionalNavigation;

function handlePivotNavigation(pivotElement) {
  console.log("handlePivotNavigation", pivotElement);
  if (!pivotElement) {
    throw new Error("handlePivotNavigation cannot use pivotElement as it wasn't passed in");
  }

  var pivotHeader = pivotElement.querySelector('.win-pivot-headers')

  if (!pivotHeader) {
    let msg = "handlePivotNavigation cannot find .win-pivot-headers in";
    console.error(msg, pivotElement);
    throw new Error(msg);
  }


  pivotHeader.addEventListener('focus', function() {
    console.log("pivotHeader focus");
    DirectionalNavigation.enabled = false;
  });
  pivotHeader.addEventListener('keyup', function(eventInfo) {
    console.log('pivot keyup ', eventInfo.keyCode, eventInfo.key);

    switch(eventInfo.keyCode) {
      case 204: // gamepad down
      case 40: // keyboard down
        DirectionalNavigation.enabled = true;
        var target = DirectionalNavigation.findNextFocusElement('down');
        if (target) {
          target.focus();
          eventInfo.preventDefault();
        }
        break;
      case 203: // gamepad up
        // since the Pivot is at the top of the page - we won't release
        // control, or try to navigate up??? (maybe consider flowing up from the bottom of the page?)
        break;
      // case 205: // gamepad left arrow
      // case 211: // gamepad 211 GamepadLeftThumbstickUp
      // case 200: // gamepad left bumper
      //   pivotElement.winControl._goPrevious();
      //   eventInfo.preventDefault();
      //   break;
      // case 206: // gamepad right arrow
      // case 213: // gamepad 213 GamepadLeftThumbstickRight
      // case 199: // gamepad 199 GamepadRightShoulder
      //   pivotElement.winControl._goNext();
      //   eventInfo.preventDefault();
      //   break;
    }
  });
}

And use it by doing the following in my React page:

    componentDidMount() {
        var pivot = ReactDOM.findDOMNode(this.refs.pivot);
        handlePivotNavigation(pivot);
    }

Or if you're not using React you can likely just go:

    var pivot = document.getElementById('my-pivot-id');
    handlePivotNavigation(pivot);

It's not pretty, but has been working for me so far.

Now when I navigate around using an Xbox controller I can properly navigate around the WinJS Pivot.

Next up are ItemContainers.

UPDATE:

With the (remove XYFocus) hack added above, I was able to remove the below hack.

This one is a total hack, and I look forward to a better solution, but for now it's been working.

The issue I was seeing was with WinJS ItemContainers and the TVJS library applying a separate forced "click" on the element when the control itself has already "clicked/invoked" the element.

The real fix would likely be to figure out how to get the ItemContainer to event.preventDefault() and/or event.stopPropagation() and avoid the bubbling up to the document keyup event handler that DirectionalNavigation has under its control, but WinJS ItemControl management is just so complicated that this hack was easier to figure out at the time I threw it together.

So what does this do?

It basically hijacks the DirectionalNavigation._handleKeyUpEvent function, re-writing it with one that ignores the keyup event if the currently focused element is an ItemContainer.

// Hack to avoid ItemContainers getting a double click
var originalDNKeyUp = TVJS.DirectionalNavigation._handleKeyUpEvent;
TVJS.DirectionalNavigation._handleKeyUpEvent = function (e) {
  console.log("Check for itemContainer", e.target.className);
  if (e.target.className.split(" ").indexOf("win-itemcontainer") >= 0) {
    console.log("MonkeyHack on DirectionalNavigation - SKIPPING CLICK");
    return;
  }
  return originalDNKeyUp.apply(null, arguments);
};
document.removeEventListener("keyup", originalDNKeyUp);
document.addEventListener("keyup", TVJS.DirectionalNavigation._handleKeyUpEvent);

It's not pretty, but meh, it's working so far.

ItemContainers within a ContentDialog

UPDATE

I gave up on ContentDialog, and just started using react-modal

That's just a big mess from what I could figure out. I was able to get it partially working by using the ContentDialog but manually creating my own buttons, as the ItemContainer in combination with the dialog kept swallowing events, which didn't allow focus navigation to be successful. The internals of what was holding me back didn't appear to be monkey-patch-able from what I could tell... ugh...

Next up is a ListView hack.

This one is a hack proposed by Todd over on the GitHub issues.

I've essentially taken the original implementation of WinJS.UI.ListView.prototype._onFocusIn, and if you look for the line starting with /* JJ */ below you can see the change there.

I don't know what this could break in other scenarios, but for now it's allowing the ListView to focus properly in my initial Xbox testing.

var _Constants = WinJS.UI;
var _UI = WinJS.UI;

WinJS.UI.ListView.prototype._onFocusIn = function ListView_onFocusIn(event) {
    this._hasKeyboardFocus = true;
    var that = this;
    function moveFocusToItem(keyboardFocused) {
        that._changeFocus(that._selection._getFocused(), true, false, false, keyboardFocused);
    }
    // The keyboardEventsHelper object can get focus through three ways: We give it focus explicitly, in which case _shouldHaveFocus will be true,
    // or the item that should be focused isn't in the viewport, so keyboard focus could only go to our helper. The third way happens when
    // focus was already on the keyboard helper and someone alt tabbed away from and eventually back to the app. In the second case, we want to navigate
    // back to the focused item via changeFocus(). In the third case, we don't want to move focus to a real item. We differentiate between cases two and three
    // by checking if the flag _keyboardFocusInbound is true. It'll be set to true when the tab manager notifies us about the user pressing tab
    // to move focus into the listview.
    if (event.target === this._keyboardEventsHelper) {
        if (!this._keyboardEventsHelper._shouldHaveFocus && this._keyboardFocusInbound) {
            moveFocusToItem(true);
        } else {
            this._keyboardEventsHelper._shouldHaveFocus = false;
        }
    } else if (event.target === this._element) {
        // If someone explicitly calls .focus() on the listview element, we need to route focus to the item that should be focused
        moveFocusToItem();
    } else {
        if (this._mode.inboundFocusHandled) {
            this._mode.inboundFocusHandled = false;
            return;
        }

        // In the event that .focus() is explicitly called on an element, we need to figure out what item got focus and set our state appropriately.
        var items = this._view.items,
            entity = {},
            element = this._getHeaderOrFooterFromElement(event.target),
            winItem = null;
        if (element) {
            entity.index = 0;
            entity.type = (element === this._header ? _UI.ObjectType.header : _UI.ObjectType.footer);
            this._lastFocusedElementInGroupTrack = entity;
        } else {
            element = this._groups.headerFrom(event.target);
            if (element) {
                entity.type = _UI.ObjectType.groupHeader;
                entity.index = this._groups.index(element);
                this._lastFocusedElementInGroupTrack = entity;
            } else {
                entity.index = items.index(event.target);
                entity.type = _UI.ObjectType.item;
                element = items.itemBoxAt(entity.index);
                winItem = items.itemAt(entity.index);
            }
        }

        // In the old layouts, index will be -1 if a group header got focus
        if (entity.index !== _Constants._INVALID_INDEX) {
/* JJ */    /*if (this._keyboardFocusInbound || this._selection._keyboardFocused())*/ {
                if ((entity.type === _UI.ObjectType.groupHeader && event.target === element) ||
                        (entity.type === _UI.ObjectType.item && event.target.parentNode === element)) {
                    // For items we check the parentNode because the srcElement is win-item and element is win-itembox,
                    // for header, they should both be the win-groupheader
                    this._drawFocusRectangle(element);
                }
            }
            if (this._tabManager.childFocus !== element && this._tabManager.childFocus !== winItem) {
                this._selection._setFocused(entity, this._keyboardFocusInbound || this._selection._keyboardFocused());
                this._keyboardFocusInbound = false;
                if (entity.type === _UI.ObjectType.item) {
                    element = items.itemAt(entity.index);
                }
                this._tabManager.childFocus = element;

                if (that._updater) {
                    // uniqueID is a WinJS internal helper; this hack assumes it's in scope
                    var elementInfo = that._updater.elements[uniqueID(element)],
                        focusIndex = entity.index;
                    if (elementInfo && elementInfo.newIndex) {
                        focusIndex = elementInfo.newIndex;
                    }

                    // Note to not set old and new focus to the same object
                    that._updater.oldFocus = { type: entity.type, index: focusIndex };
                    that._updater.newFocus = { type: entity.type, index: focusIndex };
                }
            }
        }
    }
};

One big improvement could be to set up a unit test that takes the original "string" value of the entire function's code and compares it to the current version of the WinJS library you're using, failing if they're even one character different. This would let you detect if, say, a fix was applied upstream and you need to update the local hacked version with the remote changes... It's not pretty, but it's one way to avoid over-writing possibly working WinJS code with our potentially not-so-future-proof hacked version.
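A minimal sketch of that guard, assuming it runs before the patched version is assigned (the snapshot string is elided here - it would be the full original source we copied):

// fail loudly if the WinJS implementation we've overridden changed upstream
var snapshotOfOriginal = "function ListView_onFocusIn(event) { /* ...full original source... */ }";
var current = WinJS.UI.ListView.prototype._onFocusIn.toString();
if (current !== snapshotOfOriginal) {
    throw new Error("WinJS ListView._onFocusIn changed upstream - re-verify the local hacked version before overwriting it.");
}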

Next one is the WinJS ToggleSwitch.

This control just seemed to have all the behavior wrong for me. So I hacked the keyDownHandler and simplified its implementation, which seems to have really made it more usable (for me).

var _ElementUtilities = WinJS.Utilities

WinJS.UI.ToggleSwitch.prototype._keyDownHandler =  function ToggleSwitch_keyDown(e) {
    if (this.disabled) {
        return;
    }

    // Toggle checked on spacebar
    if (e.keyCode === _ElementUtilities.Key.space ||
        e.keyCode === _ElementUtilities.Key.GamepadA ||
        e.keyCode === _ElementUtilities.Key.enter) {
        e.preventDefault();
        this.checked = !this.checked;
    }

}

The original had up/down/left/right configured to toggle the switch on/off, which meant focusing in/out was nearly impossible, and it only listened for space as a toggle key. By removing up/down/left/right we can navigate in and around the control, and we now listen for space, GamepadA, and enter to toggle the control on/off.

What else?

The WinJS control set is quite large, and I certainly haven't worked with each control in this manner, but it's a step forward, eh? And if you managed to come across this random post on the interweb, I hope it was useful.


Slightly modified “CD” Command for Powershell: Even better dot.dot.dot.dot...


There's this little "CD" utility that I iterate on every once in a while, and it has become one of my favorite PowerShell tools on Windows.

Not because it's really that great, but more because there are some navigation habits I've acquired over on a Mac in a ZSH terminal that are challenging to live without in a Windows PowerShell terminal, and each time I iterate on it, it synchronizes my workflows across my Mac and Windows environments.

In my previous update I added support for typing cd ...N, where cd .. goes up 1 directory and cd .... goes up 3 directories.

Well, today I found out that I can declare a function in PowerShell with the name .. - WHO KNEW?

For example if you pasted the following into your PowerShell terminal: function ..() { echo "HELLO"; }; ..

This would define the function .. as well as run it and print out HELLO.

This was a fantastic stumble on my part, because on my Mac I often go up 1-n directories by typing .. at the terminal, or ..... <-- however many levels I want to go up.

So today I updated the Change-Directory.ps1 with the following shortcuts:

function ..() { cd .. }
function ...() { cd ... }
function ....() { cd .... }
function .....() { cd ..... }
function ......() { cd ...... }
function .......() { cd ....... }
function ........() { cd ........ }
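With those defined (and the cd override from the script loaded), hopping up the tree looks roughly like this (the paths are made up for illustration):

PS C:\projects\app\src\components> ....
PS C:\projects>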

If you're interested in the evolution of this CD tool:

SUPER SWEET!

Happy CD'ing!


Better Compact JIRA Board U.I.


We use JIRA's kanban board for the daily workflow of tasks. I started having an issue with the default JIRA board layout, where the cards don't show enough of a task's title, especially when you pseudo-group tickets by prefixing their titles with some context.

Given that, I decided to hack the CSS of JIRA's board to improve this. Take a look at an example before/after of the tweaks below, then I'll walk through how you can get it if you so desire.

Before

Jira card before css hack

After CSS Hack

Jira card after css hack

How did you do that?

  1. Install a CSS style plugin into your browser. This post uses Stylebot for the Chrome web browser.
  • Just install the plugin from the Chrome store by clicking this link and selecting the [+ ADD TO CHROME] button on the upper right of the page.
  2. Once the plugin is installed, close all tabs and re-open a tab to the JIRA kanban board.

  3. Click the new Stylebot plugin CSS button in your Chrome browser toolbar and choose the Open Stylebot... option, which will open up a U.I. that allows you to mess around with the page's style.

  4. At the bottom of the Stylebot panel, click Edit CSS, which will give you a blank text box you can write custom CSS into.

  5. Paste in the following CSS and hit Save.


/** Version 2.0
 ** Copyright Jason Jarrett
 **/

.ghx-avatar-img {
    font-size: 15px;
    height: 15px;
    line-height: 15px;
    width: 15px;
}

.ghx-band-1 .ghx-issue .ghx-avatar {
    left: 100px;
    right: auto;
    top: -3px;
}

.ghx-band-3 .ghx-issue .ghx-avatar {
    top: 0px;
}

.ghx-issue .ghx-extra-fields {
    margin-top: 5px;
}

.ghx-issue .ghx-flags {
    left: 20px;
    top: 5px;
}

.ghx-issue .ghx-highlighted-fields {
    margin-top: 5px;
}

.ghx-issue .ghx-type {
    left: 5px;
    top: 5px;
}

.ghx-issue-content {
    font-size: 12px;
    margin-top: 3px;
    padding: 5px;
}

.ghx-issue-fields .ghx-key {
    margin-left: 30px;
}

.ghx-issue.ghx-has-avatar .ghx-issue-fields, .ghx-issue.ghx-has-corner .ghx-issue-fields {
    padding-right: 0px;
}

/* the below adjusts the backlog view */

.ghx-backlog-column .ghx-plan-extra-fields.ghx-row {
    float: right;
    position: relative;
    right: 70px;
    margin: 0;
    margin-top: -15px;
    height: 18px;
}

.ghx-backlog-column .ghx-issue-content, .ghx-backlog-column .ghx-end.ghx-row {
    padding: 0;
    margin: 0;
}

/* filters */
.js-quickfilter-button {
    padding: 0;
}
.js-sprintfilter {
    white-space: nowrap;
}
.js-sprintfilter > span {
    padding: 0;
}
dl dt, dd {
    margin: 0;
    padding: 0;
}

Now you should get a little more visible data on the page, and be able to avoid hovering over titles just to get enough context to immediately know what a ticket is.

Happy (as best you can) JIRA'ing


How to Update a Single Running docker-compose Container


As a newbie to the tooling, docker-compose is great for getting started: a simple docker-compose up brings up all the service containers. However, what if you want to replace an existing container without tearing down the entire suite of containers?

For example: I have a docker-compose project that has the following containers.

  1. Node JS App
  2. CouchDB
  3. Redis Cache

I had a small configuration change within the CouchDB container that I wanted to apply and re-start the container to pick up, but I wasn't sure how to do that.

Here's how I did it with little downtime.

I'm hoping there are better ways to go about this (I'm still learning), but the following steps are what I used to replace a running docker container with the latest build.

  1. Make the necessary change to the container (in my case update the couchdb config).
  2. Run docker-compose build couchdb (docker-compose build <service_name> where service_name is the name of the docker container defined in your docker-compose.yml file.)

Once the change has been made and container re-built, we need to get that new container running (without affecting the other containers that were started by docker-compose).

  3. docker-compose stop <service_name> <-- If you want to live on the edge and have the shut-down go faster, try docker-compose kill <service_name>
  4. docker-compose up -d --no-deps <service_name> <-- this brings up the service using the newly built container.

The -d is Detached mode: Run containers in the background, print new container names.

The --no-deps will not start linked services.

That's it... at least for me, it's worked to update my running containers with the latest version without tearing down the entire docker-compose set of services.

Again, if you know of a faster/better way to go about this, I'd love to hear it. Or if you know of any downsides to this approach, I'd love to hear about those too - before I have to learn the hard way on a production environment.

UPDATE:

Thanks to Vladimir in the comments below - you can skip several steps above and do it with a single command:

docker-compose up -d --no-deps --build <service_name>
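For the CouchDB example from earlier in the post, that works out to:

docker-compose up -d --no-deps --build couchdb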

I tested this and was able to avoid the separate build, stop/kill, and up commands with this one-liner.

Happy Container Updating!


Run Multiple Docker Environments (qa, stage, prod) from the Same docker-compose File.


So, I'm playing around with some personal projects and looking to deploy some simple things with Docker to DigitalOcean. This personal project is a small site, and I'd like to set myself up with a repeatable deployment solution (automated as much as possible) so I don't trip over myself with server hosting as I build out the web application.

I'm not really strong with server infrastructure and some of this is "figure it out as I go", while more of it is asking for help from a good friend @icecreammatt who has been a HUGE help as I stumble through this.

But at the end of this tutorial our goal is to satisfy the following requirements.

High level requirements:

Below are some core requirements this walk-through should help address. There is likely room for improvement, and I'd love to hear any feedback you have along the way to simplify things or make them more secure. But hopefully you find this useful.

I want there to be some semblance of a release process with various deployment environments: push changes to qa regularly, push semi-frequently to stage, and when things are good, ship a version to production.

  1. Have access to environments through various domain names. EX: if prod was my-docker-test-site.com then I would also have qa.my-docker-test-site.com and stage.my-docker-test-site.com
  2. Ability to run multiple "environments": qa, stage, prod in the same system. (prob not that many environments - but you get the picture)
  3. Deploy to various environments without affecting other environments. (Ship updates to qa)
  4. It'd be great if I can figure out a mostly zero-downtime deployment. (Not looking for perfect, but the less downtime the better)
  5. Keep costs low. For a small site - running all environments on say a small DigitalOcean droplet. (Is this even possible? We'll see...)
  6. Build various environments, test them out locally and then deploy them to the cloud.

While I'd like the ability to run as many environments as I listed above, I will likely use a qa and prod for my small site, but I think the pattern is such that we could easily setup whatever environments we need.

Structure of post:

What we want to do is essentially walk through how I'm thinking about accomplishing the above high level requirements using a simple node js hello world application. This app is a basic node app that renders some environment information just to prove that we can correctly configure and deploy various docker containers for environments such as qa, stage or prod.

In the end, we should end up with something that I like to imagine looks a bit like this diagram:

diagram of nginx-proxy in front of web app containers on a DigitalOcean droplet

I'm going to use DigitalOcean as the cloud provider in this case, but I think this pattern could be applied to other docker hosting environments.

Example Web App Structure

Below is a basic view of the file structure of our site. If you're following along, go ahead and create this structure with empty files, we can fill them in as we go...

.
|____app
| |____Dockerfile
| |____server.js
|____docker-compose.yml

Let's start with the ./app/* files:

Simple NodeJS Hello World app

This is a simple nodejs server that we can use to show that deployment environment variables are passed through and we are running the correct environment, as well as showing a functioning web server.

File: ./app/server.js

var http = require('http');

var server = http.createServer(function(req, res){
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end(`
Hello World!

VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}

`.split('\n').join('<br>'));
});

server.listen(80);

The goal of this file is to run a nodejs web server that returns a text document with Hello World! along with the environment variables the current container is running under, such as qa.

This could easily be replaced with a python, php, ruby, or whatever web server. Just keep in mind the rest of the article assumes it's a node environment (like the Dockerfile up next), so adjust accordingly.

The Dockerfile

The Dockerfile below is pretty basic and says: load up and run our nodejs server.js web app on port 80.

File: ./app/Dockerfile

# Start from a standard nodejs image
FROM node

# Copy in the node app to the container
COPY ./server.js /app/server.js
WORKDIR /app

# Allow http connections to the server
EXPOSE 80

# Start the node server
CMD ["node", "server.js"]

How to get the qa, prod domain mappings

So now that we have a basic application defined, we can't host multiple versions of the app all on port 80 without issue. One approach we can take is to place an Nginx proxy in front of our containers to translate incoming domain name requests to our various web app containers, which we'll use docker to host on different ports.

The power here is that we don't have to change the port within the docker container (as shown in the Dockerfile above); we can use the port mapping feature when starting the docker container to specify different ports for different environments.

For example, I'd like my-docker-test-site.com to map to the production container, qa.my-docker-test-site.com to the qa container of my site, etc... I'd rather not access my-docker-test-site.com:7893 or some other port for qa, stage, etc...

To accomplish this we are going to use jwilder/nginx-proxy. Check out his introductory blog post on the project. We'll be using the pre-built container directly.

To spin this up on our local system let's issue the following command:

docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy

This project is great: as we add or remove containers, they are automatically added to or removed from the proxy, and we can access their web servers through a VIRTUAL_HOST. (More on how, specifically, below.)

Networking

Before we get too far into the container environment of our app, we need to consider how the containers will be talking to each other.

We can do this using the docker network commands. So we're going to create a new network and then allow the nginx-proxy to communicate via this network.

First we'll create a new network and give it a name of service-tier:

docker network create service-tier

Next we'll configure our nginx-proxy container to have access to this network:

docker network connect service-tier nginx-proxy

Now when we spin up new containers we need to be sure they are also connected to this network or the proxy will not be able to identify them as they come online. This is done in a docker-compose file as seen below.
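To double-check the wiring, docker can list which containers are attached to the network:

# nginx-proxy should appear in the "Containers" section of the output
docker network inspect service-tier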

Put the two together

Now that we've defined our application with the server.js and Dockerfile, and we have an nginx-proxy ready to route to our environment-specific docker http servers, we're going to use docker-compose to build our container and glue the parts together, as well as pass environment variables through to create multiple deployment environments.

Save this file as docker-compose.yml

version: '2'

services:
  web:
    build: ./app/
    environment:
      - NODE_ENV=${NODE_ENV}
      - PORT=${PORT}
      - VIRTUAL_HOST=${VIRTUAL_HOST}
      - VIRTUAL_PORT=${PORT}
    ports:
      - "127.0.0.1:${PORT}:80"

networks:
  default:
    external:
      name: service-tier

This file is all about:

  1. The build: ./app/ is the directory where our Dockerfile build is.
  2. The list of environment variables are important. The VIRTUAL_HOST and VIRTUAL_PORT are used by the nginx-proxy to know what port to proxy requests for and at what host/domain name. (We'll show an example later) You can see an earlier exploratory post I wrote explaining more about environment vars.
  3. The ports example is also important. We don't want to access the container by going to my-docker-test-site.com:8001 or whatever port we're actually running the container on, because we want to use the VIRTUAL_HOST feature of nginx-proxy to let us say qa.my-docker-test-site.com. This configuration sets the container up to only listen on the loopback network, so the nginx-proxy can proxy to these containers but they aren't accessible from the inter-webs.
  4. Lastly the networks: we define a default network for the web app to use the service-tier that we setup earlier. This allows the nginx-proxy and our running instances of the web container to correctly talk to each other. (I actually have no idea what I'm saying here - but it is simple enough to setup and I think it's all good - so I'm going with it for now...)

Now what?

So with all of these pieces in place, all we need to do now is run some docker-compose commands to spin up our necessary environments.

Below is an example script that can be used to spin up qa, and prod environments.

BASE_SITE=my-docker-test-site.com

# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d


# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

This script sets some environment variables that are then used by the docker-compose command, and we're also setting a unique project name with -p ${VIRTUAL_HOST}.

Project Name

What we said just before this headline is a key part of this. What enables us to run essentially the same project (docker-compose/Dockerfile) with different environment variables defining things like qa vs prod is that when we run docker-compose we also pass a -p or --project-name parameter. This allows us to create multiple container instances with different environment variables that all run on different ports and, in theory, isolate themselves from the other environments.

The thinking here is you could have a single docker-compose.yml file that has multiple service definitions - say a nodejs web app, couchdb, and a redis database - all running isolated within their environment. You can then use the environment variables to drive various behaviors, such as feature-toggling newly developed features in a qa environment that are not necessarily ready to run in a production environment.
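A minimal sketch of what that kind of env-driven toggle can look like inside the node app (the feature flag name is hypothetical):

// enable an in-progress feature everywhere except production
var enableNewCheckout = process.env.NODE_ENV !== 'production';

if (enableNewCheckout) {
  // ...wire up the new code path...
}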

Running/testing this out locally

You probably want to play with this idea and test it out locally before trying to push it to a remote system.

One easy way to do this is to modify your /etc/hosts file (on *nix), or follow this on Windows, to map the specific domain names you have set up for your environments to the actual service running docker. This will allow the nginx-proxy to do its magic.

I'm currently still using docker-machine to run my docker environment in a VirtualBox VM so my /etc/hosts file looks like this.

##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting.  Do not change this entry.
##
127.0.0.1   localhost
255.255.255.255 broadcasthost
::1             localhost

192.168.99.100 qa.my-docker-test-site.com
192.168.99.100 stage.my-docker-test-site.com
192.168.99.100 my-docker-test-site.com

If you have the docker containers running that we've worked through so far (for all environments) we should be able to visit qa.my-docker-test-site.com in the browser and hopefully get this:

Hello World!

VIRTUAL_HOST: qa.my-docker-test-site.com
NODE_ENV: qa
PORT: 8001

Also try out the production environment at my-docker-test-site.com to verify it is working as expected.
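
If you'd rather use a terminal, a quick check could look like this (assuming the /etc/hosts entries above are in place):

curl http://qa.my-docker-test-site.com/
curl http://my-docker-test-site.com/

Each should come back with its own environment's VIRTUAL_HOST, NODE_ENV, and PORT values.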

THIS IS AWESOME :) I was actually quite happy to have traveled this far in this exploration. But now let's try to take it up a notch and deploy what we just built locally to DigitalOcean in the cloud.

Deploy to the Cloud!

Now how do we get this locally running multi-environment system up to a server in the cloud?

Just tonight while researching options I found this simple set of steps to get it going on DigitalOcean. I say simple because you should have seen the original steps I was going to try and use to deploy this... sheesh.

These are the steps we're going to walk through.

  1. Get an Account @ DigitalOcean
  2. Create a Docker Droplet (this was way-cool)
  3. Build and Deploy our nginx-proxy.
  4. Build and Deploy our App
  5. Configure our DNS (domain name)
  6. Profit!

Tonight I discovered this blog post on Docker that describes using docker-machine with the DigitalOcean driver to do basically everything we did above - but IN THE CLOUD - kind of blew me away actually.

Get an Account

First make sure you've signed up with DigitalOcean and are signed in.

Create a Docker Droplet (this was way-cool)

Next we're going to use a cool feature of docker-machine where we can leverage the DigitalOcean driver to help us create and manage our remote Docker host.

Complete Step 1 and Step 2 in the following DigitalOcean example post to acquire a DigitalOcean personal access token.

Now that you have your DigitalOcean API token (you do, right?), either pass it directly into the command below (in place of $DOTOKEN) or set a local variable as demonstrated.

DOTOKEN=XXXX # <-- your token there...
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment

NOTE: There was an issue where this used to work but stopped with the docker-machine DigitalOcean default image (reported here). To work around this, try using a different image name.

EX:

DIGITALOCEAN_IMAGE="ubuntu-16-04-x64"
docker-machine create --driver digitalocean --digitalocean-image $DIGITALOCEAN_IMAGE --digitalocean-access-token $DOTOKEN docker-multi-environment

If you refresh your DigitalOcean droplets page you should see a new droplet called docker-multi-environment.

We can now configure our local terminal to allow all docker commands to run against this remotely running docker environment on our newly created droplet.

eval $(docker-machine env docker-multi-environment)

If you run docker ps the list should be empty, but it is literally listing the containers that are running up at DigitalOcean in our droplet. How awesome is that?
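
A quick sanity check that your shell really is pointed at the droplet (output is illustrative - your URL will differ):

docker-machine ls
# NAME                       ACTIVE   DRIVER         STATE     URL
# docker-multi-environment   *        digitalocean   Running   tcp://<droplet-ip>:2376

The * in the ACTIVE column means docker commands in this shell now target the droplet.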

Build and Deploy our nginx-proxy.

Now that we can just speak docker in the cloud, run the following commands - these all assume we're executing them against the DigitalOcean droplet in the cloud!! (A quick verification sketch follows the list.)

  1. Spin up our remote nginx-proxy on the remote droplet

    docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
    
  2. Create our network

    docker network create service-tier
    
  3. Tell nginx-proxy about this network

    docker network connect service-tier nginx-proxy
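
To verify the proxy came up and joined the network, something like the following should do (output details are illustrative):

docker ps --filter name=nginx-proxy
# should list the jwilder/nginx-proxy container with port 80 published

docker network inspect service-tier
# the Containers section should include nginx-proxy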
    

Build and Deploy our App

I know this post has gotten a bit long, but if you've made it this far we're almost there...

Now run the following script from our local project's folder (the one containing the docker-compose.yml file):

Be sure to update BASE_SITE=my-docker-test-site.com with your domain name or a sub-domain like BASE_SITE=multi-env.my-docker-test-site.com.

BASE_SITE=my-docker-test-site.com

# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d


# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d

You should now be able to run docker ps or docker-compose ps and see 3 containers running: the nginx-proxy, your qa site, and the prod site.

All that's left is to make sure DNS is configured and pointing to our nginx-proxy front-end...

While playing with this I kept tearing down droplets and re-building them as I worked through this tutorial, and I kept forgetting to adjust my DNS settings. However, right in the middle of writing this tutorial DigitalOcean came out with Floating IPs, which wasn't perfect but definitely made this easier to work with. I no longer had to update the IP address of my droplet every time; instead I just pointed the floating IP at the newly created droplet.

Configure our DNS (domain name)

I'm assuming you've already purchased a domain name that you can set up and configure on DigitalOcean, so I don't want to go too far into this process.

I also think DNS is out of scope for this post (as there are many others who can do a better job), but I used some great resources such as these while configuring my DigitalOcean setup.

Environment All Things

If you've made it this far, you hopefully have a DigitalOcean droplet that is now serving qa and prod http requests.

NICE!!!

Now the most important thing - how to seamlessly update an environment with a new build...

Let's make an update to QA.

Now that we've deployed our site to QA, let's walk through this a little further: make a modification to our qa site and see if we can get it deployed without causing any downtime, especially to the prod site - and maybe we can also get an in-place deployment done with little-to-no downtime in qa as well.

I wrote that paragraph above the other night near bedtime and, as I'm learning some of this on the fly, I had no idea if this would be easy enough to accomplish - but to my surprise deploying an update to qa was a piece of cake.

For this test I made a simple change to my node web server code so I could easily see that the change was deployed (or not).

I turned Hello World! into Hello World! V2 below.

File: ./app/server.js

var http = require('http');

var server = http.createServer(function(req, res){
    res.writeHead(200, {"Content-Type": "text/plain"});
    res.end(`
Hello World! V2

VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}

`.split('\n').join('<br>'));
});

server.listen(80);

I then used docker-compose to bring up another "environment" but using the same qa VIRTUAL_HOST as before.

BASE_SITE=my-docker-test-site.com

export NODE_ENV=qa
export PORT=8004
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST}x2 up -d

NOTICE how we appended x2 to the -p parameter (just to give it a different project name, since it's a different version).

This will bring up another docker container with our updated web application, and to my surprise the nginx-proxy automatically chose this new container to send requests to.

So if you run docker ps you should see 4 containers running: 1 nginx-proxy, 1 prod container, and 2 qa containers (with different names).
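
Something like this, illustratively (assuming a service named app; your container names and ports will vary with your project names):

docker ps --format 'table {{.Names}}\t{{.Ports}}'
# nginx-proxy                      0.0.0.0:80->80/tcp
# qamydockertestsitecomx2_app_1    127.0.0.1:8004->80/tcp
# qamydockertestsitecom_app_1      127.0.0.1:8001->80/tcp
# mydockertestsitecom_app_1        127.0.0.1:8003->80/tcp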

You can leave both containers running for the moment while you test out the new release.

One neat thing to think about here: if there was something seriously wrong with the new qa release, you could just stop the new container (docker stop <new_container_id>) and the proxy will start redirecting back to the old qa container. (That only works, of course, if your deployment was immutable - meaning the new container didn't run some one-way database migration script... but that's not something I want to think about or cover in this post.)

Once you're comfortable running the new version you can now bring down and cleanup the older version.

docker ps # to list the containers running
docker stop <old_qa_container_id>
docker rm <old_qa_container_id> # remove the stopped container so its image is no longer referenced

docker images # to list the images we have on our instance
docker rmi <old_qa_image_id>

Now let's completely remove our test...

You probably don't want to run the sample node script from above forever, as you'll be charged some money by DigitalOcean, and I'd feel bad if you received a bill for this little test beyond a few pennies.

The following command will completely remove the droplet from DigitalOcean.

docker-machine rm docker-multi-environment

Wrap Up and What's Next?

I feel like I've done enough learning and sharing in this post. But there is still more to do...

If you want to check out the snippets above combined into a sample GitHub repository, I've put it up here.

Future thinking...

I don't know if I'll blog about these, but I definitely want to figure them out. If you find a way to extend my sample above to include the following I'd love to hear about it...

  • SSL (consider CloudFlare or Let's Encrypt?)
  • Easy way to secure the qa/stage environments?

Happy Docker Environment Building!

(Comments)

Easily simulate slow async calls using JavaScript async/await

(Comments)

Recently I wanted to manually and visually test some U.I. that I couldn't easily see because an async operation was happening too fast. (first world problems)

I've really been enjoying the new async/await syntax that can be leveraged in recent JavaScript transpilers such as Babel and TypeScript, but when an operation happens so fast on a local development environment, some U.I. interactions can get tough to test out. So how can we slow this down or, more appropriately, add some stall time?

As an example, let's say you have the following async JavaScript method:


var doSomething = async () => {
  var data = await someRemoteOperation();
  await doSomethingElse(data);
}

If the first or second asynchronous method in the example above is running too fast (as both were in my case), I was pleased to see how easy it was to inject a bit of code to manually stall the operation.

The snippet below is short and sweet... it just delays continued execution of an async operation for 3 seconds.

  await new Promise(resolve => setTimeout(resolve, 3000));

The little baby staller helper

Give it a name and make it a bit more re-usable.

async function stall(stallTime = 3000) {
  await new Promise(resolve => setTimeout(resolve, stallTime));
}

The above helper can be used by just awaiting it (see below):


var doSomething = async () => {

  var data = await someRemoteOperation();

  await stall(); // stalls for the default 3 seconds
  await stall(500); // stalls for 1/2 a second

  await doSomethingElse(data);

}

Now we have an async function that runs slower - isn't that what you always wanted?

Happy Stalling!

(Comments)