Today's lesson (or re-lesson): don't just write your tests, write them so you can understand the intricacies a year from now.
The other day I was pairing on what we first thought was a bug in our software. I was explaining something we saw: "oh yea, that's a bug, oh and look at these tests, they look wrong also. Why is this so wrong? How did we not catch this last year? Something is off here... etc..."
While I had some tests to cover a business requirement a year ago when we wrote the software, I had not properly described the business case within the test to document its purpose.
The subtle bug-look-a-likes turned out to be proper behavior. If only I had not just documented the expected behavior via a test, but had properly described WHY this behavior existed within the test, we could have avoided potentially breaking it (by "trying to fix it") and the test could have better communicated the WHY of this strange case.
This is just a reminder to myself to write better tests. Doing tests, great. Well documented, business explanation of the WHY in the tests? Way better!
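For example, here's the shape of what I mean as a hypothetical Mocha test (calculateInvoice and the grandfathered-account rule are made up for illustration; assume your assertion library of choice):

describe("invoice totals", function () {
    // WHY: accounts created before 2012 were sold a grandfathered flat-rate
    // plan. Billing must NOT apply the per-seat surcharge to them, even
    // though that looks like a bug next to newer accounts.
    it("does not apply the per-seat surcharge to grandfathered accounts", function () {
        var invoice = calculateInvoice({ accountCreated: "2011-06-01", seats: 25 });
        assert.equal(invoice.surcharge, 0, "grandfathered accounts are flat-rate by contract");
    });
});

A future reader hitting this test learns the business rule instead of assuming the strange behavior is a bug.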
Happy testing!
EX:
myFunction("val1", 1, true, false, false, true);
Looking at the above, we can sort of tell that the first param is some string, and maybe we can even tell what its purpose is; the second is a number, and maybe we even know why if we understand the codebase. But the booleans that follow? What are those representing? How do you know you're providing the right value for the right argument?
One approach is to refactor those parameters into variable names to help give them meaning.
EX:
var firstName = "val1";
var age = 1;
var isCool = true;
var isHungry = false;
var isSleeping = false;
var isHappy = true;
myFunction(firstName, age, isCool, isHungry, isSleeping, isHappy);
Now at least when we look at each parameter we can see its name and that helps give some context of what it represents.
But as this parameter list grows it can be difficult to maintain. We could argue it's already too long.
What if you have to add a parameter in the middle?
How can you be sure all the callers get updated correctly?
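To make that concrete, here's a contrived illustration of how inserting a parameter in the middle silently breaks existing call sites:

// before:
function myFunction(firstName, age, isCool) { }

// after adding a middleName parameter in the middle:
function myFunction(firstName, middleName, age, isCool) { }

// an un-updated caller still runs without error, but every argument
// after firstName now lands in the wrong slot:
myFunction("val1", 1, true); // 1 becomes middleName, true becomes age - oops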
Another approach would be to refactor this list of parameters into an object of properties.
This comes with some benefits: you can add/remove properties without worrying about parameter ordering, as long as they can be optional and have solid defaults.
One cool thing I found with ES6 destructuring and enhanced object literals shows up when our function definition AND our caller follow a similar pattern - meaning the variable names used in the caller are the same as the parameter names within the function.
EX:
function myFunction(firstName, age, isCool, isHungry, isSleeping, isHappy) {}
var firstName = "val1";
var age = 1;
var isCool = true;
var isHungry = false;
var isSleeping = false;
var isHappy = true;
myFunction(firstName, age, isCool, isHungry, isSleeping, isHappy);
To refactor this we can simply have the caller pass an object, and have the function use destructuring of that object. It simply becomes:
function myFunction({firstName, age, isCool, isHungry, isSleeping, isHappy}) {}
var firstName = "val1";
var age = 1;
var isCool = true;
var isHungry = false;
var isSleeping = false;
var isHappy = true;
myFunction({firstName, age, isCool, isHungry, isSleeping, isHappy});
This is cool because it's simple, requires very little work, and now we gain the benefits of using an object instead of an ordered set of parameters.
This opens the door to quite a few other organization options, like the defaults shown below.
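One example of those options: ES6 default values in the destructuring pattern can supply the "solid defaults" mentioned above, so callers only pass what they care about (the defaults below are made up):

function myFunction({firstName, age = 0, isCool = false, isHungry = false, isSleeping = false, isHappy = true} = {}) {
    // any property the caller omits falls back to its default
}

myFunction({ firstName: "val1", isCool: true }); // every other flag uses its default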
Happy Cleanup!
Say I want to quickly run some Mocha tests in a browser. Like say I'm spiking something, wanting to use tests in something like jsbin or plunkr and just get code running quickly.
Most browsers now support the script type=module tag which allows you to use import statements within browser script tags. O_O
So how could I use this to import mocha and write some quick tests? I'm so glad you asked.
Mocha's HTML reporter renders its results into an element with id=mocha, so let's add that.

<!DOCTYPE html>
<html>
<body>
+ <div id="mocha"></div>
</body>
</html>
<!DOCTYPE html>
<html>
<body>
<div id="mocha"></div>
+ <script type="module">
+ import "https://dev.jspm.io/mocha"
+ console.log("Mocha:", Mocha);
+ </script>
</body>
</html>
This package is loaded using the import functionality built into our browsers now, plus JSPM, which allows us to load some npm packages right through the browser. MAGIC!
WARNING: DO NOT do this for a production-type environment. This is just a quick prototype tool leveraging JSPM's dev hosted servers that allow us to use import to load packages via NPM through their servers. Thank you JSPM.
<!DOCTYPE html>
<html>
<body>
<div id="mocha"></div>
<script type="module">
import "https://dev.jspm.io/mocha"
- console.log("Mocha:", Mocha);
+ mocha.setup('bdd');
+
+ mocha.run();
</script>
</body>
</html>
<!DOCTYPE html>
<html>
<body>
<div id="mocha"></div>
<script type="module">
import "https://dev.jspm.io/mocha"
mocha.setup('bdd');
+ describe("some awesome tests", function () {
+ it("Should pass", function (){
+ console.log("passing test");
+ });
+ it("Should fail", function (){
+ throw new Error("OH NO!!!");
+ });
+ });
mocha.run();
</script>
</body>
</html>
THAT'S IT - using almost nothing but our browser's capabilities (and maybe a complicated JSPM back-end tool) we can easily write some in-browser tests to spike something, or just play around.
But it looks unstyled, you say? That's true, let's bring in some styles.
<!DOCTYPE html>
<html>
<body>
<div id="mocha"></div>
+ <link rel="stylesheet" href="https://dev.jspm.io/mocha/mocha.css">
<script type="module">
import "https://dev.jspm.io/mocha"
mocha.setup('bdd');
describe("some awesome tests", function () {
it("Should pass", function (){
console.log("passing test");
});
it("Should fail", function (){
throw new Error("OH NO!!!");
});
});
mocha.run();
</script>
</body>
</html>
NOTE: Probably better to use a CDN version instead of pounding JSPM's servers (sorry).
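For example, Mocha publishes a browser build that unpkg can serve, so the two JSPM URLs could be swapped for something like this (mocha then arrives as a classic global script rather than a module import):

<link rel="stylesheet" href="https://unpkg.com/mocha/mocha.css">
<script src="https://unpkg.com/mocha/mocha.js"></script>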
<!DOCTYPE html>
<html>
<body>
<div id="mocha"></div>
<link rel="stylesheet" href="https://dev.jspm.io/mocha/mocha.css">
<script type="module">
import "https://dev.jspm.io/mocha"
mocha.setup('bdd');
describe("some awesome tests", function () {
it("Should pass", function (){
console.log("passing test");
});
it("Should fail", function (){
throw new Error("OH NO!!!");
});
});
mocha.run();
</script>
</body>
</html>
Cake!
I've done this a few times lately, and am almost able to do it without looking up api and syntax O_O. I also don't need webpack, gulp, servers, or anything but a wee-bit of html/js and some luck (that JSPM servers will hang in there).
Happy Testing!
Sorting before saving the object to a file like in approvals makes it much easier to diff two JSON documents that may not originally be in the same order.
So I threw together this quick little helper. It's far from perfect and won't suit all needs. But if all you're trying to do is sort and serialize an object it seems to be working well for now.
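The helper boils down to something like this (a rough sketch - the names are mine, and it ignores edge cases like cyclic references):

function sortAndSerialize(obj) {
    // JSON.stringify emits object keys in insertion order, so rebuild each
    // object with its keys sorted before serializing
    function sortKeys(value) {
        if (Array.isArray(value)) {
            return value.map(sortKeys);
        }
        if (value && typeof value === "object") {
            return Object.keys(value).sort().reduce(function (acc, key) {
                acc[key] = sortKeys(value[key]);
                return acc;
            }, {});
        }
        return value;
    }
    return JSON.stringify(sortKeys(obj), null, 2);
}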
Happy Dictionary Reading!
The question of whether to use single ( ' ) or double ( " ) quotes comes up often enough that I thought I'd create a bit of an amalgamation of items around the topic for future reference.
Today, it came up at work when someone sent a PR to our internal standard eslint config repository, and various points were brought up with no real final winner or resolution.
In this post, I'm not (sadly) going to write anything interesting or profound or even original. Just wanted to aggregate some pros/cons as I've seen splattered in various locations around the web on the topic.
"Double quotes with a single ( ' ) quote in the middle"
'Single quotes with a double ( " ) quote in the middle'
"double quotes ( \" ) must escape a double quote"
'single quotes ( \' ) must escape a single quote'
<Shift>
for single quote vs double).var html = '<div id="some_div"></div>'
"
EX: var html = "<div id=\"some_div\"></div>"
which can get annoying. Or you could use single quotes within the html string, but I don't want to cover quotes for html here (that would just hurt)...''
vs ""
(too many little marks in the latter and can be difficult read - ahhh)Only real con I can come up with is copying/pasting between JSON and JavaScript files - single quotes are not supported within JSON files, so you'd have to do a host of search/replace (and escaping of double quotes)...
<Shift>
when wanting to use double quotesI recommend single quotes as a solid standard. Unless you are copying JSON objects in JavaScript and pasting them into JS files a ton - it's generally my personal preference.
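And if you want to enforce whichever standard wins in a shared eslint config (like the PR that kicked this off), eslint's built-in quotes rule covers it - a minimal sketch of an .eslintrc.js:

module.exports = {
    rules: {
        // prefer single quotes, but allow "a 'quoted' word" to avoid escaping
        quotes: ["error", "single", { avoidEscape: true }]
    }
};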
Happy Quoting!
This allows you to log in to a second, unique Mac user account while already being logged into the first. It can be done without having to logout/login to each one individually (one at a time).
The reasons could vary but here are a couple examples:
To accomplish this we're going to be turning on some services/features that have the potential to open security vulnerabilities so please use with caution and learn/know your risks.
To accomplish this your Mac needs to have the proper permissions and configuration in place to allow this to happen.
First we need to access the system preferences:
Then open the Sharing
preferences:
Then enable Screen Sharing
and don't forget to add the specific users you want to allow screen to be shared for.
Note: I blocked out this specific user-name - but assume the blacked out user is the Mac account's user that I want to log into using the Screen Sharing application
I had to enable Remote Login to allow the up-coming ssh command to run. Here is the configuration I used:
From the currently logged in session, open a Terminal
and run the following command:
ssh -NL 5901:localhost:5900 localhost
The -L
has this to say in ssh's man pages
-L [bind_address:]port:host:hostport
-L [bind_address:]port:remote_socket
-L local_socket:host:hostport
-L local_socket:remote_socket
Specifies that connections to the given TCP port or Unix socket on the local (client) host are to be forwarded to the
given host and port, or Unix socket, on the remote side. This works by allocating a socket to listen to either a TCP port
on the local side, optionally bound to the specified bind_address, or to a Unix socket. Whenever a connection is made to
the local port or socket, the connection is forwarded over the secure channel, and a connection is made to either host
port hostport, or the Unix socket remote_socket, from the remote machine.
Port forwardings can also be specified in the configuration file. Only the superuser can forward privileged ports. IPv6
addresses can be specified by enclosing the address in square brackets.
By default, the local port is bound in accordance with the GatewayPorts setting. However, an explicit bind_address may be
used to bind the connection to a specific address. The bind_address of ``localhost'' indicates that the listening port be
bound for local use only, while an empty address or `*' indicates that the port should be available from all interfaces.
For -N
:
-N Do not execute a remote command. This is useful for just forwarding ports.
Here's what it looks like when I ran it locally:
> ssh -NL 5901:localhost:5900 localhost
The authenticity of host 'localhost (::1)' can't be established.
ECDSA key fingerprint is SHA256:ytfRv5WDPuTjGbBugJjmc8gOhsHga7ozGqNgjOXpdRM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'localhost' (ECDSA) to the list of known hosts.
Password: <I entered my account/admin password>
Once that ssh command above is up and running, we're now ready to log into the other account using Screen Sharing.
Open the Screen Sharing
Mac app located in: /System/Library/CoreServices/Screen Sharing.app
. You can also use CMD+<Space>
(Spotlight) and type Screen Sharing
to open the app.
Then enter localhost:5901
to start the process.
It should look like this:
In the below screen, enter the username/password of the account that you want to login as. (Not the current account - the other one)
Now select that you want to login as "yourself" where "yourself" is really "other account":
...and boom, you should now be able to use two separate accounts in a single Mac session.
Happy Spying (wink wink)!
There are two parts to this.
In some of these projects it means I also check the bundled code into git. A common use case is to be able to use github's pages feature to host this content (err, bundled output) as well as the raw source.
The problem we can run into: if you make a change to the source, commit and push - nothing happens... Because the bundled code didn't get re-bundled, the github-hosted pages site doesn't pull the latest changes in.
I'm a fan of using git pre-commit hooks to catch things like test errors (or in this case bundle issues) early in the development life cycle.
So I came up with an example that allows me to make code changes and catch myself from committing when the raw source has changed, but the bundle did not reflect that.
The gist is it's a .js script that stashes unstaged changes, runs the build, and compares the bundle.js that's about to be committed against the bundle the build just produced - failing the commit if they don't match. Meaning: when we run our build (webpack in this case), if the bundle.js didn't change, we can commit.
This ensures that whatever bundle.js is committed is tied to the code change in the original source - avoiding "fixing" something in the source and it not actually getting deployed because the bundle is out of date.
There are some good options in the npm/node world for pre-commit hooks. Check out husky or pre-commit. However you get your precommit hook setup - great...
In my case I used husky
and here are the relevant bits to my package.json
.
{
...
"devDependencies": {
+ "husky": "^0.13.4"
},
"scripts": {
+ "precommit": "node ./pre-commit-build.js"
}
}
The pre-commit-build.js script I used
The below is the short, but complete pre-commit script I use to enforce this workflow.
var crypto = require('crypto');
var fs = require('fs');
var execSync = require('child_process').execSync;

var bundleFileName = './dist/bundle.js';

var getShaOfBundle = function () {
    var distFile = fs.readFileSync(bundleFileName);
    var sha = crypto.createHash('sha1').update(distFile).digest("hex");
    return sha;
};

// make sure we only bundle/build what is staged to get a proper
// view of what will be committed
execSync('git stash --keep-index');

// Get a snapshot of the original bundle
var beforeSha = getShaOfBundle();

// run our build (synchronously, so the shas are taken at the right times)
execSync('./node_modules/.bin/webpack');

// snapshot the bundle after the build
var afterSha = getShaOfBundle();

// reset anything that was stashed
execSync('git stash pop');

if (beforeSha !== afterSha) {
    throw new Error("Need to bundle before committing");
}
Now whenever I make a change to the raw source code - this pre-commit script makes sure that the dist/bundle.js
is correctly mapped to the raw source.
Happy committing!
So what does any programmer do when he can't find what he's looking for? He writes his own. Now that I've put it together I'm just going to post this here so I can find it again down the road when I need to. :)
Below is a utility function that I threw together that will take an imageUrl
parameter and will return through a Promise a base64 encoded image data string. You can see from the example usage below that we're just setting an HTML image .src
property to the result of this function and it should just work.
NOTE: also if you look at the code that accomplishes this below, it may look a bit like "futuristic" JavaScript right now - with all the async/await, arrow functions, http fetch and all - but the cool thing is you should be able to just copy/paste the below into a JSBin/Plnkr/Codepen without issue (in Chrome). No need for Babel/TypeScript/transpiler if you're just prototyping something (as I was). Don't rely on this to work in all browsers yet, but I'm sure when I'm googling for this snippet in the future it should work in most browsers, so I'll just leave this here for now.
async function getBase64ImageFromUrl(imageUrl) {
var res = await fetch(imageUrl);
var blob = await res.blob();
return new Promise((resolve, reject) => {
var reader = new FileReader();
reader.addEventListener("load", function () {
resolve(reader.result);
}, false);
reader.onerror = () => {
return reject(reader.error);
};
reader.readAsDataURL(blob);
})
}
An example usage of the above:
getBase64ImageFromUrl('http://approvaltests.com/images/logo.png')
.then(result => testImage.src = result)
.catch(err => console.error(err));
As an alternative to the base64-encoded FileReader approach above, you can also use URL.createObjectURL: return URL.createObjectURL(blob);
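A minimal sketch of that variant (same fetch flow as above, just returning an object URL - and remember to URL.revokeObjectURL it when you're done with it):

async function getObjectUrlFromUrl(imageUrl) {
    var res = await fetch(imageUrl);
    var blob = await res.blob();
    // the object URL points at the in-memory blob instead of embedding base64 data
    return URL.createObjectURL(blob);
}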
There's of course a number of online tools that do this for you already.
Hope this helps future me find the implementation a min or two faster some day.
Happy Encoding!
ES6/7 was great at the beginning, when I had 1, then 2... then 5 files. The project was small, I could pretty much load the entire application into my own head at once, reason, tweak, plow forward.
It didn't take too long before the project grew beyond a single load into memory. Now I'm mentally paging out components from different files as I work on various sections of the application.
Just like anything, once the project grew to this point, it became a little more difficult to maintain. Now, I consider myself a fairly solid developer, and that's likely how I made it this far without a compiler as my components were small, surface areas tight and the interactions between them were well managed. I also had a decent number of in-app unit and integration tests because generally (but not always) I'm a test-first kind of guy.
However, that didn't stop me from breaking things, making mistakes or just out-right screwing up a javascript file here and there along the way.
While working on this project, it always niggled me that the project would keep growing without the ability for the most basic of unit-tests to run (the compiler). Almost a year ago I even remember trying to use Typescript, but using it with JSPM and without VisualStudio Code, it all just never came together (or I just didn't try hard enough).
But this past week, I gave it another go, and while I'm not totally there (or where I'd like to end up), I'm quite happy with the results I've made so far and am impressed and quite happily working in a project that has completely been ported to Typescript from ES6/7 using BabelJS.
Now, when it comes to large software projects, I'm pretty sure I shouldn't be calling this project "large" as the subject of this post seems to label it... But for a system built only by me on some nights and weekends, it is the largest single app I've built alone, so that's where I'm defining "Large".
The project has just about a hundred javascript files/components/classes/modules and comes in just above 12,000
lines of code. That's not counting of course all the dependencies pulled in through JSPM (using both NPM and Github). In fact I really need to look at my dependencies and see where I can trim some fat, but that's not the subject of this post.
With some context about the project this is coming from out of the way, I thought it would be helpful to outline the steps (or stumbles) I took along the way to get my project up and running using TypeScript with JSPM.
Below are the steps I took to get this thing going. I doubt they're perfect or even apply to your or anyone else's projects, but here's hoping they're helpful.
To start, I ran the jspm init command to set up a fresh new project and selected the Typescript transpiler option. This allowed me to inspect what a "fresh" project from JSPM would look like with Typescript set up.
Now, my project isn't Angular (it's actually React based), but I thought I could learn a little something along the way. I don't know if I actually gleaned anything while doing this (as I'm writing this post a ways after I actually did the work, but as an FYI, you might learn something reading it)
Looking back at the series of commits during my port, here's basically what I did. In some cases order doesn't matter below, but I left this list in the order of my projects git commit log.
.jsx
extension to .tsx
(Typescript's variant of JSX) (note: not renaming anything but code I wrote - so don't touch anything in jspm_packages
or node_modules
folders etc.)
Next I ran: jspm install ts
<-- installing the Typescript jspm plugin. Then I updated the jspm.config.js
transpiler
flag with the following:- transpiler: "plugin-babel",
+ transpiler: "Typescript",
+ TypescriptOptions: {
+ "tsconfig": true // indicates that a tsconfig exists that should be used
+ },
Then I updated my jspm.config.js
's app
section with the following.
packages: {
- "app": {
- "defaultExtension": false,
- "main": "bootstrap.jsx",
- "meta": {
- "*": {
- "babelOptions": {
- "plugins": [
- "babel-plugin-transform-react-jsx",
- "babel-plugin-transform-decorators-legacy"
- ]
- }
- }
- }
- },
+ "app": { // all files within the app folder
+ "main": "bootstrap.tsx", // main file of the package (will be important later)
+ "format": "system", // module format
+ "defaultExtension": "ts", // default extension of all files
+ "meta": {
+ "*.ts": { // all ts files will be loaded with the ts loader
+ "loader": "ts"
+ },
+ "*.tsx": { // all ts files will be loaded with the ts loader
+ "loader": "ts"
+ },
+ }
+ },
Next I created a tsconfig.json file:
{
"compilerOptions": {
"target": "es5", /* target of the compilation (es5) */
"module": "system", /* System.register([dependencies], function) (in JS)*/
"moduleResolution": "node", /* how module gets resolved (needed for Angular 2)*/
"emitDecoratorMetadata": true, /* needed for decorators */
"experimentalDecorators": true, /* needed for decorators (@Injectable) */
"noImplicitAny": false, /* any has to be written explicitly*/
"jsx": "react"
},
"exclude": [ /* since compiling these packages could take ages, we want to ignore them*/
"jspm_packages",
"node_modules"
],
"compileOnSave": false /* on default the compiler will create js files */
- Renamed all *.js files to *.ts. (Similar to the jsx -> tsx step above, but now just the plain JavaScript files)
- For imports like import foo from './foo.js' I removed the .js extension, like import foo from './foo'
- I kept the .jsx extension in my import statements - but renamed them to tsx, so import foo from './foo.jsx' became import foo from './foo.tsx'
- Added a globalTypes.d.ts; this is where I could hack in some type definitions that I use globally in the project.

I used the typings tool to search for TypeScript type definitions. And if I found one, I would typically try to install them from npm.
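Back to the globalTypes.d.ts idea - as an illustration (these declarations are made up, not from my actual project), ambient declarations can paper over globals or untyped modules:

// globalTypes.d.ts
declare var __APP_VERSION__: string; // e.g. a global injected at build time

declare module "some-untyped-lib" {
    const lib: any;
    export default lib;
}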
For example: searching for react
like typings search react
shows me that there is a DefinitelyTyped version of the type defs and I now know that we can use NPM to install them by typing npm install --save @types/react
So I installed a ton of external library typings.
One gotcha: my tsconfig.json file was not at the root of my project (it was at the root of my client site) - it was nested several folders down from the root of my project. For some reason the editor wasn't picking it up; until I opened the editor from the location where the tsconfig.json file was rooted, things didn't work.
Honestly, I don't know what the above was about - but it was something I ran into. I can't say for certain if it is still an issue - I think I'm starting to see editor features load up regardless of what folder I open - so your mileage may vary.
THE END - ish
The above steps were really all I went through to port this project over to TypeScript and it was relatively seamless. That's not to say it was simple or easy, but definitely do-able, and worth it.
It's been a few weeks since I ported the project to TypeScript and I'm really kicking myself for not doing it sooner.
The editor assists with intellisense for function calls from internal and external modules, and their usage/function signatures, saving time researching external documentation.
I can now use the async/await feature. One caveat: running jspm bundle app at the command line doesn't report any typescript errors - or fail any builds. However, I'm glad it doesn't, because something isn't quite right with my configuration: every import of a .tsx file reports an error. So, for now I'm just relying on the red squigglies in my VS Code editor to help me catch typing errors.

If you go for this port in your own project, I hope this was helpful, and that your port goes well.
Happy TypeScripting!
When I first encountered WinJS, the programming model meant littering semantic html with win-* html attributes. It felt like a total hack to get an app up and running that way.
Then along comes a little toy project they created called react-winjs and all of a sudden the WinJS "Components" made total sense. Looking at WinJS through the lens of ReactJS components was the first time that I not only clicked with WinJS, but I actually fell in lov... (well I won't go that far) - I was excited enough about them to pick them as the primary U.I. control suite while building out a little side-project.
Fast forward a year of development, and Microsoft essentially bailed on WinJS but at least they left it out in the open so I could hack on it and continue to depend on my own fork for the time being.
Then, they announce a NEW & SHINY library that can be used to help develop UWP and TV/Xbox One apps which is great. Except, WinJS doesn't work with this new library out-of-the-box, and since Microsoft isn't adding new features to WinJS, they likely never will build-in compatibility with the new & shiny library.
Guess that means we (I) have to figure it out on my own. And although I write this knowing that I'm probably the ONLY developer on the planet using this combination of libraries, I wanted to put out some of the hacks/code I've thrown together to get some WinJS controls to play nice with TVJS with regards to focus management.
In the context of an Xbox app, the idea is to take your web-page-app and get rid of the ugly mouse-like cursor you'd see if you didn't do this, and replace it with a controller-navigable approach - so up/down/left/right on the controller moves the visible "focus" around the application and the A button "presses enter" (or invokes) the control/button/etc.
The TVJS library has a helper within it called DirectionalNavigation
and is great in that it provides a focused and specific API to enable focus management while developing Xbox UWP JavaScript (& C#) apps.
Just dropping the library in is enough to get much of the basics to work with most web apps.
However, the conflict between this and WinJS comes into play because WinJS also tried to implement some of their own focus management and the mix of these two just doesn't quite cut it.
Well, this isn't really a hack:
If you're looking at building a UWP JavaScript app for the Xbox, and tried to run your app on the Xbox (in dev mode), you may have noticed that your app behaves almost like it was just another web-page and doesn't default the cursor focus the way other Xbox apps work. Your app just has a mouse-like cursor.
The way to deal with this is just by accessing the browser's gamepad api. Now, the Microsoft TVJS TVHelpers DirectionalNavigation library automatically does this for you, but for a better experience if you don't want to wait for the browser to download this library, you can manually access the api to hide the mouse cursor by throwing this at the top of your start page EX: index.html
<script>
// Hide the Xbox/Edge mouse cursor during load.
try {
navigator.getGamepads();
} catch(err) {
console && console.error('Error with navigator.getGamepads()', err);
}
</script>
Just by calling navigator.getGamepads()
, this tells the browser/hosted web app that you are going to take control of the app's focus management and to hide the mouse cursor.
Once you've done this and your app loads up with the TVJS DirectionalNavigation library and in my case some WinJS controls, focus management mostly works (sort-of).
This is about as ugly as they get...
The below code is basically looking for the XYFocus handlers that WinJS tries to add to the document, and we want to prevent them from being added.
This XYFocus handler really creates havoc once we add the XYFocus handler from TVJS DirectionalNavigation.
// HacktyHackHack
// The goal of this is to remove XYFocus management from WinJS
(function() {
var totalRemovedHandlers = 0;
var checkRemovedHandler = function() {
totalRemovedHandlers++;
if (totalRemovedHandlers > 2) {
console.error("EEEK, removing more than 2 handlers... be sure to validate that we're removing the right ones...");
}
};
var realAddEventListener = document.addEventListener;
document.addEventListener = function(eventName, handler, c){
if (handler.toString().indexOf('function _handleKeyEvent(e)') >= 0) {
console.warn("Ignoring _handleKeyEvent...", eventName, handler, c);
checkRemovedHandler();
return;
}
if (handler.toString().indexOf('function _handleCaptureKeyEvent(e)') >= 0) {
console.warn("Ignoring _handleCaptureKeyEvent...", eventName, handler, c);
checkRemovedHandler();
return;
}
return realAddEventListener.call(document, eventName, handler, c);
};
}());
By not allowing WinJS to add its XYFocus handlers, we can avoid many of the issues that I worked through below...
For my app, the first control I ran into trouble with was the WinJS Pivot control. This control already does some focus management all by itself, and its own management style contradicts the way the DirectionalNavigation helper works. So we basically have to detect focus on it, turn off TVJS focus management and handle it internally (until focus leaves the Pivot).
To work through that, I created the following helper function:
WinJS.UI.Pivot.prototype._headersKeyDown = function (e) {
if (this.locked) {
return;
}
if (e.keyCode === Keys.leftArrow ||
e.keyCode === Keys.pageUp ||
e.keyCode === Keys.GamepadDPadLeft ||
e.keyCode === Keys.GamepadLeftThumbstickLeft) {
this._rtl ? this._goNext() : this._goPrevious();
e.preventDefault();
} else if (e.keyCode === Keys.rightArrow ||
e.keyCode === Keys.pageDown ||
e.keyCode === Keys.GamepadDPadRight ||
e.keyCode === Keys.GamepadLeftThumbstickRight) {
this._rtl ? this._goPrevious() : this._goNext();
e.preventDefault();
}
};
function handlePivotNavigation(pivotElement) {
console.log("handlePivotNavigation", pivotElement);
if (!pivotElement) {
throw new Error("handlePivotNavigation cannot use pivotElement as it wasn't passed in");
}
var pivotHeader = pivotElement.querySelector('.win-pivot-headers')
if (!pivotHeader) {
let msg = "handlePivotNavigation cannot find .win-pivot-headers in";
console.error(msg, pivotElement);
throw new Error(msg);
}
pivotHeader.addEventListener('focus', function() {
console.log("pivotHeader focus");
DirectionalNavigation.enabled = false;
});
pivotHeader.addEventListener('keyup', function(eventInfo) {
console.log('pivot keyup ', eventInfo.keyCode, eventInfo.key);
switch(eventInfo.keyCode) {
case 204: // gamepad down
case 40: // keyboard down
DirectionalNavigation.enabled = true;
var target = DirectionalNavigation.findNextFocusElement('down');
if (target) {
target.focus();
eventInfo.preventDefault();
}
break;
case 203: // gamepad up
// since the Pivot is at the top of the page - we won't release
// control, or try to navigate up??? (maybe consider flowing up from the bottom of the page?)
break;
// case 205: // gamepad left arrow
// case 211: // gamepad 211 GamepadLeftThumbstickUp
// case 200: // gamepad left bumper
// pivotElement.winControl._goPrevious();
// eventInfo.preventDefault();
// break;
// case 206: // gamepad right arrow
// case 213: // gamepad 213 GamepadLeftThumbstickRight
// case 199: // gamepad 199 GamepadRightShoulder
// pivotElement.winControl._goNext();
// eventInfo.preventDefault();
// break;
}
});
}
And use it by doing the following in my React page:
componentDidMount() {
var pivot = ReactDOM.findDOMNode(this.refs.pivot);
handlePivotNavigation(pivot);
}
Or if you're not using React you can likely just go:
var pivot = document.getElementById('my-pivot-id');
handlePivotNavigation(pivot);
It's not pretty, but has been working for me so far.
Now when I navigate around using an Xbox controller I can properly navigate around the WinJS Pivot.
Next trouble spot: ItemContainers.
(With the remove-XYFocus hack above in place, I ended up removing the below hack.)
This one is a total hack, and I look forward to a better solution, but for now it's been working.
The issue I was seeing was with WinJS ItemContainers and the TVJS library applying a separate forced "click" on the element when the control itself has already "clicked/invoked" the element.
The real fix would likely be to figure out how to get the ItemContainer to event.preventDefault() and/or event.stopPropagation() and avoid the bubbling up to the document keyup event handler that DirectionalNavigation has under its control, but WinJS ItemControl management is just so complicated that this hack was easier to figure out at the time I threw it together.
So what does this do?
It's basically hijacking the DirectionalNavigation._handleKeyUpEvent function, and re-writing it with one that ignores the keyup
event if the currently focused element is an ItemContainer.
// Hack to avoid Item containers getting double click
var originalDNKeyUp = TVJS.DirectionalNavigation._handleKeyUpEvent
TVJS.DirectionalNavigation._handleKeyUpEvent = function (e) {
console.log("Check for itemContaner", event.target.className)
if (e.target.className.split(" ").indexOf("win-itemcontainer") >= 0) {
console.log("MonkeyHack on DirectionalNavigation - SKIPPING CLICK");
return;
}
return originalDNKeyUp.apply(null, arguments);
}
document.removeEventListener("keyup", originalDNKeyUp);
document.addEventListener("keyup", TVJS.DirectionalNavigation._handleKeyUpEvent);
It's not pretty, but meh, is working so far.
I gave up on ContentDialog, and just started using react-modal
That's just a big mess from what I could figure out. I was able to get it working by using the ContentDialog but manually creating my own buttons, as the ItemContainer in combination with the dialog kept swallowing events, which didn't allow focus navigation to be successful. The internals of what was holding me back didn't appear to be monkey-patch-able from what I could tell... ugh...
ListView
hack,This one is a hack proposed by Todd over on the GitHub issues.
I've essentially taken the original implementation of WinJS.UI.ListView.prototype._onFocusIn, and if you look for the line starting with /* JJ */
below you can see the change there.
Don't know what this could mean for other scenarios, but for now it's allowing the ListView to focus properly in my initial Xbox testing.
var _Constants = WinJS.UI;
var _UI = WinJS.UI;
WinJS.UI.ListView.prototype._onFocusIn = function ListView_onFocusIn(event) {
this._hasKeyboardFocus = true;
var that = this;
function moveFocusToItem(keyboardFocused) {
that._changeFocus(that._selection._getFocused(), true, false, false, keyboardFocused);
}
// The keyboardEventsHelper object can get focus through three ways: We give it focus explicitly, in which case _shouldHaveFocus will be true,
// or the item that should be focused isn't in the viewport, so keyboard focus could only go to our helper. The third way happens when
// focus was already on the keyboard helper and someone alt tabbed away from and eventually back to the app. In the second case, we want to navigate
// back to the focused item via changeFocus(). In the third case, we don't want to move focus to a real item. We differentiate between cases two and three
// by checking if the flag _keyboardFocusInbound is true. It'll be set to true when the tab manager notifies us about the user pressing tab
// to move focus into the listview.
if (event.target === this._keyboardEventsHelper) {
if (!this._keyboardEventsHelper._shouldHaveFocus && this._keyboardFocusInbound) {
moveFocusToItem(true);
} else {
this._keyboardEventsHelper._shouldHaveFocus = false;
}
} else if (event.target === this._element) {
// If someone explicitly calls .focus() on the listview element, we need to route focus to the item that should be focused
moveFocusToItem();
} else {
if (this._mode.inboundFocusHandled) {
this._mode.inboundFocusHandled = false;
return;
}
// In the event that .focus() is explicitly called on an element, we need to figure out what item got focus and set our state appropriately.
var items = this._view.items,
entity = {},
element = this._getHeaderOrFooterFromElement(event.target),
winItem = null;
if (element) {
entity.index = 0;
entity.type = (element === this._header ? _UI.ObjectType.header : _UI.ObjectType.footer);
this._lastFocusedElementInGroupTrack = entity;
} else {
element = this._groups.headerFrom(event.target);
if (element) {
entity.type = _UI.ObjectType.groupHeader;
entity.index = this._groups.index(element);
this._lastFocusedElementInGroupTrack = entity;
} else {
entity.index = items.index(event.target);
entity.type = _UI.ObjectType.item;
element = items.itemBoxAt(entity.index);
winItem = items.itemAt(entity.index);
}
}
// In the old layouts, index will be -1 if a group header got focus
if (entity.index !== _Constants._INVALID_INDEX) {
/* JJ */ /*if (this._keyboardFocusInbound || this._selection._keyboardFocused())*/ {
if ((entity.type === _UI.ObjectType.groupHeader && event.target === element) ||
(entity.type === _UI.ObjectType.item && event.target.parentNode === element)) {
// For items we check the parentNode because the srcElement is win-item and element is win-itembox,
// for header, they should both be the win-groupheader
this._drawFocusRectangle(element);
}
}
if (this._tabManager.childFocus !== element && this._tabManager.childFocus !== winItem) {
this._selection._setFocused(entity, this._keyboardFocusInbound || this._selection._keyboardFocused());
this._keyboardFocusInbound = false;
if (entity.type === _UI.ObjectType.item) {
element = items.itemAt(entity.index);
}
this._tabManager.childFocus = element;
if (that._updater) {
var elementInfo = that._updater.elements[uniqueID(element)],
focusIndex = entity.index;
if (elementInfo && elementInfo.newIndex) {
focusIndex = elementInfo.newIndex;
}
// Note to not set old and new focus to the same object
that._updater.oldFocus = { type: entity.type, index: focusIndex };
that._updater.newFocus = { type: entity.type, index: focusIndex };
}
}
}
}
}
One big improvement could be to consider setting up a unit test that takes the original "string" value of the entire function's code and compares it to the current version of the WinJS library you're using, failing if they're even one character different. This would allow you to detect if, say, a fix was applied upstream, or you need to update your local hacked version with some remote changes... It's not pretty, but it's one way to avoid over-writing possibly-working WinJS code with our potentially not-so-future-proof hacked version.
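A minimal sketch of that guard (assuming Mocha/Node, and that you stash both a snapshot file and a reference to the un-patched function before applying the override - originalOnFocusIn and the snapshot path here are placeholders):

var assert = require('assert');
var fs = require('fs');

describe('WinJS monkey-patch guard', function () {
    it('_onFocusIn source still matches the version we hacked against', function () {
        // snapshot taken from the WinJS build the override was written against
        var expected = fs.readFileSync('./snapshots/listview-onFocusIn.txt', 'utf8');
        // originalOnFocusIn: the real implementation, captured BEFORE overriding it
        var actual = originalOnFocusIn.toString();
        assert.strictEqual(actual, expected, 'WinJS changed upstream - re-verify the hacked override');
    });
});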
This control just seemed to have all the wrong behavior for me. So I hacked the keyDownHandler and simplified its implementation, which seems to have really made it more usable (for me).
var _ElementUtilities = WinJS.Utilities
WinJS.UI.ToggleSwitch.prototype._keyDownHandler = function ToggleSwitch_keyDown(e) {
if (this.disabled) {
return;
}
// Toggle checked on spacebar
if (e.keyCode === _ElementUtilities.Key.space ||
e.keyCode === _ElementUtilities.Key.GamepadA ||
e.keyCode === _ElementUtilities.Key.enter) {
e.preventDefault();
this.checked = !this.checked;
}
}
The original had up/down/left/right configured to toggle the switch on/off, which meant focusing in/out was nearly impossible, and it only listened to space as a toggle option. By removing up/down/left/right we can navigate in/around the control, and we now listen to space, GamepadA, and enter to toggle the control on/off.
The WinJS control set is quite large, and I certainly haven't worked with each control in this manner. However, it's a step forward, eh? And if you managed to come across this random post on the interweb, I hope it was useful.
I've updated my PowerShell profile again. Not because it's really that great, but more because there are some navigation habits I acquire on a Mac in a ZSH terminal that become challenging to not have in a Windows PowerShell terminal, and each time I iterate on it, it synchronizes my workflows on both Mac and Windows environments.
In my previous update I added support for typing cd ...N
where cd ..
will go up 1 directory, so cd ....
will go up 3 directories.
Well today I found out that I can declare a function in powershell with a name ..
- WHO KNEW?
For example if you pasted the following into your PowerShell terminal: function ..() { echo "HELLO"; }; ..
This would define the function ..
as well as run it and print out HELLO
.
This was a fantastic stumbling on my part because on my Mac I often go up 1-n directories by typing ..
at the terminal or .....
<-- however many I want to go up.
So today I updated the Change-Directory.ps1 with the following shortcuts:
function ..() { cd .. }
function ...() { cd ... }
function ....() { cd .... }
function .....() { cd ..... }
function ......() { cd ...... }
function .......() { cd ....... }
function ........() { cd ........ }
If you're interested in the evolution of this CD tool:
SUPER SWEET!
Happy CD'ing!
JIRA's kanban board doesn't show much card detail at a glance. Given that, I decided to hack the CSS of JIRA's board to improve this. Take a look at an example before/after of the tweaks below, then I'll walk through how you can get it if you so desire.
First, install the Stylebot Chrome extension via the [+ ADD TO CHROME] button on the upper right of the page.
Once the plugin is installed, close all tabs and re-open a tab to the JIRA kanban board.
Click the new Stylebot plugin CSS button in your chrome browser toolbar and choose the Open Stylebot... option, which will open up a U.I. that allows you to mess around with the page's style.
At the bottom of the Stylebot panel click Edit CSS, which will give you a blank text box you can write custom CSS into.
Paste in the following CSS and hit Save:
/** Version 2.3
** Copyright Jason Jarrett
**/
.ghx-avatar-img {
font-size: 15px;
height: 15px;
line-height: 15px;
width: 15px;
}
.ghx-band-1 .ghx-issue .ghx-avatar {
left: 100px;
right: auto;
top: -3px;
}
.ghx-band-3 .ghx-issue .ghx-avatar {
top: 0px;
}
.ghx-issue .ghx-extra-fields {
margin-top: 5px;
}
.ghx-issue .ghx-flags {
left: 20px;
top: 5px;
}
.ghx-issue .ghx-highlighted-fields {
margin-top: 5px;
}
.ghx-issue .ghx-type {
left: 5px;
top: 5px;
}
.ghx-issue-content {
font-size: 12px;
margin-top: 3px;
padding: 5px;
}
.ghx-issue-fields .ghx-key {
margin-left: 30px;
}
.ghx-issue.ghx-has-avatar .ghx-issue-fields, .ghx-issue.ghx-has-corner .ghx-issue-fields {
padding-right: 0px;
}
/* the below adjusts the backlog view */
.ghx-backlog-column .ghx-plan-extra-fields.ghx-row {
float: right;
position: relative;
right: 70px;
margin: 0;
margin-top: -15px;
height: 18px;
}
.ghx-backlog-column .ghx-issue-content, .ghx-backlog-column .ghx-end.ghx-row {
padding: 0;
margin: 0;
}
/* filters */
.js-quickfilter-button {
padding: 0;
}
.js-sprintfilter {
white-space: nowrap;
}
.js-sprintfilter > span {
padding: 0;
}
dl dt, dd {
margin: 0;
padding: 0;
}
span.ghx-extra-field {
margin-right: 100px;
}
span.ghx-end.ghx-extra-field-estimate {
padding-top: 0;
padding-bottom: 0;
}
div.ghx-plan-extra-fields.ghx-plan-extra-fields-1.ghx-row {
height: 20px;
}
.ghx-issue-compact .ghx-row {
margin: 0px;
}
div.ghx-plan-extra-fields.ghx-plan-extra-fields-1.ghx-row {
margin-top: 0;
}
Now you should get a little bit more visible data on the page and be able to avoid hovering over titles to get enough context of the ticket to immediately know what it is.
Happy (as best you can) JIRA'ing
With docker-compose, a single docker-compose up starts everything. However, what if you want to replace an existing container without tearing down the entire suite of containers?
starts everything. However, what if you want to replace an existing container without tearing down the entire suite of containers?
For example: I have a docker-compose
project that has the following containers.
I had a small configuration change within the CouchDB container that I wanted to update and re-start to get going but wasn't sure how to do that.
I'm hoping there are better ways to go about this (I'm still learning), but the following steps are what I used to replace a running docker container with the latest build.
docker-compose build couchdb
(docker-compose build <service_name>
where service_name
is the name of the docker container defined in your docker-compose.yml
file.) Once the change has been made and the container re-built, we need to get that new container running (without affecting the other containers that were started by docker-compose).
docker-compose stop <service_name>
<-- If you want to live on the edge and have the shut-down go faster, try docker-compose kill <service_name>
docker-compose up -d --no-deps <service_name>
<-- this brings up the service using the newly built container.The -d
is Detached mode: Run containers in the background, print new container names.
The --no-deps
will not start linked services.
That's it... at least for me, it's worked to update my running containers with the latest version without tearing down the entire docker-compose set of services.
Again, if you know of a faster/better way to go about this, I'd love to hear it. Or if you know of any down-sides to this approach, I'd love to hear about it before I have to learn the hard way on a production environment.
Thanks to Vladimir in the comments below - you can skip several steps above and do it with a single command
docker-compose up -d --no-deps --build <service_name>
I tested this and was able to avoid the build
, kill
, and up
commands with this one-liner.
Happy Container Updating!
I'm not really strong with server infrastructure and some of this is "figure it out as I go", while more of it is asking for help from a good friend @icecreammatt who has been a HUGE help as I stumble through this.
But at the end of this tutorial our goal is to satisfy the following requirements.
Below are some core requirements this walk-through should help address. There is likely room for improvement, and I'd love to hear any feedback you have along the way to simplify things or make them more secure. But hopefully you find this useful.
I want there to be some semblance of a release process with various deployment environments. Push changes to qa
regularly, push semi-frequently to stage
and when things are good, ship a version to production.
If my production site is my-docker-test-site.com
then I would also have qa.my-docker-test-site.com
and stage.my-docker-test-site.com
I want to be able to run multiple environments like qa
, stage
, prod
in the same system. (prob not that many environments - but you get the picture)While I'd like the ability to run as many environments as I listed above, I will likely use a qa
and prod
for my small site, but I think the pattern is such that we could easily setup whatever environments we need.
What we want to do is essentially walk through how I'm thinking about accomplishing the above high level requirements using a simple node js hello world application. This app is a basic node app that renders some environment information just to prove that we can correctly configure and deploy various docker containers for environments such as qa
, stage
or prod
.
In the end, we should end up with something that I like to imagine looks a bit like this diagram:
I'm going to use DigitalOcean as the cloud provider in this case, but I think this pattern could be to other docker hosting environments.
Below is a basic view of the file structure of our site. If you're following along, go ahead and create this structure with empty files, we can fill them in as we go...
.
|____app
| |____Dockerfile
| |____server.js
|____docker-compose.yml
Let's start with the ./app/*
files:
This is a simple nodejs server that we can use to show that deployment environment variables are passed through and we are running the correct environment
, as well as showing a functioning web server.
File: ./app/server.js
var http = require('http');
var server = http.createServer(function(req, res){
res.writeHead(200, {"Content-Type": "text/plain"});
res.end(`
Hello World!
VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}
`.split('\n').join('<br>'));
});
server.listen(80);
The goal of this file is to run a nodejs web server that will return a text document with Hello World!
along with the environment variables that the current container is running under, such as qa
.
This could easily be replaced with a python, php, ruby, or whatever web server. Just keep in mind the rest of the article may assume it's a node environment (like the Dockerfile
up next). So adjust accordingly.
Below is pretty basic and says, load up and run our nodejs server.js
web app on port 80
.
File: ./app/Dockerfile
# Start from a standard nodejs image
FROM node
# Copy in the node app to the container
COPY ./server.js /app/server.js
WORKDIR /app
# Allow http connections to the server
EXPOSE 80
# Start the node server
CMD ["node", "server.js"]
So now that we have a basic application defined, we can't host multiple versions of the app all using port 80
without issue. One approach we can take would be to place an Nginx proxy in front of our containers to allow translation of incoming domain name requests to our various web app containers which we'll use docker to host on different ports.
The power here is we don't have to change the port within the docker container (as shown in the Dockerfile
above) but we can use the port mapping feature when starting up the docker container to specify different ports for different environments.
For example I'd like my-docker-test-site.com
to map to the production container, qa.my-docker-test-site.com
the qa
container of my site, etc... I'd rather not access my-docker-test-site.com:7893
or some port for qa
, stage
, etc...
To accomplish this we are going to use jwilder/nginx-proxy. Check out his introductory blog post on the project. We'll be using the pre-build container directly.
To spin this up on our local system let's issue the following command:
docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
This project is great, now, as we add new or remove containers they will automatically be added/removed to the proxy and we should be able to access their web servers through a VIRTUAL_HOST
. (more on how specifically below)
Before we get too far into the container environment of our app, we need to consider how the containers will be talking to each other.
We can do this using the docker network
commands. So we're going to create a new network and then allow the nginx-proxy to communicate via this network.
First we'll create a new network and give it a name of service-tier
:
docker network create service-tier
Next we'll configure our nginx-proxy container to have access to this network:
docker network connect service-tier nginx-proxy
Now when we spin up new containers we need to be sure they are also connected to this network or the proxy will not be able to identify them as they come online. This is done in a docker-compose file as seen below.
Now that we've defined our application with the server.js
and Dockerfile
and we have a nginx-proxy
ready to proxy to our environment-specific docker http servers, we're going to use docker-compose
to help build our container and glue the parts together as well as pass environment variables through to create multiple deployment environments.
Save this file as docker-compose.yml
version: '2'
services:
web:
build: ./app/
environment:
- NODE_ENV=${NODE_ENV}
- PORT=${PORT}
- VIRTUAL_HOST=${VIRTUAL_HOST}
- VIRTUAL_PORT=${PORT}
ports:
- "127.0.0.1:${PORT}:80"
networks:
default:
external:
name: service-tier
This file is all about:
build: ./app/
is the directory where our Dockerfile
build is.environment
variables are important. The VIRTUAL_HOST
and VIRTUAL_PORT
are used by the nginx-proxy to know what port to proxy requests for and at what host/domain name. (We'll show an example later) You can see an earlier exploratory post I wrote explaining more about environment vars.ports
example is also important. We don't want to access the container by going my-docker-test-site.com:8001
or whatever port we're actually running the container on because we want to use the VIRTUAL_HOST
feature of nginx-proxy to allow us to say qa.my-docker-test-site.com
. This configuration sets it up to only listen on the loopback network so the nginx-proxy can proxy to these containers but they aren't accessible from the inter-webs.networks:
we define a default
network for the web app to use the service-tier
that we setup earlier. This allows the nginx-proxy and our running instances of the web container to correctly talk to each other. (I actually have no idea what I'm saying here - but it is simple enough to setup and I think it's all good - so I'm going with it for now...)So with all of these pieces in place, all we need to do now is run some docker-compose commands to spin up our necessary environments.
Below is an example script that can be used to spin up qa
, and prod
environments.
BASE_SITE=my-docker-test-site.com
# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
This script is setting some environment variables that are then used by the docker-compose
command, and we're also setting a unique project name with
.
What we said just before this headline is a key part to this. What enables us to run essentially the same project (docker-compose/Dockerfile) but with different environment variables that define things like qa
vs prod
is that when we run docker-compose
we're also passing in a -p
or --project-name
parameter. This allows us to create multiple container instances with different environment variables that all run on different ports and in theory isolate themselves from the other environments.
The thinking here is you could have a single docker-compose.yml
file that has multiple server definitions like say a nodejs web
, couchdb
, and redis
database all running isolated within their environment. You can then use the environment variables to drive various items such as feature-toggling newly developed features in a qa
environment, but are not necessarily ready to run in a production environment.
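For instance, back in server.js the same NODE_ENV we pass through could gate an in-progress feature (a made-up toggle, not part of the demo app above):

// hypothetical feature toggle driven by the deployment environment
var showNewCheckout = process.env.NODE_ENV !== 'production';

if (showNewCheckout) {
    // wire up the half-built checkout flow in qa/stage only
}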
You probably want to play with this idea and test it out locally before trying to push it to a remote system.
One easy way to do this is to modify your /etc/hosts
file (on *nix) or follow this on windows to map the specific domain names you have setup for your environments to the actual service running docker. This will allow the nginx-proxy
to do its magic.
I'm currently still using docker-machine
to run my docker environment in a VirtualBox VM so my /etc/hosts
file looks like this.
##
# Host Database
#
# localhost is used to configure the loopback interface
# when the system is booting. Do not change this entry.
##
127.0.0.1 localhost
255.255.255.255 broadcasthost
::1 localhost
192.168.99.100 qa.my-docker-test-site.com
192.168.99.100 stage.my-docker-test-site.com
192.168.99.100 my-docker-test-site.com
If you have the docker containers running that we've worked through so far (for all environments) we should be able to visit qa.my-docker-test-site.com
in the browser and hopefully get this:
Hello World!
VIRTUAL_HOST: qa.my-docker-test-site.com
NODE_ENV: qa
PORT: 8001
Also try out the production environment at my-docker-test-site.com
to verify it is working as expected.
THIS IS AWESOME :) I was actually quite happy to have traveled this far in this exploration. But now let's take it up a notch and deploy what we just built locally to DigitalOcean in the cloud.
Now how do we get this locally running multi-environment system up to a server in the cloud?
Just tonight while researching options I found this simple set of steps to get it going on DigitalOcean. I say simple because you should see the original steps I was going to try and use to deploy this... sheesh.
These are the steps we're going to walk through.
Tonight I discovered this blog post on docker that describes using docker-machine with the digitalocean driver to do basically everything we did above - but IN THE CLOUD - kind of blew me away actually.
First make sure you've signed up with a DigitalOcean and are signed in.
Next we're going to use a cool feature of docker-machine where we can leverage the DigitalOcean driver to help us create and manage our docker images.
Complete Step 1
and Step 2
in the following post DigitalOcean example to acquire a DigitalOcean personal access token.
Now that you have your DigitalOcean api token (you do, right?), either pass it directly into the below command (in place of $DOTOKEN)
or set a local var as demonstrated.
DOTOKEN=XXXX <-- your token there...
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment
NOTE: There was an issue where this used to work but stopped with the docker-machine DigitalOcean default image (reported here). To work around this, try using a different image name.
EX:
export DIGITALOCEAN_IMAGE="ubuntu-16-04-x64"
docker-machine create --driver digitalocean --digitalocean-access-token $DOTOKEN docker-multi-environment
If you refresh your DigitalOcean droplets page you should see a new droplet called docker-multi-environment.
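You can also confirm it from the terminal - docker-machine ls should now show the new machine alongside any local ones (columns trimmed here; yours will show the droplet's real IP):

docker-machine ls
# NAME                       DRIVER         STATE     URL
# default                    virtualbox     Running   tcp://192.168.99.100:2376
# docker-multi-environment   digitalocean   Running   tcp://<droplet-ip>:2376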
We can now configure our local terminal to allow all docker commands to run against this remotely running docker environment on our newly created droplet.
eval $(docker-machine env docker-multi-environment)
If you run docker ps it should be empty, but this is literally listing the containers that are running up at DigitalOcean in our droplet. How awesome is that?
Now that we can just speak docker in the cloud, run the following commands - these all assume we're executing them against the DigitalOcean droplet in the cloud!!
Spin up our nginx-proxy on the remote droplet:
docker run -d -p 80:80 --name nginx-proxy -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
Create our network:
docker network create service-tier
Tell nginx-proxy about this network:
docker network connect service-tier nginx-proxy
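If you want to double-check that last step, docker network inspect prints the network's details as JSON - you should see nginx-proxy listed under the Containers section:

docker network inspect service-tier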
I know this post has gotten a bit long, but if you've made it this far we're almost there...
Now run the following script in our local project's folder where we have the docker-compose.yml file.
Be sure to update BASE_SITE=my-docker-test-site.com with your domain name or a sub-domain like BASE_SITE=multi-env.my-docker-test-site.com
BASE_SITE=my-docker-test-site.com
# qa
export NODE_ENV=qa
export PORT=8001
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
# prod
export NODE_ENV=production
export PORT=8003
export VIRTUAL_HOST=$BASE_SITE
docker-compose -p ${VIRTUAL_HOST} up -d
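Since the qa and prod blocks above only differ by three values, you could also wrap the repetition in a small shell function - just a sketch, and the deploy_env name is mine, not part of the project:

BASE_SITE=my-docker-test-site.com

deploy_env() {
  export NODE_ENV=$1
  export PORT=$2
  # prod answers on the bare domain, every other env gets a subdomain prefix
  if [ "$NODE_ENV" = "production" ]; then
    export VIRTUAL_HOST=$BASE_SITE
  else
    export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
  fi
  docker-compose -p ${VIRTUAL_HOST} up -d
}

deploy_env qa 8001
deploy_env production 8003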
You should now be able to run docker ps or docker-compose ps and see 3 containers running: the nginx-proxy, your qa site, and the prod site.
All that's left is to make sure DNS is configured and pointing to our nginx-proxy front-end...
While playing with this I kept tearing down droplets and re-building them as I worked through this tutorial, and I kept forgetting to adjust my DNS settings. However, right in the middle of writing this tutorial DigitalOcean came out with Floating IPs, which wasn't perfect but definitely made this easier to work with. I didn't have to always update the IP address of my droplet; instead I just updated the floating IP to point to the newly created droplet.
I'm assuming you've already purchased a domain name that you can set up and configure on DigitalOcean, so I don't want to go too far into this process.
I also think DNS is out of scope for this post (as there are many others who can do a better job) but I used some great resources such as these while configuring my DigitalOcean setup.
If you've made it this far, you hopefully have a DigitalOcean droplet that is now serving qa and prod HTTP requests.
NICE!!!
Now the most important thing - how to seamlessly update an environment with a new build...
Now that we've deployed our site, let's walk through this a little further: let's make a modification to our qa site and see if we can get it deployed without causing any downtime, especially to the prod site - but maybe we can also get an in-place deployment done and have little-to-no downtime in qa as well.
I wrote that paragraph above the other night near bedtime and, as I'm learning some of this on the fly, had no idea if this would be easy enough to accomplish - but to my surprise deploying an update to qa was a piece of cake.
For this test I made a simple change to my node web server code so I could easily see that the change was deployed (or not).
I turned Hello World! into Hello World! V2 below.
File: ./app/server.js
var http = require('http');
var server = http.createServer(function(req, res){
res.writeHead(200, {"Content-Type": "text/plain"});
res.end(`
Hello World! V2
VIRTUAL_HOST: ${process.env.VIRTUAL_HOST}
NODE_ENV: ${process.env.NODE_ENV}
PORT: ${process.env.PORT}
`.split('\n').join('<br>'));
});
server.listen(80);
I then used docker-compose to bring up another "environment" using the same qa VIRTUAL_HOST as before.
BASE_SITE=my-docker-test-site.com
export NODE_ENV=qa
export PORT=8004
export VIRTUAL_HOST=$NODE_ENV.$BASE_SITE
docker-compose -p ${VIRTUAL_HOST}x2 up -d
NOTICE how we added x2 to the -p parameter (just to give it a different project name, since it's a different version).
This will bring up another docker container with our updated web application and to my surprise the nginx-proxy automatically chose this new container to send requests to.
So if you run docker ps you should see 4 containers running: 1 nginx-proxy, 1 prod container, and 2 qa containers (with different names).
You can leave both containers running for the moment while you test out the new release.
One neat thing about this: if there was something seriously wrong with the new qa release, you could just stop the new container (docker stop <new_container_id>) and the proxy will start redirecting back to the old qa container. (That only works, of course, if your deployment was immutable - meaning the new container didn't run some one-way database migration script... but that's not something I want to think about or cover in this post.)
Once you're comfortable running the new version, you can bring down and clean up the older version.
docker ps # to list the containers running
docker stop <old_qa_container_id>
docker images # to list the images we have on our instance
docker rmi <old_qa_image_id>
You probably don't want to run the sample node script from above forever, as you'll be charged some money by DigitalOcean, and I'd feel bad if you received a bill beyond a few pennies for this little test...
The following command will completely remove the droplet from DigitalOcean.
docker-machine rm docker-multi-environment
I feel like I've done enough learning and sharing in this post. But there is still more to do...
If you want to check out the snippets above combined into a sample github repository I've put it up here.
I don't know if I'll blog about these, but I definitely want to figure them out. If you find a way to extend my sample above to include the following I'd love to hear about it...
Happy Docker Environment Building!
]]>I've really been enjoying the new async/await syntax that can be leveraged in recent JavaScript transpilers such as Babel and TypeScript, but when everything happens so fast in a local development environment, some U.I. interactions can get tough to test out. So how can we slow this down or, more appropriately, add some stall time?
As an example, let's say you have the following async javascript method:
var doSomething = async () => {
var data = await someRemoteOperation();
await doSomethingElse(data);
}
If the first or second asynchronous method in the example above was running too fast (as was the case for me), I was pleased to see how easy it was to inject a bit of code to manually stall the operation.
The snippet below is short and sweet... it just delays continued execution of an async operation for 3 seconds.
await new Promise(resolve => setTimeout(resolve, 3000));
Give it a name and make it a bit more re-usable.
async function stall(stallTime = 3000) {
await new Promise(resolve => setTimeout(resolve, stallTime));
}
The above helper can be used by just awaiting it (see below)
var doSomething = async () => {
var data = await someRemoteOperation();
await stall(); // stalls for the default 3 seconds
await stall(500); // stalls for 1/2 a second
await doSomethingElse(data);
}
Now we have an async function that runs slower - isn't that what you always wanted?
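One last note on the design: because an async function always returns a Promise, stall also works outside of async functions via .then - handy if you want a delay in plain callback-style code:

// the same stall helper from above, consumed without await
stall(2000).then(() => {
  console.log('ran two seconds later');
});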
Happy Stalling!
]]>As a creator, one of the happiest moments we can experience is getting into a state of "flow".
In positive psychology, flow, also known as the zone, is the mental state of operation in which a person performing an activity is fully immersed in a feeling of energized focus, full involvement, and enjoyment in the process of the activity. https://en.wikipedia.org/wiki/Flow_(psychology)
I've heard this flow state described as a process where the mind is so focused on the task at hand, so engulfed in the spirit of the process, that all other external processing of our environment and even our own bodily needs can be ignored. The brain puts so much energy and focus into this that things like the need to eat, sleep, or even visit the restroom get pushed aside (for as long as possible - waiting until my bladder is SCREAMING at me).
I'm quite happy when I'm making progress on my creation(s), as they can often invoke this flow state. The opposite of that "enjoyment" can certainly happen too - projects can frustrate the heck out of me sometimes - and if that ever bleeds into our relationship, I'm sorry.
I wish I could convey the highs I experience while in "flow" as strongly as you've likely seen my frustrations about the lows. Sadly, without the lows - the struggle, the uphill battles, the cussing at the computer - I could possibly never really experience the feelings of success and overcoming that struggle, or enjoy them as much as I do.
Between work, family time, children, shopping, housework, sleep, and whatever else we fill our days with, it oftentimes feels like I get to apply very little time to this thing that I am truly driven by (maybe slightly addicted to) and excited about.
I know you try to give me time to work on these things. There are times you think you've given a Saturday morning or an evening for me to work on my thing. However, sadly for it to truly be a successful session, I need time and space with room to concentrate. An hour before bedtime makes me feel like I shouldn't even try, because it could take at least 30-40 min to get back into the project leaving so little time to be productive that it's not even worth starting. These are times when I decide to blow any amount of time I've been given and just waste it watching a show on Netflix. Not because I don't want to work on my thing, but because I know the amount of effort it will take to get into the flow state will take far too long to make it worth it. If I were to get into flow, I'm then going to want to stay there and likely push past my bed time (which is getting harder and harder to recover from).
I don't want this to sound like this creation/building thing is more important than my family. In fact it's not. If you look at my actions and track record, the amount of time I have pushed aside so I could help you with your endeavors - watching kids, taking on extra shopping trips and house duties, carrying the financial obligation (and strain) - while still finding time to spend with you in the evenings, at the expense of this thing I want to do, should prove that my commitment to the family (and you) is still a priority.
I don't know how to close this out and wrap it up, other than to say I love you. I love my children. I also love what I build. I would like to work with you to find a way to balance these things a little better.
]]>If you're building a component that uses any in-line styles, and you're not careful, you can lock the consumer of your component out of potential customizations they may require for their specific use-case (one you can't think of or foresee). Building components to be reusable and a little more OCP can be challenging, especially with how difficult it can be to get css layouts the way you (or the consumer of your component) may want...
As an example, let's create a simple img component to illustrate the point.
Let's say we have the following image component.
import React from 'react';
export default class Image extends React.Component {
render() {
return (
<div>
<img src={this.props.src} />
</div>
);
}
}
The above component is very simple and very specific.
Now let's say we allow our consumers to customize the height or width of the image. You may think: OK, simple, we'll just allow the consumer to specify height and width as props to the component. So the consumer could just go <Image height="20" width="20" src="someimage.png" />.
And you end up with something that could look like this.
import React from 'react';
export default class Image extends React.Component {
render() {
let imageStyle = {
height: this.props.height,
width: this.props.width
};
return (
<div>
<img src={this.props.src} style={imageStyle} />
</div>
);
}
}
Now this works for a while; the consumers of your component are happy they can control the height and width, and everyone's humming along merrily.
Then someone comes to you and says they are having some layout issues and need to control something like float, margin, or padding... This idea of extending the component with more props could become cumbersome if we have to do it for each and every potential layout option available.
How could we extend this generic pattern into something that allows the component to define a general set of happy defaults, while still giving the consumer complete control over layout?
We can use something like Object.assign() to easily accomplish this.
We can allow consumers to pass in their own style={...} property: provide a set of sensible defaults for the component, but allow the consumer of our component to completely override a style if necessary.
We can update our:
let imageStyle = {
height: this.props.height,
width: this.props.width
};
to the following pattern:
let imageStyle = Object.assign(
  {},                                 // target (start empty so nothing gets mutated)
  { height: "20px", width: "20px" },  // pre-defined default inline-styles for the component (placeholder values)
  this.props.style                    // allow consumers to override properties
);
Now if the consumer calls the component with <Image style={{height: "21px", width: "21px"}} src="someImage.png" />, the consumer's values will override any defaults provided. And they can extend the style with anything else they may need.
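Putting it all together, the component might look like the sketch below (the 20px defaults are placeholder values I picked for illustration). In newer JavaScript you could express the same merge with object spread - { ...defaults, ...this.props.style } - since later properties win there too.

import React from 'react';

export default class Image extends React.Component {
  render() {
    // defaults first, then the consumer's style so their values win
    let imageStyle = Object.assign(
      {},
      { height: "20px", width: "20px" }, // placeholder defaults
      this.props.style
    );
    return (
      <div>
        <img src={this.props.src} style={imageStyle} />
      </div>
    );
  }
}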
Happy Componentization!
]]>Building some_server
Step 1 : FROM alpine
---> 13e1761bf172
Step 2 : ENV DEMO_VAR WAT
---> Using cache
---> 378dbaa4a048
Step 3 : COPY docker-entrypoint.sh /
---> e5962cef9382
Removing intermediate container 43fa24c31444
Step 4 : ENTRYPOINT /docker-entrypoint.sh
---> Running in 5a2e19bf7a45
---> 331d2648d969
Removing intermediate container 5a2e19bf7a45
Successfully built 331d2648d969
Recreating exampleworkingdockercomposeenvironmentvars_some_server_1
ERROR: for some_server rpc error: code = 2 desc = "oci runtime error: exec format error"
Traceback (most recent call last):
File "<string>", line 3, in <module>
File "compose/cli/main.py", line 63, in main
AttributeError: 'ProjectError' object has no attribute 'msg'
docker-compose returned -1
I had a Dockerfile that used an entrypoint that looked like ENTRYPOINT ["/docker-entrypoint.sh"].
The real problem was the docker-entrypoint.sh script was missing a shebang (#!).
So changing this
echo "ENV Var Passed in: $DEMO_VAR"
to this
#!/bin/sh
echo "ENV Var Passed in: $DEMO_VAR"
solved my issue!
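Two quick checks that would have saved me some time - verifying the first line of the script really is the shebang, and making sure the file is executable (a missing exec bit fails with a similarly cryptic error):

head -1 docker-entrypoint.sh   # should print: #!/bin/sh
chmod +x docker-entrypoint.sh  # ensure the exec bit is set before building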
Also note that the base image (FROM <some linux distro>) may change what your required shebang should be.
Whew!
]]>While working on it (and to make it a bit more generic), my next step was to find a way to pass the database admin user/pass (and other configuration options) into the containers as environment variables, which took me way longer to figure out than it should have...
Hopefully this post helps it click for you a little faster than it did for me :)
If you land here, you've likely already poured over the different parts of documentation for docker, docker-compose and environment variables.
Things like:
In case things drift in the product or docs, this post was written using docker-compose version 1.7.1, build 0a9ab35, so keep that in mind...
I think the difficult thing for me was piecing together the various ways you can get environment variables defined and the necessary mapping required within the docker-compose file.
For me it didn't click until I was able to think about the stages that need to exist for an environment variable to go from the development computer -> docker-compose.yml -> docker container.
For now I'm thinking of using the following model...
------------------------     ----------------------     --------------------
| Env Source           |     | docker-compose.yml |     | Docker Container |
|                      |     |                    |     |                  |
| A) .env file         | --> | map env vars using | --> | echo $DEMO_VAR   |
| B) run-time terminal |     | interpolation      |     |                  |
|    env var           |     | in this file.      |     |                  |
------------------------     ----------------------     --------------------
If you want to see all of this in one place, check out this github example, which is outlined below.
The example is laid out like so...
.
|____.env
|____docker-compose.yml
|____env-file-test
| |____docker-entrypoint.sh
| |____Dockerfile
|____README.md
.env file: This is where you can place each of the environment variables you need.
DEMO_VAR=Test value from .env file!
As the docs say, you can use # for comments and blank lines in the file - all other lines must be in the format ENV_VAR=ENV_VALUE.
Note: environment variables in your terminal's context will take precedence over the values in the .env file.
docker-compose.yml:
version: "2"
services:
some_server:
build: ./env-file-test
environment:
- DEMO_VAR=${DEMO_VAR}
The above file is the part where I got tripped up; once I added the environment: section it all clicked.
You likely don't want every one of your development or production server's environment variables to show up inside your container. This section acts a bit like the docker run -e ENV_VAR=FOO option and allows you to select the specific environment variables that are to be passed into the container.
I like the declarative approach of this file as it makes environment variable dependencies explicit.
env-file-test/Dockerfile:
FROM alpine
ENV DEMO_VAR WAT
COPY docker-entrypoint.sh /
ENTRYPOINT ["/docker-entrypoint.sh"]
Pretty standard Dockerfile, but one thing I learned is you can set up default environment variables using the docker ENV directive. These will be overridden by the .env file or variables in your terminal's environment.
env-file-test/docker-entrypoint.sh:
#!/bin/sh
echo "ENV Var Passed in: $DEMO_VAR"
This was just a sample script to print out the environment variable.
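To see the precedence rules in action, you can run the example twice - once letting the .env file supply the value and once overriding it from the terminal (the override value here is just an example, and your log prefixes will vary with the project name):

# value comes from the .env file
docker-compose up
# some_server_1 | ENV Var Passed in: Test value from .env file!

# a terminal environment variable wins over .env
DEMO_VAR="from my terminal" docker-compose up
# some_server_1 | ENV Var Passed in: from my terminal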
The docs say you can specify your own env-file, or even multiple files, however I could not get that working. It always wanted to choose the .env file.
Also note: if you have an environment variable specified in your terminal that also exists in your .env file, the terminal's environment takes precedence over the .env file.
Happy Environment Setup!
]]>I recently needed to set up a different user.name and user.email for git on my work computer. This is really just a post so that when I forget how I did this in the future, I can google my own blog and be reminded...
I have always struggled with accidentally committing to an OSS project with my work name/email or, vice-versa, committing to a work git repo with my personal name/email.
For most, user.name shouldn't change, unless your company ties your user.name to something specific to the company like a username. (Contrast: user.name = Jason Jarrett and user.name = jjarrett.)
When I clone projects I always clone them into a folder structure that looks like
|____~/code
| |____personal/ <--- this is where I would put some OSS projects that I may be working on or contributing to.
| |____work/ <--- obviously work code goes in here
Thanks to this post, I learned about direnv and, following the last option it describes, I basically used these steps...
Install direnv - brew install direnv (What about Windows? See this github issue and help make it work.)
Create a .envrc file for each profile that needs to be set up, with the following content:
export GIT_AUTHOR_EMAIL=<your email>
export GIT_AUTHOR_NAME=<your name>
export GIT_COMMITTER_EMAIL=<your email>
export GIT_COMMITTER_NAME=<your name>
After installing direnv and creating the .envrc files, direnv will prompt you to approve each env file, which we accept by running direnv allow.
Now I should have the following structure
|____~/code
| |____personal/
| |____.envrc <-- env settings with personal git user/email
| |____work/
| |____.envrc <-- env settings with work git user/email
Each time we cd into either the personal/ or work/ folder, direnv will set up our shell with the environment variables contained in that folder's .envrc file. Git respects these env vars, so now we don't have to think about committing the wrong name/email to the wrong Git repositories.
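A quick way to verify it's working: git var prints the identity Git will actually use, honoring those environment variables:

cd ~/code/work
git var GIT_AUTHOR_IDENT      # should show your work name/email
cd ~/code/personal
git var GIT_AUTHOR_IDENT      # should now show your personal name/email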
Happy Gitting!
]]>