Using make - an opinionated guide for javascript developers pt.2

Saturday, 15th Apr 2017

This post is part 2 of a two part series


Summary

In this post, I'll share some tips on working make into your javascript project, followed by an example makefile you could build off. The project setup I've chosen as an example is very frontend focused, but it's easily adapted to a backend if needed.

If you need a refresher on make, I'd suggest you check out part 1 on using make.

1. Add node_modules/.bin to your path

PATH := node_modules/.bin:$(PATH)

build:
    webpack -p      # it works!

First up, it's a good idea to amend the environment variable PATH to include our project's node_modules/.bin directory. This way we can be sure make is picking up the locally installed binaries for our tasks.
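
If you want to sanity-check that the local binary is the one being picked up, a quick throwaway task like this does the trick (the check task name is just for illustration):

check:
    which webpack    # should print <project root>/node_modules/.bin/webpack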

2. Working around make's last-modified check using .PHONY

make decides what to build by keeping track of the last-modified times of files: it compares targets against their dependencies and only rebuilds the parts of the system where files have changed. This works as a great development tool for compiled projects but is mostly unnecessary if you're using webpack, mocha or any other tool which has "watch" tasks.

We can force make to always build, regardless of last-modified checks, by:

  1. using the CLI flag -B / --always-make
  2. using the special .PHONY task in our makefile

We'll use .PHONY as it gives us greater control. Here's an example.

Example 1.

.PHONY: test

test:
    mocha **/*.test.js

Example 2. What if we have multiple tasks?

# bad

.PHONY: clean install test build deploy

# good

.PHONY: all
all: clean install test build deploy

3. Use .SILENT to reduce output

When make runs, it will print out the commands for each task being run, which can get a little verbose if you're checking for errors. Use the special .SILENT task to prevent this default output. If you're echo-ing things to stdout, these will still show.

.SILENT:
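
For example, with .SILENT at the top of the makefile, only the echoed message shows up in the output, not the command itself (the greet task below is just for illustration):

.SILENT:

greet:
    echo "Quietly saying hello"

$ make greet
Quietly saying hello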

4. Checking if we're in "production"

Example 1. Checking at environment variable level

AWS_BUCKET := $(if $(filter production,$(NODE_ENV)),'my_production_bucket','my_staging_bucket')

deploy:
    echo $(AWS_BUCKET)

Example 2. Checking at task level

deploy:
    if [ "$(NODE_ENV)" = "production" ]; \
    then \
        AWS_BUCKET='my_production_bucket'; \
    else \
        AWS_BUCKET='my_staging_bucket'; \
    fi; \
    echo $$AWS_BUCKET

Example 3. Checking with a function

deploy:
    AWS_BUCKET=$(call isProduction,"my_production_bucket","my_staging_bucket"); \
    echo $$AWS_BUCKET

define isProduction
$(if $(filter production, $(NODE_ENV)),$1,$2)
endef

5. Getting the version number from package.json

Example 1. Using a function

archive:
    echo "Now zipping version $(call get_version)"
    tar -czf archive-$(call get_version).tar.gz -C ./build .

define get_version
$(shell cat package.json | sed -n 's/"version": "\([^"]*\)\",/\1/p' | tr -d '[:space:]')
endef
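
If you'd rather not maintain the sed expression, an alternative sketch (leaning on node, which will already be around in a javascript project) is to let node read package.json for you:

define get_version
$(shell node -p "require('./package.json').version")
endef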

6. Overriding errors

make will stop if it hits a non-zero exit code. If, for any reason, you need to handle or bypass errors, here's an example of how you can do that.

Example 1.

lint:
    eslint src; \
    if [ $$? -ne 0 ]; then ...; exit 0; fi
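
make also has this built in: prefixing a recipe line with - tells make to ignore a non-zero exit status from that command and carry on. For example:

lint:
    -eslint src    # failures are reported but won't stop make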

7. Putting it all together

Here is a basic layout of what a makefile for a javascript project could look like.

PATH := node_modules/.bin:$(PATH)

.PHONY: all
.SILENT:

all: clean install lint test build deploy

clean:
    rm -rf node_modules/*
    rm -rf build/*

install:
    npm install

lint:
    eslint src

test:
    mocha **/*.test.js

build:
    webpack -p

deploy:
    aws s3 rm s3://mybucket --recursive
    aws s3 cp build s3://mybucket --recursive

Conclusion

In part 1, I gave a crash course on make, makefiles and how to write them. In part 2, I focused in on some specific pointers on using make for javascript projects.

I hope you've found both parts educational and at the very least convinced some of you that make can be a viable tool in your daily project workflows.

Where to next?

If you think make is for you, then you should read up on the official man pages and how others are using it. You'll be surprised that we've only really covered a small fraction of what's possible with make, and I'll bet you'll stumble upon some tricks of your own.

Are you using make? Or getting started using make? Have questions?

This post is updated periodically as and when I receive feedback so shoot me a line and, time permitting, I'll do my best to get back to you as soon as I can.

This post is part 2 of a two part series

Using make - an opinionated guide for javascript developers pt.1

Sunday, 9th Apr 2017

This post is part 1 of a two part series


Summary

make is useful for stringing together multiple tasks to automate a build process. In a sufficiently sized project, there are various tasks involved beyond the actual "building": moving files around, running tests, deploying to a remote location, and so on. This is where make can make the process manageable.

When running make in a directory, it will look for a makefile by default. This is a plain ascii file which lists a bunch of user-defined tasks. Each task may have a list of dependencies: other tasks which will run beforehand[1]. Optionally, a task has one or more shell commands which are executed in succession when the task is called. It is also possible to have tasks which simply combine other tasks and have no commands of their own.

1. Getting started

To start using make, create a file called makefile in the top-level directory of your project.
Next, add the following code to your makefile (note that the indented command line must start with a tab character, not spaces):

hello:
    echo "Hello from make!"

From the command line, run the command make:

$ make
Hello from make!

2. Dependencies

In the following example, there are three named tasks clean, build and deploy. clean has no dependencies but build has one (which is clean) and deploy has one (which is build). Their respective commands are listed beneath each task and indented once.

clean:
    rm -rf ./build

build: clean
    webpack -p

deploy: build
    aws s3 cp ./build s3://mybucket --recursive

In the deploy task, we don't have to specify clean as a dependency as it is a dependency of build. Running deploy will run clean, then build before running our deploy task.

To have more than one dependency, simply add them to the dependency line. In the following example, deploy has two dependencies, test and build. Dependencies are executed left to right; that is to say, test will run first, then build.

deploy: test build 
    aws s3 cp ./build s3://mybucket --recursive

3. Executing a task

When you run make with no arguments, it'll run the first task in the makefile. To run a specific task, just use the name of the task as the first argument.

# Run the first task in the makefile
$ make

# Run the deploy task
$ make deploy

4. Multiple commands per task ...

It's possible to have multiple commands under a single task. The commands will execute one after the other. Just remember to keep the indentation the same as the first command.

deploy: build
    aws s3 rm s3://mybucket --recursive
    aws s3 cp ./build s3://mybucket --recursive

5. ... but watch out for context

In make, each line of a task is treated as a separate shell session. This usually means you won't be able to pass outputs or return values from one command to the next[2]. However, you can use a \ at the end of each line to work around this. Also note that a single $ is interpreted by make, so shell variables inside a task need to be written as $$VARIABLE.

# This example will fail.

deploy: build
    AWSBUCKET=mybucket
    aws s3 rm s3://$$AWSBUCKET --recursive          # AWSBUCKET is undefined,
    aws s3 cp ./build s3://$$AWSBUCKET --recursive  # here too

# This example is ok ...

deploy: build
    AWSBUCKET=mybucket; \
    aws s3 rm s3://$$AWSBUCKET --recursive; \
    aws s3 cp ./build s3://$$AWSBUCKET --recursive

# ... because it's the equivalent of writing

deploy: build
    AWSBUCKET=mybucket; aws s3 rm s3://$$AWSBUCKET --recursive; aws s3 cp ./build s3://$$AWSBUCKET --recursive

6. Working with environment variables

You can set environment variables in your makefile. Note that these will override existing environment variables[3].

NODE_ENV = production

test:
    echo "We'll test in $(NODE_ENV)!"
$ make
We'll test in production!

$ NODE_ENV=staging make
We'll test in production!

We can flip this behaviour by using the -e flag.

$ NODE_ENV=staging make -e
We'll test in staging!
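
Another option, which doesn't need -e at all, is to pass the variable as an argument to make; command line assignments take precedence over assignments made in the makefile.

$ make NODE_ENV=staging
We'll test in staging!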

Technically, we can override our overrides like so, but there are subtleties in how the variables are referenced: $$NODE_ENV reads the shell variable we just set, while $(NODE_ENV) is the make variable, which is expanded to "production" before the recipe even runs.

NODE_ENV = production

test:
    NODE_ENV=staging; \
    echo "We'll test in $$NODE_ENV!"; \
    echo "We'll release in $(NODE_ENV)!"

$ make
We'll test in staging!
We'll release in production!

Watch out for recursive errors like this one.

PATH = node_modules/.bin:$(PATH)

...

$ make
makefile:1: *** Recursive variable `PATH' references itself (eventually).  Stop.

You can fix this by using := assignment instead.

PATH := node_modules/.bin:$(PATH)
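
The difference between the two: = defines a recursively expanded variable, whose right-hand side is re-evaluated every time the variable is used (hence the self-reference error above), while := defines a simply expanded variable, evaluated once at the point of assignment - so it can safely reference the old value of PATH. A quick illustration with two throwaway variables:

# re-evaluated on every use
NOW_LAZY = $(shell date)

# evaluated once, when the makefile is read
NOW_EAGER := $(shell date)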

7. Functions

make also has support for functions. You can define them like so:

define <fn name>
...
endef

# you can pass arguments $1, $2 etc. like a regular bash function
define say_something
    echo "$1"
endef

To call a function use

$(call <fn name>,[arguments,...])

deploy: build
    $(call say_something,"Deploying!")
    AWSBUCKET=mybucket; \
    aws s3 rm s3://$$AWSBUCKET --recursive; \
    aws s3 cp ./build s3://$$AWSBUCKET --recursive

Alternatively, you may find $(shell ...) useful if you just need the output of a one-off shell command.

$(shell ...)

deploy: build
    echo "Deploying with $(shell aws s3 --version)"
    ...

8. Conditionals

As you've noticed by now, make does have a few builtin functions, e.g. $(shell ...).
I don't have a comprehensive list, but here are two I've found useful for conditionals.

$(if [value],[success],[fail])

  • Where value, success, fail can be a string or a function
  • To trigger a success, value must satisfy [[ -n value ]]
  • To trigger a fail, value must satisfy [[ -z value ]]

# will always return "15%", since "active" is not null
SPECIAL_OFFER = $(if "active","15%","0%") 

# will return "0%"
SPECIAL_OFFER = $(if ,"15%","0%")

# real world scenario, you'll probably call a function
SPECIAL_OFFER = $(if $(shell ...), "15%", "0%")

$(filter [value1], [value2])

  • Where value1 and value2 can be a string or function
  • Returns value1 if value1 == value2, otherwise returns an empty string
  • Use it in conjunction with $(if ...) - see the combined example below

$(filter abc, abc)      # abc
$(filter abc, def)      # empty string
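
Putting the two together gives a handy pattern for branching on an environment variable, for example picking a deploy bucket based on NODE_ENV:

AWS_BUCKET = $(if $(filter production,$(NODE_ENV)),my_production_bucket,my_staging_bucket)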

Conclusion

Congrats on making it this far.
Although what I've said here only really scratches the surface, it should be enough for anyone to start writing productive makefiles.

In the next article, I do just that and show you some tips and tricks which will help you get started on writing a makefile for your javascript projects.

This post is part 1 of a two part series


  1. I should write a bit of a disclaimer here that make was designed to rebuild things when the files they depend on change, and "dependencies" are the means to specify which files to check. This is pretty advanced stuff and quite useful for the javascript developer, but a little beyond the scope of this post. ↩︎

  2. Outputs and/or return values via stdout. If you're writing to a file then reading from it later, you should be good. ↩︎

  3. Probably good to know that the variables are only overridden for the scope and duration of the script and the changes do not propagate to the shell session. ↩︎

The easiest test you can write right now

Saturday, 14th Jan 2017

Summary

So this is a post I've been meaning to write for a while.

I've spoken briefly about this topic with friends and colleagues, and I'm of the belief that if you identify purely as a frontend developer, I can assume 99% of the time that you do not write tests.

If I'm right, this is not to say you are a bad developer. Far from it.

For a lot of developers out there, refreshing the browser and looking for console errors is about as much testing as they need (and about as much as they can stomach!).

But this is a haphazard approach which relies on excessive cognitive load (to remember all the bits that need testing), experience with browser quirks (to have confidence across different OS/browser environments) and a human (who is by default error-prone and most probably stressed out with that looming deadline fast approaching).

Me: What if you have over 100 UI components?
You: I'll just fire up my browser and run through all the pages!
Me: ...

It also wouldn't give you much confidence going into a release - where it's likely there's a christmas-naughty-list length of changes - because you have no way to verify that what was working before does indeed work as expected now.

Breaking the cycle

I'm sure we've all experienced embarrassing moments where we've had seemingly innocent refactoring break core functionality in some way later on.

This post isn't to re-live those awkward moments but rather, to reduce occurrences of them going forward - whether it's in your current or future projects.

I speak from experience when I say frontend developers get into a habit of not writing tests. I've heard all the excuses - I've made them myself on many occasions. Writing comprehensive tests is always a daunting task and the more you think about it, the less likely you'll get started.

I broke out of that cycle by asking myself, what is the easiest test I could write right now?

Smoke tests

Ok, so you're probably asking: what does "easy" mean exactly?
Well for a start, I wanted to do the basic minimum with the idea of revisiting the tests once I had more time/motivation. This had to cover 3 basic criteria:

  1. I should write the least amount of code needed.
  2. I can write it in the shortest amount of time possible.
  3. I should actually be testing something meaningful.

After not much research, I found this pretty much describes smoke testing.

The idea is simple. Given that the worst thing that can happen in your web app is for something to throw an exception at startup, we write unit tests that check that each and every javascript component and widget we use is able to initialize.

Here's an example for a single component named "MyComponent"

// mycomponent.test.js

import MyComponent from 'src/MyComponent';

describe('myComponent', () => {
    it('initializes', () => {
        new MyComponent();
    }); 
});

And that's pretty much all there is to it.

Now do this for 100 components. The results are amazing.

Not only would you have written 100 tests in about 5 - 10 minutes, but you will have assured yourself that if something was to fall over, you'll know about it before anyone else does and most importantly, before it goes out to your customers.

Doubling up

Here's a tip. If your components have "render" methods, you could (and probably should) double up on your tests by ensuring these components are able to render as well.

Extending our MyComponent example from above

// mycomponent.test.js

import MyComponent from 'src/MyComponent';

describe('myComponent', () => {
    it('initializes', () => {
        new MyComponent();
    });

    it('renders', () => {
        const component = new MyComponent();
        component.render();
    });
});

Boom! An easy 200 tests in about the same time it takes to set up your webpack config.

Only the beginning

Once you start seeing the benefits and get into a habit of writing smoke tests (and unit tests in general), it becomes a starting point for more interesting tests. I find that once you're confident about the things that do work, you can focus more on writing tests for the things that don't.

If you've been holding off writing tests, give this a go! Hopefully I've been able to convince some of you, and I'd be interested to know if this has worked for anyone else.

Winter wonderland

Saturday, 24th Dec 2016

[photo: winterwonderland]

Taken at Hyde Park, London.

This photo marks a milestone for our family - it'll be the first Christmas we actually get to spend together.

What I learned writing mygpxdata

Tuesday, 19th Jan 2016

What is mygpxdata?

mygpxdata is a personal opensource python project which was released in Jan 2016. It parses gpx files to render an svg path of the route and calculate relevant stats from it. Learn more about the gpx file format or check out the mygpxdata project on github.

This project started as a weekend project and took a few more days on and off to get it onto github. Here are some lessons I picked up along the way.

1. The earth is not flat.

"No shit, sherlock!" I hear you say. Bear with me as I try to explain...
So my initial naive approach didn't take into account that longitude and latitude map onto the curvature of the planet, and when you don't apply a map projection to convert them to 2d coordinates, your numbers are sure to come out distorted.

I guess it's just not something you think about when you look at google maps and the like.

2. The gpx file is really just a list of longitudes, latitudes and timestamps, but you can calculate almost everything from it.

For example, with consecutive pairs of coordinates, you can use the haversine formula (aka the "as the crow flies" method) and sum up all the values to get the total distance travelled. Pretty cool.
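
For reference, the haversine distance between two points with latitudes \varphi_1, \varphi_2 and longitudes \lambda_1, \lambda_2 (in radians), on a sphere of radius r, is:

d = 2r \arcsin \sqrt{ \sin^2\!\left(\tfrac{\varphi_2 - \varphi_1}{2}\right) + \cos\varphi_1 \, \cos\varphi_2 \, \sin^2\!\left(\tfrac{\lambda_2 - \lambda_1}{2}\right) }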

The only thing you won't be able to get without a 3rd party API is address details (eg. street name, postcode). I also wanted to identify the park I was running in, but it doesn't seem any 3rd party API supports that at the moment.

3. Writing a package is hard work!

I would have published the project sooner but had some reservations about the quality of the code. After faffing around for a few days, I went ahead and hosted it on github.

Definitely a new-found respect for package authors everywhere!

What's next for mygpxdata?

Documentation and tests are pretty high on the list. I think the core functionality is pretty good for now and I don't really see the need to add any more.

About

Welcome to my personal blog where I talk mostly about the tools and techniques of interest in my day-to-day. I develop primarily using a javascript stack and my work revolves around SAAS & digital consumer products.


Author

Currently Senior Web Developer at @REASON, working with some amazing brands like Deutsche Telekom and GAP. Previously Lead Frontend Developer at @BEAUHURST, a data platform for clients like KPMG, Barclays and InnovateUK.

Available for hire 🔥
Please send inquiries to jim@height.io.