
Dev Blog

My Top 10 IntelliJ Commands

(not including Command + F or Command + Shift + F)

10. Command + 1

Show/hide the Project Tab

9. Option + up arrow

Selects the word at the cursor position. Keep hitting the up arrow to expand the selection scope.

8. Option + Shift + up (or down)

Move the current line up or down instead of using copy and paste.

7. Shift then Shift (double tap)

Brings up a menu to search EVERYTHING. It also lists the files you’ve worked on recently, which makes it easy to jump back and forth between them. The recent-files list is also available on its own via Command + E.

6. Command + Shift + N

Make a new scratch file (temporary file) if you just need to paste some stuff. You can get syntax highlighting in it as well.

5. Command + L

Go to line

4. Option + F1 (then Enter to select the first item)

Scrolls to the current file inside the Project view.
(Alternatively, in the Project view settings (the cog wheel at the top) you can enable “Autoscroll from source” so the Project view always follows the active editor.)

3. Command + Shift + A

Opens up the action search menu. Just type what you want to do and magic happens. Want to show whitespace for a file? Type “whitespace”. Want to convert tabs to spaces? Type “To Spaces”. Want to convert double quotes to single quotes? Type “quote”. Want a quick peek at the git history? Type “annotate”.
(Like Command + Shift + P on Sublime or VSCode but way more stuff baked in without plugins)

2. Control + G

Selects the next instance and adds a new cursor (like Command + D on Sublime or VSCode)

1. Command + Shift + O

Search for a file by name
(like Command + P on Sublime or VSCode). Some variants include Command + O to search class names and Command + Option + O to search symbols.

BONUS: A couple plugins I like:

The “Open in Git Host” plugin opens GitHub at the current line/file… useful for sharing links. For whatever reason the native handling of that in IntelliJ is disabled for me.
https://plugins.jetbrains.com/plugin/8183-open-in-git-host

Afterglow Theme
Because it just looks good….
https://plugins.jetbrains.com/plugin/8066-afterglow-theme


AngularJS not being executed in unit tests

Unit testing in Angular can be tricky because of the digest cycle. Your test code might need multiple calls to $scope.$apply() in order to get promises moving.

Recently I was testing something that happened after a $timeout(). In the unit test file, I was unable to see the results that I expected. I tried $scope.$apply() but that didn’t do anything.

After viewing the $timeout documentation, the answer was right there…

In tests you can use $timeout.flush() to synchronously flush the queue of deferred functions.

So that’s it… $timeout.flush() in your unit test… FTW.
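
To make that concrete, here’s a minimal sketch of a Jasmine test that relies on $timeout.flush(). The myApp module and delayedGreeter service are hypothetical stand-ins, and angular-mocks is assumed to be loaded:

describe('delayedGreeter', function() {
  var $timeout;
  var delayedGreeter;

  beforeEach(module('myApp'));

  beforeEach(inject(function(_$timeout_, _delayedGreeter_) {
    $timeout = _$timeout_;
    delayedGreeter = _delayedGreeter_;
  }));

  it('sets the greeting after the timeout fires', function() {
    // Hypothetical service method that does its work inside a $timeout()
    delayedGreeter.greetLater('hello');
    expect(delayedGreeter.greeting).toBeUndefined();

    // Synchronously flush the queue of deferred functions
    $timeout.flush();

    expect(delayedGreeter.greeting).toBe('hello');
  });
});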


How to run local commands like 'npm test' with Capistrano

Before I deploy, I want to run my test suite locally (executed with npm test). I also want the deployment to stop if the tests fail.

I created a new file in lib/capistrano/tasks. I called it test.rake. Here are the contents:

namespace :test do
  desc 'Run Tests'
  task :run do
    run_locally do
      unless system 'npm test'
        puts 'Test(s) failed, so the deployment is being aborted.'
        exit
      end
    end
  end
end

Then, in my deploy.rb deploy block, I run the command like this:

before :starting, 'test:run'

Laravel Service Provider Example

Making sense of Laravel Service Providers and Service Containers

I’ve read the service provider docs several times and I was always left with a vague understanding. Maybe a new explanation will be helpful? Here goes.

10,000 foot view

Laravel’s service providers are about a little more than dependency injection, but they are very much concerned with it. Here’s a non-academic description of dependency injection:

Dependency injection is sticking a required value in the parameter of a constructor or a setter method.

If a dependency that you’re injecting is simple and has no dependencies of its own, this is no problem. But sometimes, your dependencies need a more hands-on approach to their instantiation.

The service container is where you put the rules for making an instance of a class that is being auto-loaded.

The service provider is a place where you can put the service container code.

The Service Provider does some other stuff too.

An example

Here’s a class that takes in a dependency.

use App\Services\FaceService;

class MyController extends Controller
{
    public function __construct(FaceService $faceService)
    {
        $this->faceService = $faceService;
    }
}

In the above code we have a class whose constructor expects a FaceService object. Our use statement above shows where it’s located. Because Laravel is awesome, it will autoload the FaceService class.

In many cases, this works great, but if we try to use our code, we will see this error: Unresolvable dependency resolving [Parameter #0 [ <required> $apiKey ]] in class App\Services\FaceService

Why?

The FaceService has specific dependencies of its own. This is why we need to use the Service Container.

Here’s the constructor of the face service.

1
2
3
4
public function __construct($apiKey)
{
$this->apiKey = $apiKey;
}

So our dependency is an API key. It could be anything… another object… whatever. But if we inject this FaceService dependency into our controller, we need to ensure that the FaceService is able to be instantiated correctly.

That’s where the Service Container comes into play.

The Service Container helps us set up our dependency

Here, we take advantage of the service container:

use App\Services\FaceService;

$this->app->bind('App\Services\FaceService', function ($app) {
    return new FaceService(env('FACE_API_KEY'));
});

The above code says that when App\Services\FaceService is going to be auto-loaded…. do stuff. In our case, we want to instantiate a new FaceService object with our API key.

Since the first parameter in the bind method is a string, you can’t use the short version of the classname. You gotta send the whole path a la App\Services\FaceService.

Now by binding to the service container, Laravel knows how to auto-load FaceService.

This code lives in the Service Provider

You saw the Service Container code above but I didn’t say where to put it. It goes in a Service Provider. And specifically, it goes into the register method.

You can create a new Service Provider by copying an existing one or using php artisan make:provider ProviderName

<?php

namespace App\Providers;

use App\Services\FaceService;
use Illuminate\Support\ServiceProvider;

class FaceServiceProvider extends ServiceProvider
{
    public function register()
    {
        $this->app->bind('App\Services\FaceService', function ($app) {
            return new FaceService(env('MS_FACE_API_KEY1'));
        });
    }
}

The above code contains our service container binding, wrapped nicely inside the register method.

Finally, tell Laravel about the provider

The final thing we need to do is tell Laravel about the Service Provider. To do that we need to add to the providers array in config/app.php. Like so: App\Providers\FaceServiceProvider::class,

That’s it.

Conclusion

We can now see that the Service Container lets us define rules for how classes are auto-loaded during the dependency injection process. The Service Container code lives inside the Service Provider class, which must be registered in config/app.php.

The Service Provider does more than just do this type of registration, but this is a good starting point.


Cron jobs with node

Cron jobs not working!

Recently I discovered that my cron jobs weren’t working even though I could run the cron job command manually from the command line.

I made my cron entry by opening up my crontab file on my Ubuntu server.

root@Monkey:~# crontab -e

After reviewing how cron jobs work, I came up with this entry:

*/10 * * * * NODE_ENV=production node /var/www/email/current/cron/process-ready-jobs.js

This of course was the job that wasn’t working. There are two problems.

1. The script needs to be executable.

Here are the permissions for my script:

root@Monkey:/var/www/email/releases/20160407193434/cron# ls -l
total 4
-rw-rw-r-- 1 deployer deployers 720 Apr 7 15:34 process-ready-jobs.js

Notice that nobody has permission to execute the script. We need to add that in, so I added a deployment task (in Capistrano) to make the script executable.

chmod +x #{current_path}/cron/process-ready-jobs.js

Now my permissions are correct:

root@Monkey:/var/www/email/current/cron# ls -l
total 4
-rwxrwxr-x 1 deployer deployers 994 Jun 26 00:53 process-ready-jobs.js

2. The executable path for node needs to be explicit

When the cron job attempted to run, the node command could not be found. We need to include the full path.

We can find the full path with the which command:

root@Monkey:~# which node
/usr/local/bin/node

With that knowledge we can update our cron entry.

Here is my new cron entry:

*/10 * * * * NODE_ENV=production /usr/local/bin/node /var/www/email/current/cron/process-ready-jobs.js

More Info

Cron jobs are logged in the syslog

root@Monkey:~# tail /var/log/syslog

But I want some more info. By outputting data as my script runs, I can feed useful information to a log file.
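
For example (a hypothetical sketch of what process-ready-jobs.js might print), anything written to stdout will land in the log file once the redirect below is in place:

// Hypothetical logging inside process-ready-jobs.js; stdout is captured by the >> redirect in the cron entry
console.log('[' + new Date().toISOString() + '] process-ready-jobs starting');

// ... find and process the ready jobs here ...

console.log('[' + new Date().toISOString() + '] process-ready-jobs finished');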

Here’s my final cron entry with my log file in place:

*/10 * * * * NODE_ENV=production /usr/local/bin/node /var/www/email/current/cron/process-ready-jobs.js >> ~/cron-out.txt

Cron jobs are now working as expected!


From JSPM to Webpack for React

Why I chose JSPM

My favorite thing about JSPM was running through the documentation and everything just worked!

I chose it because of the way it handled all the different flavors of module dependency for you. This worked with modules from NPM or even straight from GitHub.

Development was also easy. System.js took care of all the loading on every page. I didn’t need to build anything or set up gulp rules. It just worked. During deployment, you would issue a single command on your production server and it would package everything up and, in some sort of wizardry, insert it into your app for you. For someone who doesn’t have much patience for learning the intricacies of build systems and settings, this was perfect.

Why I’m leaving

Once bundled, my project is fast and responsive, but while in dev mode it’s unbearably slow. Every single file is requested on a fresh page load. If you’re building a React app and modularizing as much as possible, this leads to hundreds of requests. This takes time. About 15 seconds. On every page load.

I dove into the mega threads on github. I upgraded to beta versions of JSPM, but nothing made much of a dent.

I would still consider JSPM for smaller projects and maybe this will all get resolved, but for now, here’s my path forward.

First thoughts

After reading up on webpack, it became clear that the flow is very different from JSPM. JSPM (or more accurately system.js) essentially (from my perspective) builds a bundle in the browser as it loads. With Webpack, we need to build our bundle explicitly whenever we change code, and then link to that bundle on our webpage. To build the bundle, we run webpack on the command line. Fortunately, webpack has a watch mode that can quickly rebuild bundles because it caches unchanged modules.

Initial setup

Our requirements are quite simple. I want to run a react app and make use of es6 features by using Babel.

The first step is to install webpack globally: npm install -g webpack.

…and then the dependencies we need: npm install --save-dev babel-loader babel-core babel-preset-es2015 babel-preset-react json-loader

The two babel-preset packages allow us to parse es6 and JSX files, and json-loader lets webpack import .json files.

The second step is to create our config file. It should be called webpack.config.js and live in the root of your project. This is mine:

module.exports = {
  entry: './main.js',
  output: {
    filename: 'bundle.js'
  },
  module: {
    loaders: [
      {
        test: /\.jsx?$/,
        loader: 'babel-loader',
        query: {
          presets: ['es2015', 'react']
        }
      },
      {
        test: /\.json$/,
        loader: 'json'
      }
    ]
  },
  resolve: {
    extensions: ['', '.js', '.jsx']
  }
};

Config Gotchas

JSX and JS loading

There are a couple gotchas. In JSPM land, I was using the .jsx suffix for all my JSX files. JSX kept failing to be parsed. I would get Module build failed: SyntaxError .... unexpected token. Of course all the search results say to use the babel-preset-react. What I didn’t realize is that in the module.loaders config value, my regular expression was only for .js files. If we use /\.jsx?$/ then that will work for both .js and .jsx files.

JSON errors

Secondly, I ran into module parse errors loading .json files. For this we need that second object in the module.loaders collection that connects json files with the json-loader package.

Fix as you go

Now for the hard part. With a decent, albeit basic, webpack config file in place, we can run webpack at the root of our directory to get our build. You’ll probably get some errors. The only way forward is to fix them as they come up.

Fix as you go Gotchas

My main strategy was to try and get the webpack command compiling. Once that was working, I would use the webpack --watch command as I messed with code changes. But I actually had a difficult time getting the --watch option to work. As I was hacking all this out I was doing plenty of npm installs and npm uninstalls. At one point I uninstalled some dependencies. So if you get a Module not found error on a 3rd party module, try re-installing all the packages you know that you need (including webpack!)

Filenames

I ran into errors finding .jsx files. I would get a webpack error saying mymodule.jsx not found. If you refer back to the config file, the resolve entry has the .jsx extension so we are able to refer to the modules without explicitly including the .jsx extension.

So my first code change was to change my import statements to not use the .jsx extension. For example, I changed:

import MySubModule from './my-sub-module.jsx!';

to

import MySubModule from './my-sub-module';

I like this change a lot! I just did a find and replace in bulk.

Packages

I basically went through my config.js file used by JSPM to see what the package situation was. For all the dependencies that are in NPM, the fix is simple. Just npm install each of those packages. Webpack will find them from there.

Nodelibs

A lot of my non-NPM dependencies were node libs installed by JSPM via Github. For example, in the config, there is an entry (and a ton of similar ones) for:

"github:jspm/nodelibs-events@0.1.1": {
  "events": "npm:events@1.0.2"
},

The solution was to replace these with an appropriate NPM module. In the case of the above, we replace the JSPM version with the events package.

import events from 'nodelibs/events';

had to be changed to:

import events from 'events';

Of course, imports of other node libs would need similar code changes.

Script inclusion

None of this matters if we don’t include the code! In our HTML pages we need to replace the familiar system.js inclusion code:

<script src="/jspm_packages/system.js"></script>
<script src="/config.js"></script>
<script>
  System.import('js/main');
</script>

with a more standard inclusion of our bundle. Refer to your config file to see where this file will be.

<script src="/build/bundle.js"></script>

Deployment

The deployment is quite similar. Running webpack from the command line or via a deployment script is all it takes to generate the same bundle file that you’ve been using in development. Heck, you could even include the built bundle in your repo and be done with it. Which, if you read below, has an (uncomfortable) place.

Deployment Gotchas

My production server doesn’t have much memory. So moving all the packages to NPM caused some problems. The use of JSPM splits up the deployment workload because NPM runs and then JSPM runs later. However, with webpack NPM is the star of the show. I was having trouble getting my npm install --production to finish without crashing due to a lack of memory. My solution (not a good one) was to mark the dependencies for webpack (including all the babel stuff above) as dev dependencies by modifying the package.json file to move those package names under devDependencies.

"devDependencies": {
  "babel": "^6.5.2",
  "babel-core": "^6.10.4",
  "babel-loader": "^6.2.4",
  "babel-preset-es2015": "^6.9.0",
  "babel-preset-react": "^6

Dev dependencies are not installed when the production flag is used with npm install. If we include our bundle.js file in the repository, we have no need to do any webpacking on the production server, since it will already be bundled.

This strategy is generally frowned upon, but it makes the NPM install process shorter and less memory intensive. I didn’t go so far as to isolate packages I only use on the client as dev dependencies since I hope to upgrade my servers at some point and don’t want to go too far off the beaten path. I plan to revert this change some day.

Dev-ing

Now that we have webpack in place, we can use webpack --watch while we work and all our JS updates are bundled at every change. But don’t worry, it’s very fast since it caches unchanged modules in this mode.

Next Steps

Webpack has a development server built in. I’m still using gulp for sass and browser-sync, which is why I haven’t used the development server. But I know sass and browser-sync can both be added on to webpack. I will investigate that.

Webpack also provides this cool feature of multiple entry points. Here is a good description of that. I think moving forward I will try to take advantage of that feature.
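
As a rough sketch (the entry names and paths are hypothetical), a multi-entry config looks something like this; each entry produces its own bundle, named via the [name] placeholder:

module.exports = {
  // Hypothetical entries: one bundle for the public app, one for an admin area
  entry: {
    app: './main.js',
    admin: './admin/main.js'
  },
  output: {
    filename: '[name].bundle.js'
  }
};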


Production Logging With Morgan In Express

At the top of a newly generated express app using express’s application generator you’ll see one of the bundled dependencies:

var logger = require('morgan');

Later we see it initialized with the other middleware like:

app.use(logger('dev'));

Here is the output that is produced:
Dev logs

Now, in a hurry to get my code out and on the web I never paid close attention to what was happening. The ‘dev’ aspect just went right over my head. When bugs crept up in production, the log files were very difficult to utilize since there wasn’t much information. Specifically, there is no date and no way to tell who is making each request.

If we simply look at the Morgan documentation we will see many useful pre-defined formats. I like the combined setting because it provides user agent info. In my stint in customer support, identifying user agents has been helpful.

We can make our change like this:

if (app.get('env') === 'production') {
  app.use(logger('combined'));
} else {
  app.use(logger('dev'));
}

Let’s look at our logs now:
Default production logs

Lots more info! But… if you look closely, you’ll see the IP address looks awfully useless. I’m running a reverse proxy setup with NGINX to serve my site, so it actually makes sense that the IP addresses would all be internal. We need a way to get at the source IP address.

Configure NGINX to forward the real IP address

To do this we need to update the location block in our site configuration file in the sites-available directory.

proxy_set_header X-Forwarded-For $remote_addr;

And then of course we’ll need to restart NGINX (sudo service nginx restart).

Tell Express to use the remote IP address

In the Express proxy documentation we can see how to take advantage of our newly forwarded IP address. By enabling the trust proxy feature, the IP address will be updated in the correct places with the forwarded IP. It will also be available on the req object. We can enable the trust proxy feature after initializing the app.

var app = express();
app.set('trust proxy', true);
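
As a quick sanity check, here’s a tiny hypothetical route (not something the generator creates) that shows req.ip now reflecting the forwarded client address instead of the proxy’s:

app.get('/whoami', function(req, res) {
  // With 'trust proxy' enabled, req.ip is derived from the X-Forwarded-For header
  res.send('Your IP is ' + req.ip);
});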

The final result:
Final production logs

We now have IP addresses, and we can see dates, user agents and lots more when looking through our logs.


Getting more reliability out of RabbitMQ

The Scenario

When we had our first beta user give us a heavy load, we ran into some issues. In this instance, we were using RabbitMQ to communicate with our text messaging service. In our testing, we never had any problem with missed text messages, but we started hearing reports of text messages never getting delivered or received.

Now, I’ve seen blog posts saying that you can crank a million messages a second out of RabbitMQ. I know you need to be a pro to do stuff like that, but it never occurred to me that our level of usage thus far (a couple hundred messages spread out over a minute or two) would ever cause a problem. But, after not finding any issues in the surrounding logic, I decided to test RabbitMQ.

The setup

The consumer:

var amqp = require('amqplib');

var count = 0;

amqp.connect('amqp://localhost').then(function(conn) {
  process.once('SIGINT', function() { conn.close(); });
  return conn.createChannel().then(function(channel) {
    var exchange = 'triggers';
    var routingKey = 'foo.target';
    channel.assertExchange(exchange, 'topic').then(function() {
      channel.assertQueue('', {exclusive: true}).then(function(qok) {
        var queue = qok.queue;
        console.log('consuming from ' + queue);
        channel.bindQueue(queue, exchange, routingKey).then(function() {
          channel.consume(queue, function(message) {
            count++;
            console.log(" [" + count + "] Received '%s'", message.content.toString());
            channel.ack(message);
          }, {noAck: false}).then(function() {
            console.log(' [*] Waiting for messages. To exit press CTRL+C');
          });
        });
      });
    });
  });
}).then(null, console.warn);

The producer:

var amqp = require('amqplib');

var send = function(message, key) {
  message = JSON.stringify(message);
  return amqp.connect('amqp://localhost').then(function(conn) {
    console.log('connected to rabbit');
    return conn.createChannel().then(function(channel) {
      var exchange = 'triggers';
      channel.assertExchange(exchange, 'topic').then(function() {
        channel.publish(exchange, key, new Buffer(message));
        console.log(" [x] Sent %s:'%s'", key, message);
        return channel.close().then(function() {
          return conn.close();
        });
      });
    });
  });
};

for (var i = 0; i < 100; i++) {
  send('A message', 'foo.target');
}

Notice the last few lines. We’re sending 100 messages in rapid succession. Let’s see what our producer shows.

All received

All received. Very good. What happens if we up the load a bit? Say… let’s try 300 messages by upping the loop boundary.

5 dropped!

We have 5 messages that didn’t go through! And since only 295 were sent, we see that the problem is on the sending side.

What’s happening?

Each time I attempt to send, I’m opening up a connection. That leaves many opportunities for connection failures.

What if I just use one connection

With a little reorganization we can keep the connection and channel open.

var amqp = require('amqplib');

var count = 0;

var preSend = function() {
  return new Promise(function(resolve, reject) {
    amqp.connect('amqp://localhost').then(function(conn) {
      console.log('connected to rabbit');
      conn.createChannel().then(function(channel) {
        var exchange = 'triggers';
        channel.assertExchange(exchange, 'topic').then(function() {
          resolve({
            channel: channel,
            conn: conn
          });
        });
      });
    });
  });
};

var send = function(channel, message, key) {
  message = JSON.stringify(message);
  count++;
  var exchange = 'triggers';
  channel.publish(exchange, key, new Buffer(message));
  console.log(" [" + count + "] Sent %s:'%s'", key, message);
};

preSend().then(function(cc) {
  for (var i = 0; i < 300; i++) {
    send(cc.channel, 'hi there', 'foo.target');
  }
  cc.channel.close().then(function() {
    cc.conn.close().then(function() {
      console.log('TX complete');
    });
  });
});

All the messages made it! But I don’t like this pattern. My messaging is sporadic. Do I want to deal with the overhead of managing the connection? Maybe? Probably? But not now.

What if I just resend on a failed connection

Simple, I like it! But we need to be able to see if our message was successfully sent off. To do that we use a ConfirmChannel.

It basically means that instead of createChannel, we use createConfirmChannel. The server will then acknowledge our message when we issue the publish command. If it doesn’t, we can schedule a resend. There’s some extra code in there to create a little space and potentially give up, but so far everything works well.

var amqp = require('amqplib');

var count = 0;
var maxSendAttempts = 5;

var resend = function(message, key, attempt) {
  var timeout = 2000;
  setTimeout(function() {
    console.log('reconnecting');
    send(message, key, attempt);
  }, timeout);
};

var send = function(message, key, attempt) {
  attempt = attempt || 0;
  attempt++;
  message = JSON.stringify(message);
  amqp.connect('amqp://localhost').then(function(conn) {
    console.log('connected to rabbit');
    conn.createConfirmChannel().then(function(channel) {
      var exchange = 'triggers';
      channel.assertExchange(exchange, 'topic').then(function() {
        channel.publish(exchange, key, new Buffer(message), {}, function(error, ok) {
          if (error) {
            console.log('Message was not confirmed');
            console.log(error);
            if (attempt < maxSendAttempts) {
              resend(JSON.parse(message), key, attempt);
            } else {
              console.log('Message failed ' + maxSendAttempts + ' times. Giving up.');
              console.log(JSON.parse(message));
              channel.close().then(function() {
                console.log('closing connection');
                conn.close();
              });
            }
          } else {
            count++;
            console.log(" [" + count + "] Sent %s:'%s'", key, message);
            channel.close().then(function() {
              console.log('closing connection');
              conn.close();
            });
          }
        });
      });
    });
  }).catch(function(error) {
    if (attempt < maxSendAttempts) {
      resend(JSON.parse(message), key, attempt);
    } else {
      console.log('Message failed ' + maxSendAttempts + ' times. Giving up.');
      console.log(JSON.parse(message));
    }
  });
};

// send a bunch of stuff
for (var i = 0; i < 600; i++) {
  send('A message', 'foo.target');
}

Here’s the proof.

All Received Again

Conclusion

I’ve barely scratched the surface with how to make RabbitMQ rock solid, but since I implemented these changes, we have had no more complaints about messages not getting through.

I would like to learn more about the cost of opening so many connections at once. It may be worth it to better manage my messages as our load increases.

Another next step is that, instead of giving up, we can store the message in a database for re-sending at a later time.
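
A minimal sketch of that idea (purely hypothetical; a JSON-lines file stands in for the database here) could replace the “Giving up” branches above:

var fs = require('fs');

// Hypothetical fallback: persist a permanently failed message instead of dropping it
var saveForLater = function(message, key) {
  var record = JSON.stringify({ body: message, routingKey: key, failedAt: new Date() }) + '\n';
  fs.appendFileSync('failed-messages.jsonl', record);
};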


TLS for NGINX and Node

Not too long ago it was finally time to get with the times and get a TLS (SSL) cert. I went with Let’s Encrypt, whose installer made the process simple.

But once you have the certificates you need to serve them!

Here was my old NGINX config file:

server {
    listen 80;
    server_name mysite.com;

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

This is a reverse-proxy setup as described in another post.

Here is my new TLSified version. There are two server blocks. The first block redirects http requests to https:

server {
    listen 80;
    server_name mysite.com;
    return 301 https://mysite.com;
}

The second block has the good stuff. You’ll notice a couple key things:

  1. The listen line now uses the standard SSL port of 443.
  2. There are lines for the location of the certificate and key file. These were the files created by Let’s Encrypt.
  3. Below that there’s some additional SSL config.
  4. The location block is the same.

server {
    listen 443 ssl;
    server_name mysite.com;

    ssl_certificate /etc/letsencrypt/live/mysite.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/mysite.com/privkey.pem;
    ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
    ssl_prefer_server_ciphers on;
    ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';

    location / {
        proxy_pass http://127.0.0.1:3000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}

My NGINX config file (in sites-available) is composed of those two server blocks. HTTPS now works great.


Working with moment.js

Moment.js is a javascript library that makes working with dates and times way easier than using native javascript methods. It’s worth its weight in gold.

That said, I’ve run into some areas that haven’t been intuitive for me.

Manipulating moments

Let’s look at the following code:

var start = moment();
var startPlus1 = start.add('1', 'hours');
var startPlus3 = start.add('3', 'hours');

I expect the three values to be different. However, they are all the same and they all equal start + 4 hours.

Moment’s add method doesn’t return a new object. Rather it manipulates the given object. From another post, we can see that simply making a new variable isn’t a true separate copy of the object.

The workaround is quite simple. We can create a new moment from an old one.

var startPlus1 = moment(start).add('1', 'hours');
var startPlus3 = moment(start).add('3', 'hours');

Now our moments are all different, as expected.

Timezones

I was just about at peace thinking that I couldn’t work in any timezones other than local or UTC. And by “work”, I don’t mean simply displaying the time in the correct timezone. I mean, manipulating moments with functions like startOf and endOf.

As I write this it is 9:48am pacific time.

var start = moment().utcOffset(-5);
console.log(start.toDate());

// Output
// Fri Aug 07 2015 09:48:15 GMT-0700 (PDT)

What I expected was Fri Aug 07 2015 09:48:15 GMT-0500 (CDT)
Another expectation was Fri Aug 07 2015 07:48:15 GMT-0700 (PDT)
I did not expect the current time. The utcOffset method appears to do nothing.

But what if we use moment’s formatting capabilities?

var start = moment('2015-08-05 10:00:00').utcOffset(-5);
console.log(start.toDate());
console.log(start.format('MM/DD/YYYY H:mm:ss'));

// Output
// Wed Aug 05 2015 10:00:00 GMT-0700 (PDT)
// 08/05/2015 12:00:00

The times are different. 10am pacific is certainly not noon. The difference is that the formatted time has been backed off UTC by 5 hours… which is our CDT.

From this I conclude that Moment is storing the offset for use and not actually manipulating the underlying time. The good news is that it is getting used!

So now, if I wanted to find the start of a work day in central time, I can:

var time = moment('2015-08-05 10:00:00').utcOffset(-5);
var startOfWorkDay = moment(time).startOf('day').add(8, 'hours');
console.log(startOfWorkDay.toDate());

// Output
// Wed Aug 05 2015 06:00:00 GMT-0700 (PDT)

The first line of output shows the correct time. 6am PDT correlates to 8am CDT. Using the same principle to get an endOfWorkDay, I can now make comparisons to see if some arbitrary time falls during a work day in a given timezone.

The last piece is to be careful about how I get my offset. The timezone offsets are not consistent, primarily due to daylight saving time. That’s a topic for another day.
