Posts for Tag: node

A Simple Node.js PM2 Setup Guide

Introduction

PM2 is a great tool that helps you manage not just your running processes but also environments, variables, and configs. I still see a lot of questions about the basic operation of the tool around the internet on Stack Overflow and other boards, so this article covers how I set up most of my projects using PM2; short, sweet, and to the point.

Setting up a quick app

First, let's create a quick API that returns "Hello World!" as a response. Move into an empty directory, create a file called API.js, and put the following into it.
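
If you need a starting point, a minimal Express version looks something like this ( the route and port just need to match what the rest of this guide expects ):

// API.js - a minimal "Hello World" API
const express = require('express');
const app = express();

app.get('/', (req, res) => {
  res.send('Hello World!');
});

app.listen(3000, () => {
  console.log('API listening on port 3000');
});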

Then run npm init -y. This tells npm to create a bare-bones package.json for you so other packages can be installed. You can also leave the -y out if you want to manually fill in all the information for your project.
After you have your project set up with npm, install the Express package: npm i express --save

With Express installed you should now be able to run your API app with the following command: node API.js
You can confirm that it is running by going to http://localhost:3000/ in your browser. You should see "Hello World!" printed within the page.
You now have a basic app running! In the terminal where you started the app, press ctrl + c to stop the API, as we will be setting up PM2 to be our process runner.

Starting up PM2

Within the project directory, install PM2 both globally and into your project dependencies: npm i pm2 -g; npm i pm2 --save

You have now installed everything you need for production-level process running and environment control ( with enough configs and a pipeline, of course ). The latest versions of PM2 ship with the ecosystem command, which will help us generate what we need to create an application definition for our small API. However, to see some immediate action you can just run pm2 start API.js

Confirm that your API started by visiting localhost:3000/ again and checking that the text "Hello World!" is rendered. You can see that your process starts up and you get a nice-looking table printout with some facts about your process. You can shut down your process by running pm2 delete API.

Note: delete removes the application entirely from PM2's registry. You can also use pm2 stop API, which stops the application without removing it, so you can use pm2 start API to start it up again. In our case, however, we want PM2 to forget about our process, since we are going to create an ecosystem config for it and start it that way.

Setting up ecosystem.js

Now that we have shown that PM2 will start our API, and that our API still returns what we expect, we can set up a more maintainable and useful way to start our app. Within the project directory run pm2 ecosystem

This will generate an application definition, along with a few other things, inside an ecosystem.js file. Let's go through the sections that get generated and take a quick overview of what we can do with them.

Apps Section

This section is arguably the most important for getting your application up and running. This is where you will define how your application runs, environment-specific vars, logging behaviors, and more. In addition, you can define multiple apps in the same ecosystem file; this can be used to start up co-processors, log streamers, queue managers, and more.

I will go over a few of the fields that can be used with an app definition that I think are some of the most common or useful.

Instances - Int

The number of app instances to be launched. This can be set to -1 to start one process per CPU core, minus one. I use this in Docker setups a lot because I can allow the application to consume the entire container, since that is what it is dedicated to.

node_args - String Array

This is an array of arguments that will be passed to the actual node executable, which allows you to pass things like the --harmony flag for older node versions or debug flags.

error_file, out_file, pid_file - String ( directory/file path )

These values point to the directory and file name where you want PM2 to write the logs generated by your application. This is again valuable in the containerization scenario, when you want your logs to go to specific places to be picked up by log aggregation systems.

max_restarts - Int

This is the number of consecutive unstable restarts ( less than 1 second of uptime, or a custom time via min_uptime ) before your app is considered errored and is stopped from being restarted any more. This is a great option for letting your application show that it can't connect to mission-critical services or APIs at startup time. I currently use this as a flag that something is very wrong when a new deploy goes out.

max_memory_restart - String

This option allows a max memory limit to be set that, when hit, causes PM2 to restart your application automatically. Useful to ensure a rogue process doesn't bring down boxes if it suddenly gets a ton of load or simply goes off the rails. The value of this field is a String that uses the normal size suffixes ( for example, M = megabytes ), so if you wanted the max memory limit to be 400 megabytes the value here would be "400M".

min_uptime - String

This option lets PM2 know that your application should be up and stable for a given amount of time before being considered started rather than still starting. This can be important if your project has to connect to a service and timeouts are an issue, or there is a lengthy read operation on disk, etc.

env - Object Key/Val

The env object is what allows you to define environment-level vars based on the environment flag you give PM2 at startup time. The plain env field is an object containing values that will be available all of the time, regardless of the environment that is passed to PM2.

Any field that follows the pattern env_${myEnvironmentName} is considered a valid env setting, and the values defined within will only be available when the --env flag is used and a matching key is found. For example, an env_production object allows you to use the following command: pm2 start ecosystem.js --env production

Note: I think it is worth mentioning how you get to these environment variables. All of these keys will be available via the global process variable. So, to read NODE_ENV and see what mode the application has been started in, you would use process.env.NODE_ENV.

Here is an example of all the discussed settings for an application definition. There are plenty more options that you can find in the PM2 docs page.
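
Something along these lines; the values are illustrative rather than prescriptive:

// ecosystem.js - apps section
module.exports = {
  apps: [
    {
      name: 'API',
      script: 'API.js',
      instances: -1,                    // one process per CPU core, minus one
      node_args: ['--harmony'],         // flags passed to the node executable itself
      error_file: './logs/api.err.log',
      out_file: './logs/api.out.log',
      pid_file: './pids/api.pid',
      max_restarts: 10,                 // give up after 10 unstable restarts
      max_memory_restart: '400M',       // restart if memory usage passes 400 megabytes
      min_uptime: '5000',               // stable for 5 seconds ( ms ) before considered started
      env: {
        NODE_ENV: 'development'
      },
      env_production: {
        NODE_ENV: 'production'
      }
    }
  ]
};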

Deployments Section

As a disclaimer, I do not currently use the PM2 deployment tool in any projects aside from deploying to some Raspberry Pis I have hooked up on my local network. This is because these days all of my CI/CD pipelines use Docker or run through Heroku. That being said, I will give you what I know on how to get deployments working for you via PM2.

The PM2 deployment tool set fits if you have static servers that are not containerized and you don't want to bring down your systems just to do a small update to your application. This is the case with my home setup, where I have node instances running on Raspberry Pis.

To set up deployments via PM2, first define your application section with the options you want from the previous section. Then, in your ecosystem.js, go to the deploy section of the config. Here you will find the two generated deployment options, production and dev; these are environment configs just like the application definition has, and they define which env you are deploying to. There are a few key definitions that need to be fleshed out.

user - String

This is the user that the target machine will use to run any commands that are pushed to it via PM2. It must have the appropriate permissions to execute the commands ( git pull, npm install, etc. ) on the target machine. In addition, this is the user that PM2 will attempt to authenticate as via SSH, using a key on your machine or a key you provide within the deployment configuration ( more on this a little later ).

host - String/String Array

The host field can hold a single host or an array of hosts. These hosts can be IPs or hostnames that will get resolved via DNS. The machine doing the deploy must have an SSH key for these machines so that authentication can occur OR a .pem file must be given as part of the deployment config.

key - String

The file location of a .pem file that contains the appropriate key to authenticate against the hosts, using the user field as the username.

ref - String

Ref is the git branch ( including the remote, e.g. origin/production ) that you want deployed. Most of the time this will be a "production" branch or something similar that you merge into when a version has been tagged.

repo - String

The git repository URI that contains the ref branch and that the hosts have access to pull from.

path - String

The path that the git branch will be downloaded into on the host.

pre-setup - String

This field is a command ( or list of commands ) to be run BEFORE the git checkout for the branch is done on the host.

post-setup - String

This field is a command ( or list of commands ) to be run AFTER the git checkout for the branch is done on the host. This is generally where you would put your npm install, gulp builds, etc.

post-deploy - String

This field is a command ( or list of commands ) to be run AFTER the pre- and post-setup events. This is where you will put your application restart/start command(s).

pre-deploy-local - String

This field is a command ( or list of commands ) that runs on the deployment machine, not on the hosts being deployed to, BEFORE the deployments actually fire off. This is useful for pushing deployment configs, alerts, etc. into things like Slack or email.

An example of all of these fields for a production env:
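
A sketch of what that section can look like; the user, hosts, key, repo, and paths are all placeholders for your own values:

// ecosystem.js - deploy section
module.exports = {
  apps: [ /* ...application definitions from the previous section... */ ],
  deploy: {
    production: {
      user: 'deploy',
      host: ['192.168.1.10', '192.168.1.11'],
      key: '/home/me/.ssh/deploy-key.pem',
      ref: 'origin/production',
      repo: 'git@github.com:myUser/myApi.git',
      path: '/var/www/my-api',
      'pre-setup': 'apt-get install -y git',
      'post-setup': 'npm install',
      'post-deploy': 'pm2 startOrRestart ecosystem.js --env production',
      'pre-deploy-local': 'echo "Deploying to production..."'
    }
  }
};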

Once you have your application running for the first time on the hosts listed in the production deploy config, you can run pm2 deploy ecosystem.js production from a build server ( or your local box ).

If everything runs successfully you should see a message similar to "Deploy Successful", and you are all done!

Daemonizing your application with PM2 ( 2.2+ )

PM2 allows you to configure a startup script that will ensure your application comes back up if for some reason a restart or shutdown has occurred. This is again useful if you are running on bare metal that is not containerized, if you have a very unstable server setup, or if you have to do rolling restarts of servers for deployments.

PM2 comes with the pm2 startup command, which will output a command for root to run that adds the appropriate config so your application starts on machine startup. You can also pass PM2 an explicit type of startup script to generate if you know your environment's OS. Here are the supported options: ubuntu, ubuntu14, ubuntu12, centos, centos6, arch, oracle, amazon, macos, darwin, freebsd, systemd, systemv, upstart, launchd, rcd, openrc.

Here is an example output from the command :
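
The exact init system, paths, and user will differ per machine, but it looks roughly like this:

$ pm2 startup
[PM2] Init System found: systemd
[PM2] To setup the Startup Script, copy/paste the following command:
sudo env PATH=$PATH:/usr/local/bin /usr/local/lib/node_modules/pm2/bin/pm2 startup systemd -u myUser --hp /home/myUser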

On a build server you can write some simple awk, grep, or regex to extract the command and run it on the machine that is being provisioned. 

You can read more about process management via things like init.d and more here.

Wrap up

PM2 is a powerful tool for getting things up and running fast, but it also has the staying power for production-level applications. I have many applications, both at my day job and personal ones, that run using PM2 and Keymetrics. Some of the deployment management, I feel, is antiquated by the rising use of containers and the associated services such as AWS, Google Cloud, Heroku, etc., but it still has its place in situations where you don't have the flexibility of a container-driven development environment.

Links :

http://pm2.keymetrics.io/docs/usage/application-declaration/

http://pm2.keymetrics.io/docs/usage/deployment/

http://pm2.keymetrics.io/docs/usage/startup/

Using Chrome Dev Tools To Debug Your Node.js Projects

Introduction 

To this day I get asked a lot about how I find issues inside my code base, sometimes even where I put my console.log()s. My answer is that I use a debugger; however, almost every time this surprises people in the Node.js/JS community. I thought we had gotten past the strange period of JavaScript as a language where console.log()ing random points in your code base was the way to debug things.

Apparently I was wrong; at least based on how often I get asked this type of thing.

So, in the spirit of propagating something I strongly feel should be standard, and something every JS developer ( or any type of developer, really ) should know how to set up and use, this is a small article on how to use Chrome's dev tools to debug your Node.js projects.

Installation and Setup For Node 6.3 & Above

In Node 6.3 we got a native inspector that Node.js now ships with and that is actually developed by the Node team. To use it there are now command-line flags that we can pass in when starting our node projects. It does some simple quality-of-life things as well: if the same file and instance are brought down and back up, the debugger will reattach itself, which is pretty helpful.

Debugging Just Using Node

When you start your application you just add the --inspect flag to the node command and it should do everything needed on the process level.

node --inspect myProject.js

Next, open up Chrome and go to chrome://inspect in the URL bar. This will bring you to a devices panel listing your running Node targets.

You can then click the "inspect" link under the name and path of your running application, and it will open up a standard Chrome debugger that is attached to your process.

Debugging using PM2

PM2 is a great process runner that I personally use for all my node-related projects. However, due to how PM2 works and handles configurations for projects, it requires a little extra work to get running with --inspect.
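
The quickest way I know of is to pass the flag through to the underlying node process with PM2's node-args option, for example:

pm2 start myProject.js --node-args="--inspect"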


Managing developer debug configs and app definitions for pm2

A lot of the time you don't want to have to create two different files just for debug mode. So what the teams I have been on normally do is create two application definitions in the same ecosystem.json file and then create different startup commands in our package.json for the devs and startup scripts. You can see the sketch below for an example.
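
Since the original gist isn't embedded here, a rough sketch of the idea follows; the names are just examples. In ecosystem.json you define the same script twice, once with the inspect flag passed through node_args:

{
  "apps": [
    { "name": "API", "script": "API.js" },
    { "name": "API-debug", "script": "API.js", "node_args": ["--inspect"] }
  ]
}

Then package.json gets a couple of scripts so devs don't have to remember the PM2 flags:

{
  "scripts": {
    "start": "pm2 start ecosystem.json --only API",
    "debug": "pm2 start ecosystem.json --only API-debug"
  }
}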

Installation and Setup For Node 6.2 & Below

Like most things these days, there is already a package that does most of the heavy lifting for you. This package is node-inspector. Install it globally via npm on the command line:

$ npm i node-inspector -g

Now to ensure that all went well during installation run the inspector command :

$ node-inspector

It should print out a version and a local URL that you can visit.

Hooking up the debugger to your process

Now that the inspector is up and running on your machine, you need to hook your process up to it so that it can evaluate the code base as it runs. To accomplish this you will need the process ID of your project after you have started it.

Start your project using something like : node myProject.js

Or if you use PM2 : pm2 start myProject.js

Getting your PID

I normally run my projects through PM2 which gives you the PID in the process table that it prints out; however if you are not doing that you can find your PID by using the ps command as follows :

$ ps -ax | grep node

That should give you a list of all the node instances running on your box at the time, from which you can pick out the source file that was started ( myProject.js ). Once you have the PID you can send the process a signal that tells it to enable debugging.

Sending the Debug signal to your process

The process of sending the debug signal is very straightforward. I will use ${PID} where the process ID you found earlier should go. Now let's send that signal:

$ kill -s USR1 ${PID}

Now, this won't actually kill your process; we are simply sending a system-level signal to it, which is what the -s is for in the command. You are now ready to start debugging your running Node.js application.

Getting to your debugger

Getting to the node-inspector is as easy as visiting the URL that was printed out for you near the beginning of the article, with one change. By default V8 starts the debugger on port 5858; if for some reason yours is different, or you have multiple debugging sessions going, you can tell node-inspector which port you want to hook the debugger up to by providing a port as a GET param. For example:

http://127.0.0.1:8080/?port=5858

You can change that port param to whatever your process printed when you sent the system signal.

Wrap up

That's it! Pretty simple, yeah? I hope this is something that people will find useful and that we can get away from the console.log() times. Debugging will save you countless hours, especially when trying to determine which variable has which value and when. You just set a breakpoint, watch execution flow to it, and then evaluate the entire state of the application at that moment.

Links

PM2 - https://github.com/Unitech/pm2

Node-Inspector - https://www.npmjs.com/package/node-inspector

Debugger docs - https://nodejs.org/en/docs/inspector/


 

A talk on Generators & Bluebird.js coroutine()

Intro

Generators can be scary, confusing, and can require a lot of setup to really get the most out of. Most people are just looking to yield a statement so that certain async actions can occur in a specific order; this is most common, from what I have seen, when dealing with mongo reads and writes using promises.

In reality the patterns I will go over can be applied to any promise or non-promise based setup where you have multiple async operations that you may or may not have to wait on.

Bluebird.js

If you are not familiar with what Bluebird.js is, here is the skinny: Bluebird.js is a library that fills a single purpose, better, faster promises and support structures. The Bluebird team has done a great job of making Promises fast and accessible in a variety of environments, which makes it a great tool for any node or front-end project looking to ensure that the Promise spec is met and usable.

You can read their own "Why Bluebird?" section here : http://bluebirdjs.com/docs/why-bluebird.html

You can also see the benchmarks here : http://bluebirdjs.com/docs/benchmarks.html

Being able to rely on promises and generators being available is key to a lot of the work I do these days, as they help control the flow of things like multiple atomic operations occurring inside of a single controller action, making the action non-atomic as a whole. This kind of thing can become a nightmare when dealing with in-order writes or reads from, say, mongo.

Generators

If you haven't heard of generators, or your knowledge is just lacking a bit, here is a quick description: generators are functions that can be executed and then exited, with their state maintained, and reentered at another time.

Many people ask why I don't just use async/await; without creating a debate or an entire article, it is mostly because generators are at async/await's core. I prefer to use the common denominator.

In addition, Bluebird is simply faster than native promises and other libs that provide more limited functionality sets. Refer to the benchmarks link above for more info on how those benchmarks are created.

Creating & Using Generators

Finally, let's look at some code! The generator syntax is very simple and should look pretty familiar aside from a single difference.
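
A small generator, just for example's sake:

// the * after the function keyword is the single difference
function* counter() {
  yield 1;
  yield 2;
  yield 3;
}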

You will notice the * after the function keyword; this is what defines the function as a generator, which allows us to exit and reenter the function using the .next() method. In reality, all a generator is is a constructed Iterator type, but with some additional functionality beyond a primitive type, which allows for better flow control.

Here is an example of using a generator as an Iterator type:
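
Using the counter() generator from above:

// stepping through the generator with .next()
const gen = counter();

console.log(gen.next()); // { value: 1, done: false }
console.log(gen.next()); // { value: 2, done: false }
console.log(gen.next()); // { value: 3, done: false }
console.log(gen.next()); // { value: undefined, done: true }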

About The Yield Keyword

With the example above we aren't really doing anything that a primitive Iterator type, such as a Number, can't do. However, it is important to observe that with the yield keyword you get flow control that you don't get with the primitive Iterators. The yield keyword acts like return, but behaves differently in an important way.

Yield maintains the generator's memory state, which is what allows for the iterative functionality that you can control. The state of the generator is always parked at the line of the yield keyword, just past the value after it.

Let us take a look at what is happening behind the scenes for us here, which will give us a better understanding of how to interface with generators as an Iterator type.

Since a generator is technically an Iterator type, it exposes all the normal Iterator methods, including .next() as shown above. This is how you reenter a generator that has been stepped out of using the yield keyword. When you reenter the function, execution continues just after the yield statement where it stopped.

About next()

We have seen the next() method in action, continuing execution until the next yield in our generator. next() can do a little more than just tell our generator to continue; it allows us to pass values into the generator for the duration of the execution up to the next yield, which we can use or store for a cumulative value returned from the generator.

Here is an example of the cumulative return of some numbers that we pass into the generator mid-execution, between yields.
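
A sketch of that, passing numbers in through next() and summing them up inside the generator:

// each call to next(value) hands a value back into the paused generator
function* accumulator() {
  let total = 0;
  total += yield total;
  total += yield total;
  total += yield total;
  return total;
}

const acc = accumulator();
console.log(acc.next());   // { value: 0, done: false }  - run to the first yield
console.log(acc.next(5));  // { value: 5, done: false }
console.log(acc.next(10)); // { value: 15, done: false }
console.log(acc.next(4));  // { value: 19, done: true }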


This method of execution is much more controllable when dealing with potentially unknown amounts of processing or long-running processes such as job queues, stitching DB query chains, or loop processing. It is also worth noting how the data is returned from the .next() method: it is an object literal with two fields, value and done. The value is whatever value you yielded in the generator; the done field indicates whether the generator has run out of yield statements, denoting that the execution loop or sequence within the generator has completed.

About throw()  

Sometimes you need to be able to cancel the execution of the generator based on the yielded value that was returned when an iteration occurred. The throw() method that the generator interface exposes lets you define a try/catch in your generator that will catch the error within the context of the generator; this allows you to either re-throw the error or let it bubble up to the parent execution scope that invoked the generator.

Let's take a look at what this might look like:
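
A small sketch of the idea; the job runner here is made up purely for illustration:

// throw() injects an error at the point where the generator is paused
function* jobRunner() {
  try {
    while (true) {
      const job = yield 'waiting for a job';
      console.log('processing job', job.id);
    }
  } catch (err) {
    // caught inside the generator's own context
    console.log('runner stopped:', err.message);
  }
}

const runner = jobRunner();
runner.next();                               // start it up
runner.next({ id: 1 });                      // processing job 1
runner.throw(new Error('bad job result'));   // runner stopped: bad job result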

As shown above, you can actually pass in the error you want thrown inside of the generator. This pattern can help with scope hell when dealing with how your generators exit before they are technically complete.

About return()

Every once in a while you will need to get the current value of a generator when it is in a completed state, or, more likely, you will need to end a generator's execution sequence early without throwing an error. The return() method does exactly what it says: it simply allows us to end a generator's execution without throwing an error.

If you give the return() method a value as a parameter, the object returned ( which has the same shape as what you get from the next() method ) will have its value property set to the value you passed in to return(). This can be useful when creating reusable components, or when using a factory pattern that can return a generator.

Take a look at the following example :
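
A quick illustration with a made-up id generator:

// return() finishes the generator early, without throwing
function* idStream() {
  let id = 0;
  while (true) {
    yield id++;
  }
}

const ids = idStream();
console.log(ids.next());      // { value: 0, done: false }
console.log(ids.next());      // { value: 1, done: false }
console.log(ids.return(99));  // { value: 99, done: true }
console.log(ids.next());      // { value: undefined, done: true }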

Bluebird Coroutine

There is sometimes confusion around what a coroutine is and what it means in different contexts. Coroutine was one of the names that generators went by in the ECMA spec for a little while; so if you google "javascript coroutines" you will find a lot of examples that look a lot like the ones in this article, because they are really just generators.

Coroutines are Promises

Bluebird.coroutine() does a few things differently. First off, Bluebird.coroutine() wraps the generator you pass in and returns a function that produces a Promise, which is resolved when the generator reaches the done: true state. This means you can suspend the execution of entire Promise-generating functions, which pairs nicely with the ability to wrap nearly anything using Bluebird.promisify(), ensuring that any callback-based method returns a promise instead.

This enables you to use promise patterns when waiting for a generator to complete. Observe the following example :
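
A minimal sketch, assuming Bluebird is installed and using Promise.delay() for the one-second wait:

const Promise = require('bluebird');

const getGreeting = Promise.coroutine(function* (name) {
  // execution suspends here until the yielded promise resolves ( ~1 second )
  const greeting = yield Promise.delay(1000).then(() => 'Hello ' + name + '!');
  return Promise.resolve(greeting);
});

getGreeting('World').then(result => {
  console.log(result); // "Hello World!" printed after roughly 1 second
});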

It is important to note that when you wrap the generator in Bluebird.coroutine() you need to return a promise both when you yield and when you return. This is because the coroutine is iterating your generator for you and is looking both for the done state and for the yielded promise to resolve. It is also worth mentioning that the results will only print after 1 second has passed; that is how you know your generator's yielded statement is actually being hit. This is due to the generator yielding a Promise that delays completion for 1 second ( 1000 ms ).

Note that you can also change the order in which things get resolved just by changing what gets yielded or by moving the statement altogether.

Coroutines & Bluebird.all()

Much of the time you have a single operation that multiple other operations depend on for data to complete their processing. Coroutines help a lot with this when combined with Bluebird.all(). The .all() method allows you to wait on many promises at once and then get all the results at the same time; this, paired with the fact that your generators can now return a promise instance, makes controlling multiple async operations a snap.

I personally use the following pattern all over the place for things like dealing with multiple Mongo DB calls, file reads, and subsequent API calls for data stitching. Here is an example of the generator multi-async pattern :
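
A sketch of the shape of it; the three async functions below are stand-ins for real DB calls, file reads, or API calls:

const Promise = require('bluebird');

const findUser = id => Promise.delay(200).then(() => ({ id, name: 'Ada' }));
const loadPosts = user => Promise.delay(300).then(() => ['post one', 'post two']);
const loadSettings = user => Promise.delay(100).then(() => ({ theme: 'dark' }));

const buildProfile = Promise.coroutine(function* (userId) {
  // the one operation everything else depends on
  const user = yield findUser(userId);

  // fan out and wait on all of the dependent operations at once
  const [posts, settings] = yield Promise.all([
    loadPosts(user),
    loadSettings(user)
  ]);

  return { user, posts, settings };
});

buildProfile(42).then(profile => console.log(profile));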

Taking it further

There is a lot more that you can do with this pattern beyond DB calls and simple processing methods and modules. You can take your implementation a bit further by pulling in and using clustering with Node. Clustering is basically how you do something much closer to true multi-threading with Node.

To do something like this you can wrap the messages that a child process sends back with data in a generator function, which would allow you to yield the execution of, say, a function processing a network request while it waits for the worker process to complete. That would allow the master process to field other requests coming into the event loop while checking the status of the yielded call on every event cycle.

This is a subject that will require another article, and one that I may well write. But it is worth thinking about, and at least knowing about, when using things like generators, promises, and Bluebird.

Conclusion

Below is what that logic flow looks like without the coroutine and yield. If nothing else, it is easy to see that the coroutine version creates much more readable code that will be much easier to deal with while you're working on it, and in the future when someone else has to deal with it. I use the pattern(s) shown here for all kinds of implementations, from APIs to heavy processing using clustering.
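
For comparison, here is roughly the same flow as the Bluebird.all() example written as a plain .then() chain, using the same stand-in functions:

function buildProfile(userId) {
  let user;
  return findUser(userId)
    .then(foundUser => {
      user = foundUser; // carry state across the chain by hand
      return Promise.all([loadPosts(user), loadSettings(user)]);
    })
    .then(([posts, settings]) => {
      return { user, posts, settings };
    });
}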

Links

  1. Bluebird.js
  2. Generators
  3. Spawning
  4. Mongoose

Using mongoose to validate and manage form data in Aurelia.js

Intro Rant 

I've been using Aurelia since it was first available to the public. I have made a lot of bad decisions with it, and a lot of good ones, over the course of the last year or so. One project that I embarked on had a large number of forms that needed to be filled out, validated, and sent to a server that expected the data in a particular format, like any other API call.

I was using mongoose on my back end, as it was Node.js powered with mongo as a storage solution. I really wanted to find a way to reduce the overhead of form validation and value selection. So I took a look at mongoose for the browser; nothing extreme, just a way to bind the validation I already had in my back-end schemas to the forms on the front end.

After finding that to be a relatively easy task, I then had to find a way to get my schemas to load both into the browser and into my node modules. I first tried some old crusty methods of wrapping the returns in various statements, depending on whether env variables could be found to determine if I was in Node land or browser land, but ultimately it became way too bloated to include in every schema that I wrote.

So I moved on to a cleaner, browser-first solution. Since I was using Aurelia and Babel to keep things clean and handle the live transpiling, I could just write straight ES6-style schemas, which I could then load using Babel on the back end as well. Thus began the journey of my mongoose schema input custom attribute in Aurelia.

Building The Custom Attribute

I normally start things out by defining the logic-less parts of a new component that I am working on. This was no different: I created an empty custom attribute using Aurelia, bringing in all the relevant modules I'd need to make it function. It looked something like this:
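
The original snippet isn't embedded here anymore, so this is a rough reconstruction; the attribute name and imports are my own guesses at its shape:

import { inject, customAttribute } from 'aurelia-framework';

@customAttribute('mongoose-validate')
@inject(Element)
export class MongooseValidateCustomAttribute {

  constructor(element) {
    // the element the attribute is placed on, injected by Aurelia
    this.element = element;
  }

  bind(bindingContext) {
    // called when the attribute is bound to the DOM and parent view model
  }

  unbind() {
    // called when the attribute is detached
  }
}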

This obviously is not very exciting in and of itself; however, it shows some key interactions with the Aurelia component life cycle. These life cycle methods on the class get called by Aurelia when our component is constructed/deconstructed and attached/detached to the DOM; these are bind and unbind respectively in our class definition.

Next I wanted to identify some issues that I've come across with other form utilities and see if I could address them with my own implementation. The list ended up being relatively short but crucial to reusable form validation components:

  1. Two way binding for values in my mongoose model
  2. Error class appending/removal on state change
  3. Independent logic set that lives outside of any view model
  4. Validation logic throttling
  5. onChange event support for things such as dropdown menus, multi select, and radio buttons.
  6. Callback support for when something is validated
  7. No jQuery, only core JS.

After I finished my list I didn't feel like it was something that couldn't be done, or was too wild to keep in check without bloating the component. With this list in mind I added properties to my class definition to represent my list items. In the end it looked like this:
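
Again a sketch rather than the original file; the property names and defaults here are illustrative:

import { bindable, bindingMode } from 'aurelia-framework';

export class MongooseValidateCustomAttribute {
  // the mongoose document instance, two-way bound so the field value flows back to it
  @bindable({ defaultBindingMode: bindingMode.twoWay }) model;
  @bindable path;                          // path name of the schema field to validate
  @bindable callback;                      // invoked with the validation result
  @bindable errorClass = 'has-error';      // class appended on failed validation
  @bindable successClass = 'has-success';  // class appended on successful validation
  @bindable throttleMs = 250;              // validation throttle window

  // ...constructor, bind, and unbind from the earlier skeleton...
}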

Most of these properties are pretty self-explanatory with the comments provided in the code sample. Most are just properties that get bound from the parent view model so that the attribute has access to the memory reference, allowing two-way binding to be used. It is worth mentioning that we explicitly make the binding type of the component two-way; this is to enforce the behavior between the view model and the attribute references.

Building The Attribute Functionality

So now we have a pretty solid skeleton that we can load into a view model and use as an attribute on an input field. Sadly, so far it doesn't really do anything; the functionality still needs some scope as to how we bind it to the view and the view model. Let's go over how we want the HTML and view model to look when using this custom attribute.

HTML Bindings

The HTML binding part of the custom component is relatively easy and simply binds values from a view model to the reference-holding properties of the custom attribute instance. Given the properties we made on the attribute, the bindings can look like this:
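
Something like the following, using Aurelia's options syntax for a multi-property attribute ( the attribute and property names come from my sketches above, so adjust to your own ):

<!-- the wrapping element is what picks up the error / success classes -->
<div class="form-group">
  <input type="text"
         mongoose-validate="model.bind: user; path: email; callback.bind: onValidated">
</div>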

ViewModel Bindings

Given the attributes in the HTML template and the properties of the attribute, we can create a sample view model to hook everything up for example's sake. We need a small mongoose model, a callback, and the path name of the field to validate in the mongoose model. In the end we can use something like the following:
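
A possible view model, assuming the browser build of mongoose is available to the bundler:

import mongoose from 'mongoose/browser';

// normally the schema would live in its own shared module
const userSchema = new mongoose.Schema({
  email: { type: String, required: true, match: /.+@.+\..+/ }
});

export class UserFormViewModel {
  constructor() {
    // a browser-side mongoose document for the form to validate against
    this.user = new mongoose.Document({}, userSchema);
  }

  onValidated(result) {
    // either { path: 'email' } on success or the mongoose error for that path
    console.log('validation result', result);
  }
}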

We really don't need much in the view model here, just a callback and a mongoose model. Normally I would suggest creating your schemas separately and importing them; but for the purposes of this example we will define the schema inside the view model.

The callback that you pass to the validator attribute receives either an object containing the path that was validated, if validation was successful, or the error object that the mongoose validator returns for that path's validation call.

Writing The Attribute Methods

Now that the attribute has some functional scope for how it will be used and integrated, the logic within it can be defined. The full attribute logic is given below, and then each piece will be covered. I feel it is easier to understand and explain code when you get the full block to browse over and absorb at your own pace, then move forward when you are ready.
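
The original block isn't reproduced here, so below is a rough reconstruction of its shape based on the descriptions that follow; treat it as a sketch, not the exact original:

import { inject, customAttribute, bindable, bindingMode, LogManager } from 'aurelia-framework';

@customAttribute('mongoose-validate')
@inject(Element)
export class MongooseValidateCustomAttribute {
  @bindable({ defaultBindingMode: bindingMode.twoWay }) model;
  @bindable path;
  @bindable callback;
  @bindable errorClass = 'has-error';
  @bindable successClass = 'has-success';
  @bindable throttleMs = 250;

  constructor(element) {
    this.element = element;
    this.logger = LogManager.getLogger('mongoose-validate');
    this.throttled = false;
  }

  bind(bindingContext) {
    this.parentContext = bindingContext;
    this.listener = () => this.valueListener();
    this.element.addEventListener('keyup', this.listener);
    this.element.addEventListener('change', this.listener);
  }

  unbind() {
    this.element.removeEventListener('keyup', this.listener);
    this.element.removeEventListener('change', this.listener);
  }

  validateField() {
    const parent = this.element.parentElement;

    // push the current input value into the mongoose document, then validate
    this.model[this.path] = this.element.value;
    this.model.validate(validateResult => {
      const pathError = validateResult && validateResult.errors && validateResult.errors[this.path];

      if (pathError) {
        parent.classList.add(this.errorClass);
        parent.classList.remove(this.successClass);
        if (this.callback) { this.callback(pathError); }
      } else {
        parent.classList.remove(this.errorClass);
        parent.classList.add(this.successClass);
        if (this.callback) { this.callback({ path: this.path }); }
      }
    });
  }

  valueListener() {
    // simple timeout-based throttling so we don't validate on every keystroke
    if (this.throttled) {
      clearTimeout(this.pending);
      this.pending = setTimeout(() => this.valueListener(), this.throttleMs);
      return;
    }
    this.throttled = true;
    this.validateField();
    setTimeout(() => { this.throttled = false; }, this.throttleMs);
  }
}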

Constructor

Our constructor does not do a lot here; we simply assign the element handle, passed to it by the Aurelia injection/creation life cycle, to a property on the class instance. We also grab a logging instance from the Aurelia framework.

Bind

The bind method is invoked by the Aurelia life cycle when the attribute is bound to the DOM and to the view model of the outer template instance that the attribute was required in. The bind method is passed the context of the parent view model, which allows us to invoke methods that are on the parent's scope ( like we do with the callback that you pass to the validator ).

A word of caution: it's tempting to use this context for direct bindings to the parent scope instead of doing the binding on the template/DOM level ( what I personally call Angular Scope Hell Syndrome ). This really is not the best way to do it; it creates a very tightly coupled component that relies on things existing in every view model that acts as the parent context to the custom attribute, which makes it very brittle and not reusable.

Unbind

Unbind is invoked in the life cycle of the attribute when the attribute is detached from the DOM and the current route scope. In this case it is used to remove the listeners that were attached in the bind method.

validateField

This is where most of the magic happens for the custom attribute: the actual validation using the mongoose model, via the path name string and the model that we bound to the instance in the HTML bindings. There are a few things the method does:

  1. Calls the mongoose document validation method.
  2. Because the mongoose document validates all fields whenever you call the .validate method, we have to look for our current path's error specifically by using the path string.
    1. If we have a validation object and we find an error matching the path string that was bound to the instance, then we flag it as a validation error.
    2. If we do not have any validateResult object then all is well, or if no error was found for this path then it has passed validation.
  3. In either of these cases we add or remove the error classes that we also bound in the HTML bindings.
    1. These error and success classes do have defaults, as seen in the class property definitions, but those can be changed to suit your needs.

Because I didn't want to use anything but pure JS for this, the validateField method seems a bit chunky, and I am sure it could be reduced in size with some modifications. But this blog isn't about perfect code, it's about learning.

validateField applies the error or success classes to the parent of the input field. This allows you to wrap your input fields and have them show errors based on the mongoose validation result.

valueListener

The valueListener method is what actually gets attached to the event hooks on the element that the attribute is instantiated on. valueListener is where throttling is handled, via some simple timeout mechanics that aren't very mysterious. Basically, the throttle flag is set when we start validating the field, and when another request for validation comes in we set a small timeout for the next validation to occur in sequence.

While not perfect, it does work, though I am considering using the binding behaviors that Aurelia comes with here; I will add my findings when I get to that.

The Results!

It's been a quick and hopefully painless ride to our custom validation using mongoose schemas on the front end. Here are some examples of the forms that I have put together using this mongoose method. The icons on the right of the input fields change depending on the success and failure classes that get added to or removed from the input's parent. I set it up this way so I could add other elements in the same wrapper as the input that could pick up on the success or failure state of the input field; but that is another article.

( Screenshots: the form in its empty, invalid, and valid states. )


Using PM2 and Vorpal to manage a Node.js service stack on developer machines.

Introduction rant

Throughout the various places I've worked, how to set up a development environment has always been a subject of contention and argument. Each developer has their preference as to what to develop with, which is fine in most cases; however, how the application you are all working on runs, and how consistently it runs, is of great importance in my mind.

This comes down to a few simple things that need to be met for development when I am building out a project.

  • consistency 
  • testability
  • flexibility
  • developer/ops friendliness. 

It is hard to get all of these, and truth be told you won't get all four perfect in any project. The important thing is that you get at least reasonably good at all of them. This article came from my setup of a microservice infrastructure that I developed for a company to support high-volume traffic without a service discovery solution. That made both developer and ops management of a distributed computation application built on microservices a serious challenge.

The problem

The issue here is that, without a solid service discovery solution such as Consul in use ( an article will come on that later ) but still using microservices, there was no consistent way for the developers to set up all the services and manage them in a reasonable way. Before I came on board, devs had to manually keep track of everything running on their box and had no choice but to start the entire set of services. It was a situation that wasted a lot of time and effort if anything went wrong in a single service.

The solution

The solution to this issue is long-winded, but it included converting Java services into Node.js ones and introducing PM2 as a process manager/runner. But then the issue of how to give developers consistency and manageable service management cropped up.

To this end I set out to find a reasonable and easy way for developers to manage these services on their boxes. I did a fair amount of tool soul-searching before dedicating myself to a type of solution, but finally settled on a command-line tool that developers could install via npm from our internal Nexus.

The solution ideals

The idea behind the solution is that each service we build has a PM2 config that contains all the details for running that service. On top of that, the PM2 JSON configuration format allows me to define an instance to be run by name. This is a great setup for developers, who can run things such as pm2 restart exampleService to manage their environment easily.

The ideal flow is that a developer can check out any services from our Git servers, put them in a single directory, and use a tool to generate a PM2 configuration that handles running all the services in that directory for them.

The solution code bits

The tool assumes you have a single directory on your developer box with all the services under it. For example : 

  • AllMyServices
    • service1
    • service2

This is so our tool can walk through each service project, extract the PM2 configuration, and use the application definition to build a service cluster configuration. To build the tool I used the following packages and technologies:

  • Vorpal.js - A Node library for creating command line tools.
  • Babel.js - For modern JS hotness.
  • Node
  • npm

Let's take a look at the actual code behind this tool; be gentle, I wrote it in about an hour:
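
The original file isn't embedded here, so the following is a rough sketch of it; the command and file names are mine:

// stack-tool.js - builds one combined PM2 config from many services
const vorpal = require('vorpal')();
const fs = require('fs');
const path = require('path');

const CONFIG_NAME = 'pm2.config.json';

vorpal
  .command('generate <servicesDir> [outFile]', 'Build a combined PM2 config from every service found.')
  .action(function (args, callback) {
    const servicesDir = path.resolve(args.servicesDir);
    const outFile = path.resolve(args.outFile || 'stack.pm2.json');
    const stack = { apps: [] };

    // walk each service directory looking for a pm2.config.json
    fs.readdirSync(servicesDir).forEach(serviceName => {
      const configPath = path.join(servicesDir, serviceName, CONFIG_NAME);
      if (!fs.existsSync(configPath)) { return; }

      const config = JSON.parse(fs.readFileSync(configPath, 'utf8'));

      // rewrite the pathing so the merged config can be run from anywhere
      (config.apps || []).forEach(app => {
        app.cwd = path.join(servicesDir, serviceName);
        app.script = path.join(app.cwd, app.script);
        stack.apps.push(app);
      });
    });

    fs.writeFileSync(outFile, JSON.stringify(stack, null, 2));
    this.log('Wrote ' + stack.apps.length + ' app definitions to ' + outFile);
    callback();
  });

vorpal
  .delimiter('stack$')
  .show();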

Alright, so this isn't perfect, and honestly this is the unpolished but tried-and-trusted code that is currently being used. That being said, let's take a look at what is happening here. It's a little long-winded but pretty straightforward. In essence, all this does is:

  • Walks through the directory that it is given, looking for files named pm2.config.json
  • Loads each configuration file into a JSON object
  • Modifies the pathing so that the configuration can be run from any directory
  • Dumps the combined config out to the output location given.

This will provide you with a PM2 config that can start up and manage each node process that had a pm2.config.json file in the search directory, which is pretty sweet. Granted, this could use a fair amount of improvement, I won't say otherwise, but I think some of the most useful code is the raw concept that gets a developer moving in the right direction. Let's take a look at a configuration generated by this tool:
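
Something like this, with the names and paths being whatever lives in your services directory:

{
  "apps": [
    {
      "name": "service1",
      "script": "/home/dev/AllMyServices/service1/index.js",
      "cwd": "/home/dev/AllMyServices/service1",
      "env": { "NODE_ENV": "development" }
    },
    {
      "name": "service2",
      "script": "/home/dev/AllMyServices/service2/index.js",
      "cwd": "/home/dev/AllMyServices/service2",
      "env": { "NODE_ENV": "development" }
    }
  ]
}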

If you are familiar with the PM2 configuration structure this will look pretty familiar, if not a little boring; but ultimately boring is kind of our goal here: a simple way to manage your node instances. With this configuration you are able to issue commands to a specific instance being run, or to the entire stack of instances/services.

For example, using this tool to build a config and run things looks like this for one of my own projects; it's smaller and doesn't have the support of large infrastructure, just a two-instance application that currently lives on my own box.
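
Roughly, the day-to-day usage looks like this ( the tool and file names match the sketch above ):

# generate the combined config, then hand it to PM2
$ node stack-tool.js
stack$ generate ~/AllMyServices ./stack.pm2.json
stack$ exit

$ pm2 start ./stack.pm2.json
$ pm2 stop service2        # flip individual services off...
$ pm2 restart service1     # ...and on as you need them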

As shown above, though, it provides a huge amount of usability for a developer as your distributed application grows into more and more separate pieces. With this kind of setup the developer can load the config once and then flip instances off and on as they are needed for development.

Conclusion 

Again, PM2 allows us to manage modern Node.js projects with ease and gives us quick ways to build out tools. Though this is not a perfect solution, it has worked great for my team thus far in our state of development. This can also be a great alternative to having to set up an orchestration system on every developer machine you have, which is an absolute nightmare in my experience. There are alternatives to this as always, Docker images being the most common to come up.

While I love Docker, and am actually building these services into Docker images for an orchestration system in my current project, I feel that after you hit a threshold of instances that need to exist on the developer's box it becomes unmanageable in terms of resource requirements.

Being able to create this type of manageable ecosystem on your developers' machines ultimately leads to more flexibility everywhere your application goes. There are a lot of applications here for QA and testing as well; being able to single out specific instances for debugging within the stack, or even run multiple versions of the same service for debugging, becomes an easy and relatively painless task.

Links

  • Vorpal.js - A Node library for creating command line tools.
  • Babel.js - A lovely JS compiler that gives us access to next gen JS
  • PM2 - The lovely Node.js process management tool