A Simple Node.js PM2 Setup Guide

Introduction

PM2 is a great tool that helps you manage not just your processes while they are running but also environments, vars, and configs. I still see many questions about the basic operation of the tool around the internet, on Stack Overflow and other boards. So this article will cover how I set up most of my projects using PM2; short, sweet, and to the point.

Setting up a quick app

First let's create a quick API that returns hello world as a response. Move into an empty directory, create a file called API.js, and put the following into it.
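A minimal sketch, assuming Express 4; the route and port line up with what we check below.

const express = require('express');
const app = express();

// Single route that returns the classic greeting.
app.get('/', (req, res) => {
  res.send('Hello World!');
});

// Listen on the port we will visit in the browser.
app.listen(3000, () => {
  console.log('API listening on http://localhost:3000/');
});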

Then run npm init -y. This will cause npm to create a bare-bones package.json for you so other packages can be installed. You can also leave the -y out if you want to manually enter all the information for your project.
After you have your project set up with npm, install the Express library: npm i express --save

With Express installed you should now be able to run your API with the following command: node API.js
You can confirm that it is running by going to http://localhost:3000/ in your browser. You should see "Hello World!" printed within the page.
You now have a basic app running! Within the terminal that you started the app in, you can press ctrl + c to stop the API from running ( ctrl + z only suspends the process ), as we will be setting up PM2 to be our process runner.

Starting up PM2

Within the project directory, install PM2 both globally and into your project dependencies: npm i pm2 -g; npm i pm2 --save

You have now installed all you need to set up production-level process running and environment control ( with enough configs and a pipeline of course ). The latest versions of PM2 ship with the ecosystem command, which will help us generate an application definition for our small API. However, to see some immediate action you can just run pm2 start API.js

Confirm that your API started by visiting localhost:3000/ again and checking that the text "Hello World!" is rendered. You will see that your process starts up and you get a nice-looking table printout with some facts about your process. You can shut down your process by running pm2 delete API .

Note: Delete will remove the application entirely from PM2's registry; you can also use pm2 stop API, which stops the application but does not remove it, so you can use pm2 start API to start it up again. In our case, however, we want PM2 to forget about our process, since we are going to create an ecosystem config for it and start it that way.

Setting up ecosystem.js

Now that we have shown that PM2 will start our API, and that our API still returns what we expect, we can set up a more maintainable and useful way to start our app. Within the project directory run pm2 ecosystem

This will generate an application definition, along with a few other things, inside an ecosystem.js file that it creates. Let's go through the sections that get generated and take a quick overview of what we can do with them.

Apps Section

This section is arguably the most important for getting your application up and running. This is where you define how your application runs, environment-specific vars, logging behaviors, and more. In addition you can define multiple apps in the same ecosystem file; this can be used to start up co-processors, log streamers, queue managers, and more.

I will go over a few of the fields that can be used with an app definition that I think are some of the most common or useful.

Instances - Int

The number of app instances to be launched. This can be set to -1 to start as many processes as the system has CPU cores, minus one. I use this in Docker setups a lot because I can let the application consume the entire container, since that is what the container is dedicated to.

node_args - String Array

This is an array of arguments that will be passed to the actual node execution, which allows you to pass things like the --harmony flag for older Node versions, or things such as debug flags.

error_file, out_file, pid_file - String ( directory/file path )

These values point to the directory and file name that you want PM2 to write your application's generated logs ( and pid file ) to. This is valuable again in the containerization scenario, when you want your logs to go to specific places to be picked up by log aggregation systems.

max_restarts - Int

This is the number of consecutive unstable restarts ( less than 1 second of uptime, or a custom time via min_uptime ) before your app is considered errored and stops being restarted. This is a great option for letting your application signal that it can't connect to mission-critical services or APIs at startup time. I currently use this as a flag that something is very wrong when a new deploy goes out.

max_memory_restart - String

This option allows a max memory limit to be set that, when hit, causes PM2 to restart your application automatically. Useful to ensure a rogue process doesn't bring down boxes if it suddenly gets a ton of load or simply goes off the rails. The value of this field is a String using the usual size suffixes ( M for megabytes, G for gigabytes ), so if you wanted the max memory to be 400 megabytes the value here would be "400M".

min_uptime - String

This option lets PM2 know that your application should be up and stable for x amount of time before being considered started instead of in the starting state. This can be important if your project has to connect to a service and timeouts are an issue, or there is a lengthy read operation on disk, etc.

env - Object Key/Val

The env object is what allows you to define environment-level vars based on the environment flag you give PM2 at startup time. The plain env field is an object containing values that will be available all of the time, regardless of the environment that is passed to PM2.

Any field that follows the pattern env_${myEnvironmentName} is considered a valid env setting, and the values defined within will only be available when the --env flag is used with a matching key. For example, an env_production object allows you to use the following command: pm2 start ecosystem.js --env production

Note: I think it is worth mentioning how you get at these environment variables. All of these keys will be available via the global process variable. So to read NODE_ENV and see what mode the application was started in, you would use process.env.NODE_ENV.

Here is an example of all the discussed settings for an application definition. There are plenty more options that you can find in the PM2 docs page.
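Something along these lines; the names, paths, and numbers here are illustrative rather than prescriptive:

module.exports = {
  apps: [{
    name: 'API',
    script: 'API.js',
    instances: -1,              // one process per CPU core, minus one
    node_args: ['--harmony'],   // passed straight to the node binary
    error_file: './logs/api.err.log',
    out_file: './logs/api.out.log',
    pid_file: './pids/api.pid',
    max_restarts: 10,           // consecutive unstable restarts before PM2 gives up
    max_memory_restart: '400M', // restart if memory climbs past 400 megabytes
    min_uptime: '5s',           // must stay up 5 seconds to count as started
    env: {
      NODE_ENV: 'development'   // available regardless of --env
    },
    env_production: {
      NODE_ENV: 'production'    // only applied with --env production
    }
  }]
};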

Deployments Section

As a disclaimer, I do not currently use the PM2 deployment tool in any projects aside from deploying to some Raspberry Pis I have hooked up on my local network. This is because these days all of my CI/CD pipelines use Docker or run through Heroku. That being said, I will share what I know about getting deployments working via PM2.

The PM2 deployment tool set fits the use case where you have static, non-containerized servers and you don't want to bring down your systems just to push a small update to your application. This is the case with my home setup, where I have Node instances running on Raspberry Pis.

To set up deployments via PM2, first define your application section with the options you want from the previous section. Then, in your ecosystem.js, go to the deploy section of the config. Here you will find the two generated deployment options, production and dev; these are environment configs just like the application definition has, and they define which env you are deploying to. There are a few key definitions that need to be fleshed out.

user - String

This is the user that the target machine will use to run any commands that are pushed to it via PM2. It must have the appropriate permissions to execute the commands ( git pull, npm install, etc ) on the target machine. In addition, this is the user PM2 will attempt to authenticate as via SSH, using a key on your machine or a key you provide within the deployment configuration ( more on this a little later ).

host - String/String Array

The host field can hold a single host or an array of hosts. These hosts can be IPs or hostnames that will get resolved via DNS. The machine doing the deploy must have an SSH key for these machines so that authentication can occur OR a .pem file must be given as part of the deployment config.

key - String

The file location of a .pem file that contains the appropriate key to authenticate against the hosts, using the user field as the username.

ref - String

Ref is the git origin branch that you want deployed. Most of the time this will be a "production" branch or something similar that you merge into when a version has been tagged.

repo - String

The git repository URI that the ref branch lives in, and that the hosts have access to pull from.

path - String

The path that the git branch will be downloaded into on the host.

pre-setup - String

This field is a command ( or series of commands ) to be run BEFORE the git checkout for the branch is done on the host.

post-setup - String

This field is a command ( or series of commands ) to be run AFTER the git checkout for the branch is done on the host. This is generally where you would put your npm install, gulp builds, etc.

post-deploy - String

This field is a command ( or series of commands ) to be run AFTER the pre- and post-setup events. This is where you put your application restart/start command(s).

pre-deploy-local - String

This field is a command ( or series of commands ) that runs on the deployment machine, not the hosts being deployed to, BEFORE the deployments actually fire off. This is useful for pushing deployment notes, alerts, etc. into things like Slack or email.

An example of all these fields for a production env:
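The user, hosts, key, repo, and paths below are placeholders for your own values:

module.exports = {
  // ... apps section from above ...
  deploy: {
    production: {
      user: 'deploy',
      host: ['192.168.1.10', '192.168.1.11'],
      key: '/home/me/.ssh/deploy-key.pem',
      ref: 'origin/production',
      repo: 'git@github.com:myuser/my-api.git',
      path: '/home/deploy/apps/my-api',
      'pre-setup': 'apt-get install git -y',
      'post-setup': 'npm install',
      'post-deploy': 'npm install && pm2 startOrRestart ecosystem.js --env production',
      'pre-deploy-local': 'echo "Deploying to production..."'
    }
  }
};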

Once you have your application provisioned for the first time on the hosts listed in the production deploy config ( pm2 deploy ecosystem.js production setup will do the initial clone ), you can run pm2 deploy ecosystem.js production from a build server ( or your local box ).

If everything runs successfully you should see a message similar to "Deploy Successful", and you are all done!

Daemonizing your application with PM2 ( 2.2 and above )

PM2 allows you to configure a startup script that will ensure your application comes back up if for some reason a restart or shutdown has occurred. This is useful, again, if you are running on bare metal that is not containerized, if you have a very unstable server setup, or if you have to do rolling restarts of servers for deployments.

PM2 comes with the pm2 startup command, which will output a command for root to run that adds the appropriate config so your application starts on machine startup. You can also pass pm2 an explicit type of startup script to generate if you know your environment's OS. Here are the supported options: ubuntu, ubuntu14, ubuntu12, centos, centos6, arch, oracle, amazon, macos, darwin, freebsd, systemd, systemv, upstart, launchd, rcd, openrc.

Here is an example output from the command :
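The exact binary paths, init system, and user will differ per box, but it looks something like this:

$ pm2 startup
[PM2] Init System found: systemd
[PM2] You have to run this command as root. Execute the following command:
sudo env PATH=$PATH:/usr/bin /usr/lib/node_modules/pm2/bin/pm2 startup systemd -u myuser --hp /home/myuser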

On a build server you can write some simple awk, grep, or regex to extract the command and run it on the machine that is being provisioned. 

You can read more about process management via things like init.d and more here.

Wrap up

PM2 is a powerful tool for getting things up and running fast, but it also has the staying power for production-level applications. I have many applications, both at my day job and personal ones, that run using PM2 and Keymetrics. Some of the deployment management, I feel, is antiquated by the rising use of containers and the associated services such as AWS, Google Cloud, Heroku, etc., but it still has its place in situations where you don't have the flexibility of a container-driven development environment.

Links :

http://pm2.keymetrics.io/docs/usage/application-declaration/

http://pm2.keymetrics.io/docs/usage/deployment/

http://pm2.keymetrics.io/docs/usage/startup/

Using Chrome Dev Tools To Debug Your Node.js Projects

Introduction 

To this day I get asked a lot about how I find issues inside my code base, sometimes even where I put my console.log()s. My answer is that I use a debugger; however, almost every time this surprises people in the Node.js/JS community. I thought we had gotten past the strange period of JavaScript as a language where console.log()ing random points in your code base was the way to debug things.

Apparently I was wrong; at least based on how often I get asked this type of thing.

So in the spirit of hoping to propagate something I strongly feel should be a standard, and something every JS developer ( or any type of developer really ) should know how to set up and use, this is a small article on how to use Chrome's dev tools to debug your Node.js projects.

Installation and Setup For Node 6.3 & Above

In Node 6.3 we got a native debugger module that ships with Node.js and is actually developed by the Node team. To use it, there are now command flag options that we can pass when starting our node projects. It does some simple quality-of-life things as well: if the same file and instance are brought down and back up, the debugger will reattach itself, which is pretty helpful.

Debugging Just Using Node

When you start your application, you now just add the --inspect flag to the node command ( before the script name ) and it should do everything needed on the process level.

node --inspect myProject.js

Next open up Chrome and go to about:inspect in the URL bar ( it redirects to chrome://inspect ). This will bring you to a panel listing the remote debug targets on your machine.

You can then click the "inspect" link under the name and path of your running application, and it will open up a standard Chrome debugger attached to your process.

Debugging using PM2

PM2 is a great process runner that I personally use for all my Node-related projects. However, due to how PM2 works and handles configurations for projects, it requires a little extra work to get running with --inspect.
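The quickest route is to hand the flag through to node with PM2's --node-args option ( myProject.js here is just a placeholder script name ):

pm2 start myProject.js --node-args="--inspect"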


Managing developer debug configs and app definitions for pm2

A lot of the time you don't want to have to create two different files just for debug mode. So what the teams I have been on normally do is create two application definitions in the same ecosystem.json file, and then create different startup commands in our package.json for the devs and the startup scripts. The following sketch shows an example.
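A condensed sketch of that setup; the app names, script name, and --only usage are illustrative:

// ecosystem.json, apps section only
{
  "apps": [
    { "name": "api", "script": "myProject.js" },
    { "name": "api-debug", "script": "myProject.js", "node_args": ["--inspect"] }
  ]
}

// package.json scripts
{
  "scripts": {
    "start": "pm2 start ecosystem.json --only api",
    "debug": "pm2 start ecosystem.json --only api-debug"
  }
}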

Installation and Setup For Node 6.2 & Below

Like most things these days, there is already a package that does most of the heavy lifting for you: node-inspector. Install it globally via the command line:

$ npm i node-inspector -g

Now, to ensure that all went well during installation, run the inspector command:

$ node-inspector

It should print out a version and a local URL that you can visit.

Hooking up the debugger to your process

Now that the inspector is up and running on your machine, you need to hook your process up to it so that it can evaluate the code base as it runs. To accomplish this you will need the process ID of your project after you have started it.

Start your project using something like : node myProject.js

Or if you use PM2 : pm2 start myProject.js

Getting your PID

I normally run my projects through PM2, which gives you the PID in the process table that it prints out; however, if you are not doing that, you can find your PID by using the ps command as follows:

$ ps -ax | grep node

That should give you a list of all the node instances running on your box at the time, from which you can pick out the source file that was started ( myProject.js ). Once you have the PID you can send the process a signal that tells it to enable debugging.

Sending the Debug signal to your process

The process of sending the debug signal is very straightforward. I will use ${PID} where the process ID that you found earlier should go. Now let's send that signal:

$ kill -s USR1 ${PID}

Now, this won't actually kill your process; we are simply sending a system-level signal to it, which is what the -s is for in the command. You are now ready to start debugging your running Node.js application.

Getting to your debugger

Getting to the node-inspector is as easy as visiting the URL that was printed out for you near the beginning of the article, with one change. By default V8 starts the debugger on port 5858; if for some reason yours is different, or you have multiple debugging sessions going, you can tell node-inspector which port you want to hook the debugger up to by providing a port as a GET param. For example:

http://127.0.0.1:8080/?port=5858

You can change that port param to whatever your process printed when you sent the system signal.

Wrap up

That's it! Pretty simple, yeah? I hope this is something people will find useful so we can get away from the console.log() times. Debugging will save you countless hours, especially when trying to determine which variable holds which value, and when. You just set a break point, watch the execution flow to it, and then evaluate the entire state of the application at that moment.

Links

PM2 - https://github.com/Unitech/pm2

Node-Inspector - https://www.npmjs.com/package/node-inspector

Debugger docs - https://nodejs.org/en/docs/inspector/


 

A talk on Generators & Bluebird.js coroutine()

Intro

Generators can be scary, confusing, and can require a lot of setup to really get the most out of them. Most people are just looking to yield a statement so that certain async actions occur in a specific order; from what I have seen, this is most common when dealing with MongoDB reads and writes using promises.

In reality, the patterns I will go over can be applied to any promise or non-promise based setup where you have multiple async operations that you may or may not have to wait on.

Bluebird.js

If you are not familiar with what Bluebird.js is, here is the skinny: Bluebird.js is a library that fills a single purpose: better, faster promises and support structures. The Bluebird team has done a great job at making promises fast and accessible in a variety of environments, which makes it a great tool for any Node or front-end project looking to ensure that the Promise spec is met and usable.

You can read their own "Why Bluebird?" section here : http://bluebirdjs.com/docs/why-bluebird.html

You can also see the benchmarks here : http://bluebirdjs.com/docs/benchmarks.html

Being able to rely on promises and generators being available is key to a lot of the work I do these days, as they help control the flow of things like multiple atomic operations occurring inside of a single controller action, which makes the action non-atomic as a whole. This kind of thing can become a nightmare when dealing with in-order writes or reads from, say, MongoDB.

Generators

If you haven't heard of generators, or your knowledge of them is just lacking a bit, here is a quick description: generators are functions whose execution can be exited, with state maintained, and then reentered at a later time.

Many people ask why I don't just use async/await; without creating a debate or an entire article, it is mostly because at async/await's core are generators. I prefer to use the common denominator.

In addition to that, Bluebird is simply faster than the native promises and other libs with more limited functionality sets. Refer to the benchmarks link above for more info on how those benchmarks are created.

Creating & Using Generators

Finally, let's look at some code! The generator syntax is very simple and should look pretty familiar, aside from a single difference.
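A bare-bones sketch; nothing but the keyword, the star, and a few yields:

function* counter() {
  yield 1;
  yield 2;
  yield 3;
}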

You will notice the * at the end of the function keyword; this is what defines the function as a generator, which allows us to exit and reenter the function using the .next() method. In reality, all a generator is is a constructed Iterator type, but with some additional functionality beyond a primitive type, which allows for better flow control.

Here is an example of using a generator as an Iterator type:
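Using the counter generator from above, each .next() call steps to the next yield:

const it = counter(); // calling the generator returns an iterator; nothing runs yet

console.log(it.next()); // { value: 1, done: false }
console.log(it.next()); // { value: 2, done: false }
console.log(it.next()); // { value: 3, done: false }
console.log(it.next()); // { value: undefined, done: true }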

About The Yield Keyword

With the example above we aren't really doing anything that a primitive Iterator type, such as a Number, can't do. However, it is important to observe that the yield keyword gives you flow control that you don't have with the primitive Iterators. The yield keyword acts like return, but differs in an important way.

Yield maintains the generator's memory state, which is what allows for the iterative functionality that you can control. The state of the generator is always at the line of the yield keyword, just past the value after it.

Let us take a look at what is happening behind the scenes here, which will give us a better understanding of how to interface with generators as an Iterator type.

Since a generator is technically an Iterator type, it exposes all the normal Iterator methods, including .next() as shown above. This is how you reenter a generator that has been stepped out of using the yield keyword. When you reenter the function, execution continues at the point just after the yield statement.

About next()

We have seen the next() method used to continue to the next yield call in our generator. next() can do a little more than just tell our generator to continue; it allows us to pass values into the generator, which it can use or store, for example to build a cumulative value to return.

Here is an example of a cumulative return of some numbers that we pass into the generator mid-execution, between yields.
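A sketch of that idea; we prime the generator with one next() call, then feed numbers in between yields:

function* accumulate() {
  let total = 0;
  while (true) {
    // Whatever gets passed to .next(value) comes back as the result of yield.
    const n = yield total;
    if (n === undefined) return total; // nothing passed in means we are done
    total += n;
  }
}

const acc = accumulate();
acc.next();   // primes the generator; runs up to the first yield
acc.next(5);  // { value: 5, done: false }
acc.next(10); // { value: 15, done: false }
acc.next();   // { value: 15, done: true }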


This method of execution is much more controllable when dealing with potentially unknown amounts of processing or long-running processes such as job queues, stitching DB query chains, or loop processing. It is also worth noting how the data is returned from the .next() method: it is an object literal with two fields, value and done. value is whatever value you yielded in the generator; done indicates whether the generator has run to completion, i.e. whether execution finished without hitting another yield statement.

About throw()  

Sometimes you need to be able to cancel the execution of the generator based on the yielded value returned from an iteration. The throw() method on the generator interface lets you inject an error at the paused yield; if you define a try/catch in your generator, the error gets caught within the context of the generator, which allows you to re-throw it or handle it there, otherwise it bubbles up to the parent execution scope that invoked the generator.

Let's take a look at what this might look like:
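A sketch; the error message and logging are just for illustration:

function* guarded() {
  try {
    while (true) {
      yield 'working';
    }
  } catch (err) {
    // The error injected via .throw() lands here, inside the generator.
    console.log('caught inside generator:', err.message);
  }
}

const g = guarded();
console.log(g.next());                    // { value: 'working', done: false }
console.log(g.throw(new Error('abort'))); // logs the catch, then { value: undefined, done: true }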

As shown above, you can actually pass in the error you want thrown inside of the generator. This pattern can help with scope hell when dealing with how your generators exit before they are technically complete.

About return()

Every once in a while you will need to get the current value of a generator when it is in a completed state, or, more likely, you will need to end a generator's execution sequence early without throwing an error. The return() method does exactly as described: it simply allows us to end a generator's execution without throwing an error.

If you give the return() method a value as a parameter, the returned object, which has the same shape as what you get from the next() method, will have its value property set to the value you passed in to return(). This can be useful when creating reusable components, or when using a factory pattern that can return a generator.

Take a look at the following example :
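A sketch using a pretend job queue:

function* jobQueue() {
  yield 'job 1';
  yield 'job 2';
  yield 'job 3';
}

const jobs = jobQueue();
console.log(jobs.next());            // { value: 'job 1', done: false }
console.log(jobs.return('stopped')); // { value: 'stopped', done: true }, no error thrown
console.log(jobs.next());            // { value: undefined, done: true }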

Bluebird Coroutine

There is sometimes confusion around what a coroutine is and what it means in different contexts. Coroutine was one of the names that generators went by in the ECMA spec for a little while; so if you google "javascript coroutines" you will find a lot of examples that look a lot like the ones in this article, because they really are just generators.

Coroutines are Promises

Bluebird.coroutine() does a few things differently. First off, invoking Bluebird.coroutine() returns a function that produces a Promise, and that Promise is resolved when the generator passed in reaches the state done : true. This means you can suspend the execution of entire Promise-generating functions, which pairs nicely with the ability to wrap nearly anything using Bluebird.promisify(), ensuring that any callback-based method returns a promise instead.

This enables you to use promise patterns when waiting for a generator to complete. Observe the following example :
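A minimal sketch, requiring Bluebird in as Promise and assuming Bluebird 3's Promise.delay(ms, value); the 1 second delay is what the next paragraph refers to:

const Promise = require('bluebird');

const getAnswer = Promise.coroutine(function* () {
  // Yield a promise; the coroutine resumes with its resolved value.
  const value = yield Promise.delay(1000, 42); // resolves to 42 after 1 second
  return Promise.resolve(value);
});

// Invoking the wrapped generator returns a promise that resolves
// once the generator reaches done : true.
getAnswer().then(result => console.log('result:', result)); // prints after ~1 second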

It is important to note that when you wrap the generator in Bluebird.coroutine(), you need to return a promise both when you yield and when you return. This is because the coroutine is iterating your generator for you, looking both for the done state and for the yielded promise to resolve. It is also worth mentioning that the results will only print after 1 second has passed; that is how you know your generator's yielded statement is actually being hit. This is due to the generator yielding a Promise that delays setting its completed state for 1 second ( 1000 ms ).

Note that you can also change the order in which something runs and gets resolved just by changing what gets yielded, or by moving the statement altogether.

Coroutines & Bluebird.all()

Much of the time you have a single operation that multiple other operations depend on for data before they can complete their processing. Coroutines help a lot with this when combined with Bluebird.all(). The .all() method allows you to await the results of many promises at once and get the results all at the same time; this, paired with the fact that your generators can now return a promise instance, makes controlling multiple async operations a snap.

I personally use the following pattern all over the place for things like dealing with multiple MongoDB calls, file reads, and subsequent API calls for data stitching. Here is an example of the generator multi-async pattern:
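A sketch of the pattern; the fetch* functions are stand-ins for real DB or API calls, and again I am assuming Bluebird 3's Promise.delay(ms, value):

const Promise = require('bluebird');

// Stand-ins for real DB/API calls; each returns a promise.
const fetchUser = id => Promise.resolve({ id, name: 'Jane' });
const fetchOrders = id => Promise.delay(50, [{ id: 1 }, { id: 2 }]);
const fetchProfile = id => Promise.delay(50, { theme: 'dark' });

const loadDashboard = Promise.coroutine(function* (userId) {
  // One operation that everything else depends on...
  const user = yield fetchUser(userId);

  // ...then fan out and await the dependent operations all at once.
  const [orders, profile] = yield Promise.all([
    fetchOrders(user.id),
    fetchProfile(user.id)
  ]);

  return Promise.resolve({ user, orders, profile });
});

loadDashboard(7).then(dashboard => console.log(dashboard));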

Taking it further

There is a lot more you can do with this pattern beyond DB calls and simple processing methods and modules. You can take your implementation a bit further by pulling in clustering with Node. Clustering is basically how you get something much closer to true multi-threading in Node.

To do something like this, you can wrap the signals that a child process sends ( along with its data ) in a generator function, which would allow you to yield the execution of, say, a function processing a network request while it waits for the worker to complete. The master thread can then field other requests coming into the event loop while checking the status of the yielded call on every event cycle.

This is a subject that deserves another article, and one that I intend to write. But it is worth thinking about, and at least knowing about, when using things like generators, promises, and Bluebird.

Conclusion

Below is what that logic flow looks like without the coroutine and yield. If nothing else, it is easy to see that the coroutine version creates much more readable code, which will be much easier to deal with while you're working on it, and in the future when someone else has to deal with it. I use the pattern(s) shown here for all kinds of implementations, from APIs to heavy processing using clustering.
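For contrast, here is a sketch of the same dashboard flow from the previous section, written with bare .then() nesting ( the fetch* stand-ins are the same as above ):

function loadDashboard(userId) {
  return fetchUser(userId).then(user => {
    return Promise.all([
      fetchOrders(user.id),
      fetchProfile(user.id)
    ]).then(results => {
      return { user, orders: results[0], profile: results[1] };
    });
  });
}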

Links

  1. Bluebird.js
  2. Generators
  3. Spawning
  4. Mongoose

Using mongoose to validate and manage form data in Aurelia.js

Intro Rant 

I've been using Aurelia since it was first available to the public. I have made a lot of bad decisions with it, and a lot of good ones, over the course of the last year or so. One project that I embarked on had a large number of forms that needed to be filled out, validated, and sent to a server that expected the data in a particular format, like any other API call.

I was using mongoose on my back end, as it was Node.js-powered with Mongo as the storage solution. I really wanted to find a way to reduce the overhead of form validation and value selection. So I went to take a look at mongoose for the browser; nothing extreme, just a way to bind the validation I already had in my back-end schemas to the forms on the front end.

After finding that to be a relatively easy task, I then had to find a way to get my schemas to load both in the browser and in my Node modules. I first tried some old, crusty methods of wrapping the returns in various statements, depending on whether env variables could be found, to determine if I was in Node land or browser land, but ultimately it became way too bloated to be included in every schema that I wrote.

So I moved on to a cleaner, browser-first solution. Since I was using Aurelia and Babel to keep things clean and handle live transpiling, I could write straight ES6-style schemas, which I could then load using Babel on the back end as well. Thus began the journey of my mongoose schema input custom attribute in Aurelia.

Building The Custom Attribute

I normally start things out by just defining the logic-less parts of a new component that I am working on. This was no different: I created an empty custom attribute using Aurelia, bringing in all the relevant modules I'd need to make it function. It looked something like this:
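A sketch of that skeleton; the attribute name mongoose-validate is my choice, not anything Aurelia dictates:

import {inject, customAttribute, bindingMode} from 'aurelia-framework';

// Register the class as a custom attribute with two-way binding by default.
@customAttribute('mongoose-validate', bindingMode.twoWay)
@inject(Element)
export class MongooseValidate {
  constructor(element) {
    // Handle to the input element the attribute is placed on.
    this.element = element;
  }

  // Called by Aurelia when the attribute is bound to the DOM and parent view model.
  bind(bindingContext) { }

  // Called by Aurelia when the attribute is detached and unbound.
  unbind() { }
}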

This obviously is not very exciting in and of itself; however, it shows some key interactions with the Aurelia component life cycle. These life cycle methods on the class get called by Aurelia when our component is constructed/deconstructed and attached/detached from the DOM; this is bind and unbind respectively in our class definition.

Next I wanted to identify some issues that I've come across with other form utilities and see if I could address them in my own implementation. The list ended up being relatively short, but crucial to reusable form validation components:

  1. Two-way binding for values in my mongoose model
  2. Error class appending/removal on state change
  3. Independent logic set that lives outside of any view model
  4. Validation logic throttling
  5. onChange event support for things such as dropdown menus, multi-selects, and radio buttons
  6. Callback support for when something is validated
  7. No jQuery, only core JS

After I finished the list I didn't feel like it was something that couldn't be done, or that was too wild to keep in check without bloating the attribute. With this list in mind I added properties to my class definition to represent my list items. In the end it looked like this:
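Roughly like this; the property names and defaults are illustrative:

import {inject, customAttribute, bindable, bindingMode} from 'aurelia-framework';

@customAttribute('mongoose-validate', bindingMode.twoWay)
@inject(Element)
export class MongooseValidate {
  @bindable model;                        // mongoose document bound from the parent view model
  @bindable path;                         // schema path name to validate against
  @bindable callback;                     // invoked with each validation result
  @bindable errorClass = 'has-error';     // class applied to the input's parent on failure
  @bindable successClass = 'has-success'; // class applied to the input's parent on success
  @bindable throttleMs = 250;             // minimum gap between validation runs

  throttled = false; // internal flag for the throttling logic

  constructor(element) {
    this.element = element;
  }
}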

Most of these properties are pretty self-explanatory with the comments provided in the code sample. Most are just properties that get bound from the parent view model so the attribute has access to the memory reference, allowing two-way binding to be used. It is worth mentioning that we explicitly make the binding type of the component two-way; this is to enforce the behavior between the view model and the attribute references.

Building The Attribute Functionality

So now we have a pretty solid skeleton for the functionality that we can load into a view model and use as an attribute on an input field. Sadly, so far it doesn't really do anything; the functionality still needs some scope as to how we bind it to the view and the view model. Let's go over how we want the HTML and the view model to look when using this custom attribute.

HTML Bindings

The HTML binding part of the custom component is relatively easy; it simply binds values from a view model to the reference-holding properties of the custom attribute instance. Given the properties we made on the attribute, the bindings can look like this:
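A sketch using Aurelia's options-attribute syntax ( property names become dash-cased in the template; user and onValidated come from the example view model below ):

<template>
  <require from="./mongoose-validate"></require>

  <div class="form-group">
    <input type="text"
           mongoose-validate="model.bind: user;
                              path: email;
                              callback.bind: onValidated;
                              error-class: has-error;
                              success-class: has-success">
  </div>
</template>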

ViewModel Bindings

Given the attributes in the HTML template and the properties of the attribute, we can create a sample view model to hook everything up for example's sake. We need a small mongoose model, a callback, and the path name of the field to validate in the mongoose model. In the end we can use something like the following:
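Something like this, assuming the browser build of mongoose, where new mongoose.Document(obj, schema) gives you a validatable document:

import mongoose from 'mongoose';

// Normally this schema would live in its own module, shared with the server.
const userSchema = new mongoose.Schema({
  email: {
    type: String,
    required: true,
    match: [/.+@.+\..+/, 'Please enter a valid email address']
  }
});

export class UserForm {
  constructor() {
    // The document our custom attribute will validate against.
    this.user = new mongoose.Document({}, userSchema);

    // Receives { path } on success or mongoose's error object on failure.
    this.onValidated = result => console.log('validation result:', result);
  }
}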

We really don't need much in the view model here, just a callback and a mongoose model. Normally I would suggest creating your schemas separately and importing them, but for the purposes of this example we will define the schema alongside the view model.

The callback that you pass to the validator attribute gets passed either an object containing the path that was validated, if validation was successful, or the error object that the mongoose validator returns for that path's validation call.

Writing The Attribute Methods

Now that the attribute has some functional scope for how it will be used and integrated, the logic within it can be defined. The full attribute logic is given below, and then each piece will be covered. I feel it is easier to understand and explain code when you get the full block to browse over and absorb at your own pace, moving forward when you are ready.
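Here is a condensed sketch of the full attribute; the event choices, throttle mechanics, and class juggling are simplified for illustration:

import {inject, customAttribute, bindable, bindingMode, LogManager} from 'aurelia-framework';

@customAttribute('mongoose-validate', bindingMode.twoWay)
@inject(Element)
export class MongooseValidate {
  @bindable model;
  @bindable path;
  @bindable callback;
  @bindable errorClass = 'has-error';
  @bindable successClass = 'has-success';
  @bindable throttleMs = 250;

  constructor(element) {
    this.element = element;
    this.logger = LogManager.getLogger('mongoose-validate');
    this.throttled = false;
    // Keep a stable reference so the listener can be removed in unbind().
    this.valueListener = this.valueListener.bind(this);
  }

  bind(bindingContext) {
    // Context of the parent view model that the attribute was required into.
    this.parentContext = bindingContext;
    this.element.addEventListener('input', this.valueListener);
    this.element.addEventListener('change', this.valueListener);
  }

  unbind() {
    this.element.removeEventListener('input', this.valueListener);
    this.element.removeEventListener('change', this.valueListener);
  }

  valueListener(event) {
    // Simple timeout-based throttle: defer the run if one just happened.
    if (this.throttled) {
      clearTimeout(this.pending);
      this.pending = setTimeout(() => this.valueListener(event), this.throttleMs);
      return;
    }
    this.throttled = true;
    setTimeout(() => { this.throttled = false; }, this.throttleMs);

    // Two-way style update: push the input's value into the mongoose document.
    this.model.set(this.path, event.target.value);
    this.validateField();
  }

  validateField() {
    this.model.validate(validateResult => {
      const parent = this.element.parentElement;
      // validate() reports every path, so pick out only the one we care about.
      const error = validateResult && validateResult.errors
        ? validateResult.errors[this.path]
        : null;

      if (error) {
        parent.classList.add(this.errorClass);
        parent.classList.remove(this.successClass);
        if (this.callback) this.callback(error);
      } else {
        parent.classList.add(this.successClass);
        parent.classList.remove(this.errorClass);
        if (this.callback) this.callback({ path: this.path });
      }
    });
  }
}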

Constructor

Our constructor does not do a lot here; we simply assign the element handle, which gets passed to us by the Aurelia injection/creation life cycle, to a property on the class instance. We also create and grab a logging instance from the Aurelia framework.

Bind

The bind method is invoked by the Aurelia life cycle when the attribute is bound to the DOM and to the view model of the outer template instance that the attribute was required in. The bind method is passed the context of the parent view model, which allows us to invoke methods on the parent's scope ( like we do with the callback that you pass to the validator ).

A word of caution: it's tempting to use this context for direct bindings to the parent scope instead of doing the binding at the template/DOM level ( what I personally call Angular Scope Hell Syndrome ). This really is not the best way to do things; it creates a very tightly bound component that relies on things existing in every view model that acts as the parent context to the custom attribute, which makes it very brittle and not reusable.

Unbind

Unbind is invoked in the life cycle of the attribute when the attribute is detached from the DOM and the current route scope. In this case it is used to remove the listeners that were attached in the bind method.

validateField

This is where most of the magic happens for the custom attribute. This is where the actual validation using the mongoose model happens, using the path name string and the model that we bound to the instance in the HTML bindings. The method does a few things:

  1. Calls the mongoose document validation method.
  2. Because the mongoose document validates all fields whenever you call the .validate method, we have to look for our current path's error specifically, using the path string.
    1. If we have a validation object and we find an error matching the path string that was bound to the instance, then the field has failed validation.
    2. If we do not have a validateResult object at all, or no error was found for this path, then it has passed validation.
  3. In either case we add or remove the error/success classes that we also bound in the HTML bindings.
    1. These error and success classes have defaults, as seen in the class property definitions, but they can be changed to suit your needs.

Because I didn't want to use anything but pure JS for this, the validateField method is a bit chunky, and I am sure it could be reduced in size with some modifications. But this blog isn't about perfect code; it's about learning.

validateField applies the error or success classes to the parent of the input field. This allows you to wrap your input fields and have the wrapper show errors based on the mongoose validation result.

valueListener

The valueListener method is what actually gets attached to the event hooks on the element that the attribute is instantiated on. valueListener is where throttling is handled, via some simple timeout mechanics that aren't very mysterious. Basically, the throttle flag is set when we start validating the field, and when another request for validation comes in we set a small timeout so the next validation occurs in sequence.

While not perfect, it does work, though I have thought about using the binding behaviors that Aurelia comes with here; I will add my findings when I get to that.

The Results!

It's been a quick and hopefully painless ride to our custom validation using mongoose schemas on the front end. Here are some examples of forms that I have put together using this method. The icons on the right of the input fields change depending on the success and failure classes that get added to or removed from the input's parent. I set it up this way so I could add other elements inside the same element as the input, which could then pick up on the success or failure state of the input field; but that is another article.

( Screenshots: the form in its empty, invalid, and valid states. )


A small talk on software critical mass.

An awkward conversation for us all

This won't be a technical example and won't have any example code with it; it is simply a talk. A talk that I don't feel most people are willing to have in today's software development environment; in fact, I find that most people any higher than a senior engineer flat-out refuse to even address it most of the time.

Software Critical Mass.

Some of you have most likely heard of this and some have not, so here is the definition of software critical mass as I see it.

Critical mass - A stage in the software life cycle when the source code grows too complicated to effectively manage without a complete rewrite. At the critical mass stage, fixing a bug introduces one or more new bugs that may or may not be directly related to the fix but were still inadvertently affected.

While this concept isn't complicated in my opinion, recognizing when you are approaching it can be. Once you are there it is pretty easy to identify, because your developers will be creating more work for themselves with every fix rather than reducing the workload.

Impact Vs Cost

Before I get into how to identify your approach to critical mass, I'd like to go over some of the costs and issues that come up when discussing it. I talk about critical mass a fair amount where I am currently employed, because the original software was built in a monolithic way and there are a lot of efforts going into moving it away from that. In those efforts, however, there is always the question of "Why do we need to redo what is already working?" As a developer that spends 40+ hours a week in the code base, my mind, and I am sure your mind, races with reasons why, and it all seems obvious. But to those that don't spend time in that code base, such as most managers and C levels, it's not obvious.

Now, I am not necessarily bad-mouthing the higher-ups at various companies, because the lack of understanding goes both ways. As a developer you don't always see the business impact that a stall in new features, updates, etc. has on the customer base and revenue.

This brings us to Impact vs Cost.

Impact is what developers are normally after, and cost is what higher-ups are normally worried about. The stances can be switched as well, however: the developer is concerned with how much time they put into sorting through a messy code base, and the higher-ups are after a meaningful impact.

Determining Impact 

Impact now has two definitions for us :

  1. Director/C level impact - The direct result of a change relative to the user, which may create a change in revenue numbers.
  2. Developer impact - The ease of developing in the code base to make the changes that can ultimately satisfy C-level impact.

This often comes down to time and cost for whatever kind of change is proposed. While there is no perfect way to determine this, it is often beneficial to establish some baseline definitions for the kinds of changes that will be coming.

For developers/leads

As developers we often say that we would like to just rewrite an entire piece of the software, or even the entire thing, without any type of qualifiers on it. This doesn't seem like a very reasonable thing to do from a sky-level view of how things work, even though it may be entirely the right thing to do code-base-wise.

Determining impact for developers should ultimately come down to a few factors in my mind:

  • Ease of modification
    • I generally consider this to be a time-based metric. It will take a few experiments to get a baseline time, but it is worth it.
  • Small spin-up time
    • Again, this can be measured, but generally only when you are bringing new people on or starting up a new project.
  • Readability
  • Developer comfort levels when making changes

While some of these are anecdotal, they can still be quantified reasonably via simple things like surveys, or by simply asking for a score between 1 and 10 every once in a while. I feel these key things are what lead to faster, more polished features that require less constant upkeep.

So when you say that you would like to rewrite something, try to attach value not to what we as developers would like, but to what the end result would be and its value as a C/Director-level impact, because at the end of the day it's not our call which initiatives get focused on. Not what we like to hear, but just the way companies work.

For Directors/C levels

If you are a director/C level reading this, then welcome. If not, feel free to forward this on to yours, whatever your intention. Let's talk about value and reasoning with a developer. The primary goal of a developer, most of the time, is to create a better code base and a better piece of software, because that means something that is easier to turn into a service or product to sell ( if they aren't at least attempting to improve things to some degree, that is a talk for later ).

Determining the impact of something your developers want to do can be difficult, and sometimes we aren't the best at communicating why it matters. But it isn't productive for either side to just ignore them when they come to you, maybe in an angry or ill temperament, and tell you that there is something that seriously needs to be addressed. For one, it doesn't make much sense to ignore the person you hired to maintain the software, who spends their entire work week in it. Two, it is a sign that something is unhealthy in both the code base AND your developer base. Developers don't like working in an awful code base.

So, impact. Evaluating impact should be a dual effort between you and your team(s), but at a core level a good developer impact should give you a good business impact. Now, sometimes things just don't line up, and you can't let the developers go off and do whatever they want. You obviously can't spend all your developers' time making the code base pretty; but again, they shouldn't be ignored.

However, to avoid this critical mass issue a compromise must be made. Business value is directly correlated to team value and health, which is tied to your code base. So when a developer comes to you and wants to rewrite something, don't just turn them down; attempt a compromise of extended time on a Jira ticket to clean things up, which should mean less effort and time is required to modify that same feature in the future, which in turn translates to your own initiatives.

Or, if you want to go a level deeper, ask what new functionality can come of these changes and how they will actually improve the code base and support future features. Through this you can gain even more value from an initiative, because the chances are pretty high that what a developer would like to improve crosses into newly planned features or changes.

Recognizing Critical Mass Approach

Culture

One surefire way to recognize critical mass is that most of the developers constantly complain about how awfully things were written and say it is no use trying to fix any of it. When that starts cropping up on, I would say, a once-every-two-weeks basis, you are on the doorstep of critical mass.

It is worth mentioning that some developers are just always out to say everything is awful but never really participate in fixing anything; it has become their work ritual. Those types, in my view, should be removed from most scenarios, but we all know at least one. While this isn't a precisely measurable thing, it can be pretty easy to take 10 minutes of random people's time every once in a while to find out how they feel about the code base.

Work load and ticket branching

A more quantifiable way to measure the approach of critical mass is through ticket tracking. Now, as a developer I often find ticket tracking one of the most infuriating things to deal with, especially with complex system changes. But it does have a place, and this is one of them. If you can identify that a fix, one that is indeed a fix and makes something work correctly, often breaks something else and creates more work for your team than the work the fix itself took, your code base is approaching that critical mass. It's this type of occurrence that I encourage managers, leads, etc. to really look out for. This type of metric is pretty easy to track with tools such as Jira, because you can see things like sprint overflow ( assuming all your Jira flows work ).

Now, the tickets-spawning-tickets ( Ticket Branching ) thing is common in most companies, but what matters is the percentage of the time a ticket generates more work than it took to close. Ticket branching, as I've named it, should really stay under a 1/5 ratio in my experience. Depending on how big the team and the product are, that can be adjusted, but I have been able to take that ratio everywhere I've gone so far.

There are numerous ways to deal with this particular issue, but the list is simply too long to go over for every type of situation; this is simply a warning to make people aware of what to watch for.

Wrap up

It's been a lengthy and most likely painful read, I know. Company politics, C/Director levels, listening to developers, and attempting to quantify all of this is a hard part of the job, one that I see ignored more times than not. I am sure there are plenty of opinions, some agreeing and some not, and that is fine. This isn't a guide, a to-do list, or a one-stop shop for how to run anything; it is simply my own observations on how software companies are run these days.

When it comes right down to it, the fact is that higher-ups need to come down, and devs need to come up for air, so that something can finally get agreed upon, explained, and defined, and things can move forward. It's hard, and this type of thing isn't something most people would say they have time for, but maybe that is part of the problem.

Critical mass can kill software faster than almost anything I've seen; why don't we all work together to make it less of an issue?

I will leave you with questions to ask yourself, depending on where you are in a company. Give these a good, hard thought and be honest; then maybe change a few things here and there to see if you can make a clearer line of communication, regardless of the level you are at.

  • C/Director level
    • When was the last time you had an honest talk with a developer ( not a manager ) about the state of the code base?
  • Developer
    • When was the last time you gave a thought to the company's stock, online reviews, and the current market?