Moving to the Cloud has become an essential step for any company that anticipates growth and needs service continuity and security, while improving its operational experience and thereby reducing certain costs.
In this article, we cover a flexible approach that makes it possible to move your NodeJS projects and/or services to the Cloud with minimal effort: starting from a single-codebase monolithic App and breaking it into independent, scalable modules/services deployable in the Cloud.
The YouTube channels, in both English (En) and French (Fr), are now accessible; feel free to subscribe.
The Simplicity of NodeJS
Nowadays, NodeJS is among the top backend platforms used for small, medium, and large software development infrastructures, so a proper Cloud migration strategy should be kept in mind regardless of what we are building.
Cloud Migration Strategies
The approaches put in place to safely move to the Cloud should be part of a well-defined, thoughtful plan that fits the needs of the project or company. This plan may differ from one team/company to another, depending on the problems they are solving (and how they prioritize them) and the third parties involved. It's possible to adopt multiple strategies at the same time, or to keep using one while leaving the gate open to jump to another with the least possible effort. Let's focus on Rehosting (aka lift-and-shift) and Replatforming (aka lift-tinker-and-shift); more strategies are described in this article by Stephen Orban, which first appeared on the AWS Cloud Enterprise Strategy Blog.
Initially, we started with the easiest strategy, lift-and-shift: we simply moved the monolithic App to an EC2 instance, with its API served via an Nginx server. Next, the team reorganized, mastered core Cloud concepts, and started breaking the App into independent modules that perform very specific tasks/features, making the main App even more stateless. By doing so, we ended up with a NodeJS microservice template that is convenient for cloud migrations, while keeping the option to perform on-premise deployments whenever we want to revert the migration.
Sample NodeJS Architecture
This approach is not just for illustration purposes: it has been used while accompanying many companies to quickly migrate their production NodeJS services to the cloud, before they developed their own cloud expertise for further improvements.
AWS has been chosen for this example, but the idea behind it is replicable with many other Cloud providers and even on-premise infrastructures, which is why we consider this a multi-purpose migration strategy for NodeJS Apps. Here's what the high-level architecture diagram looks like:
In this architecture, a set of services materializes a microservices environment (not at its full capacity) in which NodeJS Apps are deployed either as Lambda functions or as regular server Apps.
The code has been slightly improved to support Lambda deployment alongside the already existing server mode: the App is started from different entry files, respectively lambda.js and server.js. The full source code is available on GitHub.
Regardless of how each service is deployed, a kind of custom service discovery/configuration is present to learn more about the services at runtime. When deploying, each service records the mode in which it got deployed by setting a value in the cache (ElastiCache or a custom Redis).
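The registration step could look roughly like this. The function names, key naming scheme, and the in-memory stand-in are all hypothetical; in production the `cache` argument would be a Redis/ElastiCache client exposing the same set/get contract.

```javascript
// Hypothetical runtime registry: on boot, each service writes its
// deployment mode to the shared cache so peers can look it up later.
async function registerService(cache, name, info) {
  await cache.set(`service:${name}`, JSON.stringify(info));
}

async function discoverService(cache, name) {
  const raw = await cache.get(`service:${name}`);
  return raw ? JSON.parse(raw) : null;
}

// In-memory stand-in with the same contract, enough for local runs:
const memoryCache = {
  store: new Map(),
  async set(key, value) { this.store.set(key, value); },
  async get(key) { return this.store.get(key); },
};
```

A caller can then decide, per service, whether to hit a Lambda endpoint or a plain server URL based on the recorded mode.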
Very soon, the need to centralize the configurations arose. Knife was the top option, but we finally opted for a simple private (soon to be encrypted) AWS S3 bucket for simplicity, even though it's an intermediate choice.
As illustrated above, communication between services is handled by two main protocols: the popular and straightforward REST API way, and asynchronous notification channels (Redis/SNS pub and sub). Each project is built to listen to specific channels, depending on its relations with the other deployed services/modules or third parties.
When deployed as a Lambda function (replatforming), AWS SNS is used to send asynchronous notifications to the other deployed services, and scheduled events (sometimes AWS Batch with pre-built Docker images) serve as the cron job runner.
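One way to keep the publishing code identical across both deployment modes is a small notifier that picks the matching channel. This is a hypothetical sketch: `createNotifier` and `topicArnByChannel` are illustrative names, and the injected clients are assumed to expose promise-returning `publish` methods.

```javascript
// Hypothetical notifier: routes an event through the channel that matches
// the deployment mode — SNS when running as a Lambda, Redis pub/sub otherwise.
function createNotifier({ mode, snsClient, redisClient, topicArnByChannel }) {
  return async function notify(channel, payload) {
    const message = JSON.stringify(payload);
    if (mode === 'lambda') {
      // snsClient stands in for an AWS SDK SNS client
      return snsClient.publish({
        TopicArn: topicArnByChannel[channel],
        Message: message,
      });
    }
    // redisClient stands in for a Redis client's publish(channel, message)
    return redisClient.publish(channel, message);
  };
}
```

Callers just invoke `notify('some-channel', payload)` and never learn which transport carried the event.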
Redis is used as the services' caching server regardless of the deployment strategy; we started with a single-node ElastiCache cluster until we noticed more was needed.
In this particular setup, we first moved the database off the machine hosting the server code, so the App would be more independent and stateless. We were able to run the databases either on the MongoDB Atlas service (recommended) within a VPC, or by deploying and managing custom ones on AWS EC2 instances. The on-premise approach could also work, since we are striving to build a hybrid architecture that solves our problems, but it comes with additional maintenance/management costs.
Still on the path of making the server App lightweight and stateless, AWS S3 is used for image/video storage, not via a direct integration but through another small service built for the migration. Everything is in place for it to run on-premises or with any cloud provider, since it's flexible enough and exposes an interface to change the storage backend whenever needed.
Breaking the initial code base into reusable modules was one of the first decisions we made to reach the desired result. It led to a separate service that handles email sending, with the provider kept as an implementation detail; AWS SES is used for our migration purposes.
It's important to adopt a decentralized security mechanism for a decentralized architecture, or at least to have proper integration patterns so it works flawlessly among your components. JWT was already present, so we kept it. Once all our components were deployed in the Cloud, we opted for an isolated network (AWS VPC) to deploy them, made the necessary changes in our CI/CD pipelines, and put other security best practices in place gradually.
Once you deploy your Lambdas, some default metrics are saved in CloudWatch to help you understand how your functions perform over time; these include, but are not limited to: invocations, duration, error count, throttles, and concurrent executions.
Another important capability is programmatically publishing custom metrics to AWS CloudWatch at runtime via the SDK, depending on what we want to track, or by using a simple console log that respects the embedded metric format specification.
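The console-log route can be sketched like this: a structured log entry whose `_aws` envelope follows the embedded metric format, so CloudWatch Logs turns it into a metric with no SDK call at runtime. The namespace and metric name below are illustrative, not the project's actual ones.

```javascript
// Emit a custom metric via the CloudWatch embedded metric format (EMF):
// a JSON log line that CloudWatch Logs extracts into a metric.
function logMetric(namespace, name, value, unit = 'Count') {
  const entry = {
    _aws: {
      Timestamp: Date.now(), // milliseconds since epoch, per the EMF spec
      CloudWatchMetrics: [
        {
          Namespace: namespace,
          Dimensions: [[]], // no dimensions for this simple counter
          Metrics: [{ Name: name, Unit: unit }],
        },
      ],
    },
    [name]: value, // the metric value must exist as a root-level member
  };
  console.log(JSON.stringify(entry));
  return entry;
}
```

For Lambdas this is often cheaper than `PutMetricData`, since the metric ride-shares on the log stream the function already writes to.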
- Running it locally or within a custom server is done as usual, via the server entry file (server.js).
- ClaudiaJS, the tool at the center of the deployment procedures, makes it easy to perform new Lambda deployments of the App/service with a single command for each of our environments: npm run deploy:dev or npm run deploy:prod. Before that, the function should be created the first time by using npm run create.
- During each deployment, a copy of the App is saved in the ni-deployments bucket, defined via the “use-s3-bucket” parameter; this can be used for compliance and other verification purposes.
- If deployed as a Lambda function, some warmers should be put in place to keep our functions hot. We do so by running the script npm run job:warm:env (where env is either dev or prod in our context), after updating the warm.json file to have multiple distinct warmers (we felt covered with 5). Another solution is to take advantage of Provisioned Concurrency, which seems more efficient but is expensive.
- Similar to the previous point, enabling the function to subscribe to specific SNS topics is possible by first defining the targeted topics in the policy.json file (they should match the Constant.Events variable used by Redis for the pub/sub pattern), and then running npm run sns:env.
- At the deployment stage, the environment variables are fetched from a specific bucket via the script npm run get-vars:env and saved in env.json, which is then used to deploy the function. The command npm run set-vars:env does the inverse: it takes env.json and pushes it to the configuration bucket, which you can change in the package.json file.
- Express-gateway is used in local development and serves as an API Gateway to the other services, while the production deployment runs on top of AWS API Gateway. This means developers have to start their services using the suggested template we built, and then update their gateway configurations for the API endpoint redirections to work as expected.
- The CI/CD pipeline is implemented using Gitlab CI, and the corresponding configuration file is located in the root folder.
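The get-vars/set-vars pair described above could be wired into package.json roughly like this; the bucket name and paths are hypothetical placeholders, not the project's actual values, and the real scripts may add steps around the copy.

```json
{
  "scripts": {
    "get-vars:dev": "aws s3 cp s3://my-config-bucket/dev/env.json env.json",
    "set-vars:dev": "aws s3 cp env.json s3://my-config-bucket/dev/env.json",
    "get-vars:prod": "aws s3 cp s3://my-config-bucket/prod/env.json env.json",
    "set-vars:prod": "aws s3 cp env.json s3://my-config-bucket/prod/env.json"
  }
}
```

Pointing the scripts at a different bucket is then a one-line change in package.json, which matches the "intermediate choice" nature of the S3-based configuration store.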
In case you would like to run the App's scripts from your computer, you have to configure the AWS CLI; the following article covers exactly that:
When following the server deployment approach while scaling the services horizontally with an ASG, we should make sure there are no duplicate handlers for the Redis pub/sub subscriptions, either by adding custom logic or by covering that need differently.
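One piece of such custom logic is a once-only guard: every instance still receives the message, but only the first to claim a short-lived key actually handles it. This is a hypothetical sketch; `setIfAbsent` models Redis's `SET key value NX EX ttl` primitive, and the in-memory stand-in ignores the TTL for brevity.

```javascript
// Hypothetical once-only guard for horizontally scaled subscribers:
// only the instance that wins the claim processes the message.
async function handledOnce(cache, messageId, ttlSeconds = 60) {
  // Models Redis `SET key value NX EX ttl`: true only for the first claimer.
  return cache.setIfAbsent(`handled:${messageId}`, '1', ttlSeconds);
}

// In-memory stand-in with the same contract (no TTL expiry, for brevity):
const localCache = {
  keys: new Set(),
  async setIfAbsent(key) {
    if (this.keys.has(key)) return false;
    this.keys.add(key);
    return true;
  },
};
```

A subscriber would call `if (await handledOnce(cache, message.id)) { process(message); }`, turning the fan-out delivery into effectively-once handling across the ASG.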
To cover more technical aspects, the CI/CD pipelines have been designed, built, and documented in the following article:
Also, we strongly recommend AWS Skill Builder as a go-to resource to improve your AWS Cloud skills, it’s well-structured and practical.
We have just started our journey to build a network of professionals and grow our free knowledge-sharing community even more. It'll give you a chance to learn interesting things about topics like cloud computing, software development, and software architecture, while keeping the door open to more opportunities.
Does this speak to you? If YES, feel free to Join our Discord Server to stay in touch with the community and be part of independently organized events.
If the Cloud is of interest to you, this video covers the 6 most important concepts you should know about it: