5 common but extremely important DevOps practices

Gone are the times when teams worked in departmental silos on a single project. The IT industry was convinced long ago that internal collaboration is vital for delivering high-quality products with maximum efficiency.

And DevOps is known for bringing teams together on a common platform where they can collaborate right from the early stages of software development. This results in more frequent deployments, fewer errors, and greater clarity and transparency across the organization.

But when it comes to finding the path that suits you, this can be a nightmare. To lessen its intensity, we have gathered the most common practices followed by industry giants, practices that helped them get the best ROI from DevOps.

Version control system

When more than one developer is working on the same project, version control keeps a log of all the changes that other developers can refer to.

Version control makes error identification quicker by giving you a centralized platform to compare different versions and locate the one causing trouble. Introducing new features can go wrong in many ways, and version control helps you retrace your steps.

Source code, database changes, and configuration docs can all be viewed and stored via version control platforms like GitHub and Bitbucket. They allow you to save multiple versions of the source code and switch between them as per your needs.

Test automation

Automated tests can be executed at every stage of the SDLC. You can write cases and scenarios based on the functional specification documentation, run them multiple times a day, and validate their results in the development stage itself. This way you are actively looking for issues from the start instead of fixing them later, in QA or, worse, after deployment.

And not to forget, automation will save your developers from the monotonous task of carrying out tests that are repetitive in nature. Tests that can be automated include:

  • Regression testing
  • Stress and load testing
  • Integration testing
  • Smoke testing
  • Black box testing

To automate the whole testing process there is a range of tools available, like Selenium, JMeter, Appium, TestRail, etc. Automating the testing process increases testing frequency, getting you a step closer to bug-free software. A minimal example of such a check is sketched below.
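
To make the idea concrete, here is a minimal sketch of what an automated regression-style check can look like in Python with pytest. The apply_discount function and its rules are made up purely for illustration; the point is that checks like these can run automatically on every commit.

```python
# Minimal pytest sketch: a regression-style check that can run on every commit.
# The function under test (apply_discount) and its rules are hypothetical.
import pytest


def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent; reject nonsensical percentages."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)


def test_typical_discount():
    assert apply_discount(200.0, 10) == 180.0


def test_zero_discount_keeps_price():
    assert apply_discount(99.99, 0) == 99.99


def test_invalid_percent_is_rejected():
    with pytest.raises(ValueError):
        apply_discount(50.0, 150)
```

Running `pytest` in the project folder executes every such test; a CI server can do the same on each push, so regressions surface long before QA or deployment.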

Configuration and change management

Dealing with new configurations in any sphere of your product can be troublesome at any point in time, especially after deployment. Configuration management lets you find change requests, change logs, and the current status of all configurations in one place. It lets you see the configurations applied to servers, storage, networks, etc., thereby giving you a holistic view of the system.

Change management, on the other hand, deals with the process by which configurations are carried out. It paints a picture highlighting all the areas a new configuration could affect, determining its ripple effect on the existing product. It also surfaces any red flags you will need to take into account.

CI/CD

Continuous Integration looks out for any troubles in the current and modified code that may lead to failures in the future. It does so by leveraging the version control system and automated testing tools, which check for problems on a frequent basis. Jenkins, TeamCity, and Bamboo are some of the popular CI tools.

Continuous delivery is facilitated in DevOps as new features are pushed as and when they are developed and tested instead of being restricted to a specific timeline. Any glitches found can be solved at an early stage, so the feedback loop is cut short. This also reduces the time between user feedback and the subsequent corrective actions.

Automated dashboard

An automated dashboard provides data insights via detailed reports. These reports tell you the success and failure rates of testing, the number of tests run, their duration, the errors found, and so on. This data is a goldmine of insights for developers and testers to find loopholes and avoid repeating errors.

The graphical representation of this information helps in drawing comparisons between all the changes made to the system and pinpointing the most effective ones. A record of all deployments and their effects can be seen in one place, accessible to all the teams involved.

The above-mentioned practices have helped companies like Netflix, Etsy, Facebook, Walmart, and Target increase their overall efficiency and collaboration. They adopted these practices after many attempts, both failed and successful.

DevOps planning and implementation take years to master, but taking inspiration from our surroundings will smooth this journey for you and all your stakeholders. At the end of the day, deploying high-quality software is what matters.

Maintenance chores to keep your WordPress site in good health

Today, websites are more than just a 'set-it-and-forget-it' proposition. They are marketing machines that help you get new customers and leads. You might be planning to build one or already own one.

Regardless of their size and functions, this is true for all sites, WordPress ones included. And with complexity being the most ingrained quirk of websites nowadays, maintaining the website on a regular basis becomes a necessity.

Regular maintenance helps you avoid being reactive, eschewing the risk of ending up with a chaotic, insecure, and slow website that is hard to use for both admins and visitors.

The Blame Game

In a typical manufacturing facility, there are two primary groups: production (or operations) and maintenance. Likewise, a website entails two major phases – the development and maintenance (after going live).

Poorly trained operators lead to reactive maintenance and destroy the maintenance team's ability to keep the factory running efficiently. The opposite is also true.

Tension and blame games become a common scenario due to conflicts.

In the same way, installing the wrong versions of themes and plugins in the development phase can lead to a broken site after going live.

Conversely, if you never analyze load times or push for sub-second loading after the website goes live, your code will probably never send you a Mayday! signal; the problems simply go unnoticed.

Both the phases are so intertwined and symbiotic in nature that they cannot be at opposite ends of the performance spectrum.

There are certain maintenance exercises that need to be carried out in both phases. The list of WordPress maintenance tasks is endless.

But we have sifted out some major practices for maintaining good website health…

During Website Development

Set up your WP website with the latest versions

It’s crucial that you always use the latest version of WordPress, themes and plugins.

Ensure that your plugin and theme developers coordinate their updates with major WP releases, because with each new release they enhance existing features, improve performance, add new features, and fix bugs to stay up to date with new industry standards.

That way you won't miss out on new improvements and features, or risk your website breaking.

Install plugins to plug your website's security holes

The security of your site is only as good as the backend and foundation it's running on. Security plugins can save you from:

  • Hackers stealing data belonging to your users
  • A compromised website serving malicious code to unsuspecting users

Sucuri, Wordfence and iThemes Security are some of the popular security plugins.

Use a Content Delivery Network (CDN)

There is a chance that large images, CSS, and JS files have not been optimized, so they take a long time to travel from the web server to the visitor. Meanwhile, your hosting server is hosting many websites at once, and response times are slow due to resource bottlenecks.

A CDN is a network made up of servers all around the world that can help to speed up loading times for all of your visitors. And you can take a lot of load away from your server, because the heaviest resources are now hosted by the CDN.

Work on your code

There may be times you have to access the source code of your website. There are 3 main areas where you need to maintain a clean coding environment.

Commenting:

It cuts down the time spent on edits and bug fixes, which would otherwise be lost by new developers (or even the same developer) figuring out what a particular code block does.

Linting:

Linting enforces rules on the way we write code, and some linters even correct the code formatting themselves.

Debugging:

Some popular examples of WP debugging plugins include Debug Bar, Kint Debugger, and Query Monitor.

Maintaining a live website

Back it Up!

The rule of thumb dictates that you must create a backup of your website's data. It is often advisable to create more than one backup and store them in different places to cover any contingency.

Make sure to run backup plugins only during low-traffic periods on your website. You also need to tune the frequency of backups and the data that needs to be backed up.

Monitor website server uptime

There can be many occasions when your website is down and you are not even aware of it. This severely affects your business, your website's reputation, and the user experience.

Use tools like Jetpack, Down Notifier, and Pingdom to monitor your website at regular intervals via stats available on the dashboard. They notify you when your website goes down or becomes inaccessible.
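
If you prefer a do-it-yourself illustration of what those monitors do under the hood, here is a minimal Python sketch. It assumes the `requests` library is installed; the URL and the alert hook are placeholders, and a real monitor would run this on a schedule.

```python
# Minimal DIY uptime check (illustrative only; hosted monitors do this on a schedule).
import requests

SITE_URL = "https://example.com"  # placeholder: replace with your site


def site_is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if the site answers without a server error."""
    try:
        response = requests.get(url, timeout=timeout)
        return response.status_code < 500
    except requests.RequestException:
        # DNS failures, timeouts, refused connections all count as "down".
        return False


if __name__ == "__main__":
    if site_is_up(SITE_URL):
        print(f"OK: {SITE_URL} responded normally")
    else:
        # Hook in your own alert here: email, Slack webhook, SMS, etc.
        print(f"ALERT: {SITE_URL} appears to be down")
```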

SEO

The whole point of starting a business is to have clients. Google ranks you when your information is up to date and relevant, and it may even de-index your website entirely if it hasn't been updated recently enough or is suspected of being infected with malware.

You must keep your website regularly updated with current content, news, keywords, permalinks, and rich snippets to rank well in search results.

The big dog in this picture – No, not speed!

Your website is a very important element of your business. It can cost you dearly if it's not in a good state, or it can be a valuable asset if it is up to date and running smoothly. Once you fall behind on your maintenance needs, it can be quite the process to bring the website back up to speed.

Do your business and yourself a favor by staying on top of your website. This will repay you with simple peace of mind. Devote your time to what you're best at: creating content and managing the business.

After all, you don't want to wake up to alarm bells showing that something has gone south with your WordPress website.

Taking the burden off you

It’s true that much of the work of maintaining your site can be automated.

But you still have to set it all up. You have to take the time to check for any issues, and you may have to fix them too. That will require you to learn about website security, error logs, or 404 pages.

That is why it can be helpful to hire WordPress maintenance professionals. A good WordPress maintenance expert not only fixes problems but also finds ways to keep them from happening again.

9 web app development frameworks that continue to dominate 2020

App development frameworks are front and center in the development community. Backend and frontend frameworks continuously take inspiration from each other to ensure smooth development and deployment of web and mobile apps. In such a crowded field, factors like learning curve, community support, programming language, and flexibility become the deciding factors. In this blog you will find the USPs of the most popular backend and frontend frameworks, which can help you narrow down the one that works best for your product.

Backend frameworks

Ruby on Rails

A popular server-side framework, Ruby on Rails (RoR) works on the MVC model to facilitate the interconnection of parts. It uses HTML, CSS, and JavaScript for the user interface, and JSON and XML for data transfer. You can use its Convention over Configuration and DRY features to speed up your overall coding process. RoR also makes testing easy via test automation, accelerating the debugging process.
  • Release year- 2005
  • Programming language- Ruby
  • Use cases- AirBnB, Shopify, Github, Basecamp

Django

Django is a full-stack Python development framework that uses the MVT (Model-View-Template) structure, a modification of the MVC model. The framework comes with user authentication, an ORM, testing, caching, and many other mechanisms, making the development process swift and efficient. The documentation is excellent, and this was in fact one of its differentiating factors in the open source community. To date, more than 12,000 projects are known to have been built using this framework. A minimal sketch of the MVT split follows the details below.
  • Release year- 2005
  • Programming language- Python
  • Use cases- Instagram, NASA, Pinterest, Quora
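
As a small illustration of the Model-View-Template split mentioned above, here is a hedged Django sketch for a hypothetical blog app. It assumes a Django project, app, and template already exist; the two snippets normally live in the app's models.py and views.py.

```python
# models.py - the "Model": a table definition managed by Django's ORM
from django.db import models


class Article(models.Model):
    title = models.CharField(max_length=200)
    body = models.TextField()
    published = models.DateTimeField(auto_now_add=True)


# views.py - the "View": fetches data and hands it to a template
from django.shortcuts import render


def article_list(request):
    articles = Article.objects.order_by("-published")
    # "articles/article_list.html" is the "Template" that renders the queryset.
    return render(request, "articles/article_list.html", {"articles": articles})
```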

Laravel

Laravel comes with 20+ pre-installed object-oriented and modular libraries that help in building responsive web applications. The built-in Artisan command line tool automates major repetitive programming tasks like database structure creation and skeleton code generation. Advanced security mechanisms such as the Bcrypt hashing algorithm and salted, hashed passwords ensure your users' passwords are stored in encrypted form. It also facilitates writing and running multiple unit tests.
  • Release year- 2011
  • Programming language- PHP
  • Use cases- Barchart, Alison, Mailcoach

ASP.NET Core

Developed and designed by Microsoft, ASP.NET Core is known for its ability to build rich, dynamic, cross-platform apps. Applications made using ASP.NET Core do not require IIS hosting; they can be self-hosted. The size of the HTTP request context has come down to about 2 KB in Core from ASP.NET's 30 KB, improving performance. Compilation is easy, too, as it is executed in memory and takes effect as soon as the browser is refreshed.
  • Release year- 2016
  • Programming language- C# (.NET)
  • Use cases- Kentico, elmah.io, SwitchThink

CakePHP

CakePHP requires zero preconfiguration, making setup a hassle-free process for developers. It auto-detects settings; all you have to do is configure your database, and the rest is taken care of by the framework. The framework offers a range of core tests and custom-made tests, fulfilling the requirements of all the developers on board. Built-in tools like CSRF protection, SQL injection prevention, and XSS prevention add a solid security layer to your web app.
  • Release year- 2005
  • Programming language- PHP
  • Use cases- teamspeak, blendtec, BMW

CodeIgniter

To begin with, the entire source code of CodeIgniter is only around 2 MB. This small footprint makes it easy to manage and update the framework. Its built-in error logging helps in building a web app with minimal errors. CodeIgniter also produces SEO-friendly URLs by generating clean URLs. And security features like input data handling, XSS filtering, and password handling give it an edge over other frameworks.
  • Release year- 2006
  • Programming language- PHP
  • Use cases- Buffer, Nissan, Casio Computer

Frontend frameworks

React

React is developed by Facebook and is often referred to as a JavaScript library rather than a full-fledged framework. But that did not hold it back from gaining popularity in the front-end community. Features like component-based architecture, JSX syntax for DOM manipulation, and fast-rendering views make it a popular choice among front-end developers.
  • Release year- 2013
  • Programming language- JavaScript
  • Use cases- AirBnB, Uber, Medium

Angular

Angular was created by Google for building single-page web apps and mobile apps. Within its MVC architecture, two-way data binding automatically synchronizes the model and the view. Tree shaking helps remove unused code, reducing your app's size. Only limited data is sent back and forth after the initial page load, enhancing the overall user experience.
  • Release year- 2010
  • Programming language- JavaScript
  • Use cases- Forbes, Samsung Forward, Delta

Vue JS

Vue is a framework built with inspiration from React and Angular, so its learning curve is shorter. Vue is a progressive framework, i.e. you can adopt it for any part of an already developed product without glitches. Its simple and straightforward architecture makes finding and solving errors much easier.
  • Release year- 2014
  • Programming language- JavaScript
  • Use cases- Louis Vuitton, Upwork, Wavetrotter

For every framework you deem fit for your product, you will come across some critics. And since no framework is perfect, you will need to stick with the framework you select and find ways to make it work for you. The more time you spend with one framework, the quicker you will pick up the others.

Efficiently tackling complexities with Docker and Kubernetes

It all started with microservices taking on the monolith and shaping the final product into Lego-like software.

Services like the shopping cart or the payment option began to be written as separate pieces of software. Technologies like orchestration (K8s) and containerization (Docker) are helping companies hit profitable parameters, from shipping easy-to-deploy applications to handling the huge rush on a big sale day.

K8s and similar technologies like Docker Swarm are technically known as container orchestration platforms, designed to support large, distributed systems, and the sales pitch is:

Run billions of containers a week; Kubernetes can scale without growing your operations team. Well, even if you have 10-100 containers, and granting that we are not all Google-sized, it's still for you.

If you are at the beginning of the journey or just considering adopting K8s and Docker containers for your cloud infrastructure, this post will hopefully help you evaluate some of the major advantages offered by these technologies.

Squeezing every ounce by avoiding vendor lock-in

Migrating to the cloud can bring a lot of benefits to your company, such as increased cost savings, flexibility, and agility. But if something goes wrong with your CSP (Cloud Service Provider) after your migration, moving to another cloud vendor can incur substantial costs. Lack of portability support and the steep learning curve are a couple of the reasons why switching vendors is hard.

Kubernetes and Docker containers make it much easier to run any app on any public cloud service or any combination of public and private clouds.

Container technology helps isolate software from its environment and abstract dependencies away from the cloud provider. It should therefore be easy to transfer your application to a new cloud vendor if necessary, since most CSPs support standard container formats. This eases the transition from one CSP to another and makes the whole process more cost-effective.

Rolling back the deployment cycles

There is increasing pressure to cut delivery time and ship more features at once. Manual testing and complex deployment processes can cause post-release issues (code that worked in testing but failed in production), resulting in delays in getting your code to production.

K8s and Docker containers help you shrink the release cycles through declarative templates and rolling updates.

Rolling updates are the default strategy for updating the running version of your app. You can deploy such updates as many times as you want and your users won't notice the difference. Moreover, with production readiness checks in place, you can ensure zero-downtime deployments that don't interrupt your live traffic.

Adapting the infrastructure to new load conditions

When the workload of a particular business function suddenly increases, the entire monolithic application has to be scaled to balance it. This results in wasteful consumption of computing resources, and in the world of cloud, redundant usage of resources costs money.

This is especially true when you run a 24/7 production service with a load that varies over time, for example very busy during the day in the US and relatively quiet at night.

Docker containers and Kubernetes allow scaling up and down the physical infrastructure in minutes through auto-scaling tools.

Scaling is typically done in two ways with Kubernetes:

Horizontal scaling:

When you add more instances to the environment with the same hardware specs. For example, a web application can run two instances at normal times and four at busy ones (see the sketch after these two definitions).

Vertical scaling:

When you increase the resources of existing instances, for example faster disks, more memory, or more CPU cores.
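
As a minimal sketch of the horizontal-scaling example above: Kubernetes can be told to keep between two and four replicas of a workload and add pods when CPU usage climbs. The sketch assumes kubectl is installed and pointed at your cluster, and that a Deployment named "web" exists; both are assumptions for illustration only.

```python
# Sketch: turn on horizontal autoscaling for a hypothetical "web" Deployment.
# Kubernetes will then keep 2-4 replicas, adding pods when average CPU crosses 80%.
import subprocess

subprocess.run(
    [
        "kubectl", "autoscale", "deployment", "web",
        "--min=2", "--max=4", "--cpu-percent=80",
    ],
    check=True,  # raise if kubectl reports an error
)
```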

Kubernetes and Docker container technologies are now seen as the de facto ecosystem. They can lead to great productivity gains if properly implemented into your engineering workflows and adopted at the right time.

You can make the move especially when…

  • Your team is facing trouble managing your platform because it is spread across different cloud services.
  • Your company has already moved its platform to the cloud and has experience with containerisation, but is now beginning to have difficulties with scale or stability.
  • You have a team that already has significant experience working with containers and cloud services.

But what about the tons of configuration and setup required to maintain and deploy an application, you will ask.

Well, to be honest, the benefits on offer are worth a little bit of complexity.

Why Design QA should be a non-negotiable part of your process

Did you ever happen to spot some inconsistencies in your product’s design that were not there in your prototype? The color being a bit different, some changes in the font style or micro interactions not working the way they are supposed to.

You think that some of these errors could have been avoided if a designer was shown the coded version before the app release. And you are not alone in thinking like this. There is a solution to combat this problem.

The answer is… Design QA.

So what exactly is Design QA, and how did its implementation streamline our product cycle? Here are some tips and tricks we learned along our journey.

Defining Design QA:

Design QA is a cross-verification process done by designers. It entails checking for any inconsistencies in copy, visual details, micro-interactions, and the like in the code developed before the release of your product.

Why is it left out of design sprints?

In many organizations, design sprints are very elaborate, taking a full week or longer, and all of this happens prior to the developer hand-off. Once that is done, designers move on to other projects with no further involvement in the previous product. Bringing designers back for a review is not something many teams consider. Some other common reasons we hear for neglecting design QA are:

  • A misconception in the design world is that a designer’s work is done after forwarding Zeplin links and Invision prototypes. But that is seldom the case.
  • Design QA discussion can cause friction between designers and developers, making it an uncomfortable conversation to have at times.
  • Design QA is seen as an add on step in an already elaborate design sprint. When teams are working under time constraints, collaborating with designers for more reviews is not ranked high on the priority list.

Why did we implement it in our process?

The reasons design QA is part of our process are no different from what the experts vouching for it say. We pitched it to our clients, explaining its importance and the benefits they would get after its successful implementation. Some major factors that drove us to build it into our process were:

It’s a pretty underrated time saving hack

Design QA is good for your long-term goals. Spending a few extra hours of your designers' time is better than spending a few extra days of your developers' time to search for, spot, and iterate on design inconsistencies after the app release.

Better collaboration between your designers and developers

When designers and developers are in the same room (or call) it will help in solving the issues at hand quickly. Your designers will be aware of the technical issues that developers are facing and will account for such issues in the future.

Developers will also get insights into how designers have envisioned the final product and code to bring out the same in the product.

No surprise design inconsistencies

A designer's work does not end with a simple functionality test of buttons and interactions. Instead, they evaluate how design elements behave, from the speed of an interaction to the feedback given when an action is performed, to whether a panel slides out or slides in; you get the picture.

Minimum design debt

Design debt is accrued over time when small changes and improvements are kept for the next sprints every time they are brought up. As this pile keeps on growing, it results in a bad user experience. And there will be a point where no amount of small tweaks will make it better and you end up rewriting your whole product.

Integrating Design QA in our existing process ensured that we never went back to square one because of design debt.

You're convinced that it belongs in your process; now how do you implement it?

We know that changes to your workflow are easier said than done. But how do you transition from 'thinking' to 'doing'? Here are some tips that have helped us include design QA in our workflow.

  • Start out together

The first and most vital thing that we have learned is to involve stakeholders from various product stages in the initial meetings. This helps in seeing the feasibility of the product’s features and setting the right expectations for everyone onboard.

  • Sort issues on the basis of priority

You will face numerous issues when testing the final product from the design point of view. But not all of these issues have to be solved right away; some can wait until the next sprint cycle, and some are icing-on-the-cake features.

When discussing with your developers, define priorities so the critical issues are addressed before those that add only aesthetic value. This way you make your developers' lives a tad bit easier.

  • Have a checklist ready

We all know how good our memory is when we need it the most. Having a reference checklist when design QA is carried out will ensure that we don’t miss out on essential checks. Look for text alignment, colors, content placement and spacing.

You also need to check for the accessibility of the design. Here again, a checklist sorted on priority basis makes it easier for everyone involved.

  • Start the review the moment you get your hands on functional prototypes

We believe that there is no fixed timeline that needs to be followed when it comes to review cycles. In fact, the earlier the review cycle starts, the better. Waiting until the last moment can lead to unexpected delays in the launch of your product.

Getting a designer review on the product’s features will keep the development going in the right direction.

  • Give reasoning behind your feedback

Just saying that "this does not look/feel right" defeats the purpose of reviews. You should back your reviews with proper reasoning and even document them for reference. This will help not only your developers but also your designers evaluate what works best and why.

Design QA has helped us ship products that faithfully reflect the original design intent. This has worked wonders for us, especially when collaborating remotely. To get your stakeholders onboard, you can use the same reasoning that helped us streamline our workflow.

Reinforcing a leading training platform for heavy user load

For over 35 years, NHLS has been a robust source for enterprise technology and software training solutions offering industry-leading learning content. They provide computer courses and certifications to more than 30 million students through in-person and online learning experiences.

Understanding the challenges

NHLS turned to Galaxy to check how much load the platform could withstand under certain user scenarios across different web pages, and wanted the system to be able to handle 10,000 concurrent users. They expressed concerns over the performance of their learning platform during user interaction.

They wanted us to carry out performance testing, pull off higher-volume load tests, and implement the required measures to optimize website load times and ensure zero downtime during the busiest days.

Test planning and implementation

We developed an in-depth understanding of the client's system architecture and the platform. We used JMeter to simulate heavy loads on virtual servers and networks, check their strength, test the system's ability to handle heavy loads, and determine its performance under a variety of load profiles.

We started with 1,000 users. Reports from the regression and stress tests made it pretty clear that the web app was not optimized: even after the FMP (first meaningful paint), load times were far from what we expected. Servers were running out of capacity on even a few requests, which was not ideal for the server architecture NHLS already had.

Their application concurrency target was 10,000 users, yet the application was initially crashing at 100 users. To identify the bottlenecks at which performance started degrading, we defined a few performance test objectives (a small load-test sketch follows this list):

  • Response Time: To check the amount of time between a specific request and a corresponding response. User search should not take more than 2 seconds.
  • Throughput: To check how much bandwidth gets used during performance testing. Application servers should have the capacity to serve the maximum number of requests per second.
  • Resource Utilization: All resources, such as processor and memory utilization, network input/output, etc., should stay below 70% of their maximum capacity.
  • Maximum User Load: The system should be able to handle a 10,000 concurrent user load, without breaking the database, while fulfilling all of the above objectives.
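
To show what such a scenario looks like in code, here is a minimal sketch using Locust, a Python load-testing tool. The actual project used JMeter; this is only an illustration of the same idea, and the paths, host, and the 2-second search budget are placeholders mirroring the objectives above.

```python
# Minimal load-test sketch with Locust (illustrative; the project itself used JMeter).
from locust import HttpUser, task, between


class StudentUser(HttpUser):
    wait_time = between(1, 3)  # think time between actions, in seconds

    @task
    def browse_courses(self):
        # Hypothetical path standing in for a typical page view.
        self.client.get("/courses")

    @task
    def search(self):
        # Enforce the 2-second search objective by failing slow responses.
        with self.client.get("/search?q=python", catch_response=True) as resp:
            if resp.elapsed.total_seconds() > 2:
                resp.failure("search took longer than the 2s objective")
            else:
                resp.success()

# Run with, for example: locust -f loadtest.py --host https://your-platform.example
# and ramp the simulated user count up toward the 10,000-user target.
```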

Bottlenecks we encountered and the Solutions we provided

We used JMeter to start testing with 100 users and then ramped up progressively to heavier loads. We performed real-time analysis and conducted more thorough analysis using a variety of tests, including load, smoke, spike, and soak tests.

To get to grips first with inaccurate page loads and slow page speed, we decided to test each page load. We brought in our team of developers and network/server engineers to look into the bottlenecks and solve the issues until we got the expected results.

Bottleneck #1: Obsolete code

Adding new features to the old code architecture had accumulated unnecessary JS and CSS files, code controllers, and models on every page. This spread cumbersome, resource-heavy elements and code throughout the website and exacerbated page load times.

Solution:

We minified static assets (JavaScript, CSS, and images), i.e. optimized scripts and removed unnecessary characters, comments, and whitespace from the code to shrink file sizes. To further improve page speed, the server team set up static content caching, which reduced the website's bandwidth usage.

This resulted in a significant size reduction in requested assets and improved page speed, with the home page now taking only 2 seconds to load.

Bottleneck #2: Memory

A single query was processing more data than needed, mainly accessing too many rows and columns from many parts of the database. For large tables this meant that a huge number of rows were read from disk and handled in memory, causing extra I/O workload.

Solution:

We used RDS Performance Insights to quickly assess the load on the database, determine when and where to take action, and filter the load by waits, SQL statements, hosts, or users.

We added indexes, removed redundant indexes, and stripped unnecessary data from the tables so data could be located quickly without scanning every row each time a table is accessed. The server team used the InnoDB storage engine for MySQL to organize the data on disk around primary keys, optimizing common queries and minimizing I/O time (the number of reads required to retrieve the desired data). A tiny illustration of how an index changes a lookup is sketched below.
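
The sketch below uses SQLite purely so it stays self-contained; the project itself ran on MySQL/InnoDB, and the table and column names are hypothetical. The idea is the same: an index on the filtered column avoids a full-table scan, and selecting only the needed columns avoids pulling whole rows into memory.

```python
# Tiny illustration of indexing and narrow selects (SQLite used only for a
# self-contained example; the production database was MySQL/InnoDB).
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE enrollments ("
    "id INTEGER PRIMARY KEY, student_id INTEGER, course_id INTEGER, score REAL)"
)

# Without this index, filtering on student_id forces a scan of every row.
cur.execute("CREATE INDEX idx_enrollments_student ON enrollments (student_id)")

# Select only the columns you need instead of pulling whole rows.
cur.execute(
    "SELECT course_id, score FROM enrollments WHERE student_id = ?", (42,)
)
print(cur.fetchall())
conn.close()
```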

Bottleneck #3: CPU

Nested loops used to process large data sets made the flow of the code difficult to trace and fired a huge number of requests (1,000-10,000) at the database for a single user. This caused the code to execute multiple times in the same execution context, hitting the CPU limit and driving up its usage.

Solution:

We optimized query performance by removing unnecessary code inside loops (turning repeated queries into subqueries) and eliminating multiple loops, reducing the time spent rendering content from looped code; a single user now sends only about 100 requests. This reduced page size and response time, cut CPU usage, and brought memory on the application server down from 8 GB to 4 GB. (A small sketch of this kind of query batching follows.)
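
The production stack was PHP/MySQL; the Python sketch below, with hypothetical table and column names, only illustrates the pattern: a query fired inside a loop multiplies database round trips, while a single batched query fetches the same data at once.

```python
# Illustration of the looped-query problem and the batched fix (names are hypothetical).

def load_scores_slow(db, student_ids):
    # N+1 pattern: one query per student inside the loop hammers the database.
    scores = {}
    for student_id in student_ids:
        row = db.execute(
            "SELECT score FROM enrollments WHERE student_id = ?", (student_id,)
        ).fetchone()
        scores[student_id] = row[0] if row else None
    return scores


def load_scores_batched(db, student_ids):
    # One query with an IN clause replaces the whole loop of queries.
    placeholders = ",".join("?" for _ in student_ids)
    rows = db.execute(
        f"SELECT student_id, score FROM enrollments "
        f"WHERE student_id IN ({placeholders})",
        list(student_ids),
    ).fetchall()
    return {student_id: score for student_id, score in rows}
```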

Ridding the code of redundancies and optimizing the database helped us reach the 5,000-user traffic mark. This lessened the extra work done by the MySQL server, reducing server costs by 10-20%.

We launched a single server on AWS and configured all the required packages, such as Apache, PHP and PHP-FPM, a load balancer, and others, to run the application.

Bottleneck #4: Network Utilization

The older HTTP/1 protocol opened multiple TCP connections, one for each request/response pair. It tied up many resources, since the web page made a separate request for each file. As the overload continued, the server had to process more and more concurrent requests, which further increased latency.

Solution:

We used HTTP/2 to reduce the latency of processing browser requests over a single TCP connection. Enabling Keep-Alive avoided the need to repeatedly open and close connections, reducing server latency by minimizing the number of round trips between sender and receiver. And with parallelized transfers, more requests complete more quickly, improving load time.

  • To identify slow queries and requests with long execution times in the code, we established a proxy connection between the Apache web server and PHP-FPM (which previously communicated through modules), so the bottlenecks of each entity could be identified by letting it function individually. Then we configured PHP-FPM around the available RAM, calculating the maximum number of parallel connections the RAM can handle while leaving enough system memory free for other processing.
  • We found inadequate server capacity while inserting data in both logged-in and logged-out scenarios to create a real-life testing environment.

We proposed a distributed server system so that more than one server could be generated automatically. We added auto scaling with 4 servers, but the system was still struggling at a load of 8,000 users and server costs went up. With round-robin load balancing, we distributed incoming network traffic and client requests across the group of backend servers. This helped us identify that the load was increasing because of how sessions stored in the database were being handled.

Bottleneck #5: Session queues

The server was getting overloaded by the sessions that accumulated when 10,000 users logged in concurrently. And because the sessions were stored in the database, the increase in wait activity decreased transaction throughput, pushing session times up to 100 seconds and further increasing the load on the system.

Solution:

We switched session storage from the database to a Memcached server. It stores sessions and query results in memory instead of files, reducing the number of times the database or API needs to be read during operations. The data is cached in the RAM of the different nodes in the cluster, reducing the load on the web server. A minimal sketch of the pattern follows.
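
Here is a small Python sketch of keeping sessions in Memcached instead of the database. It assumes a Memcached server on localhost:11211 and the pymemcache package; the production setup, key layout, and TTLs were of course more involved than this.

```python
# Minimal sketch: sessions live in Memcached RAM and expire automatically,
# so the database is no longer hit on every request.
import json
from pymemcache.client.base import Client

sessions = Client(("localhost", 11211))


def save_session(session_id: str, data: dict, ttl_seconds: int = 1800) -> None:
    sessions.set(f"session:{session_id}", json.dumps(data).encode("utf-8"),
                 expire=ttl_seconds)


def load_session(session_id: str):
    raw = sessions.get(f"session:{session_id}")
    return json.loads(raw) if raw else None


save_session("abc123", {"user_id": 42, "logged_in": True})
print(load_session("abc123"))
```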

Building such a scalable and cost-efficient server infrastructure helped the client application take on a load of 10,000 users in less than 5 minutes using only 2 servers' capacity.

The testing process was able to ensure a smooth customer experience and save significant capital expense by maximizing server capacity already in place.

Your cheat sheet to delivering a robust and crash-free app

You’re working happily on your app and all is well, then suddenly – POOF! – it crashes. Aaargh!!

Apparently, new sessions drop by almost 8x the very next day after a crash (cue sad violin).

It happens to the best of us.

Stability issues can derail the success of even the best apps. Buggy apps can drive your users away; uninstallations aside, they have the potential to wreck your reputation when users leave one too many distasteful reviews.

Something to bear in mind is that it's not only about crashes. Users face errors that are not necessarily a crash, so tracking user-facing errors such as error messages and API response times is also important. An app can experience several types of crashes, whether ANR (application not responding) errors or crashes related to API integrations.

Bug fixing doesn’t need to be hard. You’re likely to worsen the situation if you freak out and start changing things at random, hoping the bug will magically go away.

Instead, you need to take a methodical approach and learn how to reason your way through a crash and maintain app stability.

You can’t fix what you can’t find


The first order of business is to find out where exactly in your code the crash occurred: in which file and on which line.

You do have control over some errors and can anticipate them. For example, when a user loses their WiFi connection while a file transfer is in progress, you can build a communication channel to let them know about the problem.

When errors come from unexpected app behaviour, your mobile app testing kit will need monitoring and tracking tools to detect the behaviour that leads to crashes. There are tools that cover not only mobile apps but OTT and Unity as well. And no matter what tool you use, it is normally an SDK that reports the crash alerts to you.

Even if you are able to collect all this data, it's difficult to figure out which crashes to troubleshoot and fix first. That issue brings us to the next approach to a stable app.

Follow Eisenhower’s Urgent/Important Principle

It’s important to prioritize and sift the problems most users are facing first.

It's common for teams to tackle bugs on a first-come, first-served basis, especially if the report comes from a loud voice or a key stakeholder.

This principle comes in handy in such situations, more as a rule of thumb. It tells you to quickly identify the activities whose outcomes lead to achieving your goals (important) and the activities that demand immediate attention (urgent), so you know which to focus on and which to ignore.

Basically, a handful of crashes or bugs are probably causing most of the complaints; solve those first and iterate your priorities accordingly, and you will see your crash rate and complaints decrease dramatically.

Practice better Exception Handling

An exception happens when something goes wrong. Tried to open a file but it doesn’t exist? You’ve got an exception. Attempted to call a method on an object but the variable was null? Boom! Exception.

And here comes exception handling, an error-handling mechanism. When something goes wrong, an exception is thrown; if you do nothing, the exception causes your application to crash.

For example, developers often find themselves asking: is there a way to keep an EXC_BAD_ACCESS from crashing the app and handle the failure gracefully?

To handle a thrown exception, you need to catch it. You can do this with an exception-handling block, i.e. a try block and a catch block. A minimal example is sketched below.
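
Here is the same try/catch idea sketched in Python (try/except), reusing the "file doesn't exist" case mentioned above. The file name and fallback behaviour are placeholders.

```python
# Catch the specific exception you expect, handle it, and let truly unexpected
# ones surface instead of crashing silently.
def read_settings(path: str) -> str:
    try:
        with open(path) as f:
            return f.read()
    except FileNotFoundError:
        # The "file doesn't exist" case: recover with a sane default and tell
        # the user, instead of letting the app fall over.
        print(f"Settings file {path!r} not found, using defaults")
        return ""


print(read_settings("settings.ini"))
```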

Proper Memory Management


One of the biggest causes of app crashes is lack of memory, because not all users own powerful smartphones and high-end tablets. Yet people often write code as though only their own app existed on the device.

Taking a similar example from above: you can also get EXC_BAD_ACCESS (while attempting to access nonexistent memory) because the usable memory has already been assigned to other processes. When not enough memory is available, your app will be shut down.

How to go on about this?

  • Do your best to avoid using too much memory and use caching whenever possible.
  • Find where your app holds the biggest data structures or uses the most amount of data and see if everything is absolutely necessary.
  • You can profile your app to see if there is any memory leak.
  • If there is no other way to save memory, prioritize which data and features to drop and which to keep when memory is already low.

These kinds of crashes become very rare with just a little defense and care in programming, and by treating warnings as errors. A tiny caching sketch follows.
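
As a small illustration of the "use caching whenever possible" advice above, here is a Python sketch using a bounded cache. The thumbnail function is hypothetical; the point is that a size-limited cache avoids recomputing results without letting memory grow without bound.

```python
# Bounded caching sketch: recomputed results are reused, old entries are evicted,
# so memory use stays flat. The thumbnail function is hypothetical.
from functools import lru_cache


@lru_cache(maxsize=128)  # cap the cache so it can never grow unbounded
def thumbnail_size(image_id: int) -> tuple[int, int]:
    # Imagine an expensive decode/resize here; the result is small and cacheable.
    return (320, 180)


for image_id in [1, 2, 1, 3, 1]:
    thumbnail_size(image_id)

print(thumbnail_size.cache_info())  # hits vs. misses show how much work was avoided
```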

While improving app usability, knowing how to deal with dysfunctional apps can be of great help. If you want assistance in development and testing of mobile applications, you can chat with us here.

Gutenberg vs Page builders | Settling a year-long debate

The appeal of WordPress is simplicity. The debate between Gutenberg and page builders is about much more than functionality; it's about ease of use.

In the world of page builders, Gutenberg emerged as a complete paradigm shift for users. The battle goes on: will page builders be replaced by Gutenberg and lose their influence? Or is Gutenberg alienating most of the current user base because of its workflow?

In this blog, we compare WordPress page builder plugins with Gutenberg to help you select the option that suits your WordPress website's requirements.

Live visual editing

If you want to figure out how things will appear without saving the page – through an actual live preview as you edit it – then this is where you will find the biggest difference.

Page builders offer a level of visual editing that Gutenberg doesn't. A page builder lets you design the entire page in a WYSIWYG interface, and you save preview time by making changes directly to the layout and style of the web page.

Styles and themes compatibility

Page builders provide more style options than the Gutenberg editor. Elementor and Divi Builder are perfect examples of this.

For a button, a page builder offers a lot of customizations like color, border radius, animations, filters, etc. Gutenberg, by contrast, has only a few choices in its bag, like shape, background color, and text color.

Gutenberg relies on WordPress for themes and styles. With custom CSS you can customize your blocks, but the overall look of the page will still follow your theme. Page builders, on the other hand, let you override themes and styles, helping you create a unique experience for standalone landing pages.

Responsive design controls

Gutenberg does not have responsive design controls. But plugins like Kadence Blocks add custom blocks that extend Gutenberg's editing capabilities, letting you better control columns for different screen sizes while staying responsive according to your theme.

But you can’t change how that responsive design will work in Gutenberg.

Page builders, on the other hand, provide responsive design controls for each element. So if you want full control over the design, page builders are the better option.

Layout functionality

You can't customize margins and padding for individual blocks in WordPress' new default editor. It gives some basic alignment options such as left, right, and centre. A page builder, on the other hand, gives you the option to edit every single entity on the screen, including margins and padding.

Gutenberg has some third-party extensions that give you drag-and-drop options to adjust layouts for different devices. One of the most popular is Kadence Blocks, which adds detailed layouts to a web page. But Gutenberg's default column block is limited and cannot be compared to page builders.

For this reason, you will always need a page builder to get full layout control.

Scope

Gutenberg comes with a limited set of actions; ease of use is its main goal. It is intended for writers, bloggers, and businesses.

Page builders, on the other hand, work well for people who rely on customization for their development needs.

Page builders are commonly used by eCommerce stores, WordPress agencies, aspiring designers, small businesses, among others.

Takeaways

  • Gutenberg is for people who put reliability and ease-of-use over a buffet of customizations.
  • You can prefer a page builder when you have a specific vision for your site's design. You can create unique designs according to your needs and preferences.
  • You can use page builders when you want to add specific features that are not present in the block editor, such as media carousels and subscription forms.
  • People who are used to the freedom of customization in page builders will find the simplicity of Gutenberg quite limiting.

Currently, Gutenberg is not powerful enough to replace highly flexible page builder plugins. It's just a modern version of the classic editor. But there's good news, Gutenberg enthusiasts…

Integrating page builder plugins within Gutenberg has the potential to bring out the best of both worlds, and maybe WordPress is already heading that way with custom fields integration.

As a result, you would be able to use different modules as separate blocks in the Gutenberg editor itself, helping you create richer and more flexible web experiences. A future, in other words, that is a win-win for all.

What are Design Systems and why do we vouch for one?

"Design System" is one of the most overused design terms in the software industry right now, so much so that it has given birth to a parody Twitter account that calls people out on their overuse of the concept as a mere keyword.

There is a lot of uncertainty about the definition; different professionals define Design Systems differently. But here's one that will put your mind at ease.

A Design System is a set of deliverables (not a deliverable itself) that acts as a single source of truth for design and development teams to realize a product. It evolves constantly with the product, the tools, and new technologies.

What a Design System isn’t

Going with the flow, a lot of companies claim they have a Design System in place. But in reality, they're just pointing to their Sketch libraries and style guides.

You can’t call libraries or style guides Design Systems for the same reasons you can’t call individual notes of music, a song.

You can't make functional software with just some static designs and patterns, just as you can't make a song with only a music sheet full of notes and melodies; you also need instruments and singers to perform the piece.

How to find the right one?

Design Systems can be simple, comprehensive, strict, loose, mono or cross-platform. Based on the scale of projects or operations, you can define what kind of Design System you need. Because it’s always easier to find things when you know what you’re looking for.

Start by asking the right questions:

  • How many people will use the system?
  • What are their profiles?
  • Are they willing and able to adapt?
  • How many products, platforms, and technologies are involved?
  • What degree of consistency do you need across them?

Based on the scope, there are two types of Design Systems: you can either go for a modular Design System or an integrated one.

Modular

Module-based Design Systems are good for large-scale projects like e-commerce, finance, and government websites. They enable you to scale quickly and adapt to multiple user needs. There is one downside to a module-based system, though: it can be expensive to build and maintain.

Integrated

An integrated system focuses on one unique context. It is also composed of parts, but these parts are not interchangeable. This kind of system suits products that have very few repeating parts and that need a strong and often-changing art direction (portfolios, showcases, marketing campaigns).

Based on the process you can either build a Centralized or Distributed Design System.

Centralized

In a centralized model, one team is in charge of the System and makes it evolve. This team ensures that the System covers everyone’s needs.

Distributed

In a distributed model, several people of several teams are in charge of the system. The adoption of the system is quicker because everyone feels involved but it also needs team leaders that will keep an overall vision of it.

Regardless of the type of system you choose, a Design System consists of:

  • Interaction models
  • Typography
  • Page layouts
  • Components
  • Colour
  • Sounds
  • Tone and voice
  • Words, grammar, and mechanics
  • Spacing
  • Code snippets paired with elements

Why building a Design System can be transformational for your organization

If you're a big organization, it's likely that you have multiple products and services that need to reflect a consistent brand identity. A Design System helps you do that with its efficient organization and one-click implementation across all the assets. A Design System has all the pieces engineered to fit together like a Lego set, as and when needed, which makes designing highly scalable, reliable, efficient, and robust.

Here are the benefits of a Design System

  • Productive and cost-effective. Reusable components let the team be efficient and deliver faster, as they don't waste time on repetitive tasks or useless meetings.
  • Brand continuity across all products or services. If a UI element which is a part of hundreds of screens is changed, it conveniently reflects in all the places it’s used.
  • Better collaboration and knowledge sharing. With every essential piece of information easily accessible in the system, onboarding new team members also becomes easier.
  • No need to code. Everything is in one place just ready to be picked out and used without having to code. Just copy the required snippet and implement the visual element of your choice.

An extensive design system alone won't solve your problems. You'll also need good designers to create a unique and reusable system, and good developers to interpret it in their own way.

Let us know if you’re on the lookout for something just like this. Pardon the Coldplay wordplay.

The right and the only way of outsourcing Software Testing and QA

When you’re looking to outsource, you probably spend a lot of the project’s time in finding that “right partner”.

What if we told you that even after choosing the right partner, the success rate of software outsourcing remains below 50%? It means that companies that outsource without considering the risks and operating costs only get half the efficiency out of their outsourcing efforts.

Companies are so quick to hand off their work to offshore agencies that outsourcing becomes purely transactional. Maybe we can rethink outsourcing as a holistic approach and start by evaluating our processes and current performance instead of diving straight into a partner hunt and handing off the work.

The key is to have an extensive plan so that the risks are low and the success rates are high. The rule of 5 P's sums it up really well:

Proper Planning Prevents Poor Performance

With this guide you’ll be able to strategize for efficient outsourcing, choose the right vendor, optimize testing costs, and streamline your development process with QA.

Getting ready for outsourcing

  • Define Objectives and Goals

Clearly defined objectives and measurable goals make a good basis for an outsourcing strategy. Objectives will help you with decisions concerning a project's business value, vendor, outsourcing models, projects to outsource, and related risks to assume. Down the line, objectives will also help you evaluate the success or failure of your strategy.

Goals, on the other hand, are the events and functional metrics by which management can monitor progress, take corrective action, and project future performance.

  • Measure Performance Baseline

You'll also need to define metrics that represent a performance baseline for your outsourcing efforts. Use these metrics to capture your current performance, which can later be referenced for future measurements. The baseline also clarifies which metrics are important in achieving specific goals and business objectives.

  • Set realistic expectations

After defining your goals and expectations, you need to check whether they're realistic. Unrealistic expectations of large, immediate savings are the reason behind most failed projects.

Practical expectations ensure stability for your offshore strategy. A careful analysis of ROI and the timing of the benefits will help you evaluate and set better expectations.

How to choose and manage an Outsourcing vendor?


Shortlist a vendor –

A quick Google search will land you on the pages of thousands of vendors with a fair number of happy client testimonials. How do you see past a few deliberately filtered success stories? The first thing in your course of action should be checking the reviews and references of your shortlisted vendors.

A vendor with a good track record should be able to provide you with sufficient references. References might give you just enough green flags to go ahead in your research. You can then continue your vendor evaluation based on the factors below.

  • Gauge their expertise

To showcase their expertise, vendors should provide you with their test documentation, portfolio, and test cases. The depth of their reports should give you a good idea about their process and the cases they cover.

  • See if they have sufficient resources and services

An ideal vendor should always have more resources than you need at the moment. Regardless of your immediate needs, your vendor should be able to do all types of testing, be it automated or manual testing for the web, or functional, performance, usability, compatibility, API, and security testing for mobile and desktop. This enables your vendor to scale as you do.

Vendor management and assessment –

  • Understand your vendor

Vendor management starts with understanding their needs and where they are coming from. An outsourcing vendor has to deal with operational costs, talent acquisition challenges, and problems with other projects. Excessive price negotiations might push them to cut corners by allocating insufficient or junior resources.

  • Regularly assess the vendor

Regular assessment ensures quality. You need to have a systematic assessment in place, so that when you’re unable to get the expected quality of work, you can take action or look for other vendors.

Make sure that the frequency of these assessments is not too high, because it will shift their focus to showing rather than actually doing things. Assessing too frequently will keep them on edge all the time.

This assessment criteria should get you started.

  • Number of missed bugs
  • Quality of defect descriptions
  • Correlation between testing efforts and outcomes
  • Quality of test documentation
  • Capacity and availability of resources
  • Efficiency of testing tools

  • Manage vendor performance

Assessment provides you with insight that you can use to improve the testing procedures in place and maybe introduce some measures to increase the efficiency.

You should review the vendor's testing documentation at least once a month. Based on these reviews, your QA lead should provide the test team with relevant feedback and detect hidden wasteful steps and cost drivers.

You should also be in constant touch with your vendor’s QA manager to communicate missed bugs or unclear reporting. Ensure that the test team properly understands business and software requirements.

In case a vendor fails to deliver on your expectations, you can consider a multi-vendor strategy. For big enough projects you can assign different parts of the project to different vendors. Having options makes replacement easier if and when your projects are at risk.

Dealing with cooperation issues –

  • Prioritize testing activities

Addressing urgent issues is a common practice in an agile environment, but urgent requirements can often delay the important ones. Every time there is a change in requirements, vendors need to adapt and reprioritize, and while dealing with the changes they might leave business-critical or problematic features out of scope.

Your QA manager should be able to help the test team create a clear test plan and prioritize testing activities, so that nothing is swept under the rug.

  • Include several SLAs in your contract

Since it’s difficult to match a traditional contract with a flexible agile testing process, you can divide your contract into several service level agreements (SLAs) to make collaboration more manageable. Each SLA should cover a part of the services to be rendered, the time required for execution, priority, and KPIs.

Which outsourcing model to choose?

An outsourcing model has many variables, such as scope, distribution of responsibility, contractual flexibility, and duration, but the main variables that define a model are the distribution of responsibility between you and the offshore vendor, and the scope of the outsourcing effort.

Staff augmentation

This model has the same characteristics as a traditional onshore staff-augmentation model. You hire contractors to perform a particular task or role. The contractor receives work assignments directly from your company, the same as all other developers on the team, and performs the work remotely.

The staff-augmentation model has the advantage of having the lowest risk and being the easiest to implement, as it can be executed with a single offshore resource for a fixed task and duration.

Offshore vendors tend to shy away from this model and many strongly discourage its use due to the shared overhead costs and limited upside for the vendor.

Project-outsourcing

This model is a self-contained engagement with fixed start and end milestones where a dedicated offshore team is responsible for delivering a complete project according to your specifications.

If you have a large project, you can start with a pilot by assigning an isolated part of it to the vendor. This lets you see whether the vendor’s processes are mature and what the overhead costs are, while the vendor learns how your company functions.

If the project is small, the risk is relatively contained and both parties can figure out the intricacies of an effective business relationship.

This model is more appealing to many offshore vendors and represents a more significant benefit for both your company and the vendor because the model can be scaled up to more and larger projects.

Dedicated development centre

In this model the vendor maintains a pool of resources dedicated to your company’s use.

As your company matures in its relationship with an offshore vendor, this is a logical next step in growing from either a staff-augmentation model or a project-outsourcing model.

This model allows the same resources to be retained for multiple successive projects and reduces the loss of intellectual capital prevalent with the project-outsourcing model.

Functional outsourcing

This model outsources an entire business function, process, application, or department. This tends to be a high-risk, high-reward endeavor.

You must be confident in your vendor’s ability to deliver significant business value and minimize the risks of business disruption before entering into this kind of relationship.

However, offshore vendors that specialize in a certain business functional area can often provide a higher level of expertise than you can — at a reduced cost.

Tests to look for

An experienced outsourcing vendor with structured QA processes will help you deliver robust and reliable products in a shorter turnaround time. With their proven industry experience, they will also ensure consistent implementation of best practices.

Knowing their process can give you valuable insight into their work and how they operate.

Just to give you an example, here’s our process for how a product is tested, starting from an atomic level (lines of code), to a molecular level (modules), to an elemental level (the system as a whole).

Unit testing-

This stage focuses on a small piece of an application, something as granular as a single method or class, and ensures that it functions as expected.

Our Unit testing checklist

  • Write a line of code
  • Write a method to test that code
  • Implement the code
  • Launch test
  • Verify results

Unit testing accelerates productivity by streamlining development and lowering the risk of time-consuming and costly bugs down the line.
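
To make the checklist above concrete, here is a minimal sketch of that write-test-verify loop using Python’s built-in unittest module; the slugify function and its expected behaviour are hypothetical examples, not taken from any particular project.

```python
import unittest


def slugify(title: str) -> str:
    """Hypothetical unit under test: turn a page title into a URL slug."""
    return "-".join(title.lower().split())


class SlugifyTests(unittest.TestCase):
    def test_lowercases_and_joins_words(self):
        # One small behaviour per test keeps failures easy to localize.
        self.assertEqual(slugify("Hello World"), "hello-world")

    def test_collapses_extra_whitespace(self):
        self.assertEqual(slugify("  Hello   World  "), "hello-world")


if __name__ == "__main__":
    unittest.main()
```

Running `python -m unittest` after every small change is what keeps the feedback loop short enough to catch problems at the line-of-code level.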

Integration testing-

Units make up a module, and once the units do what they are supposed to, it’s time to see how they work together as a module in integration testing.

Here are some methods used in integration testing:

  • Big bang
  • Top-down
  • Bottom-up
  • Sandwich/hybrid

Integration testing verifies the functionality, reliability, and interoperability of multiple system components working together. It also identifies and addresses problems with exception handling.
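
As an illustration of the bottom-up approach, here is a small, hypothetical Python sketch in which two units that were already tested in isolation, apply_discount and Cart, are exercised together across their boundary, including the exception-handling path:

```python
import unittest


# Hypothetical units: a discount rule and a cart that depends on it.
def apply_discount(total: float, percent: float) -> float:
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(total * (1 - percent / 100), 2)


class Cart:
    def __init__(self):
        self.items = []

    def add(self, price: float):
        self.items.append(price)

    def checkout(self, discount_percent: float = 0) -> float:
        # Integration point: Cart delegates discounting to apply_discount.
        return apply_discount(sum(self.items), discount_percent)


class CartIntegrationTests(unittest.TestCase):
    def test_cart_and_discount_work_together(self):
        cart = Cart()
        cart.add(40.0)
        cart.add(60.0)
        self.assertEqual(cart.checkout(discount_percent=10), 90.0)

    def test_invalid_discount_raises(self):
        # Exception handling across the module boundary is also verified.
        cart = Cart()
        cart.add(10.0)
        with self.assertRaises(ValueError):
            cart.checkout(discount_percent=150)


if __name__ == "__main__":
    unittest.main()
```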

System testing-

Software system testing looks at a software product as a whole and evaluates whether it successfully meets the pre-defined functionality, end-user, and business criteria.

  • Functionality: Does the system function as the requirements criteria detail it should?
  • Performance: Is the software reliable, responsive, stable, and performant under various conditions?
  • Regression: Has the software retained its original functionality since its modifications?
  • Usability: Is the software user-friendly and intuitive? Does it offer an optimal experience for the end user?
  • Stress: Can the software hold up as the load and stress on the system increase?
  • Load: How quickly does the system respond under normal and peak conditions?
  • Security: Do the security features ensure the integrity of the software product as far as protecting sensitive data and information is concerned?
  • Recovery: Can the software recover successfully and quickly following a crash or failure?
  • Interoperability: Can the software successfully interact with other software systems or components?
  • Documentation: Are all test scenarios and requirements agreed upon prior to and during this QA phase well documented?

If bugs, breaks, or defects are identified during this stage of evaluation, they are fixed and then re-tested, forming a repeated quality assurance cycle until the software QA team signs off for deployment.

System testing ensures end-to-end evaluation of an entire product prior to release and lowers risk for application failures once the product is live.
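
To illustrate a couple of these criteria, here is a hedged pytest-style sketch of system-level checks run against a deployed build; the base URL, endpoints, and thresholds are hypothetical placeholders rather than a prescribed setup:

```python
import time

import requests  # third-party HTTP client

BASE_URL = "http://localhost:8000"  # hypothetical address of the deployed system


def test_functionality_and_response_time():
    # Functionality: the whole deployed stack answers a real HTTP request.
    start = time.monotonic()
    response = requests.get(f"{BASE_URL}/health", timeout=5)
    elapsed = time.monotonic() - start

    assert response.status_code == 200
    # Performance: a coarse response-time budget under normal conditions.
    assert elapsed < 1.0


def test_login_rejects_bad_credentials():
    # Security-flavoured functional check: invalid credentials must not pass.
    response = requests.post(
        f"{BASE_URL}/login",
        json={"user": "qa", "password": "wrong-password"},
        timeout=5,
    )
    assert response.status_code in (401, 403)
```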

Acceptance testing-

Even after all the programming, technical oversight, quality assurance, and bug fixing, software acceptance testing is necessary to evaluate that the end product fulfills the purpose for which it was originally designed and developed.

Acceptance testing mitigates any fallout from outstanding bugs or defects that weren’t identified in the previous unit, integration, or system examinations. It also improves the overall user experience as testers and users relay usability and functionality feedback.
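
Unlike the system checks above, an acceptance test is usually phrased in terms of the original user story. Here is a minimal sketch, assuming a hypothetical orders endpoint on a staging environment:

```python
import requests

BASE_URL = "http://localhost:8000"  # hypothetical staging environment

# Hypothetical user story: "As a shopper, I can place an order
# and receive a confirmation number."


def test_shopper_can_place_order_and_gets_confirmation():
    response = requests.post(
        f"{BASE_URL}/orders",
        json={"sku": "ABC-123", "quantity": 1},
        timeout=5,
    )
    assert response.status_code == 201
    # Sign-off is on the business outcome, not the implementation detail.
    assert "confirmation_number" in response.json()
```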

The above-mentioned testing process is common to all software development and testing providers. But to ensure utmost product quality and robustness, we add these additional layers of tests that help us make our products flawless.

  • Load or performance testing on page and application scale
  • Security testing
  • Accessibility testing
  • Visual QA
  • Automation testing

Key takeaways

  • Software testing and QA outsourcing is an opportunity for businesses to reduce IT overheads and improve efficiency.
  • Good software testing is a specialized and professional skill, and not merely an afterthought entertained at the end of the IT project life-cycle.

Even if large-scale offshore outsourcing is not an option that you’re ready to consider, outsourcing a small part of a large project can provide an effective supplement to your existing solution.

Ultimately, QA outsourcing boils down to understanding your needs, setting cautious expectations, and knowing when to withdraw.