Moving Forward in 2022, With a Retrospective on 2021!

With a commitment to vaccines and the hope of smiling without face masks, 2021 was indeed a year to remember. Even in its second year of working remotely, Galaxy clocked 97% revenue growth over 2020. This growth spurt earned us a spot on the Inc. 5000 list of America’s 5000 fastest-growing companies.

We were fortunate enough to continue doing what we love and are passionate about, and we are all excited about what 2022 will bring. Brimming with new ideas, we find this the right time to review the year gone by and take pride in it.

Here is a look back at 2021 in the form of numbers, milestones, and learnings.

At Galaxy, our employees are our greatest strength. We added a whopping 219 talented techies to our family, taking our overall number to 570. We are also big on recognizing the contributions of our employees and have promoted 41 of them as a reward for their hard work and commitment.

Galaxy added 44 new clients to our rapidly growing list and took on an impressive 140 new projects across our areas of specialization.

The year 2021 was also full of industry recognitions and accolades.

We also picked up some valuable life lessons and learning along the way.

  • Business is about more than B2B or B2C. It is about H2H, i.e., human-to-human connection.
  • We are committed to remaining transparent and approachable to our clients. 
  • Galaxy is a close-knit family where the well-being and growth of our employees are our biggest commitments. 
  • Leadership is about creating more leaders. 

Another feather in our cap is Sanjeevani, our corporate social responsibility initiative. We raised INR 200,000 from our employees in 2021, which the organization matched equally. We used this amount for various philanthropic initiatives, such as providing financial assistance for medical expenses, donating to old age homes, and distributing clothes to the underprivileged during the festival season.

2021 proved to be a fruitful year for Galaxy, and we look forward to more growth, learning, and experiences in 2022! 

5 Steps for Effective Performance Testing: A Practical Guide

Applications are becoming increasingly complex while development cycles keep shrinking. This necessitates the adoption of new, agile development and testing methodologies. Application performance, as a component of the overall user experience, is now among the most important aspects of application quality.

Sequential projects with static qualification, implementation, and test phases postpone performance testing until the end of the project, where performance risks surface too late to fix cheaply. By today’s application quality standards, this is no longer acceptable.

This article will give you practical advice on how to conduct efficient performance testing in this new and more demanding environment.

Stage 1: Software Analysis and Requirements Preparation

Before beginning performance testing, it is critical to ensure that the software configuration has been properly adjusted. When the system is deployed, functional testing should be performed to ensure that the main functions used for performance checks work properly.

Analyzing a system means examining its features, operation modes, and peculiarities. Detailed analysis is required to achieve the following goals:

  • Simulate realistic user behavior patterns and load profiles
  • Determine the amount of test data required
  • Identify system bottlenecks
  • Define the software monitoring methods

The primary focus should be on defining the success criteria for the tests, which are usually included in the SLA (service-level agreement). The criteria the system must technically meet are called the requirements. The requirements defined during this first stage will be compared against the received results to evaluate the behavior of the product and its units and to locate bottlenecks. Many crucial performance testing metrics, such as response time, throughput, and error rate, can serve as success criteria.
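As a minimal sketch of this comparison step, SLA-style success criteria can be encoded as thresholds and checked against measured results. All metric names and threshold values below are hypothetical examples, not prescribed by any standard.

```python
# Minimal sketch: comparing measured results against SLA-style success criteria.
# All metric names and threshold values are hypothetical examples.

SLA_CRITERIA = {
    "avg_response_time_ms": 500,   # average response time must stay below 500 ms
    "error_rate_pct": 1.0,         # no more than 1% of requests may fail
    "throughput_rps": 100,         # at least 100 requests per second
}

def evaluate(results: dict) -> list[str]:
    """Return a list of human-readable SLA violations."""
    violations = []
    if results["avg_response_time_ms"] > SLA_CRITERIA["avg_response_time_ms"]:
        violations.append("average response time exceeds the SLA threshold")
    if results["error_rate_pct"] > SLA_CRITERIA["error_rate_pct"]:
        violations.append("error rate exceeds the SLA threshold")
    if results["throughput_rps"] < SLA_CRITERIA["throughput_rps"]:
        violations.append("throughput is below the SLA threshold")
    return violations

measured = {"avg_response_time_ms": 430, "error_rate_pct": 0.4, "throughput_rps": 120}
print(evaluate(measured) or "All SLA criteria met")
```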

System analysis encompasses all information about the software, testing objectives, planned test runs, test stand configuration, performance testing tools, user behavior scenarios, load profile, testing monitoring, load model, application requirements, and the method of delivering results.

Stage 2: Design the Testing Strategy

The testing strategy is developed based on detailed system analysis as described above. Below are the steps to develop an ideal testing strategy:

2.1. Choosing the optimal performance testing tool

These tools can be used to conduct performance testing:

Visual Studio load testing

Advantages:
  • The test results are well-structured and stored in an MS SQL database.
  • There is no limit to the number of virtual users covered by the license.

Disadvantages:
  • Declarative tests have limited functionality for script logic.
  • Declarative tests are the only type of testing available for SharePoint.

LoadRunner

Advantages:
  • Provides a distributed architecture (VuGen, Controller, Load Generator, Analysis).
  • There are tools available for working with various protocols.
  • Allows you to generate reports and create report templates.
  • It is possible to keep detailed action logs for each virtual user.

Disadvantages:
  • The number of virtual users is limited.
  • The number of updates is relatively low.

Apache JMeter

Advantages:
  • It is possible to run tests with complex logic and dynamic parameter correlation.
  • Web application testing is possible (including API and web services, and database connections).
  • Allows you to simulate the behavior of multiple users in multiple parallel threads while applying a heavy load to the web application.

Disadvantages:
  • Encounters issues in the reproduction of AJAX traffic.
  • Reporting functions are limited.
  • Requires a lot of RAM and CPU resources to launch tests.

2.2. Design of a load profile and a load model

As part of the performance testing process, statistics on application usage are gathered. The data gathered is required for the creation of the load profile – a user behavior model.

Different load models can be used in the same test. For example, one new virtual user could be added every five minutes, or all users could be added at once. The query rate, test duration, and the number of users are the most important aspects of the load model.
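To make the difference between load models concrete, here is a minimal Python sketch that computes how many virtual users are active at a given minute under the two models mentioned above. All numbers are illustrative.

```python
# Minimal sketch: two load models for the same 60-minute test.
# "ramp_up": one new virtual user every 5 minutes; "big_bang": all users at once.
# All numbers are illustrative.

TEST_DURATION_MIN = 60
TOTAL_USERS = 12

def active_users(model: str, minute: int) -> int:
    """Number of virtual users active at a given minute of the test."""
    if model == "ramp_up":
        return min(TOTAL_USERS, minute // 5 + 1)  # +1 user every 5 minutes
    if model == "big_bang":
        return TOTAL_USERS                        # everyone starts immediately
    raise ValueError(f"unknown load model: {model}")

for minute in range(0, TEST_DURATION_MIN + 1, 15):
    print(minute, active_users("ramp_up", minute), active_users("big_bang", minute))
```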

2.3. Configuration of the test stands

The results of performance testing can be influenced by a variety of factors such as test stand configuration, network load, database content, and many others.

As a result, to obtain the most reliable results, performance testing should be carried out in a separate environment with features and configurations that are similar to the parameters of the real software.

Test stand elements and their features:

Application
  • Architecture
  • Database (structure, data)
  • Software required for the system operation

Network
  • Network mapping
  • Bandwidth performance
  • Communication protocol

Software
  • Operating system (version and service packs)
  • Application server (version and patches)
  • DBMS (version and type)

Hardware
  • CPU (number of cores, type, clock speed)
  • Random-access memory (capacity, type)
  • Hard disk (type, speed)

Stage 3: Load Generator Configuration and Monitoring

Performance testing tools should be installed on a load generator to produce high-quality results. A load generator is a virtual or physical machine located close to the application server(s).

If a heavy load is to be generated, one machine’s resources may be insufficient. Distributed performance testing is carried out in this case.

Software monitoring can be carried out with tools that track the consumption of system hardware resources; a minimal monitoring sketch follows the list below.

The following are the most popular hardware monitoring tools:

  • New Relic: A performance tracking service that offers analytics for every component of the environment. It is powerful for viewing and analyzing massive amounts of data and obtaining information about actions in real time.
  • Grafana: A monitoring data visualization tool that can analyze data from Graphite, InfluxDB, and OpenTSDB time-series databases.
  • Zabbix: Employs a “server-agent” model. The server collects all of the data and allows you to view the monitoring history as well as set up metrics and rules.
  • JProfiler: Combines CPU, memory, and thread profiling, making it simple to determine what should be optimized, eliminated, or modified. It can be used for both local and remote services.
  • Xdebug: A powerful tool for analyzing PHP code and identifying bottlenecks and slow elements.
  • XHProf: Decomposes the application into function calls (methods) and generates resource consumption statistics for them. The metrics include the amount of allocated memory, the number of function calls, the execution time, and many others.
  • pgBadger: A profiler for PostgreSQL, the free object-relational database management system; it can be used to achieve performance testing objectives.
  • MS SQL Server Profiler: A tool for tracing, reconstructing, and debugging activity on MS SQL Server. It enables the capture and analysis of queries.
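Alongside dedicated tools like these, basic hardware consumption on a load generator can be spot-checked in a few lines of Python. The sketch below assumes the psutil library is installed (pip install psutil); the sampling interval and sample count are arbitrary choices.

```python
# Minimal sketch: sampling hardware resource consumption with psutil.
# Assumes `pip install psutil`; interval and sample count are arbitrary choices.
import time
import psutil

def sample(interval_sec: float = 5.0, samples: int = 3) -> None:
    for _ in range(samples):
        cpu = psutil.cpu_percent(interval=1)   # % CPU over a 1-second window
        mem = psutil.virtual_memory().percent  # % RAM in use
        disk = psutil.disk_usage("/").percent  # % disk space used
        print(f"cpu={cpu}% mem={mem}% disk={disk}%")
        time.sleep(interval_sec)

sample()
```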

Stage 4: Test Data Generation and Test Script Development

Let’s look at four different types of test data generation:

Code

Scripts written in various programming languages (Java, Python) enable the creation of users, passwords, and other values required for proper data usage.
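For instance, a short Python script might generate user names and random passwords and write them to a CSV file for the load tool to consume. The file name and field layout below are arbitrary examples.

```python
# Minimal sketch: generating test users and passwords into a CSV file.
# The file name and field layout are arbitrary examples.
import csv
import secrets
import string

ALPHABET = string.ascii_letters + string.digits

def random_password(length: int = 12) -> str:
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

with open("test_users.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(["username", "password"])
    for i in range(1, 101):
        writer.writerow([f"loadtest_user_{i:03d}", random_password()])
```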

SQL statements

SQL queries can also be used to populate a database. This method is only available if the database is accessible from the server. The approach can be implemented as follows: first, a complete DB in MS Access with fields identical to the server-side database is created; second, a dump file containing the requests that add information to the DB is generated.
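A sketch of the dump-file step: the script below writes INSERT statements to a file that can later be executed against the server-side database. The table and column names are hypothetical.

```python
# Minimal sketch: generating a dump file of INSERT statements for test data.
# The table and column names (users, username, email) are hypothetical.
with open("test_data_dump.sql", "w") as dump:
    for i in range(1, 101):
        username = f"loadtest_user_{i:03d}"
        email = f"{username}@example.com"
        dump.write(
            f"INSERT INTO users (username, email) VALUES ('{username}', '{email}');\n"
        )
```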

API requests

API requests can be used to populate the database with items for sale or user information. One or two API calls will often suffice.
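Here is a minimal sketch using Python’s requests library (pip install requests); the endpoint URL and payload fields are hypothetical placeholders for your application’s actual API.

```python
# Minimal sketch: populating the database through the application's API.
# Assumes `pip install requests`; the URL and payload fields are hypothetical.
import requests

BASE_URL = "https://app.example.com/api"  # hypothetical endpoint

for i in range(1, 101):
    payload = {"name": f"Test item {i}", "price": 9.99}
    response = requests.post(f"{BASE_URL}/items", json=payload, timeout=10)
    response.raise_for_status()  # stop early if the API rejects the request
```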

Interface

To fill the database via the system interface, you can develop a script that mimics the steps of the user registration process and adds new users to the database. A snapshot of the file system can then be created so the newly created users are available during test execution.
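One way to sketch such a registration script is with Selenium (pip install selenium); the URL and form-field IDs below are hypothetical and would need to match the real registration page.

```python
# Minimal sketch: registering users through the UI with Selenium.
# Assumes `pip install selenium`; the URL and element IDs are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()
try:
    for i in range(1, 11):
        driver.get("https://app.example.com/register")  # hypothetical URL
        driver.find_element(By.ID, "username").send_keys(f"loadtest_user_{i:03d}")
        driver.find_element(By.ID, "password").send_keys("S3cret!pass")
        driver.find_element(By.ID, "submit").click()
finally:
    driver.quit()
```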

Stage 5: Preliminary Launch and Test Execution

Preliminary tests aid in determining the best load model for the system. They also show how the application resources are used and whether the power of the load generator(s) is sufficient for running full-scale tests.

The tests should be run with various load models, and the final testing load is then chosen based on software monitoring and analysis. Test execution can involve many types of checks, such as stress testing, load testing, stability testing, and volume testing.

Performance testing is an ongoing process.

Performance testing needs to be carried out on a regular basis. Hopefully, your website or application will continue to grow, necessitating changes to accommodate a larger user base.

Contact us to find out how performance testing experts can improve the quality of your software.

Security in the Cloud: How to Enhance It Using Security Controls?

Traditional IT security is no longer what we’ve known it to be for the past few decades. The massive shift to cloud computing has changed how we perceive IT security. We have grown accustomed to ubiquitous cloud models, their convenience, and unhindered connectivity. But our ever-increasing dependence on cloud computing for everything also necessitates new and stricter security considerations.

Cloud security, in its entirety, is a subset of computer, network, and information security. It refers to a set of policies, technologies, applications, and controls protecting virtual IP addresses, data, applications, services, and cloud computing infrastructure against external and internal cybersecurity threats.

What are the security issues with the cloud?

Cloud data is stored in third-party data centers, so data integrity and security are always big concerns for cloud providers and tenants alike. The cloud can be implemented in different service models, such as:

  • SaaS
  • PaaS
  • IaaS

And deployment models such as:

  • Private
  • Public
  • Hybrid
  • Community

The security issues in the cloud fall into two categories: issues faced by cloud providers (companies providing SaaS, PaaS, and IaaS) and issues faced by their customers. The responsibility for security, however, is shared, as spelled out in the cloud provider’s shared responsibility model. The provider must take every measure to secure its infrastructure and its clients’ data. Users, in turn, must take measures to secure their applications, for example by using strong passwords and authentication methods.

When a business chooses the public cloud, it relinquishes physical access to the servers that contain its data, which makes insider threats a real concern for sensitive data. Cloud service providers therefore run extensive background checks on all personnel with physical access to data center systems, and data centers are monitored regularly for suspicious activity.

Unless it’s a private cloud, no cloud provider stores just one customer’s data on their servers. This is done to conserve resources and cut costs. Consequently, there is a possibility that a user’s private data is visible or accessible to other users. Cloud service providers should ensure proper data isolation and logical storage segregation to handle such sensitive situations.

The growing use of virtualization in cloud implementation is another security concern. Virtualization changes the relationship between the operating system and the underlying hardware. It adds a layer that needs to be configured, managed, and secured properly. 

Those are the main vulnerabilities in the cloud. Now let’s talk about how to secure it, starting with security controls:

Cloud Security Controls

An effective cloud security architecture must identify any current or future issues that may arise with security management. It must follow mitigation strategies, procedures, and guidelines to ensure a secure cloud environment. Security controls are used by security management to address these issues.

Let’s look at the categories of controls behind a cloud security architecture:

Deterrent Controls

Deterrents are administrative mechanisms used to ensure compliance with external controls and to reduce attacks on a cloud system. Deterrent controls, like a warning sign on a property, reduce the threat level by informing attackers about negative consequences.

Policies, procedures, standards, guidelines, laws, and regulations that guide an organization toward security are examples of such controls.

Preventive Controls

The primary goal of preventive controls is to safeguard the system against incidents by reducing, if not eliminating, vulnerabilities and preventing unauthorized intruders from accessing or entering the system. Examples include software or feature implementations such as firewall protection, endpoint protection, and multi-factor authentication.

Preventive controls also account for human error, addressing it at the onset through security awareness training and exercises, and they take into account the strength of authentication in preventing unauthorized access. Preventive controls not only reduce the likelihood of a loss event but also limit the system’s exposure to malicious actions.

Detective Controls

The purpose of detective controls is to detect and respond appropriately to any incidents that occur. In the event of an attack, a detective control will alert the preventive or corrective controls to deal with the problem. These controls function during and even after an event has taken place. 

System and network security monitoring, including intrusion detection and prevention methods, is used to detect threats in cloud systems and the accompanying communications infrastructure.

Many organizations go as far as acquiring or building their own security operations center (SOC), where a dedicated team monitors the IT infrastructure. Detective controls are also complemented by tools such as intrusion detection systems and anti-virus/anti-malware software, which help uncover security vulnerabilities in the IT infrastructure.

Corrective Controls

Corrective controls mitigate the impact of security incidents. Technical, physical, and administrative measures are taken during and after an incident to restore resources to their last working state. Re-issuing an access card, repairing damage, terminating a process, and implementing an incident response plan are all corrective controls. Ultimately, corrective controls are about recovering from and repairing the damage caused by a security incident or unauthorized activity.

Beyond these controls, here are the key areas of cloud security to address:

Security and Privacy

The protection of data is one of the primary concerns in cloud computing when it comes to security and privacy.

Millions of people have put their sensitive data in the cloud, and it is difficult to protect every piece of it. Data security is a critical concern in cloud computing because data is scattered across a variety of storage devices and computers, including PCs, servers, and mobile devices such as smartphones, as well as wireless sensor networks. If cloud computing security and privacy are disregarded, every user’s private information is at risk, and it becomes easier for cybercriminals to get into the system and exploit any user’s privately stored data.

For this reason, virtual servers, like physical servers, should be safeguarded against data leakage, malware, and exploited vulnerabilities.

Identity Management

Identity Management is used to regulate access to information and computing resources.

To incorporate the customer’s identity management system into their infrastructure, cloud providers can use federation or SSO technology, or a biometric-based identification system. Alternatively, they can supply an identity management system of their own.

CloudID, for example, offers cloud-based and cross-enterprise biometric identification while maintaining privacy. It ties users’ personal information to their biometrics and saves it in an encrypted format.

Physical Security

IT hardware such as servers, routers, and cables is also vulnerable. Cloud service providers should physically secure it against unauthorized access, interference, theft, fires, floods, and other hazards.

This is accomplished by serving cloud applications from data centers that have been professionally specified, designed, built, managed, monitored, and maintained.

Privacy

Sensitive information like card details or addresses should be masked and encrypted, with access limited to only a few authorized people. Apart from financial and personal information, digital identities, credentials, and data about customer activity should also be protected.
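As a simple illustration of masking, the helper below hides all but the last four digits of a card number; a real system would pair masking like this with encryption and strict access control.

```python
# Minimal sketch: masking a card number so only the last four digits remain visible.
def mask_card_number(card_number: str) -> str:
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1234"))  # ************1234
```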

Penetration Testing

Penetration testing rules of engagement are essential, considering the cloud is shared between customers or tenants. The cloud provider is responsible for cloud security and should authorize any scanning and penetration testing, whether performed from inside or outside the environment.

Parting words

It’s easy to see why so many people enjoy using the cloud and are ready to entrust their sensitive data to it. However, a data leak could jeopardize this confidence. As a result, cloud computing security and privacy must form a solid line of defense against these cyber threats.

If you’re overwhelmed by the sheer range of threats in cloud computing or lack the resources to put a secure cloud infrastructure in place, get on a call with us for a quick consultation.