How to monitor Disk Health on Linux/Windows?

Recently, we published an article about Disk Health Checks on our blog and explained what they are and the benefits they offer to system administrators. Today, we want to delve deeper into this topic and explore how disk health checks work on Windows and Linux servers.

What is a Disk Health Check?

A disk health check is the process of evaluating the condition and performance of a storage device, such as a hard disk drive (HDD) or solid-state drive (SSD). This includes checking for potential issues like bad sectors, wear and tear, read/write errors, and other signs of degradation that could lead to data loss or drive failure.

How to Check Disk Health on Linux

  1. Using smartctl from the smartmontools Package:
    • Installation:
      sudo apt-get install smartmontools # Debian/Ubuntu
      sudo yum install smartmontools # CentOS/RHEL
      sudo pacman -S smartmontools # Arch
    • Check Disk Health:
      sudo smartctl -a /dev/sda
    • Run a Self-Test:
      sudo smartctl -t short /dev/sda # Short test
      sudo smartctl -t long /dev/sda # Long test
    • View Test Results:
      sudo smartctl -l selftest /dev/sda
  2. Using badblocks:
    • Check for Bad Blocks:
      sudo badblocks -v /dev/sda
  3. Using fsck for Filesystem Check (only for unmounted partitions or during boot):
    • Check Filesystem:
      sudo fsck /dev/sda1
  4. Graphical Tools:
    • GNOME Disks (also known as gnome-disk-utility):
      sudo apt-get install gnome-disk-utility # Debian/Ubuntu

      Open GNOME Disks and select the drive to view SMART data and run tests.
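
If you want a quick, repeatable check across all drives, the overall SMART verdict can also be scripted. Here is a minimal sketch, assuming smartmontools is installed and your drives appear as /dev/sda, /dev/sdb, and so on (adjust the device glob for NVMe or virtual disks):

#!/bin/bash
# Print the overall SMART health verdict (PASSED/FAILED) for each detected drive.
for disk in /dev/sd?; do
    status=$(sudo smartctl -H "$disk" | grep -i "overall-health" | awk -F': ' '{print $2}')
    echo "$disk: ${status:-unknown}"
done

A script like this can be run from cron so that any result other than PASSED triggers an alert.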

How to Check and Monitor Disk Health on Windows

  1. Using Windows Built-in Tools:
    • CHKDSK (Check Disk):
      chkdsk C: /f /r
      • /f fixes errors on the disk.
      • /r locates bad sectors and recovers readable information.
    • Windows Disk Management: Access through Control Panel -> Administrative Tools -> Computer Management -> Disk Management.
    • Windows Explorer: Right-click on the drive -> Properties -> Tools tab -> Check.
  2. Using wmic (Windows Management Instrumentation Command-line):
    • Check SMART Status:
      wmic diskdrive get status

      Returns “OK” if the drive is healthy.

Monitoring Disk Health

Monitoring disk health on Linux and Windows is crucial for any business, even if your system doesn’t handle sensitive data. If a hard drive in one of your servers fails, your system goes down, data can be lost, and restoring it, even from backup files, becomes a significant challenge. It is therefore always better to monitor your hard drive health so you are notified before a failure occurs.

There are several ways to monitor your disk health. Xitoring has recently developed an automated system that gathers information from your system, monitors your disk health, and notifies you immediately if something goes wrong.

To use an automated monitoring system for your hard drive health check, you need to register your server with Xitoring and activate the Health Check integration on your Linux/Windows server in just a few seconds.

For more information, please follow the documentation (Disk Health Integration) and get ready for your peace of mind.

 

How to check SMTP Health on Linux?

SMTP stands for Simple Mail Transfer Protocol. It is an Internet standard communication protocol for email transmission across IP networks. SMTP is used to send messages from an email client to an email server or between servers. It operates on the application layer of the TCP/IP protocol suite and uses port 25 by default, though it can also use port 587 for messages submitted by email clients to servers and port 465 for secure communication over SSL/TLS.

The primary purpose of SMTP is to set up communication rules between servers, enabling them to relay email messages to the correct destination. SMTP handles the sending part of the email delivery process, while the receiving side typically involves additional protocols like IMAP (Internet Message Access Protocol) or POP3 (Post Office Protocol version 3) for email retrieval and management by the end user.

Monitoring SMTP on Linux can be approached through various methods, including command-line tools, logging, and network monitoring tools. Here are some examples to get you started:

1. Using telnet or nc (netcat)

You can manually test an SMTP server’s responsiveness and simulate sending an email using telnet or nc. This method allows you to directly interact with the SMTP server.

telnet smtp.example.com 25

Or using nc:

nc smtp.example.com 25

You then follow the SMTP protocol steps manually (HELO, MAIL FROM, RCPT TO, DATA, etc.).
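
For reference, a minimal manual session looks roughly like the following (the hostnames and addresses are placeholders, and the numeric replies come from the server):

HELO client.example.com
MAIL FROM:<test@example.com>
RCPT TO:<user@example.com>
DATA
Subject: SMTP health check

This is a test message.
.
QUIT

A healthy server answers each step with a 2xx or 3xx status code (for example, 250 after MAIL FROM and RCPT TO, and 354 after DATA); a 4xx or 5xx reply points to a configuration or policy problem.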

2. Using swaks

Swaks (Swiss Army Knife for SMTP) is a versatile, scriptable tool that can test various aspects of SMTP servers, including TLS/SSL support, authentication, and custom headers.

swaks --to user@example.com --from test@example.com --server smtp.example.com
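
As an example of a more complete check, the following also exercises STARTTLS and authentication on the submission port (the server, credentials, and addresses below are placeholders):

swaks --to user@example.com --from test@example.com --server smtp.example.com --port 587 --tls --auth LOGIN --auth-user test@example.com --auth-password 'secret'

A non-zero exit code, or an error in the printed transcript, shows which stage (connection, TLS, authentication, or delivery) failed.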

3. Monitoring with tcpdump

tcpdump is a network packet analyzer that allows you to capture and display the TCP/IP packets being transmitted or received over a network to which the computer is attached. This can be used to monitor SMTP traffic.

tcpdump -i any port 25 -A
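
To also capture traffic on the submission port, or to save the capture for later inspection in a tool such as Wireshark, variants like these can be used:

tcpdump -i any 'port 25 or port 587' -A
tcpdump -i any 'port 25 or port 587' -w smtp.pcap

Keep in mind that for TLS-encrypted sessions only the connection metadata is readable, not the message content.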

4. Using logwatch or logcheck

These tools can monitor your SMTP server logs for you, summarizing and highlighting important events. logwatch and logcheck can be configured to send daily summaries or alerts based on specific log patterns, which is useful for spotting issues or understanding usage patterns.
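
As an illustration, a one-off summary of today’s mail activity with logwatch (assuming a Postfix-based mail server and a standard logwatch installation) could look like:

logwatch --service postfix --range today --detail high --output stdout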

5. Setting up Nagios or Zabbix

Both Nagios and Zabbix are powerful monitoring systems that can be set up to monitor SMTP services. They can check SMTP server availability, queue lengths, round-trip email delivery, and more, providing alerts and detailed reports. Configuring such self-hosted monitoring tools is time- and resource-consuming, however, and they require a lot of ongoing maintenance, so we suggest using Xitoring’s SMTP Monitoring, which handles the monitoring for you as a packaged service.
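
If you do prefer the self-hosted route, the standard monitoring-plugins package ships a check_smtp plugin as a reference point; a typical service check (the path and thresholds are illustrative) looks like:

/usr/lib/nagios/plugins/check_smtp -H smtp.example.com -p 25 -w 5 -c 10

It returns OK, WARNING, or CRITICAL based on the connection time and the server’s response.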

6. Using iftop or nethogs

For real-time network usage monitoring, iftop and nethogs show the bandwidth usage on the interfaces. While they don’t monitor SMTP specifically, they can be useful to identify unusual levels of network activity that might indicate an issue with your SMTP server.
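
For example, to narrow iftop down to SMTP traffic on a given interface (the interface name here is just an example), you can pass a pcap-style filter:

iftop -i eth0 -f "port 25"
nethogs eth0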

Monitoring SMTP effectively often involves a combination of these tools and methods to ensure the server is performing as expected, is secure, and is not being abused for spam.

MySQLTuner

MySQLTuner – Optimizing MySQL Performance

In the world of database management systems, MySQL is one of the most popular open-source relational databases. Its adaptability and scalability make it a popular choice for running a wide range of online applications, from tiny blogs to large-scale business systems. However, as with any software, tuning MySQL performance is critical to guaranteeing smooth operation and application responsiveness. This is where tools like MySQLTuner come in handy.

What is MySQLTuner?

MySQLTuner is a powerful Perl script that analyzes your MySQL database configuration and suggests adjustments to optimize its performance and stability. It provides valuable insights into your MySQL server’s current state and offers recommendations tailored to your system’s specific workload and resources. By following MySQLTuner’s suggestions, you can fine-tune your MySQL server configuration for better efficiency, throughput, and reliability. 

According to the GitHub page, MySQLTuner supports around 300 indicators for MySQL/MariaDB/Percona Server in its latest version and is actively maintained, supporting many configurations such as Galera Cluster, TokuDB, Performance Schema, Linux OS metrics, InnoDB, MyISAM, Aria, and more.
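
To give an idea of how it is typically run (the short URL below is the project’s download address; verify it against the GitHub page before use), downloading and executing the script looks like this:

wget http://mysqltuner.pl/ -O mysqltuner.pl
perl mysqltuner.pl

The script inspects the running MySQL/MariaDB instance (it may prompt for database credentials) and prints its findings and recommendations to the terminal.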


How to increase database performance

Did you know that a one-second delay in website load time can result in an 11% reduction in page visits and a 7% decrease in conversions?

Databases are the foundation of many applications and services. When a database is hosted on a Linux server, improving its performance is critical to ensuring application responsiveness, customer satisfaction, and operational efficiency. Performance problems in a database can degrade the user experience and result in real business losses.

In this post, I’ll look at advanced ways to improve the performance of databases running on Linux servers. I’ll focus on popular open-source databases such as MySQL, PostgreSQL, and MongoDB, which are known for their robust features and smooth integration with the Linux environment.


How to Fine-Tune Linux Kernel Parameters

Servers work really hard. They’re like the busy bees of the computer world!  Sometimes, servers can get a little tired and slow, and we don’t want that. It’s important to give them a boost and help them become faster and stronger. One way to do this is by playing with something called Linux kernel parameters.

Linux is known for its flexibility and ability to be customized, which allows system administrators to adjust its behavior to achieve top performance. The secret to tapping into this potential is knowing how to fine-tune its kernel parameters. Adjusting these parameters affects the Linux kernel’s resource management, impacting networking, file systems, and memory.

This guide covers Linux kernel optimization by focusing on TCP/IP settings, file system parameters, and memory management for improved server performance.

What Are Kernel Parameters?

The Linux kernel is the core of the Linux operating system. It’s responsible for controlling hardware and software resources, like your computer’s memory and network connections. Kernel parameters are configuration options that let you change how the kernel works, letting you fine-tune the system’s performance.
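
In practice, kernel parameters are read and changed through the sysctl interface. A minimal example, with purely illustrative values rather than recommendations:

sysctl vm.swappiness                                        # read the current value
sysctl -w vm.swappiness=10                                  # change it for the running kernel
echo "vm.swappiness = 10" > /etc/sysctl.d/99-tuning.conf    # persist the change across reboots
sysctl --system                                             # reload all sysctl configuration files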


Optimizing PHP-FPM for Better Performance

Introduction

If you are here, you are probably hosting a PHP-based website or web application. The good news is that you are reading the correct article.

Optimizing PHP-FPM can have a deep impact on your website or web application’s performance and efficiency. This is because PHP-FPM (PHP FastCGI Process Manager) is the server module responsible for handling PHP requests, particularly when you are using NGINX as your web server. By optimizing PHP-FPM you can:

  • reduce resource usage
  • reduce the response time for PHP requests
  • handle more requests concurrently
  • prevent performance degradation
  • improve the reliability of your web server over time

All of this leads to a faster and more efficient website or web application.

Note that all of the steps in this article were performed in an environment running PHP 8.1.14.

Optimize PHP-FPM

Optimizing PHP-FPM is not rocket science if you know the variables and their purpose. So we are going to briefly explain the most important ones:

The Process manager types (pm)

The process manager configuration in PHP-FPM (pm) controls how PHP-FPM manages the worker processes that handle PHP requests. These settings state how many workers should be created to handle PHP requests and how the pool should respond to changes in demand.

There are three pm types:

Ondemand

With this process manager, PHP-FPM starts with no worker processes and spawns them only when requests arrive, terminating them again after an idle period. This keeps resource usage low on quiet servers, but the cost of forking new processes under sudden load can slow down the server.

Dynamic

This process manager keeps a pool of worker processes to handle incoming requests. The number of worker processes is adjusted automatically, within configured limits, based on current demand. It is generally more responsive than the ondemand process manager, because spare workers are already running when a request arrives, at the cost of a somewhat higher baseline memory footprint.

Static

As the name suggests, the static option fixes the number of worker processes at pm.max_children and does not adapt it under any circumstances.

Most of the tips and configurations provided in this article are valid when the Process Manager type (pm) is set to dynamic, so it is assumed that the pm is set to dynamic.

Tuning the Configurations

To access the main configuration file of the PHP-FPM on an RHEL based OS you need to execute the following command:

vim /etc/opt/remi/php81/php-fpm.d/www.conf

The configuration file location could be different based on your Linux Distro or even based on your installation method.
After selecting your pm configuration based on the type and load of your website or web application, you can tweak the following variables to get the most out of the PHP-FPM.

pm.max_children

This setting determines the maximum number of child processes that can run simultaneously. Setting it too low causes requests to queue (or fail) under load, while setting it too high can exhaust the server’s memory.

pm.max_children = 50
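
A common rule of thumb for sizing this value (our own assumption, not an official formula) is to divide the memory you can dedicate to PHP-FPM by the average memory footprint of a single worker. For example, with roughly 3 GB reserved for PHP-FPM and workers averaging 60 MB each, 3000 / 60 ≈ 50. You can estimate the per-worker footprint with something like:

ps --no-headers -o rss -C php-fpm | awk '{sum+=$1; n++} END {if (n) printf "%.0f MB\n", sum/n/1024}'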

pm.start_servers

The number of worker processes to start when PHP-FPM is launched. This option is only applicable when using the dynamic process manager.

We are going to set it to:

pm.start_servers = 10

pm.min_spare_servers & pm.max_spare_servers

The minimum and maximum number of idle worker processes to keep available at all times. This option is only applicable when using the “dynamic” process manager.

pm.min_spare_servers = 5
pm.max_spare_servers = 15

pm.max_requests

The maximum number of requests that each worker process should handle before it is terminated and replaced with a new process.

We are going to uncomment it and set it to:

pm.max_requests = 500

Handling Unexpected Behavior

These three options in a PHP-FPM configuration file are related to the management of worker processes in the event of an error or unexpected behavior.

  • emergency_restart_threshold: This setting specifies the number of worker processes that need to exit in a short period of time before PHP-FPM will trigger an emergency restart. An emergency restart will terminate all worker processes and restart PHP-FPM, which can help resolve problems caused by a malfunctioning worker process.
  • emergency_restart_interval: This setting specifies the interval of time in which the emergency_restart_threshold must be exceeded before an emergency restart is triggered. In this example, the interval is set to 1 minute.
  • process_control_timeout: This setting specifies the amount of time PHP-FPM will wait for a worker process to exit gracefully before forcing it to terminate. If a worker process does not exit within the specified time, PHP-FPM will kill it and start a new worker process to take its place. This setting helps to prevent worker processes from hanging or slowing down the system, and helps to ensure the overall stability of PHP-FPM.

We are going to set them like below:

[global]
emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

PHP Monitoring

Make sure to use PHP monitoring to reduce bottlenecks and improve the overall performance of your application and server.

Conclusion

Optimizing PHP-FPM is an important step in ensuring the performance and stability of your web server. By properly configuring PHP-FPM, you can control the number of worker processes, manage resource utilization, and improve response times for PHP requests.

In addition to optimizing PHP-FPM, it’s also important to optimize your PHP configuration file (php.ini) and your web server software (such as Nginx). This can involve setting appropriate values for memory limits, enabling caching, and tweaking other settings to match the specific requirements of your web applications.

To get the most out of your web server, it’s recommended to take a comprehensive approach to optimization, which includes optimizing PHP-FPM, PHP, and Nginx. There are many online resources available that can help you learn more about optimizing these components, and it’s always a good idea to stay up-to-date with the latest best practices and techniques. So, take the time to read more about optimizing PHP and Nginx, and start maximizing the performance of your web server today!

Setup local DNS caching with DNSmasq on CentOS 8

Introduction

In today’s interconnected world, reliable and efficient network infrastructure is crucial for smooth online experiences. Whether you’re a seasoned system administrator or an enthusiast looking to optimize your network, setting up a local DNS caching server can significantly enhance your network’s performance and reduce latency.

DNS, short for Domain Name System, plays an important role in translating domain names into IP addresses. When you access a website or any online service, your device needs to query a DNS server to get the corresponding IP address. By default, these DNS queries are sent to remote DNS servers, which can cause delays and increase network traffic.

To overcome these challenges, we will implement a local DNS caching server using DNSmasq on CentOS 8. DNSmasq is a lightweight and versatile DNS forwarding and DHCP server that can be easily configured to provide local DNS caching capabilities. This setup enables your CentOS 8 machine to cache DNS responses locally, reducing the reliance on external DNS servers and accelerating the overall network performance.

Throughout this blog post, we will guide you step by step through the process of installing and configuring DNSmasq on CentOS 8. We’ll cover the necessary prerequisites and explain the key concepts behind DNS caching. By the end, you’ll have a fully operational local DNS caching server that optimizes DNS resolution and improves your network’s responsiveness.

Whether you’re running a home network, a small business infrastructure, or a larger enterprise setup, implementing local DNS caching with DNSmasq on CentOS 8 can have significant advantages. It not only reduces the load on external DNS servers but also enhances the reliability and security of DNS resolution within your network.

Prerequisites

Before proceeding with the installation and configuration of DNSmasq, make sure you have the following:

  • A CentOS 8 machine with root or sudo privileges.
  • A stable internet connection.
  • Basic knowledge of the Linux command line.

Install DNSmasq

The first step is to install DNSmasq on your CentOS 8 machine. Open a terminal or SSH into your CentOS server and run the following command:

dnf install dnsmasq

You can also use yum to install DNSmasq:

yum install dnsmasq

Configuring DNSmasq

Once DNSmasq is installed, it’s time to configure its settings. The main configuration file for DNSmasq is located at /etc/dnsmasq.conf. Open the file using a text editor:

vim /etc/dnsmasq.conf

In the configuration file, you’ll find different options to customize DNSmasq. Some important settings to consider are:

listen-address

Specify the IP address on which DNSmasq should listen for DNS queries. Set it to 127.0.0.1 if you are using DNSmasq as a local DNS caching service.
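
For a local caching resolver, the line would simply be:

listen-address=127.0.0.1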

resolv-file

Set the path to the file containing the upstream DNS servers. You can use any file; for example, we are creating a file named “resolv.dnsmasq” in “/etc” with the following content:

vim /etc/resolv.dnsmasq
nameserver 8.8.8.8
nameserver 1.1.1.1

With this in place, DNSmasq will query DNS records from 8.8.8.8 and 1.1.1.1 and cache the responses locally.
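
Keep in mind that DNSmasq only reads this file if the resolv-file option in “/etc/dnsmasq.conf” points to it; assuming the path above, the line would be:

resolv-file=/etc/resolv.dnsmasq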

cache-size

Define the maximum number of DNS records to cache. The cache-size value represents the maximum number of DNS records that can be stored in the cache. It is defined in terms of the number of DNS resource records (RRs) rather than the amount of memory consumed. Each cached DNS record takes up a certain amount of memory, and as the cache size increases, so does the memory usage of the DNSmasq process.

The appropriate value for cache-size depends on factors such as the available memory on your CentOS 8 machine and the expected DNS query load. It’s important to strike a balance between maximizing cache utilization and avoiding excessive memory consumption.

cache-size=2000

no-resolv

Uncomment this line to prevent DNSmasq from reading upstream servers from the “/etc/resolv.conf” file, so that only the servers listed in your “resolv-file” are used.

no-poll

Uncomment this line to stop DNSmasq from polling its resolv file for changes.

By default, DNSmasq watches “/etc/resolv.conf” (or the file set with “resolv-file”) and reloads the list of upstream servers whenever it changes. When your upstream servers are fixed in a dedicated file such as “/etc/resolv.dnsmasq”, this polling is unnecessary, and disabling it keeps the upstream list stable even if “/etc/resolv.conf” is later rewritten to point at 127.0.0.1.

Start and Enable DNSmasq

Now that you have configured DNSmasq, you need to start and enable the service before making the final changes to the Linux network settings. Execute the following commands to start the DNSmasq service and make it run at startup:

systemctl start dnsmasq
systemctl enable dnsmasq

Set default DNS to DNSmasq

Note that the “no-resolv” option only tells DNSmasq itself to ignore “/etc/resolv.conf” when looking for upstream servers; the operating system still needs to be told to send its DNS queries to DNSmasq.

As the last step, you therefore need to make one change in the “/etc/resolv.conf” file: comment out all existing “nameserver” lines and add a new one with 127.0.0.1 as the value, as in the following example:

nameserver 127.0.0.1
#nameserver 8.8.8.8
#nameserver 1.1.1.1

Also, it’s recommended to apply this change in your network configuration in the “network-scripts” file:

vim /etc/sysconfig/network-scripts/YOUR_NETWORK_INTERFACE_NAME

Set the DNS1 value to 127.0.0.1 and set the DNS2 to another DNS server as the backup.
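
In a typical ifcfg-style file this would look like the following (8.8.8.8 here is just an example of a backup server):

DNS1=127.0.0.1
DNS2=8.8.8.8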

After that you need to restart your network interface for changes to take effect:

nmcli device reapply YOUR_NETWORK_INTERFACE_NAME

If you want to find your network interface name, you can use the “ifconfig” command.
To install “ifconfig”:

yum install net-tools
ifconfig

Conclusion

The installation and configuration process detailed in this article provides step-by-step instructions, enabling system administrators and network enthusiasts to seamlessly deploy a fully operational local DNS caching server on their CentOS 8 machine. By following these guidelines, users can effectively optimize DNS resolution, streamline network performance, and elevate the overall online experience.

How to Configure Jenkins with SSL Behind Nginx on Ubuntu 20.04

Jenkins is an open-source automation server that enables the continuous delivery of software. It is used to set up the full software delivery pipeline. This lets developers manage and control the software delivery process throughout the product’s lifecycle, so they can build, test, and reliably deploy their software.

Jenkins has an extendable architecture with a dynamic and active community. The programming language used is Java. In most cases, Jenkins operates as a self-contained Java servlet application. Java servlet containers like Apache Tomcat and GlassFish can also be used to run the program.

Organizations can use Jenkins to automate and speed up the software development process. Jenkins combines all development lifecycle stages, including build, document, test, package, stage, deploy, static analysis, and many others.

Plugins are what allow Jenkins to achieve Continuous Integration and to tie into the various stages of a DevOps pipeline. To incorporate a particular tool, you install its plugin; Git, Maven 2 projects, Amazon EC2, and HTML Publisher are a few examples.


How to install and use TCPflow (TCPDump alternative)

On Unix-like systems like Linux, TCPflow is a free, open-source, and potent command line utility for network traffic analysis. It records information sent or received across TCP connections and saves it in a file for subsequent examination in a way that makes protocol analysis and debugging possible.

Since it processes packets from the wire or a saved file, it is a program similar to tcpdump and is compatible with the same powerful filtering expressions as its sibling. The main distinction is that tcpflow separates TCP packets into individual files (one for each direction of flow) and reassembles each flow for later analysis.

Its feature set also includes a sophisticated plug-in system for reversing MIME encoding, decompressing HTTP connections that have been compressed, and calling external programs for post-processing, among other things.

Tcpflow has a wide range of applications, including understanding network packet flows, forensics, and disclosing the contents of HTTP connections.
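
As a quick illustration of typical usage (the interface name and filter are examples), capturing HTTP flows into the current directory looks like:

tcpflow -i eth0 port 80

Each direction of every TCP connection is then written to its own file, named after the source and destination addresses and ports.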


How to Add a User to Sudoers in AlmaLinux or Rocky Linux

What is sudoers in Linux?

Have you ever wondered why it takes “sudo” or “su” to make system-wide changes in a Linux terminal? Su means “super user,” while sudo means “super user do.” With these commands, you’re requesting root access and the status of a super user. Much like the list at a VIP club, Linux checks a specific file, the sudoers file, to see whether you are authorized to be given root access. If your name is not on the list, you can still obtain root capabilities, but you must log in as root to do so, which is not a very secure course of action: with a root login, your system’s doors are wide open, making it vulnerable. The “sudo” and “su” commands, by contrast, let you run just the specific program that you specify with elevated privileges.

In certain distributions, that file already contains an entry for the main user account. You simply type:

sudo command

And enter your user account’s password, or

su root

And then type the command after entering the root password. I’ve come to understand that not all distributions support this simple process, and you might need to manually add your username to the sudoers file. We’ve just grabbed the VIP list from the dozing guard, and we’ll show you how to add your name.
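
As a preview of the approach covered in the rest of the article: on RHEL-family distributions such as AlmaLinux and Rocky Linux, the usual shortcut (assuming your account is named youruser) is to add the account to the wheel group, to which the default sudoers file already grants full sudo rights:

usermod -aG wheel youruser

After logging out and back in, sudo should work for that account.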