How to check SMTP Health on Linux?

SMTP stands for Simple Mail Transfer Protocol. It is an Internet standard communication protocol for email transmission across IP networks. SMTP is used to send messages from an email client to an email server or between servers. It operates on the application layer of the TCP/IP protocol suite and uses port 25 by default, though it can also use port 587 for messages submitted by email clients to servers and port 465 for secure communication over SSL/TLS.

The primary purpose of SMTP is to set up communication rules between servers, enabling them to relay email messages to the correct destination. SMTP handles the sending part of the email delivery process, while the receiving side typically involves additional protocols like IMAP (Internet Message Access Protocol) or POP3 (Post Office Protocol version 3) for email retrieval and management by the end user.

Monitoring SMTP on Linux can be approached through various methods, including command-line tools, logging, and network monitoring tools. Here are some examples to get you started:

1. Using telnet or nc (netcat)

You can manually test an SMTP server’s responsiveness and simulate sending an email using telnet or nc. This method allows you to directly interact with the SMTP server.

telnet your.smtp.server 25

Or using nc:

nc your.smtp.server 25

You then follow the SMTP protocol steps manually (HELO, MAIL FROM, RCPT TO, DATA, etc.).
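That manual exchange can also be scripted. Below is a minimal sketch, with placeholder hostname and addresses, that prints the same SMTP command sequence and can pipe it into nc:

```shell
# Build the SMTP dialog for a basic health check.
# All hostnames and addresses below are placeholders.
smtp_dialog() {
  printf '%s\r\n' \
    'HELO client.example.com' \
    'MAIL FROM:<healthcheck@example.com>' \
    'RCPT TO:<postmaster@example.com>' \
    'DATA' \
    'Subject: SMTP health check' \
    '' \
    'Test message body.' \
    '.' \
    'QUIT'
}

smtp_dialog            # preview the commands that would be sent
# Uncomment to send the dialog to a real server:
# smtp_dialog | nc -w 5 your.smtp.server 25
```

Note that real servers expect you to wait for each reply code (220, 250, 354, and so on) before sending the next command, so piping the whole dialog blindly is only a smoke test.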

2. Using swaks

Swaks (Swiss Army Knife for SMTP) is a versatile, scriptable tool that can test various aspects of SMTP servers, including TLS/SSL support, authentication, and custom headers.

swaks --to recipient@example.com --from sender@example.com --server your.smtp.server

3. Monitoring with tcpdump

tcpdump is a network packet analyzer that allows you to capture and display the TCP/IP packets being transmitted or received over a network to which the computer is attached. This can be used to monitor SMTP traffic.

tcpdump -A -i any port 25

4. Using logwatch or logcheck

These tools can monitor your SMTP server logs for you, summarizing and highlighting important events. logwatch and logcheck can be configured to send daily summaries or alerts based on specific log patterns, which is useful for spotting issues or understanding usage patterns.

5. Setting up Nagios or Zabbix

Both Nagios and Zabbix are powerful monitoring systems that can be set up to monitor SMTP services. They can check SMTP server availability, queue lengths, round-trip email delivery, and more, providing alerts and detailed reports. However, configuring such self-hosted monitoring tools is time- and resource-consuming, and they also require a lot of ongoing maintenance; if you would rather avoid that work, the SMTP Monitoring feature of Xitoring packages all of this monitoring for you.

6. Using iftop or nethogs

For real-time network usage monitoring, iftop and nethogs show bandwidth usage, per connection and per process respectively. While they don’t monitor SMTP specifically, they can be useful for identifying unusual levels of network activity that might indicate an issue with your SMTP server.

Monitoring SMTP effectively often involves a combination of these tools and methods to ensure the server is performing as expected, secure, and not being abused for spam.
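As a first line of defense, you can simply check that the SMTP port accepts TCP connections at all. The sketch below uses bash’s /dev/tcp pseudo-device; the server name is a placeholder, and port 2525 on localhost is used only to demonstrate the failure case:

```shell
# Report whether a host accepts TCP connections on an SMTP port.
check_smtp_port() {
  host=$1
  port=$2
  if timeout 3 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
    echo "${host}:${port} UP"
  else
    echo "${host}:${port} DOWN"
  fi
}

check_smtp_port 127.0.0.1 2525    # usually DOWN, unless something listens on 2525
# check_smtp_port your.smtp.server 25
```

This only proves the port is reachable; the banner and command checks from the earlier sections are still needed to confirm the server actually speaks SMTP.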


MySQLTuner – Optimizing MySQL Performance

In the world of database management systems, MySQL is one of the most popular open-source relational databases. Its adaptability and scalability make it a popular choice for running a wide range of online applications, from tiny blogs to large-scale business systems. However, like with any program, improving MySQL speed is critical for guaranteeing smooth operations and application responsiveness. This is when tools like MySQLTuner come in handy.

What is MySQLTuner?

MySQLTuner is a powerful Perl script that analyzes your MySQL database configuration and suggests adjustments to optimize its performance and stability. It provides valuable insights into your MySQL server’s current state and offers recommendations tailored to your system’s specific workload and resources. By following MySQLTuner’s suggestions, you can fine-tune your MySQL server configuration for better efficiency, throughput, and reliability. 

According to the GitHub page, MySQLTuner supports around 300 indicators for MySQL/MariaDB/Percona Server in its latest version and is actively maintained, supporting many configurations such as Galera Cluster, TokuDB, Performance Schema, Linux OS metrics, InnoDB, MyISAM, Aria, and even more.


Optimizing PHP-FPM for Better Performance


If you are here, you are probably hosting a PHP-based website or web application. The good news is that you are reading the correct article.

Optimizing PHP-FPM can have a deep impact on your website or web application’s performance and efficiency. This is because PHP-FPM (PHP FastCGI Process Manager) is the server module responsible for handling PHP requests, typically when you are using NGINX as your web server. By optimizing PHP-FPM you can:

  • reduce resource usage
  • reduce response times for PHP requests
  • handle more requests concurrently
  • prevent performance degradation
  • improve the reliability of your web server over time

All of that leads to a faster and more efficient website or web application.

Note that all of the steps in this article were performed in an environment running PHP 8.1.14.

Optimize PHP-FPM

Optimizing PHP-FPM is not rocket science if you know the variables and their purpose. So we are going to briefly explain the most important ones:

The Process manager types (pm)

The process manager setting in PHP-FPM (pm) controls how PHP-FPM manages the worker processes that handle PHP requests. It determines how many workers should be created to serve PHP requests and how the pool should respond to changes in demand.

We have three pm types:


ondemand

This process manager spawns a worker process whenever a request is received by PHP-FPM and no idle worker is available. This ensures that every request is processed by a separate process, which can be useful for reducing the risk of one request affecting another, but it can also be resource-intensive and slow down the server.


dynamic

This process manager uses a pool of worker processes to handle incoming requests. The number of worker processes is configurable and is adjusted automatically based on current demand. This process manager is more efficient than the ondemand process manager, as it uses fewer resources and can handle requests more quickly, but it may not provide the same level of isolation.


static

The static option, as is clear from the name, fixes the number of worker processes at a constant value and does not adapt under any circumstances.

Most of the tips and configurations provided in this article are valid when the Process Manager type (pm) is set to dynamic, so it is assumed that the pm is set to dynamic.

Tuning the Configurations

To access the main configuration file of PHP-FPM on an RHEL-based OS, execute the following command:

vim /etc/opt/remi/php81/php-fpm.d/www.conf

The configuration file location may differ depending on your Linux distro or your installation method.
After selecting your pm configuration based on the type and load of your website or web application, you can tweak the following variables to get the most out of PHP-FPM.


pm.max_children

This setting determines the maximum number of child processes that can run simultaneously. Setting it too low can cause PHP-FPM to spawn new processes too frequently, leading to overhead.

pm.max_children = 50


pm.start_servers

The number of worker processes to start when PHP-FPM is launched. This option is only applicable when using the dynamic process manager.

We are going to set it to:

pm.start_servers = 10

pm.min_spare_servers & pm.max_spare_servers

The minimum and maximum number of idle worker processes to keep available at all times. This option is only applicable when using the “dynamic” process manager.

pm.min_spare_servers = 5
pm.max_spare_servers = 15


pm.max_requests

The maximum number of requests that each worker process should handle before it is terminated and replaced with a new process. Recycling workers this way helps contain memory leaks in PHP code or extensions.

We are going to uncomment it and set it to:

pm.max_requests = 500
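The pool sizes above are illustrative. A common rule of thumb, though not an official formula, is to derive pm.max_children from the memory you can dedicate to PHP-FPM divided by the average footprint of one worker; the figures below are made-up examples, so measure your own workers before applying the result:

```shell
# Rough pm.max_children estimate: dedicated RAM / average worker size.
avail_mb=4096       # MB of RAM you can dedicate to PHP-FPM (example value)
per_worker_mb=64    # average MB used by one PHP-FPM worker (example value)

max_children=$((avail_mb / per_worker_mb))
echo "pm.max_children = ${max_children}"    # prints: pm.max_children = 64
```

On a live server, the per-worker figure can be estimated from the resident memory (RSS) of running php-fpm processes, for example with ps or top.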

Handling Unexpected behavior

These three options in a PHP-FPM configuration file are related to the management of worker processes in the event of an error or unexpected behavior.

  • emergency_restart_threshold: This setting specifies the number of worker processes that need to exit in a short period of time before PHP-FPM will trigger an emergency restart. An emergency restart will terminate all worker processes and restart PHP-FPM, which can help resolve problems caused by a malfunctioning worker process.
  • emergency_restart_interval: This setting specifies the interval of time in which the emergency_restart_threshold must be exceeded before an emergency restart is triggered. In this example, the interval is set to 1 minute.
  • process_control_timeout: This setting specifies the amount of time PHP-FPM will wait for a worker process to exit gracefully before forcing it to terminate. If a worker process does not exit within the specified time, PHP-FPM will kill it and start a new worker process to take its place. This setting helps to prevent worker processes from hanging or slowing down the system, and helps to ensure the overall stability of PHP-FPM.

We are going to set them like below:

emergency_restart_threshold = 10
emergency_restart_interval = 1m
process_control_timeout = 10s

PHP Monitoring

Make sure to use PHP monitoring to reduce bottlenecks and improve the overall performance of your application and server.


Optimizing PHP-FPM is an important step in ensuring the performance and stability of your web server. By properly configuring PHP-FPM, you can control the number of worker processes, manage resource utilization, and improve response times for PHP requests.

In addition to optimizing PHP-FPM, it’s also important to optimize your PHP configuration file (php.ini) and your web server software (such as Nginx). This can involve setting appropriate values for memory limits, enabling caching, and tweaking other settings to match the specific requirements of your web applications.

To get the most out of your web server, it’s recommended to take a comprehensive approach to optimization, which includes optimizing PHP-FPM, PHP, and Nginx. There are many online resources available that can help you learn more about optimizing these components, and it’s always a good idea to stay up-to-date with the latest best practices and techniques. So, take the time to read more about optimizing PHP and Nginx, and start maximizing the performance of your web server today!

Setup local DNS caching with DNSmasq on CentOS 8


In today’s interconnected world, reliable and efficient network infrastructure is crucial for smooth online experiences. Whether you’re a seasoned system administrator or an enthusiast looking to optimize your network, setting up a local DNS caching server can significantly enhance your network’s performance and reduce latency.

DNS, short for Domain Name System, plays an important role in translating domain names into IP addresses. When you access a website or any online service, your device needs to query a DNS server to get the corresponding IP address. By default, these DNS queries are sent to remote DNS servers, which can cause delays and increase network traffic.

To overcome these challenges, we will implement a local DNS caching server using DNSmasq on CentOS 8. DNSmasq is a lightweight and versatile DNS forwarding and DHCP server that can be easily configured to provide local DNS caching capabilities. This setup enables your CentOS 8 machine to cache DNS responses locally, reducing the reliance on external DNS servers and accelerating the overall network performance.

Throughout this blog post, we will guide you step-by-step through the process of installing and configuring DNSmasq on CentOS 8. We’ll cover the necessary prerequisites and explain the key concepts behind DNS caching. By the end, you’ll have a fully operational local DNS caching server that optimizes DNS resolution and improves your network’s responsiveness.

Whether you’re running a home network, a small business infrastructure, or a larger enterprise setup, implementing local DNS caching with DNSmasq on CentOS 8 can have significant advantages. It not only reduces the load on external DNS servers but also enhances the reliability and security of DNS resolution within your network.


Before proceeding with the installation and configuration of DNSmasq, make sure you have the following:

  • A CentOS 8 machine with root or sudo privileges.
  • A stable internet connection.
  • Basic knowledge of the Linux command line.

Install DNSmasq

The first step is to install DNSmasq on your CentOS 8 machine. Open a terminal or SSH into your CentOS server and run the following command:

dnf install dnsmasq

You can also use yum to install DNSmasq:

yum install dnsmasq

Configuring DNSmasq

Once DNSmasq is installed, it’s time to configure its settings. The main configuration file for DNSmasq is located at /etc/dnsmasq.conf. Open the file using a text editor:

vim /etc/dnsmasq.conf

In the configuration file, you’ll find different options to customize DNSmasq. Some important settings to consider are:


listen-address

Specify the IP address on which DNSmasq should listen for DNS queries. Set it to 127.0.0.1 if you are using DNSmasq as a local DNS caching service:

listen-address=127.0.0.1


resolv-file

Set the path to the file containing upstream DNS servers:

resolv-file=/etc/resolv.dnsmasq

You can use any file; here, for example, we are creating a file named “resolv.dnsmasq” in “/etc”. Open it with:

vim /etc/resolv.dnsmasq

Add one or more upstream nameservers to this file, for example:

nameserver 8.8.8.8
nameserver 1.1.1.1

These settings enable DNSmasq to forward DNS queries to the 8.8.8.8 and 1.1.1.1 upstream servers (used here purely as examples) and cache the answers locally.


cache-size

Define the maximum number of DNS records to cache, for example:

cache-size=1000

The cache-size value represents the maximum number of DNS records that can be stored in the cache. It is defined in terms of the number of DNS resource records (RRs) rather than the amount of memory consumed. Each cached DNS record takes up a certain amount of memory, and as the cache size increases, so does the memory usage of the DNSmasq process.

The appropriate value for cache-size depends on factors such as the available memory on your CentOS 8 machine and the expected DNS query load. It’s important to strike a balance between maximizing cache utilization and avoiding excessive memory consumption.



no-resolv

Uncomment this line to prevent DNSmasq from reading its upstream servers from the “/etc/resolv.conf” file, so that only the servers listed in the resolv-file are used.


no-poll

Uncomment this line to stop DNSmasq from polling its resolver files for changes.

By default, DNSmasq periodically re-checks “/etc/resolv.conf” (or the file set by resolv-file) and reloads the upstream server list whenever the file changes. With no-poll enabled, the file is read once at startup and not watched afterwards, which keeps the upstream configuration stable and avoids unnecessary re-reads, at the cost of requiring a DNSmasq restart to pick up changes to the file.

Start and Enable DNSmasq

Now that you have configured DNSmasq, you need to start and enable the service before making the final changes to the Linux network settings. Execute the following commands to start the DNSmasq service and make it run at startup:

systemctl start dnsmasq
systemctl enable dnsmasq

Set default DNS to DNSmasq

As the last step, point the system resolver at DNSmasq by editing the “/etc/resolv.conf” file: comment out all lines that refer to “nameserver” and write a new one with 127.0.0.1 as the value, as in the following example:

nameserver 127.0.0.1

Note that the “no-resolv” option set earlier only stops DNSmasq itself from reading “/etc/resolv.conf”; the system still consults this file to decide where to send its DNS queries.
Also, it’s recommended to apply this change in your network configuration in the “network-scripts” file:

vim /etc/sysconfig/network-scripts/YOUR_NETWORK_INTERFACE_NAME

Set the DNS1 value to 127.0.0.1 and set DNS2 to another DNS server as the backup.

After that you need to restart your network interface for changes to take effect:

nmcli device reapply YOUR_NETWORK_INTERFACE_NAME

If you want to find your network interface name, you can use the “ifconfig” command.
To install “ifconfig”:

yum install net-tools


The installation and configuration process detailed in this article provides step-by-step instructions, enabling system administrators and network enthusiasts to seamlessly deploy a fully operational local DNS caching server on their CentOS 8 machine. By following these guidelines, users can effectively optimize DNS resolution, streamline network performance, and elevate the overall online experience.

How to Configure Jenkins with SSL Behind Nginx on Ubuntu 20.04

Jenkins is an open-source automation solution that enables the continuous delivery of software. It is used to set up the full software delivery pipeline. This allows developers to manage and control software delivery processes throughout the product’s lifecycle, allowing them to build, test, and reliably deploy their software.

Jenkins has an extendable architecture with a dynamic and active community. The programming language used is Java. In most cases, Jenkins operates as a self-contained Java servlet application. Java servlet containers like Apache Tomcat and GlassFish can also be used to run the program.

Organizations can use Jenkins to automate and speed up the software development process. Jenkins combines all development lifecycle stages, including build, document, test, package, stage, deploy, static analysis, and many others.

Plugins help Jenkins achieve continuous integration: the various DevOps stages are integrated through plugins, and incorporating a particular tool means installing its plugin. Git, Maven 2 projects, Amazon EC2, and HTML Publisher are a few examples.


How to install and use TCPflow (TCPDump alternative)

On Unix-like systems like Linux, TCPflow is a free, open-source, and potent command line utility for network traffic analysis. It records information sent or received across TCP connections and saves it in a file for subsequent examination in a way that makes protocol analysis and debugging possible.

Since it processes packets from the wire or a saved file, it is a program similar to tcpdump, and it is compatible with the same powerful filtering expressions as its sibling. The sole distinction is that tcpflow organizes all TCP packets into separate files (one for each direction of flow) and reassembles each flow for later analysis.

Its feature set also includes a sophisticated plug-in system for reversing MIME encoding, decompressing HTTP connections that have been compressed, and calling external programs for post-processing, among other things.

Tcpflow has a wide range of applications, including understanding network packet flows, forensics, and disclosing the contents of HTTP connections.


How to Add a User to Sudoers in AlmaLinux or Rocky Linux

What is sudoers in Linux?

Have you ever wondered why it takes “sudo” or “su” to make system-wide changes in a Linux terminal? Su means “substitute user,” while sudo means “superuser do.” With these commands, you’re requesting root access and the status of a superuser. Much like the list at a VIP club, Linux checks a specific file, the sudoers file, to see if you are authorized to be given root access. If your name is not on the list, you can still obtain root capabilities, but you must log in as root to do so, which is not a very secure course of action: with a full root session, your system’s doors are wide open, making it vulnerable. The commands “sudo” and “su” permit you to run a specific program that you specify.

In certain distributions, that file already contains the configuration for the maintenance user account. You type:

sudo command

and enter your user account’s password, or:

su root

And then type the command after entering the root password. I’ve come to understand that not all distributions support this simple process, and you might need to manually add your username to the sudoers file. Think of it as taking the VIP list from the dozing guard: we’ll show you how to add your name to it.
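As a preview, authorizing a user usually comes down to a single line in the sudoers file, which should always be edited through visudo; the username below is a placeholder:

```
# Run "visudo" and add a line like this (replace yourusername):
yourusername ALL=(ALL) ALL
```

On AlmaLinux and Rocky Linux you can alternatively add the user to the wheel group, which is granted sudo rights by default.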

Use bitlocker with powershell on Windows

BitLocker is an encryption solution for volumes first made available in Windows Vista and Windows Server 2008. BitLocker Drive Encryption (BDE) may have some of the same issues that plague other Microsoft products, but many individuals around the world use it to keep their data secure when it is at rest.

What is PowerShell?

Microsoft created PowerShell as an object-oriented automation engine and scripting language with an interactive command-line shell to assist IT professionals in automating administrative activities and configuring systems. PowerShell is part of the PowerShell family of tools.

In contrast to most command-line shells, which are text-based, PowerShell is built on the .NET Framework and works with objects. Because of its scripting features, PowerShell is used as a tool for automation by system administrators working in internal IT departments and at other entities such as managed service providers.

The original version of PowerShell was a closed-source solution exclusive to the Windows platform. In 2016, Microsoft released PowerShell as open-source software and made it compatible with macOS and Linux.

Linux Crontab Tutorial

Crontab Introduction

Cron is a daemon. Daemons are utility programs in Linux that operate in the background, monitoring for triggering events and programmed schedules and carrying out activities in response. Daemons can also be used to automate repetitive tasks.

The term “cron” can be traced back to the Greek word “Chronos,” which translates as “time,” a fitting name for a daemon that operates according to a set timetable or calendar.

The cron daemon is a long-running system tool responsible for executing commands at specific dates and times. With it, you can schedule computer activities as one-time events, occasional events, or jobs that run repeatedly on a regular basis.

Cron scheduling is useful for many businesses because it can automate repetitive operations, edit databases, data, or files, send bulk email messages, and conduct administrative tasks on a predetermined schedule.

The scheduling syntax that cron utilizes is also often used by software that does not run on operating systems. An example is Zuar’s Mitto data pipeline solution. Mitto can automate a wide range of processes by utilizing cron scheduling, including manipulating data within data warehouses, pulling data from other software, and many more.

‘Crontab’ is short for ‘cron table’: a file containing the cron schedule to be executed and the commands used to automate operations and activities. When you create a new cron job, its information is saved in the crontab file.
Only system administrators can modify the system crontab file. However, Unix-like operating systems support many users, and every user can create their own crontab file and add commands to it at any time.

Users may automate system upkeep, disk space monitoring, and backup scheduling with cron jobs. Cron jobs are ideal for servers and other machines that operate continuously because of their nature.

Cron jobs can be useful for web developers even though system administrators often utilize them.

As a website administrator, you could, for instance, set up three cron jobs: one to check for broken links every Monday at midnight, one to automatically back up your site every day at midnight, and one to clear your site’s cache every Friday at noon.
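Those three jobs could be written in a crontab roughly as follows; the script paths are hypothetical placeholders:

```
# min hour day month weekday  command
0  0  * * 1   /usr/local/bin/check-broken-links.sh   # every Monday at midnight
0  0  * * *   /usr/local/bin/backup-site.sh          # every day at midnight
0 12  * * 5   /usr/local/bin/clear-site-cache.sh     # every Friday at noon
```

Entries like these are added to your personal crontab with crontab -e.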

Backup and restore GPG keys on Linux


The issue of privacy is becoming more and more pressing. Users of Linux can encrypt files with public-key cryptography by using the gpg command. Losing your encryption keys would be a disastrous situation, so here is how you can back them up.

OpenPGP and GNU Privacy Guard

One benefit of electronic files over paper hard copies is the ability to encrypt them so that only authorized users may access them. It won’t matter if they end up in the wrong hands. The contents of the files are only accessible to you and the intended recipient.