Tonics is getting even more robust. In case you don't know what it is, Tonics is a General Multi-Purpose Modular CMS; you can read more about its Architecture & Features (Quick Overview).
If you read the Architecture and Features above and like what you see, I am looking for sponsors or funding to continue the development of the project. If you have a job related to software engineering or architecture design, or you need an extra hand, you can reach out to me: olayemi@tonics.app or devsrealmer@gmail.com. Thanks for your consideration.
Back to the guide...
In today's guide in the Road To Release category, I am sharing some best practices and techniques I learned while working on the daemon that manages the Tonics Job and Schedule Manager.
Let's go...
Concerns About Using PHP for Long-Running Tasks
Many people believe that PHP is unsuitable for building daemons or long-running tasks because it is commonly used for web development and is often associated with generating dynamic web pages.
Some of the concerns people raise about using PHP for long-running tasks or daemon processes include issues with memory management (the garbage collector can sometimes lead to memory leaks, which can be problematic in long-running processes), resource consumption, and stability.
However, these issues can often be addressed through careful coding practices, proper use of system resources, and proactive monitoring and maintenance of running processes.
Another common criticism of using PHP for daemons or long-running tasks is that it is not as performant as other languages that are optimized for low-level system programming, such as C or Go. While there may be some truth to this assertion in certain use cases, the reality is that PHP can still be highly performant when used appropriately and optimized for specific tasks.
Furthermore, it's worth noting that many developers may not fully understand the underlying principles of daemon and forking, and may simply follow the conventional wisdom of avoiding PHP in such scenarios.
Personally, I believe in investigating and experimenting with technologies to come to my own conclusions, which has saved me time and resources in the past. Some of those conclusions are shared in this guide. The summary: PHP can be a powerful tool for building long-running processes and daemons that are both efficient and reliable.
That aside, let's look at some of the common mistakes and the best practices and techniques you might deploy...
Common Mistake #1: Mis-Handling Forking
The pcntl_fork function is commonly used in long-running tasks to split some work into its own process. This is called forking: it splits the process into two identical but separate processes.
The child process runs a copy of the parent process code, but each process has its own memory space.
However, forking can be tricky and there are common mistakes people make that can cause issues in a long-running task or daemon. One such mistake is not properly handling the forking process, which can result in unexpected behavior or even crashes.
For example, file descriptors (and database connections) are shared between the parent and child processes after forking; if the parent or child closes a file descriptor or database connection, it will also be closed in the other process, potentially causing data corruption or other issues.
The above is not entirely true, but keep reading...
How pcntl_fork Works in PHP
Before showing an example, here is how pcntl_fork() works:
$pID = pcntl_fork();
if ($pID === -1) {
    // forking failed
    exit(1);
}

// is this the parent process?
if ($pID) {
    // continue the execution of the parent task
}

// is this the child process?
if ($pID === 0) {
    // do...child task
    exit;
}
The pcntl_fork() function creates a copy of the current process, resulting in a parent process and a child process. After the pcntl_fork() call, the $pID variable will contain a different value in the parent and child processes.
Let me expand on the $pID variable containing a different value in the parent and child processes...
When pcntl_fork() is called, it creates a copy of the current process. Here is where it gets interesting: both the parent and child processes continue executing the same code after the pcntl_fork() call, but the value of $pID will be different in each process.
In the parent process, $pID will be set to the process ID of the child process that was created, while in the child process, $pID will be set to 0.
This allows the code to differentiate between the parent and child processes and execute different code for each process.
To clarify, the pcntl_fork() function creates a new process and both the parent and child processes continue executing the same code from where the pcntl_fork() function was called.
The difference is that they have different values for $pID.
Back to the code block:
The first if statement checks whether the pcntl_fork() call was successful. If it wasn't, the program exits with an error code of 1.
The second if statement checks whether the value of $pID is truthy, which it is in the parent process. If it is, the program executes the code inside the if block, which is the code for the parent task.
The third if statement checks whether the value of $pID is 0, which it is in the child process. If it is, the program executes the code inside the if block, which is the code for the child task.
The exit statement at the end of the child task ensures that the child process does not continue executing the parent code. How is that possible?
In the child process, after it executes the code for the child task, it would otherwise continue executing any code that comes after the if block where $pID is checked and found to be 0. The exit statement terminates the child process as soon as it has completed its task, so it never reaches the parent's code that follows.
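To make that concrete, here is a minimal sketch of the fall-through (the echo lines are just illustrative); remove the exit and the child would also run the code meant only for the parent:
$pID = pcntl_fork();
if ($pID === -1) {
    exit(1);
}

if ($pID === 0) {
    // child task
    echo "child task done\n";
    exit(0); // without this, the child falls through to the lines below
}

// only the parent gets here, because the child exited above
pcntl_waitpid($pID, $status);
echo "parent-only cleanup\n"; // printed once, by the parent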
Examples of How Forking Can Be Mis-Handled in PHP
There are a couple of good use cases for forking. One is running multiple processes at the same time, perhaps to speed up some sort of task.
For example, let's say you have a script that appends a line to a log file, and you want to run this script 10 times concurrently.
To achieve this, you can use the pcntl_fork() function in PHP to create multiple child processes, each of which runs a separate instance of the script.
The child processes inherit a copy of the parent process's memory, including open file descriptors, so they can all write to the same log file without interfering with each other.
Here is an example:
Note: I use the sleep(rand(...)) function to simulate the fact that some children take longer to exit than others.
$fp = fopen("test.log", "a");
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        sleep(2);
        fwrite($fp, "Child process ".$i." running.\n");
        sleep(rand(2, 10));
        exit(0);
    }
}
fclose($fp);
exit();
If the above code is executed properly, you should get the following:
Child process 0 running.
Child process 1 running.
Child process 2 running.
Child process 3 running.
Child process 4 running.
Child process 5 running.
Child process 6 running.
Child process 7 running.
Child process 8 running.
Child process 9 running.
By merely looking at the code, there doesn't seem to be an issue, right?
Besides, sharing a file descriptor across forked processes doesn't seem to pose any issue, contrary to what I said above. For now, let's keep going; you'll later understand why sharing file descriptors across forked processes isn't that wise.
To keep things simple, let's do some graphical investigation of what's going on:
What is going on here is that the parent process exited before all of the child processes were complete (if you look at the process named php bin/console --run --onStartUp, which is the parent of all the children beneath it, you'll notice that after a while, it exits).
What happens in this scenario is that the child processes become orphans (processes with no parent), and when a child becomes an orphan, it is adopted by another process, meaning that process becomes the child's new parent.
As long as the new parent process is still running and able to handle the child processes, they will continue running until they complete their tasks or are terminated by the new parent process.
In most cases, the new parent process for an orphaned process is the init process, which is the first user-space process that is started when the system boots.
The init process is responsible for managing system processes, including orphaned processes, and is designed to handle all orphaned processes if the original parent process is unable to do so.
However, it's important to note that this does not necessarily have to be the main init process (PID 1).
In some cases, sub-init processes may be configured to handle orphaned processes for specific users or sessions. If you take a good look at the video, you'll see I am using WSL and I can open multiple instances of a terminal.
When you open a new terminal, a new login session is typically created for that terminal. The login session is responsible for managing processes that are related to that terminal and is typically associated with a sub-init process that handles orphaned processes for that session.
So if a child process becomes orphaned in that session, it would be adopted by the sub-init process associated with that session rather than the main init process. This allows the sub-init process to manage processes that are specific to that session and ensures that the main init process is not overwhelmed with orphaned processes from multiple sessions.
If you kill the sub-init of that terminal, you'll see that it also destroys the terminal instance.
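If you'd like to observe this re-parenting yourself, here is a minimal sketch, assuming the pcntl and posix extensions are enabled; the child prints its parent's PID before and after the original parent exits:
$pID = pcntl_fork();
if ($pID === -1) {
    exit(1);
}

if ($pID === 0) {
    // child: report who the parent is, before and after it exits
    echo "parent PID before: " . posix_getppid() . "\n";
    sleep(5); // give the original parent time to exit
    echo "parent PID after: " . posix_getppid() . "\n"; // now init or a sub-init
    exit(0);
}

// parent exits immediately, deliberately orphaning the child
exit(0);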
The point is if you let a parent process exit before all of its child processes are complete, the child processes become orphaned and are adopted by another process. While this may not necessarily cause any immediate problems, it can lead to issues down the line.
When a child process becomes orphaned, it can be adopted by any other process that has the capability to adopt processes.
This means that the new parent process may not have the knowledge or resources to properly manage the child process, which can lead to problems such as performance issues, resource leaks, or security risks.
Additionally, relying on another process to manage your child processes can make it difficult to track and troubleshoot issues that may arise.
If a problem occurs with one of your child processes, it can be difficult to identify which process is responsible when multiple processes are managing them.
By properly managing your child processes within your own code, you can more easily track and troubleshoot any issues that may arise, as you have full control over the processes and their management.
So, here is a fix:
$fp = fopen("test.log", "a");
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        sleep(2);
        fwrite($fp, "Child process ".$i." running.\n");
        sleep(rand(2, 10));
        exit(0);
    }
}
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}
fclose($fp);
exit();
The fix for the code above involves properly managing the child processes by ensuring that the parent process waits for each child process to complete before exiting.
This is accomplished using the pcntl_waitpid function in a loop to wait for each child process to complete.
Here's a breakdown of how the fix corrects the mistakes:
- Properly managing child processes: The use of pcntl_waitpid ensures that the parent process waits for each child process to complete before exiting. This prevents orphaned child processes and ensures that the parent process properly manages its child processes.
- Moving fclose: In the original code, fclose() was called before waiting for the child processes to complete. In the fixed code, fclose() is called after the loop that waits for all child processes to complete.
- Clean exit: The addition of exit() at the end of the script ensures that the script exits cleanly and prevents any lingering child processes from continuing to run after the script has completed.
Here is an illustration with a video:
You can see that this time, the parent waits for all of its child processes to complete before exiting.
Common Mistake #2: Mis-Handling Forking Resources
When you fork a process in PHP, the forked (child) process inherits all variables and resources from the parent, including the file descriptors; that is the reason we were able to write to the same file from several processes in the previous example code.
Again, while it is possible to share file descriptors between parent and child processes, it may not always be practical or safe to do so. This is especially true when it comes to resources like database connections.
To be on the safer side, open and close the resource you'd like to use in the child itself. This can even help with scalability, as each child process can handle its own requests without relying on a shared resource.
So, here is the wrong way, carried over from the previous code:
Bad Example - Opening and Closing a File
$fp = fopen("test.log", "a");
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        sleep(2);
        fwrite($fp, "Child process " . $i . " running.\n");
        sleep(rand(2, 10));
        exit(0);
    }
}
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}
fclose($fp);
exit();
and here is a good way:
Good Way - Opening and Closing a File
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        sleep(2);
        $fp = fopen("test.log", "a");
        fwrite($fp, "Child process " . $i . " running.\n");
        fclose($fp);
        sleep(rand(2, 10));
        exit(0);
    }
}
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}
exit();
Here is another example that shows both bad and good ways of opening a database connection when dealing with a forked process.
Bad Example - Opening and Closing a Database Connection
$db = db();
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        echo $db->Select('*')->From("tonics_global")->Limit(1)->FetchFirst()?->key;
        sleep(rand(2, 10));
        exit(0);
    }
}
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}
$db->getTonicsQueryBuilder()->destroyPdoConnection();
exit();
While opening a file and sharing the same file descriptor works across several forked processes, the same does not apply to database connections: as soon as one child closes the connection, it brings it down for everyone, which in turn affects other processes that might still be trying to use it. If you do this, you might get the error: "General error: 2006 MySQL server has gone away".
The fix is to manage the resource you want in the child process:
Good Example - Opening and Closing a Database Connection
$pIDS = [];
for ($i = 0; $i < 10; ++$i) {
    $pID = pcntl_fork();
    if ($pID == -1) {
        die("Could not fork process.");
    } else if ($pID) {
        // parent process
        $pIDS[] = $pID;
    } else {
        // child process
        $db = db();
        echo $db->Select('*')->From("tonics_global")->Limit(1)->FetchFirst()?->key;
        echo "\n";
        $db->getTonicsQueryBuilder()->destroyPdoConnection();
        sleep(rand(2, 10));
        exit(0);
    }
}
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}
exit();
As you can see, this one opens the db connection in the child process, and as soon as we are done, we close it by destroying the PDO connection (setting the PDO object to null). This way, we avoid a lingering MySQL connection.
Common Mistake #3: Letting PHP Daemon Linger
When a PHP daemon process lingers or runs indefinitely, it can lead to memory build-up over time. This is because PHP, like most garbage-collected programming languages, dynamically allocates memory for variables, arrays, objects, and other data structures.
When a daemon process runs for a long time, it can continuously allocate memory for various tasks, and if it doesn't release that memory properly, the system can run out of memory.
This can cause other processes on the system to slow down or crash due to the lack of available memory.
In addition, lingering daemons can also accumulate other types of resources such as file handles, network sockets, and database connections. These resources can also become depleted over time, leading to system instability.
The crux of the matter is, no matter how you free the memory, perhaps by manually garbage-collecting it, memory would continue to build up. So, the fix is to gracefully restart the daemon, perhaps after some interval (an hour, maybe) or after it has exceeded a certain memory threshold.
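Here is a minimal sketch of that idea; doOneUnitOfWork and the thresholds are illustrative stand-ins, not Tonics' actual code. The loop exits cleanly once the process has run too long or grown too large, and a supervisor such as systemd (with Restart=always) brings up a fresh one:
const MAX_MEMORY_BYTES = 64 * 1024 * 1024; // 64 MB; pick what fits your setup
const MAX_UPTIME_SECONDS = 3600;           // an hour, as suggested above

$startedAt = time();

while (true) {
    doOneUnitOfWork(); // hypothetical: whatever the daemon processes per tick

    $tooOld = (time() - $startedAt) > MAX_UPTIME_SECONDS;
    $tooBig = memory_get_usage(true) > MAX_MEMORY_BYTES;

    if ($tooOld || $tooBig) {
        // exit cleanly; the supervisor restarts us with a clean memory slate
        exit(0);
    }
}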
Let me expand on what I mean by graceful restart:
In a typical scenario, a parent process may spawn multiple child processes that perform certain tasks.
Now, if the parent process receives a signal to terminate or restart (e.g., when an app update is being applied), it should first send a signal to its child processes to gracefully shut down. This signal can be in the form of a SIGTERM signal, which allows the child process to do any necessary cleanup before exiting.
During this cleanup phase, the child process may close files, release shared resources, flush buffers, and perform other cleanup tasks before it finally exits.
This ensures that there are no dangling processes or resources left behind, which could otherwise cause problems during a subsequent restart of the program.
Once all child processes have exited, the parent process can safely exit, and the program can be restarted with a clean slate.
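As a minimal sketch of that sequence, assuming $pIDS holds the child PIDs the parent forked earlier, the parent could do something like this when asked to restart:
// tell every child to wrap up...
foreach ($pIDS as $pID) {
    posix_kill($pID, SIGTERM);
}

// ...then reap each one so none is left orphaned
foreach ($pIDS as $pID) {
    pcntl_waitpid($pID, $status);
}

exit(0); // the parent can now exit with a clean slate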
You can use systemd to manage the daemon (you can configure it to automatically restart the daemon once it is down), and you can use pcntl_alarm to set the time at which you want the process to terminate; once the termination is done, set pcntl_alarm to 0 to cancel the alarm.
It is important to note that the parent would send a SIGTERM signal to all child processes, telling them to gracefully shut down as it is about to restart; you can handle the signal by using pcntl_signal and listening for SIGTERM.
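Here is a minimal sketch of what that handling could look like inside a child or daemon loop; again, doOneUnitOfWork is a hypothetical stand-in for the real work:
// let signal handlers fire without explicit tick declarations (PHP 7.1+)
pcntl_async_signals(true);

$shouldStop = false;

// SIGTERM from the parent (or systemd): finish the current unit of work, then stop
pcntl_signal(SIGTERM, function () use (&$shouldStop) {
    $shouldStop = true;
});

// SIGALRM as a self-restart timer: ask for it an hour from now
pcntl_signal(SIGALRM, function () use (&$shouldStop) {
    $shouldStop = true;
});
pcntl_alarm(3600);

while (!$shouldStop) {
    doOneUnitOfWork();
}

pcntl_alarm(0); // cancel any pending alarm before cleaning up
// ...close files, flush buffers, destroy database connections...
exit(0);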
This tutorial is getting longer than expected, so I am going to quickly round up with a use case of how it's done in Tonics.
I have left some pointers as to how to properly shut down any process and prevent lingering; if you want a guide with code examples on how to specifically prevent lingering, you can leave a comment.
Note: We didn't touch on asynchronous handling of signals, where you won't have to wait for child processes to complete in the parent process but can let them do their thing while asynchronously shutting them down when they are done; still, the above examples are enough for typical use cases.
Use Case: How Tonics Manage The Job and Schedule Manager With PHP Daemon
In Tonics CMS, there are two different types of jobs - the JobManager and the ScheduleManager.
The JobManager is designed for short-burst tasks, meaning they are expected to complete within a reasonable amount of time; an example is a job tasked with sending a reset email to a user.
On the other hand, we also have the ScheduleManager which is also a type of job but is intended for longer-running tasks.
These tasks may take some time to complete and can be scheduled to run at specific times and intervals.
The ScheduleManager also has some additional capabilities that the JobManager does not have.
For instance, a schedule can be nested, which means that you can schedule one task to run after another has been completed. The ScheduleManager can also prioritize tasks, allowing you to determine which tasks should be completed first.
Examples of scheduled jobs are App update, Purging Old Sessions, Deleting Temp Files, Periodically Calling an API to Accomplish a Task, and many more that relate to schedules.
Both the JobManager and ScheduleManager are controlled by a Transporter, which is responsible for managing the jobs and schedules. The default Transporter in Tonics is the DatabaseTransport, but you can add whatever you want to use (in-memory, etc.).
For the JobManager, there is the DatabaseJobTransporter, which runs each job one after another in the same process, without spinning off a new process for each job. This approach is suitable for jobs that are expected to complete quickly.
In contrast, the DatabaseScheduleTransporter is designed to handle long-running tasks. Each scheduled job is spun off into its own process, which allows the ScheduleManager to run in parallel. This approach is suitable for tasks that may take longer to complete, as each task runs in its own isolated process.
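To make the contrast concrete, here is a highly simplified sketch of the schedule-side idea; getDueSchedules and runSchedule are hypothetical stand-ins, not Tonics' actual API:
$children = [];

foreach (getDueSchedules() as $schedule) {
    $pID = pcntl_fork();

    if ($pID === -1) {
        continue; // could not fork; try again on the next tick
    }

    if ($pID === 0) {
        // child: open its own resources, run the one schedule, then exit
        runSchedule($schedule);
        exit(0);
    }

    $children[] = $pID; // parent keeps track of its children
}

// parent reaps every child so none is orphaned
foreach ($children as $pID) {
    pcntl_waitpid($pID, $status);
}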
I won't show the complete code of how the transporter and manager work, but I will give you an idea of how it works. First, let's start with the following image:
At the right tab are the processes in a tree-like format. In Tonics, you only need one command to spin up any additional command, so, as you can see, the parent process is the OnStartUpCLI command, which further spins up two additional commands (child processes): the ScheduleManager and the JobManager.
As I said previously, any scheduled job is split into its own process; this is the reason you can see Core_DiscoverUpdate under the parent ScheduleManager.
In contrast, the JobManager is spinning up no child processes; its jobs are run one after the other. However, this is just how the default Transporter for the JobManager works, so, depending on how you implement yours, you might change the architectural reasoning to suit your taste.
The next question is then, how do we apply new updates to the Module or Apps in the event that there are new updates or changes to the application modules?
This would depend on what you are using to manage the daemon. By default, there is a systemd service file for Tonics, and the service watches over a file called restart_service.json. Whenever an update needs to be applied, the content of the file is updated; it can be anything, but I am using a timestamp. When the content changes, the service-watcher detects that and restarts the service.
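The trigger side is then tiny; conceptually it is just something like this sketch (the path is illustrative, and the exact convention in Tonics may differ):
// rewrite the watched file with fresh content; a changing timestamp is the simplest choice
file_put_contents(
    '/var/www/tonics/restart_service.json',
    json_encode(['restart' => time()])
);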
When there is a restart, systemd sends a SIGTERM signal; we listen for that, gracefully shut down whatever we are doing, and then the service restarts itself. Here is a video demo:
That's it.
If you have any questions or you would like to work with me, please reach out to me: olayemi@tonics.app or devsrealmer@gmail.com
// Tonics //