0 is a valid fd, so I recommend initializing fds to -1.
signalfd was only mentioned in passing, but for writing anything larger, let's say a daemon process, it keeps signal handling close to all the other events being reacted to. E.g.
#include <signal.h>
#include <unistd.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/timerfd.h>
#include <sys/signalfd.h>
#include <sys/epoll.h>

static int signalfd_init(void)
{
    sigset_t sigs, oldsigs;
    int sfd = -1;

    sigemptyset(&sigs);
    sigemptyset(&oldsigs);
    sigaddset(&sigs, SIGCHLD);
    if (!sigprocmask(SIG_BLOCK, &sigs, &oldsigs))
    {
        sfd = signalfd(-1, &sigs, SFD_CLOEXEC | SFD_NONBLOCK);
        if (sfd != -1)
        {
            // Success
            return sfd;
        }
        else
        {
            perror("signalfd");
        }
        sigprocmask(SIG_SETMASK, &oldsigs, NULL);
    }
    else
    {
        perror("sigprocmask");
    }
    return -1;
}

static int timerfd_init(void)
{
    int tfd = timerfd_create(CLOCK_MONOTONIC, TFD_NONBLOCK | TFD_CLOEXEC);

    if (tfd != -1)
    {
        struct itimerspec tv =
        {
            .it_value =
            {
                .tv_sec = 5
            }
        };
        if (!timerfd_settime(tfd, 0, &tv, NULL))
        {
            return tfd;
        }
        else
        {
            perror("timerfd_settime");
        }
        close(tfd);
    }
    else
    {
        perror("timerfd_create");
    }
    return -1;
}

static int epoll_init(int sfd, int tfd)
{
    int efd;

    // 0 is a valid fd, so -1 is the "not initialized" value.
    if (sfd == -1 || tfd == -1)
    {
        return -1;
    }
    efd = epoll_create1(EPOLL_CLOEXEC);
    if (efd != -1)
    {
        struct epoll_event ev[2] =
        {
            {
                .events = EPOLLIN,
                .data =
                {
                    .fd = sfd,
                }
            },
            {
                .events = EPOLLIN,
                .data =
                {
                    .fd = tfd
                }
            }
        };
        if (!epoll_ctl(efd, EPOLL_CTL_ADD, sfd, &ev[0]) &&
            !epoll_ctl(efd, EPOLL_CTL_ADD, tfd, &ev[1]))
        {
            return efd;
        }
        else
        {
            perror("epoll_ctl");
        }
        close(efd);
    }
    else
    {
        perror("epoll_create1");
    }
    return -1;
}

int main(int argc, char *argv[])
{
    int exit_value = EXIT_FAILURE;
    int sfd = signalfd_init(),
        tfd = timerfd_init(),
        efd = epoll_init(sfd, tfd);

    if (argc > 1 && sfd != -1 && tfd != -1 && efd != -1)
    {
        pid_t child_pid = fork();
        if (child_pid != -1)
        {
            if (!child_pid)
            {
                argv += 1;
                if (-1 == execvp(argv[0], argv))
                {
                    perror("execvp");
                    exit(EXIT_FAILURE);
                }
                __builtin_unreachable();
            }
            else
            {
                int err;
                struct epoll_event ev;
                while ((err = epoll_wait(efd, &ev, 1, -1)) > 0)
                {
                    if (ev.data.fd == sfd)
                    {
                        // Read the signalfd for the possible SIGCHLD, reap the
                        // child and stop waiting.
                        exit_value = EXIT_SUCCESS;
                        break;
                    }
                    else if (ev.data.fd == tfd)
                    {
                        // Timer triggered, kill the child process.
                    }
                }
                if (err == -1)
                {
                    perror("epoll_wait");
                }
            }
        }
        else
        {
            perror("fork");
        }
    }
    close(sfd);
    close(tfd);
    close(efd);
    exit(exit_value);
}
I have to disagree here. Not recommending signalfd for the mentioned use cases might be reasonable, just as reasonable as using threads for a specific use case. But for a single-threaded client/server built on non-blocking FDs, signalfd removes the risk of doing too much in the signal handler and brings signals nicely into the event loop. That just happens to be 99% of the functionality I have to write.
I'd only use more than one signalfd if each signalfd catches only a specific signal, e.g. the main context handles SIGTERM and a background-process library handles SIGCHLD.
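Roughly, that split could look like the minimal sketch below (the single_signalfd() helper is my own name, not anything from the article): each signalfd is created from a one-signal mask, so the two contexts never consume each other's signals.

#include <signal.h>
#include <sys/signalfd.h>

// One signalfd for exactly one signal; the signal is also blocked so it is
// only ever delivered through the fd.
static int single_signalfd(int signum)
{
    sigset_t set;

    sigemptyset(&set);
    sigaddset(&set, signum);
    if (sigprocmask(SIG_BLOCK, &set, NULL) == -1)
    {
        return -1;
    }
    return signalfd(-1, &set, SFD_CLOEXEC | SFD_NONBLOCK);
}

// Main context:              int term_fd = single_signalfd(SIGTERM);
// Background-process library: int chld_fd = single_signalfd(SIGCHLD);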
> removes the risk of doing too much in the signal handler
Not a concern here, since the signal handler is restricted from running at any time other than during epoll_pwait(), so the usual async-signal-safety concerns don't apply. In fact I think the code ends up cleaner than with signalfd, and there are fewer syscalls (no separate read() needed).
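For concreteness, here is a minimal sketch of that approach as I read it (the flag and function names are mine): SIGCHLD stays blocked except while parked in epoll_pwait(), the handler only sets a flag, and the EINTR branch checks the flag.

#include <errno.h>
#include <signal.h>
#include <stdio.h>
#include <string.h>
#include <sys/epoll.h>

static volatile sig_atomic_t got_sigchld;

static void on_sigchld(int sig)
{
    (void)sig;
    got_sigchld = 1;
}

static int wait_loop(int efd)
{
    sigset_t blocked, during_wait;
    struct sigaction sa;

    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigchld;
    sigemptyset(&sa.sa_mask);
    sigaction(SIGCHLD, &sa, NULL);

    sigemptyset(&blocked);
    sigaddset(&blocked, SIGCHLD);
    // Block SIGCHLD for normal execution and remember the previous mask...
    sigprocmask(SIG_BLOCK, &blocked, &during_wait);
    // ...and allow it only while we are parked in epoll_pwait().
    sigdelset(&during_wait, SIGCHLD);

    for (;;)
    {
        struct epoll_event ev;
        int n = epoll_pwait(efd, &ev, 1, -1, &during_wait);
        if (n == -1 && errno == EINTR)
        {
            if (got_sigchld)
            {
                return 0; // child exited; reap it here
            }
            continue;
        }
        if (n == -1)
        {
            perror("epoll_pwait");
            return -1;
        }
        // handle the other fds reported in ev normally
    }
}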
This is getting into taste territory, but sure, catching the signal during epoll_pwait() removes the need for FD reading. It does, however, introduce the need for a thread-local global context and for handling EINTR from epoll, right? I can see how checking which signals were delivered in the EINTR branch is nice, though.
Just skimmed through the article, since I'm mostly here to testify that the most important revelation for me on writing APIs was that you can put an epoll_fd in an epoll_fd. This allows an API to have e.g. a single epoll_fd that contains all the outbound connections, timers, signalfds and inotify fds mentioned in the article. Then the daemon using such APIs can have an epoll_fd per library it is using and just sit in its epoll_wait loop, ready to fire a library_x_process() call when events arrive.
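A rough sketch of the nesting, assuming a hypothetical library_x_epoll_fd() accessor alongside the library_x_process() call mentioned above (both are illustrative, not from the article):

#include <stdio.h>
#include <sys/epoll.h>

// Hypothetical library entry points: the library owns its own epoll fd and
// drains it on demand.
extern int  library_x_epoll_fd(void);
extern void library_x_process(void);

static void daemon_loop(void)
{
    int outer_efd = epoll_create1(EPOLL_CLOEXEC);
    int lib_efd = library_x_epoll_fd();
    struct epoll_event ev = { .events = EPOLLIN, .data = { .fd = lib_efd } };

    if (outer_efd == -1 || lib_efd == -1 ||
        epoll_ctl(outer_efd, EPOLL_CTL_ADD, lib_efd, &ev) == -1)
    {
        perror("epoll setup");
        return;
    }
    for (;;)
    {
        struct epoll_event out;
        // The outer epoll reports EPOLLIN when the library's epoll has ready events.
        if (epoll_wait(outer_efd, &out, 1, -1) == 1 && out.data.fd == lib_efd)
        {
            library_x_process(); // drains lib_efd, e.g. epoll_wait(lib_efd, ..., 0)
        }
    }
}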
Another use case for this: Say you have a set of "jobs" each composed of many "tasks" (each waiting for some event). The "jobs" are able to run concurrently on different threads, but the "tasks" must not run concurrently with other tasks in the same job because they might share data structures without synchronization.
(This is a pretty common pattern in a lot of big servers.)
Now you want to make sure you utilize multiple cores effectively. The naive approaches are:
1. Create a thread per job, each waiting on its own epoll specific to the job. This may be expensive if there are many jobs, and could allow too much concurrency.
2. Have a single epoll and a pool of threads waiting on it. Each thread must lock a mutex for the job that owns the task it's going to run. But a thread could receive an event for a task belonging to a job that's already running on another thread, in which case it has to synchronize with that other thread somehow, which is a pain. Be careful not to create a situation where all threads are blocked on the mutex for one job while other jobs are starved.
Epoll nesting presents a clean solution:
3. Create an epoll per job, plus an outer epoll that waits on other epolls. A pool of threads waits on the outer epoll, which signals when a per-job epoll becomes ready. The thread receiving that event then takes ownership of the per-job epoll until the event queue is empty.
I'm handwaving a little because I haven't actually built something like this yet.
But I imagine you'd add the per-job epoll to the global epoll with EPOLLONESHOT, so that once an event is reported, the entry is disabled in the global epoll until it is explicitly re-armed. Whatever thread received that event then owns the job. When that thread decides there's nothing more to do, it re-arms the job epoll in the global epoll (EPOLL_CTL_MOD with EPOLLONESHOT again).
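A rough sketch of that hand-off (the worker logic is my guess; only the EPOLLONESHOT idea comes from the description above):

#include <sys/epoll.h>

// Register a per-job epoll fd with the global epoll, one-shot.
static int job_register(int global_efd, int job_efd)
{
    struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT,
                              .data = { .fd = job_efd } };
    return epoll_ctl(global_efd, EPOLL_CTL_ADD, job_efd, &ev);
}

// Worker thread: take ownership of a job whose epoll fired, drain it, re-arm.
static void worker_handle(int global_efd, int job_efd)
{
    struct epoll_event ev;

    // Drain the per-job epoll without blocking; only this thread touches the job now.
    while (epoll_wait(job_efd, &ev, 1, 0) == 1)
    {
        // run the task identified by ev.data for this job
    }
    // Nothing left to do: re-arm the job epoll in the global epoll, one-shot again.
    ev.events = EPOLLIN | EPOLLONESHOT;
    ev.data.fd = job_efd;
    epoll_ctl(global_efd, EPOLL_CTL_MOD, job_efd, &ev);
}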
This is why I opted for Shelly in my house build. The dimmers keep dimming and everything keeps working without a hub/HA/WiFi, since the modules live inside the light switches. There's no need for special bulbs, and having momentary (spring-return) switches allows any light to be dimmed as long as the fixture/bulb supports it.
There's room for a future product to discover everything within a house. Find all the Shelly devices, figure out which bulbs are Hue, discover any small sensors left behind, etc.
It'll be easy to find the Shellys. Just look for light switches that don't cause the light to turn on or off, or lights that turn on randomly. We put a Shelly 2 on every light switch in our new build 2 years ago (about 30 Shellys) and over 50% of them have already failed.
Usually it's either the relay refusing to switch on or off; or it can switch on and off via the app, but not via the physical light switch, which it was happily doing the day before; or it disappears off the WiFi and refuses to reconnect no matter how many resets / power cycles are done.
After replacing 10 or so of them, I decided it was easier to rip them all out.
Personally I'd prefer it if Zigbee just took over everything; then everything would be interoperable and we would avoid yet another dystopian hellscape.