3. echo (socketset)

import std.socket : InternetAddress, Socket, TcpSocket, SocketOptionLevel,
    SocketOption, Address, SocketSet;
import std.stdio : writefln;
import std.typecons : Unique, Nullable;

enum
    PORT = 4444,
    MAX_CONNECTIONS = 40;

void main() {
    Unique!TcpSocket listener = new TcpSocket;
    listener.blocking = true;
    listener.setOption(SocketOptionLevel.SOCKET, SocketOption.REUSEADDR, 1);
    listener.bind(new InternetAddress(PORT));
    listener.listen(10);
    scope sockets = new SocketSet(MAX_CONNECTIONS + 1);
    Nullable!Socket[MAX_CONNECTIONS] reads;
    Nullable!Address[MAX_CONNECTIONS] addrs;
    writefln!"Listening on %d."(PORT);

    ubyte[4096] buf;
    ptrdiff_t len;

    while (true) {
        sockets.reset;
        sockets.add(listener.handle);
        foreach (sock; reads)
            if (!sock.isNull)
                sockets.add(sock.get.handle);
        Socket.select(sockets, null, null);

        foreach (i; 0 .. reads.length) {
            if (reads[i].isNull) continue;
            if (!sockets.isSet(reads[i].get.handle)) continue;
            if (0 < (len = reads[i].get.receive(buf[]))) {
                reads[i].get.send(buf[0..len]);
            } else {
                writefln!"Lost connection from %s."(addrs[i].get);
                reads[i].get.close;
                reads[i].nullify;
                addrs[i].nullify;
            }
        }
        if (sockets.isSet(listener.handle)) {
            Socket client = listener.accept;
            ptrdiff_t i = -1;
            foreach (j; 0 .. reads.length) {
                if (reads[j].isNull) {
                    i = j;
                    break;
                }
            }
            if (i == -1) {
                writefln!"Rejected connection from %s; at the limit of %d connections."(
                        client.remoteAddress.toString, reads.length);
                client.close;
            } else {
                addrs[i] = client.remoteAddress;
                reads[i] = client;
                writefln!"Received connection from %s."(addrs[i].get.toString);
            }
        }
    }
}

This server accepts up to 40 connections and uses a SocketSet to respond to read events from any of them, plus the listener socket. On a read event from the listener socket, it potentially accepts a new connection. On a read event from any other socket, it works like the serial echo server: it reads, then immediately sends back on the same socket.


what's with the Nullable types?

This is just an alternative to using a dynamically-sized container for connections, such as a dynamic array or an associative array. The server loops over its fixed-size arrays and adds an element to the SocketSet whenever it isn't null.
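For concreteness, the slot-scanning this pattern relies on can be factored into a tiny helper (a sketch; findFreeSlot is my name, not something the server above defines):

```d
import std.typecons : Nullable;

// Return the index of the first empty (null) slot, or -1 if the table is full.
ptrdiff_t findFreeSlot(T, size_t n)(ref Nullable!T[n] slots)
{
    foreach (i; 0 .. n)
        if (slots[i].isNull)
            return i;
    return -1;
}

void main()
{
    Nullable!int[3] slots;
    assert(findFreeSlot(slots) == 0);   // all slots start null
    slots[0] = 10;
    slots[1] = 20;
    assert(findFreeSlot(slots) == 2);
    slots[2] = 30;
    assert(findFreeSlot(slots) == -1);  // full: mirrors the "rejected" branch
    slots[1].nullify;                   // a client disconnects
    assert(findFreeSlot(slots) == 1);   // its slot is reusable
}
```

The accept branch of the main loop does exactly this scan inline.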

why have a connection limit at all?

For desktop software, run directly by a user who controls the machine, the zero-one-infinity rule is a sound rejection of arbitrary limits. But server software is different: resource use is driven by less-trusted sources, per-connection resource consumption is calculable, and (in some environments) much more important services run alongside your microservice, so tunable limits are the way to go. This echo server hitting its connection limit and becoming unavailable may be a much more acceptable condition than a mere echo server consuming all the resources of its hardware and crashing other services.

isn't this a lot of work per event?

Yes: every single bit of server I/O is preceded by the server looping over its clients to add them to a set, then having Socket.select mutate that set. And afterwards the server has to loop over all clients again, rather than only those left in the resulting set. This select()-style event handling is older than dirt. It was superseded by poll() (populate an array once, mutate it only when clients arrive or leave, reuse it constantly) and, on Linux, by epoll (same, but the loop can cover exactly those sockets that have events) -- both of which are themselves old by now. Other alternatives include a thread per connection with blocking I/O, io_uring, a portable asynchronous I/O library, or completely separate processes using SO_REUSEPORT (not to be confused with SO_REUSEADDR) on top of other measures.
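To make the contrast concrete, here is a minimal, POSIX-only demonstration of poll() semantics, using a pipe rather than sockets -- an illustration of the API, not the server rewritten:

```d
import core.sys.posix.poll : poll, pollfd, POLLIN;
import core.sys.posix.unistd : close, pipe, write;

void main()
{
    int[2] p;
    pipe(p);                  // p[0]: read end, p[1]: write end
    write(p[1], "x".ptr, 1);  // make the read end readable

    // The pollfd array persists across calls; the kernel only rewrites revents.
    pollfd[1] fds = [pollfd(p[0], POLLIN, 0)];
    const ready = poll(fds.ptr, fds.length, 1000);
    assert(ready == 1);
    assert(fds[0].revents & POLLIN);

    close(p[0]);
    close(p[1]);
}
```

With epoll, the kernel hands back only the ready descriptors, so even the scan over revents disappears.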

what's with the indented enum assignments at the top?

This is a style not supported by dfmt, so it won't show up again, but I think it looks nice, resembling idiomatic style in Nim and even Go's const blocks.

is this server fit for purpose?

It has some faults that would bite it in production. The big one is the "receive bytes, send them right back" model, which assumes send() transmits as many bytes as it was given. Short sends are possible, in which case this server would silently discard the unsent remainder the next time the buffer is reused.
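A hedged sketch of the usual fix: loop until send has consumed the whole slice (sendAll is my name, not part of std.socket):

```d
import std.socket : Socket, socketPair;

// Retry until every byte of data has been handed to the kernel.
// Returns false on error or if the peer vanished mid-send.
bool sendAll(Socket sock, const(void)[] data)
{
    while (data.length > 0)
    {
        const sent = sock.send(data);
        if (sent == Socket.ERROR || sent == 0)
            return false;
        data = data[sent .. $];
    }
    return true;
}

void main()
{
    auto pair = socketPair();  // a connected pair, handy for testing
    auto msg = cast(const(ubyte)[]) "hello";
    assert(sendAll(pair[0], msg));
    ubyte[8] buf;
    assert(pair[1].receive(buf[]) == 5);
    assert(buf[0 .. 5] == msg);
}
```

The server's inner loop would then call sendAll(reads[i].get, buf[0 .. len]) instead of send. A fully robust version would go further and buffer unsent data until the socket shows up in the write set of Socket.select, since blocking inside sendAll stalls every other client.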