Thoughts on Attacking Slow TCP Connections

I'll eventually write some TCP servers, such as for Gopher and HTTP, and have given thought to how I
may elegantly retard some common forms of attack.  I've realized that guessing at the nature of any
given connection is pointless; it's much better to use a policy decided solely ahead-of-time.

A protocol in the question-and-answer style has a nice solution with only two time limits specified:
time spent waiting for the question, and time spent answering it.  The former time limit should be a
second or fraction thereof for most protocols, and the latter time limit should be dynamic, based on
the length of the answer and the minimum speed permitted.  This is easy to mull over and to program.
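To make the arithmetic concrete, a minimal sketch; the names and numbers here, a one-second question
limit and a permitted minimum of 1_024 octets per second, are assumptions of mine, not anything the
protocols fix:

    with Ada.Text_IO; use Ada.Text_IO;

    procedure Limit_Policy is
       Question_Limit : constant Duration := 1.0;    --  waiting for the question
       Minimum_Speed  : constant Integer  := 1_024;  --  octets per second permitted

       --  The second limit grows with the length of the answer.
       function Answer_Limit (Octets : Natural) return Duration is
         (Duration (Octets) / Minimum_Speed);
    begin
       Put_Line (Duration'Image (Question_Limit));
       Put_Line (Duration'Image (Answer_Limit (8_192)));  --  eight seconds
    end Limit_Policy;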

My first thought for such an Ada TCP server was a single executioner task which measured every time
limit and aborted the expired tasks performing the prime work, but further thought has revealed to
me how this may be hard: task objects are of a limited type, and so are awkward to reference and to
store.  A nicer method is probably to have one task serve as master over the rest, making tasks
which each serve as executioner to the single servant task they themselves make; this avoids every
problem of referencing tasks.  The only cost of this approach is the doubling of needed tasks, but
that likely matters little.  The control flow is then each executioner making its servant task with
the first time limit set at task creation, and the servant task forming and setting the second time
limit only after it gains the required information.  This seems like the one reasonable and true
way to program such a server, and is sketched after this paragraph.
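Here is a rough skeleton of that arrangement, under my assumptions; the entry names, the constants,
and the delays standing in for the real socket work are all merely illustrative:

    --  A sketch only: the socket work is elided and faked with delays.
    procedure Paired_Tasks is
       Question_Limit : constant Duration := 1.0;    --  first time limit
       Minimum_Speed  : constant Integer  := 1_024;  --  octets per second

       task type Servant is
          --  The servant reports the answer's length once it has the question.
          entry Question_Read (Octets : out Natural);
          entry Answer_Written;
       end Servant;

       task body Servant is
          Length : Natural;
       begin
          delay 0.2;        --  stand-in for reading the question
          Length := 8_192;  --  pretend the answer is 8_192 octets long
          accept Question_Read (Octets : out Natural) do
             Octets := Length;
          end Question_Read;
          delay 0.5;        --  stand-in for writing the answer
          accept Answer_Written;
       end Servant;

       task type Executioner;

       task body Executioner is
          Worker : Servant;  --  the single servant this executioner makes
          Octets : Natural;
       begin
          --  The first time limit, begun as soon as the servant is made.
          select
             Worker.Question_Read (Octets);
          or
             delay Question_Limit;
             abort Worker;
             return;
          end select;
          --  The second time limit, formed only once the length is known.
          select
             Worker.Answer_Written;
          or
             delay Duration (Octets) / Minimum_Speed;
             abort Worker;
          end select;
       end Executioner;

       --  The master task would make one of these per accepted connection.
       Pair : Executioner;
    begin
       null;
    end Paired_Tasks;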

Unlike UDP, which I found rather easy to wrap, TCP is frustratingly entwined with the remainder of
the system for reasons of efficiency and, while Ada can expose the high-performance event systems
without directly requiring their use, I noticed that associating data with each connection cleanly
would still be unpleasant.  It's rather necessary to split the connection handling into a state
machine, by hand, which is ridiculous when considering the machine could do so without input from a
man.  I'm led to believe Ada may be unsuited to a TCP server; Erlang looks so much nicer than the
crowd here, because its implementation already contains this state machine fuckery, from what I know.
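For what it's worth, here's roughly the shape such hand-splitting takes; the names are mine, and the
record holds what a task would otherwise keep implicitly on its stack:

    with Ada.Calendar;

    package Connection_States is
       type Stage is (Awaiting_Question, Writing_Answer, Finished);

       type Connection is record
          Phase    : Stage := Awaiting_Question;
          Deadline : Ada.Calendar.Time;  --  whichever time limit now applies
          Sent     : Natural := 0;       --  octets of the answer written so far
       end record;

       --  The event loop calls this whenever the socket is ready; it must
       --  dispatch on Phase and advance it, which a task would get for free.
       procedure Step (C : in out Connection) is null;  --  real dispatch elided
    end Connection_States;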

It seems unreasonable that modern machines supposedly struggle with millions of threads of execution
when they run at many gigahertz.  I'd expect a load in which most threads merely wait on a TCP
connection to be more than feasible, but everything I've read states otherwise; this is clearly
another failure
of modern systems.  It seems everything is twisted out of a shape friendly towards the minds of men.