This is really my key frustration with this series (which is otherwise quite good): you’ve defined the constraints in a way that is, I think, a bit misleading.
The thing is, Akka without clustering isn’t really Akka, IMO. The Akka team made a very conscious decision, years ago, to be cluster-centric. Indeed, they made the rare decision to remove functionality that didn’t work in a distributed environment. The whole point is that programs work mostly the same whether they are running on a single node or a large cluster.
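To make that concrete, here's a minimal sketch of my own (Akka Typed; a toy, not anything from the series): the actor code itself doesn't know or care whether it's running on one node or many, and moving to a cluster is, roughly speaking, a configuration change (akka.actor.provider = "cluster", seed nodes, and so on) rather than a code change.

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors

object Greeter {
  final case class Greet(name: String)

  // This behavior has no idea whether it lives on a laptop or in a cluster.
  def apply(): Behavior[Greet] =
    Behaviors.receiveMessage { msg =>
      println(s"Hello, ${msg.name}")
      Behaviors.same
    }
}

object Main extends App {
  // Single node or cluster member: same code. Going distributed is mostly
  // configuration (akka.actor.provider = "cluster", seed nodes, etc.).
  val system = ActorSystem(Greeter(), "demo")
  system ! Greeter.Greet("Akka")
}
```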
Yes, you can use Akka on a purely single-node problem, but I’ve always considered that to be mostly a waste of time — there are usually better options. (And mind, I’ve been a serious Akka enthusiast from the very beginning.) My usual description of Akka’s use case is “managing state at scale” — its real niche is for when you have a problem that is too big for a single node.
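For what I mean by "managing state at scale", Cluster Sharding is the canonical example: entities are spread across the cluster and you address them by id without caring which node currently hosts them. A rough sketch, assuming the akka-cluster-sharding-typed module and cluster configuration are in place (the Counter entity is my own illustration, not anything from the series):

```scala
import akka.actor.typed.{ActorSystem, Behavior}
import akka.actor.typed.scaladsl.Behaviors
import akka.cluster.sharding.typed.scaladsl.{ClusterSharding, Entity, EntityTypeKey}

object Counter {
  sealed trait Command
  final case class Increment(amount: Int) extends Command

  val TypeKey: EntityTypeKey[Command] = EntityTypeKey[Command]("Counter")

  def apply(entityId: String): Behavior[Command] = counting(0)

  private def counting(value: Int): Behavior[Command] =
    Behaviors.receiveMessage { case Increment(amount) =>
      counting(value + amount) // in-memory state; persistence is a separate concern
    }
}

object ShardingExample extends App {
  // Assumes akka.actor.provider = "cluster" and seed nodes are configured.
  val system = ActorSystem(Behaviors.empty[Any], "demo")
  val sharding = ClusterSharding(system)

  sharding.init(Entity(Counter.TypeKey)(ctx => Counter(ctx.entityId)))

  // The entity lives on *some* node in the cluster; callers only know its id.
  val counter = sharding.entityRefFor(Counter.TypeKey, "counter-42")
  counter ! Counter.Increment(1)
}
```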
I strongly suspect that there's a pure-functional system yet to be built, likely on top of Akka, that tackles these problems properly. In particular, a functional approach to large-scale sharding and persistence would be a blessing for many applications. The Akka team is making incremental progress on exposing the necessary underlying pieces, and I hope to see this become possible within the next couple of years.
But until then, I honestly consider the comparison a bit apples-to-oranges: you're passing over precisely the use cases where Akka is the right tool for the job…