You'd have a serverless Lambda hit ElastiCache/Redis and then still run into scaling issues because the code you wrote wasn't optimized. Given enough Lambdas and no connection pooling, you'll still exhaust ElastiCache and have to do something about it. You'll run into the question of what the bandwidth bill is going to look like in the morning and want to figure out a way not to bankrupt yourself before going to sleep. Worse, AWS is known to gouge on bandwidth, and Lambda doesn't give you the same absolute rate-limit control the author's setup did.
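To make the connection-exhaustion point concrete, here's a minimal sketch (a fake client stands in for a real one like redis-py; all names are hypothetical). Opening a client inside the handler means one new connection per invocation, while a module-scope client gets reused across invocations of a warm Lambda container:

```python
class FakeRedis:
    """Stand-in for a Redis client; counts connections the server would see."""
    open_connections = 0

    def __init__(self):
        # Each client instance opens a fresh connection to the cache.
        FakeRedis.open_connections += 1


# Anti-pattern: client created inside the handler -> one connection per invocation.
def handler_per_invocation(event):
    client = FakeRedis()
    return "ok"


# Pattern: client at module scope -> reused across invocations in a warm container.
shared_client = FakeRedis()

def handler_reused(event):
    return "ok"


for _ in range(100):
    handler_per_invocation({})
    handler_reused({})

# 100 connections from the anti-pattern, plus the single shared one.
print(FakeRedis.open_connections)  # → 101
```

With thousands of concurrent Lambdas, the first pattern multiplies connections against ElastiCache's fixed limits, which is exactly where the "exhaust ElastiCache" failure shows up.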
If you're just gonna give up if/when you hit eg Redis rate limits, that doesn't inspire confidence in serverless.