Or you know, don't use Lambda. If you need that kind of compute power, spinning up a large instance, running the job, and then shutting it down is completely viable. AWS has per-second billing now, and you can save even more by using spot instances.
Lambda is not cost effective in the CPU/RAM that you get compared to regular instances. It only comes out ahead for very intermittent workloads.
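To put a rough number on "very intermittent": here's a back-of-the-envelope break-even sketch. All prices are assumptions for illustration only (ballpark us-east-1 figures; check current AWS pricing before relying on them), and the function profile is made up.

```python
# Rough break-even sketch: Lambda vs. an always-on instance.
# All rates below are ASSUMED for illustration, not authoritative.

LAMBDA_GB_SECOND = 0.0000166667   # $ per GB-second (assumed)
LAMBDA_REQUEST = 0.0000002        # $ per request (assumed)
EC2_HOURLY = 0.0336               # $ per hour, small-instance ballpark (assumed)

def lambda_monthly_cost(requests_per_month, avg_duration_s, memory_gb):
    compute = requests_per_month * avg_duration_s * memory_gb * LAMBDA_GB_SECOND
    return compute + requests_per_month * LAMBDA_REQUEST

def ec2_monthly_cost(hours=730):
    return EC2_HOURLY * hours

# A hypothetical 1 GB function running 200 ms per request:
for rpm in (100_000, 1_000_000, 10_000_000):
    print(f"{rpm:>10} req/mo -> ${lambda_monthly_cost(rpm, 0.2, 1.0):.2f}")
print(f"always-on instance -> ${ec2_monthly_cost():.2f}")
```

With these assumed rates, Lambda wins handily at 100k requests/month and loses badly at 10M, which is the "only comes out ahead for very intermittent workloads" point in numbers.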
Your points are valid if your priority is to reduce cost. Totally agree. That said, Lambda has an API and SDK, and concurrency is managed. Warm up is a nightmare if you don't develop specifically for Lambda, but scalability is easy. There are trade-offs, it sucks for many use cases, but I don't think you should just shrug it off.
If you have to use beefy lambda instances with multi-threaded workloads? I don't think lambda is the right tool for the job there. I'm not saying it has no value - but that's the opposite of playing to its strengths.
It all depends, I think. I once used multi-threading to speed up the initialization of a Lambda which was just a "port" of a beefy REST service. It was ported to cut costs and for fault isolation. I managed to gain 2-3 seconds of initialization time, but it was something.
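That kind of init parallelism can be sketched with the stdlib. The task names and timings here are hypothetical stand-ins for real I/O (loading config, warming a connection pool, fetching secrets); the point is that I/O-bound init steps overlap, so total cold-start time approaches the slowest step rather than the sum.

```python
import threading
import time

# Hypothetical init tasks that would otherwise run sequentially
# during cold start. time.sleep stands in for real I/O waits.
def load_config(state):
    time.sleep(0.1)
    state["config"] = {"region": "us-east-1"}

def warm_db_pool(state):
    time.sleep(0.1)
    state["db"] = "pool-ready"

def fetch_secrets(state):
    time.sleep(0.1)
    state["secrets"] = "loaded"

state = {}
start = time.perf_counter()
threads = [threading.Thread(target=f, args=(state,))
           for f in (load_config, warm_db_pool, fetch_secrets)]
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# Three 100 ms waits overlap into roughly 100 ms total, not 300 ms.
print(f"init done in {elapsed:.2f}s with {sorted(state)}")
```

Note this only helps when init is dominated by I/O; for CPU-bound init in CPython the GIL limits the gain.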
A Lambda deployment has the advantage that no one needs to manage it, and faults are isolated between clients. You could argue that if you need these properties your engineering organization has deeper flaws, and you'd be right, but in many companies engineers need to make do with what they get.
Sure, I was perhaps too hyperbolic. In my case, the advantage was specifically fault isolation for a multi-tenant, low traffic, demanding workload that was traditionally deployed as a physical service.
Yes and no: Lambdas (and cloud functions) are to the cloud what Triggers are to Databases.
They're very useful to react to events; but they also impose a lock-in.
For (very) light usage I feel they're OK. But that's just my opinion.
If you want to process a lot of stuff, you need one (or several) instances.
If you take a close look at other 'serverless' offerings like KNative in K8s, they do something similar to launching daemon processes, but they launch pods and containers. If you already have K8s, it could be very useful. If not, use a message queue and instances.
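The "message queue and instances" pattern is simple enough to sketch with the stdlib. In production the in-process queue would be a real broker (SQS, RabbitMQ, etc.) and the workers would run on plain instances; everything here is illustrative.

```python
import queue
import threading

# Stand-in for a real broker; workers on plain instances would
# poll it the same way.
jobs = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    while True:
        job = jobs.get()
        if job is None:           # sentinel: shut this worker down
            jobs.task_done()
            break
        with lock:
            results.append(job * 2)   # "process" the job
        jobs.task_done()

workers = [threading.Thread(target=worker) for _ in range(4)]
for w in workers:
    w.start()

for n in range(10):
    jobs.put(n)
for _ in workers:
    jobs.put(None)
jobs.join()

print(sorted(results))   # every job processed exactly once
```

Scaling this is just running more worker processes on more instances pointed at the same queue; no Lambda concurrency model required.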
On the subject of AWS Lambda cost and time optimization: also check out AWS Lambda Power Tuning [^1] and its ecosystem of projects. Lambda also supports 1ms billing and up to 10GB of RAM now.
That's not the model. Lambda is function oriented and handles the concurrency on your behalf. The new container support is really just about providing another delivery mechanism for functions - the underlying model remains the same. If you want something more Cloud Run-esque on AWS, you could use Fargate to rig it up.
The exasperating thing about AWS is that it inevitably turns engineering teams into equities and derivatives traders. Yes, all programming is about balancing storage, CPU, memory, and network I/O against each other, but AWS adds the complexity of needing to reason about these trade-offs over time horizons that are long by a startup's standards.
If all of your business's metrics are sharply up and to the right, you probably don't need to worry so much, but if your goal is operational cost efficiency for a relatively stable workload, AWS is a challenge. That may be a blessing in disguise, as self hosting becomes a reasonable thing to consider if your needs are nontrivial and relatively inelastic.
My best takeaway is that if you're running the usual collection of interpreted languages (JavaScript/Python/etc.), you'll probably never see multi-threading reduce your Lambda runtime. And now that we have millisecond billing, runtime reductions translate directly into real savings at scale.
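For CPython specifically, the GIL is why CPU-bound threads don't cut wall-clock time. A quick way to see it (timings are machine-dependent, so treat the numbers as illustrative):

```python
import threading
import time

def busy(n):
    # Pure-Python CPU-bound work: holds the GIL almost continuously.
    total = 0
    for i in range(n):
        total += i * i
    return total

N = 2_000_000

start = time.perf_counter()
for _ in range(4):
    busy(N)
serial_time = time.perf_counter() - start

start = time.perf_counter()
threads = [threading.Thread(target=busy, args=(N,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded_time = time.perf_counter() - start

# On standard CPython the threaded version is typically no faster
# (often slower) because only one thread holds the GIL at a time.
print(f"serial: {serial_time:.2f}s  threaded: {threaded_time:.2f}s")
```

Threads still help when the function is waiting on I/O (the GIL is released during blocking calls), which is why the init-time trick elsewhere in this thread works while CPU-bound parallelism doesn't.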