Serverless Docker Beta

By Rauchg - 2 days ago


And so ZEIT, my favorite serverless provider, keeps getting better. Highlights:

- "sub-second cold boot (full round trip) for most workloads"

- HTTP/2 and WebSocket support

- Tune CPU and memory usage, which means even smoother scaling

And all that for any service you can fit in a Docker container - which is also how you get near-perfect dev/prod parity, an often overlooked issue with other serverless deployment techniques.

On top of all that, ZEIT has one of the best developer experiences out there today. Highly recommend trying it out.

And for the perpetual serverless haters out there: this is not a product for you, FANG developer. I think people underestimate how well serverless fits for the majority of web applications in the world today. Most small-medium businesses would be better off exploring serverless than standing up their own k8s cluster to run effectively one or two websites.

andrewtorkbaker - 2 days ago

Am I the only one having problems following .gif "demos"?

When I get to the image, it's already in the middle of everything and I don't really have an idea of what is going on. Even after watching it multiple times, I am not sure where it starts, where it ends, or what the individual steps are.

Or is it because I just don't know enough about this stuff?

Sujan - 2 days ago

Yep. Pretty soon: you write code, create a Dockerfile, find a place to run it with your code [cheapest!], run it through your tests, and monitor it. The end. No VPCs, Salts, Puppets, SSHs, Chefs, horses, Ansibles, cats, EC2s, DevOps, NoOps, sysadmins, Kubernetes, or Chaos Monkeys required.
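To be concrete about how little that workflow involves: here's a minimal sketch, assuming a hypothetical Node.js app with a `server.js` entry point (names and port are made up for illustration). The entire deployment config is one Dockerfile:

```dockerfile
# Minimal image for a hypothetical Node.js app
FROM node:10-alpine
WORKDIR /app

# Install dependencies first so this layer caches between builds
COPY package.json package-lock.json ./
RUN npm install --production

# Copy the rest of the source and declare the listening port
COPY . .
EXPOSE 3000
CMD ["node", "server.js"]
```

From there, running `now` in the project directory should build and deploy it.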

ransom1538 - 2 days ago

Looks great for basic websites, but it's missing the biggest and most difficult piece of cloud infrastructure: the database!

Today you'd have to open up your cloud DB provider to the world, since Zeit can't provide a list of IPs to whitelist. This is a showstopper for me, unfortunately.

watty - 2 days ago

Shout out to the RCE I found in the zeit.now deploy button:

https://github.com/zpnk/deploy.now/issues/27

Hopefully someone from Zeit reading this can get my fix merged; it seems to be quite a popular service.

orf - 2 days ago

> “A very common category of failure of software applications is associated with failures that occur after programs get into states that the developers didn't anticipate, usually arising after many cycles. In other words, programs can fail unexpectedly from accumulating state over a long lifespan of operation. Perhaps the most common example of this is a memory leak: the unanticipated growth of irreclaimable memory that ultimately concludes in a faulty application.”

> Serverless means never having to "try turning it off and back on again"

> Serverless models completely remove this category of issues, ensuring that no request goes unserviced during the recycling, upgrading or scaling of an application, even when it encounters runtime errors.

> How Does Now Ensure This?

> Your deployment instances are constantly recycling and rotating. Because of the request-driven nature of scheduling execution, combined with limits such as maximum execution length, you avoid many common operational errors completely.

Somehow this sounds very expensive to me (like restarting Windows 2000 every hour just to avoid a BSoD, except that here the restart isn't time-consuming), and it seems to leave caching, state management, and other related requirements on the wayside for someone else to handle or recover from.

Or it's likely that I've understood this wrong and it can actually scale well for large, distributed apps of any kind. Sounds like magic if that's the case.

newscracker - 2 days ago

Awesome! While I was at the AWS Summit in NY, I asked a group of AWS ECS/EKS users (container orchestration products) whether they knew of a Docker container service that could execute like a FaaS product, and no one knew of any. I have a portion of a legacy application that's used infrequently and is too costly to decompose, but it works fine Dockerized.

Looking forward to using your product!

tfolbrecht - 2 days ago

I'm confused about pricing. I come from using AWS Lambda, where you pay for the amount of memory allocated to your function, how many times it runs, and how long each run takes.

Looking at Now, it looks like you are billed by the 'plan' that you choose, and that decides how many deployment instances you are limited to. What does a deployment instance mean for something that is 'serverless'?

EDIT: Whoops, I see that there are 'on demand' prices for deployment instances too; now I just need to figure out how deployment instances map to serverless.

robrtsql - 2 days ago

So is this different from running Docker containers on Heroku because Heroku has 1-to-n autoscaling but not 0-to-n?

What are the other fundamental differences?

tango12 - 2 days ago

So I've been messing with Fn + Clojure + Graal Native Image and I'm seeing cold start times around 300-400ms and hot runs around 10-30ms. TLS adds something like 100-150ms on top of that. I was excited about seeing improved docker start times, but it seems like you guys are pretty much at the same place I am with it.

Here's my question, being relatively ignorant of Docker's internals: _is it possible_ to improve that docker create/docker start time from 300-400 ms (all in) to under 100 ms? 300-400 ms is still a lot of latency for a cold boot, and people still do things like keepalive pings to keep functions warm, so it would be pretty great to bring that down some more.
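For anyone unfamiliar with the keepalive trick: it's usually nothing more than a scheduled no-op request that keeps at least one instance warm. A minimal sketch, assuming a hypothetical health endpoint at `https://my-app.example.com/healthz` (URL is made up), is a cron entry like:

```
# Ping the function every 5 minutes so the container never goes fully cold
*/5 * * * * curl -fsS https://my-app.example.com/healthz > /dev/null
```

The tradeoff is that you pay for those warm invocations, which is exactly why sub-100 ms cold boots would make the hack unnecessary.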

jgh - 2 days ago