By Rauchg - 2 days ago
- "sub-second cold boot (full round trip) for most workloads"
- HTTP/2.0 and websocket support
- Tune CPU and memory usage, which means even smoother scaling
And all that for any service you can fit in a Docker container - which is also how you get near-perfect dev/prod parity, an often overlooked issue with other serverless deployment techniques.
On top of all that, ZEIT has one of the best developer experiences out there today. Highly recommend trying it out.
And for the perpetual serverless haters out there: this is not a product for you, FANG developer. I think people underestimate how well serverless fits for the majority of web applications in the world today. Most small-medium businesses would be better off exploring serverless than standing up their own k8s cluster to run effectively one or two websites.
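The dev/prod parity point above comes from shipping the same container image you ran locally. As a minimal sketch (not ZEIT's actual code), here is the kind of stdlib-only Python service you might package in such a container; the routes and port are illustrative:

```python
# Minimal sketch of a service you might package in a Docker container
# and deploy to Now. The same image runs locally and in production,
# which is where the dev/prod parity comes from. Routes and the port
# are placeholders, not anything Now requires.
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle(path):
    # Route logic kept separate from the server plumbing so it is easy to test.
    if path == "/health":
        return 200, "ok"
    return 200, "hello from a container"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        status, body = handle(self.path)
        self.send_response(status)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(body.encode())

if __name__ == "__main__":
    # In the Dockerfile you would EXPOSE this port; Now routes traffic to it.
    HTTPServer(("", 8080), Handler).serve_forever()
```

Because the handler logic is a plain function, it runs identically under the dev server and inside the deployed container.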
andrewtorkbaker - 2 days ago
When I get to the image, it is in the middle of everything and I don't really have an idea what is going on. Even watching it multiple times, I am not sure where it starts, ends, what the individual steps are.
Or is it because I just don't know enough about this stuff?
Sujan - 2 days ago
ransom1538 - 2 days ago
Today you'd have to open up your cloud DB provider to the world since Zeit can't provide a list of IPs to whitelist. This is a showstopper for me unfortunately.
watty - 2 days ago
https://github.com/zpnk/deploy.now/issues/27
Hopefully someone from Zeit reading this can get my fix merged; it seems to be quite a popular service

orf - 2 days ago
> Serverless means never having to "try turning it off and back on again"
> Serverless models completely remove this category of issues, ensuring that no request goes unserviced during the recycling, upgrading or scaling of an application, even when it encounters runtime errors.
> How Does Now Ensure This?
> Your deployment instances are constantly recycling and rotating. Because of the request-driven nature of scheduling execution, combined with limits such as maximum execution length, you avoid many common operational errors completely.
Somehow this sounds very expensive to me (like restarting Windows 2000 every hour just to avoid a BSoD, except that here it’s not that time-consuming a process), and it seems to leave caching, state management and other related requirements on the wayside for someone else to handle or recover from.
Or it’s likely that I’ve understood this wrong and that this can actually scale well for large, distributed apps of any kind. Sounds like magic if it’s that way.
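For what it's worth, the quoted recycling model can be sketched as a toy simulation: each instance serves a bounded number of requests and is swapped out *between* requests, so nothing in flight is dropped. The `MAX_REQUESTS` limit and the `Instance` class below are my own illustration, not how Now's scheduler actually works:

```python
# Toy model of request-driven recycling. Each instance serves at most
# MAX_REQUESTS before being replaced; the swap happens between requests,
# so no request goes unserviced. All names here are illustrative.
MAX_REQUESTS = 3

class Instance:
    def __init__(self, generation):
        self.generation = generation
        self.served = 0

    def handle(self, request):
        self.served += 1
        return f"gen{self.generation}:{request}"

def serve(requests):
    results, generation = [], 0
    instance = Instance(generation)
    for req in requests:
        if instance.served >= MAX_REQUESTS:
            # Recycle: the old instance has finished its work, then a
            # fresh one takes over -- nothing is mid-flight at swap time.
            generation += 1
            instance = Instance(generation)
        results.append(instance.handle(req))
    return results
```

The caching/state concern above still stands in this model: anything an instance held in memory is gone after a recycle, so state has to live somewhere external.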
newscracker - 2 days ago
Looking forward to using your product!
tfolbrecht - 2 days ago
Looking at Now, it looks like you are billed by the 'plan' that you choose, and that decides how many deployment instances you are limited to. What does a deployment instance mean for something that is 'serverless'?
EDIT: Whoops, I see that there are 'on demand' prices for deployment instances too--now I just need to figure out how deployment instances map to serverless.
robrtsql - 2 days ago
What are the other fundamental differences?
tango12 - 2 days ago
Here's my question, being relatively ignorant of Docker's internals: _is it possible_ to improve that docker create/docker start time from 300-400 ms (all in) to <100ms? 300-400ms is kind of a lot of latency for a cold boot still, and people still do things like keepalive pings to keep functions warm, so it would be pretty great to bring that down some more.
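The keepalive trick mentioned above is usually just a background loop that pings the function before the platform reaps it, so callers never pay the 300-400 ms cold boot. A hedged sketch (the `ping` callable and the interval are placeholders; real reap timeouts vary by platform):

```python
# Sketch of a keep-warm loop: call ping() periodically so the function
# never goes cold. The interval is a guess -- tune it to sit under the
# platform's idle-reap timeout, whatever that happens to be.
import threading

def keep_warm(ping, interval_seconds=240, stop_event=None):
    """Call ping() every interval_seconds until stop_event is set."""
    stop_event = stop_event or threading.Event()

    def loop():
        while not stop_event.wait(interval_seconds):
            try:
                ping()
            except OSError:
                pass  # a failed ping just means the next real hit may be cold

    threading.Thread(target=loop, daemon=True).start()
    return stop_event
```

In practice `ping` would be an HTTP GET against the deployment URL; it is injected here so the loop is testable without a network.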
jgh - 2 days ago