
That's not really correct. EBITDA is not "earnings before bad stuff": those items are all accounting charges and/or write-downs that have no impact on the underlying business, so financial folks like to exclude them as noise (since all they really do is reduce your taxes). Examples:

- You bought a company and now it's worthless (a write-down/amortization)

- You made money and now you have to pay taxes on it (taxes)

- You bought a bunch of computers and now they're three years old and worth less than when you bought them (depreciation)

- You bought something 10 years ago and, instead of paying for it all up front, you paid over time (interest)
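Those four examples map onto the I, T, D, and A that get added back. A toy calculation with hypothetical round numbers:

```shell
# Hypothetical figures: rebuild EBITDA from net income by adding back
# the four excluded items (interest, taxes, depreciation, amortization).
net_income=100
interest=10
taxes=25
depreciation=15
amortization=5
ebitda=$((net_income + interest + taxes + depreciation + amortization))
echo "EBITDA = $ebitda"
```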


Depends. For example, in some states it's still illegal to drive intoxicated even in a private vehicle on private roads.

http://www.lawyerinlongbeach.com/Torrance-DUI-Attorney.html

-----


No, there's a private network shared between all the containers running in the pods, but as long as the nodes can see each other, you're good.

Full disclosure: I work at Google on Kubernetes

-----


Networking on AWS works fine with no additional networking as well.

Overlay networking is not required if you're running on a set of nodes that can see each other. Only more complex setups require something extra, and there are quite a few solutions (Flannel, Weave, Calico, etc.)

Full disclosure: I work at Google on Kubernetes

-----


But most of them suck, and they suck even more when configured badly (where "badly" can mean using an option that comes enabled by default). However, with broader VXLAN adoption most performance issues have been fixed, though there's still room for improvement. I also think that IPv6 could fix a lot of these things...

-----


Kubernetes does this as well:

cluster/kubectl.sh exec pod_name -ti bash

Full disclosure: I work at Google on Kubernetes

-----


Awesome, good to know. Based on this issue[1] I didn't think it did.

[1]: https://github.com/GoogleCloudPlatform/kubernetes/issues/152...

-----


I definitely don't want this to come off as a sales pitch, but you can get started in one click using Google Container Engine (and $300 in free credit) as well.

Full disclosure: I work at Google on Kubernetes

-----


Except: "Sorry, you aren't eligible for a free trial at this time. The free trial is for new customers only."

Apparently, the fact that I've been curious enough to experiment with other Google developer products in the past means I'm not part of the target audience.

-----


Sorry about that! Free trials have a timeout :(

Can you submit a support request and we'll see what we can do?

Also, spinning up a cluster should be incredibly cheap if you just want to mess around for a little bit - we do billing by the minute :)

Full disclosure: I work at Google on Kubernetes

-----


Yeah, sorry if that came off as snarky; I appreciate the suggestion.

I guess I can understand the cost-cutting mentality that drives Google, AWS, etc. to limit these kinds of offers to "new customers" only. Just remember to consider what kind of incentives you're creating. By effectively punishing developers for being early adopters/experimenters, you're making them wary of signing up early for whatever new and interesting stuff you announce in the future.

-----


Any suggestions for incentive systems that would be motivational for you? We want to help!

Full Disclosure: I work at Google on da Cloudz

-----


There should be several types of free trials:

1. Current type for new customers. Here's $500. Do whatever you want

2. For old customers who haven't ever used a free trial, give credit without limits (same as new customers)

3. For old customers who have used a free trial give credit only for services they haven't used

-----


It's an interesting problem - the issue is that our trials are both money & time based ($300 for 60 days). So technically you've "used" your trial even if you do nothing for 2 months.

We do appreciate the feedback and are looking hard at the right way to solve this. If it wasn't for bitcoin miners and/or botnets, this would all be a lot easier :(

Full disclosure: I work at Google on Kubernetes.

-----


Ah, interesting -- I thought it was for a year. Then I don't feel like I'm missing out quite so much, because I would have a hard time spending that much credit in 2 months anyway :)

-----


And #3 can give the positive effect of converting existing paying customers on one product into paying customers on new products.

-----


Got a fairly quick response from Google Cloud Billing Support:

"Unfortunately, the system is developed by design to only apply the free trial credit to new email address creating a new billing account and we can't apply it for already existing emails." Bummer.

-----


Or they assume your wallet is already open.

-----


Then they are wrong.

I actually find this a common issue with the sales pipeline companies presume.

They think:

1. He finds us.

2. He's interested and signs up for a trial.

3. We hopefully convert before the trial is over.

What actually tends to happen:

1. I find something that looks interesting.

2. I sign up.

3. Real work intervenes.

4. Several months later I have some time to look again, but my trial has expired.

To be fair, most companies respond to a quick email, but they could be proactive and do the following:

1. If no activity is detected after the first day, pause the trial.

2. Some time later, send an email saying: "We've paused your trial. Please choose one: 1. reactivate it, 2. be reminded in another x weeks, or 3. never hear from us again."

-----


(this would be more readable if Markdown was less idiotic)

-----


The problem isn't that Markdown is idiotic; the problem is that Hacker News doesn't use Markdown at all.

GitHub and Reddit have conditioned us to think that any halfway decent discussion system must use it :-P

-----


We were looking at this, but noticed that you have to run one cluster per availability zone. Any plans for being able to run a cluster across an entire region within GCE?

-----


Yes, we've heard from a number of people who want that and will improve regional support.

Current ideas are either a single regional cluster or via federation of multiple zonal clusters.

See e.g. https://github.com/GoogleCloudPlatform/kubernetes/blob/maste... for a proposal on the latter.

-----


FWIW, I'd love to have Kubernetes clusters spanning a region, with multiple regions/providers managed by Ubernetes. That would be the sweet spot for our particular use case.

This is only one point of data for you, of course.

-----


Kubernetes Cluster Federation (proposal) "Ubernetes"

"Today, each Kubernetes cluster is a relatively self-contained unit, which typically runs in a single "on-premise" data centre or single availability zone of a cloud provider (Google's GCE, Amazon's AWS, etc)."

https://github.com/GoogleCloudPlatform/kubernetes/blob/relea...

-----


While cute, that name would make Kubernetes appear to be the KDE version of Ubernetes.

-----


Agreed. That's the only reason we're not on Kubernetes right now. It dramatically increases the amount of infrastructure we need when we're forced to run three Kubernetes clusters for a single MongoDB replica set. But I love everything else Kubernetes is doing, so I'm eager to see that addressed.

-----


Well, it's certainly not a sales pitch anymore, given the brilliant customer support on display here.

-----


FWIW, Kubernetes provides its own load balancer, which you can put behind ELB.

Other than that, Kubernetes works on AWS out of the box, with a one line setup.
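For reference, the one-line setup at the time looked roughly like this (assumes AWS credentials are already configured; the script downloads and runs the cluster bring-up tooling):

```shell
# As documented circa 2015: pick the AWS provider, then run the installer.
export KUBERNETES_PROVIDER=aws
curl -sS https://get.k8s.io | bash
```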

Full disclosure: I work at Google, on Kubernetes.

-----


I'm not sure that's what I would call an integration... AWS provides for easy host management and elastic scaling traditionally through the integration of the ELB with autoscale groups, and now with life-cycle hooks. I'm not aware that kubernetes integrates with this stuff in any way or provides a sufficient alternative. Reading through the documentation I was not able to find information about connection draining on rolling updates, taking hosts out of service for maintenance/scaling/replacement, and so on. I am aware that kubernetes will run on AWS now and there is a guide for setting it up.

However, this really wasn't the point of my comment, which is that security for application secrets (and AWS API access) is currently a sore spot. It would be nice if Kubernetes would adopt some of HashiCorp's stuff like Consul, Consul Template, and Vault. Maybe that's too far up the container stack, though, and a popular bundling of technologies will appear.

-----


Docker and Kubernetes work hand in hand. That is to say, if you choose Docker as your container format, Kubernetes runs Docker on every node to run your containers. Kubernetes focuses on _what_ Docker should run, and how to move those container workloads around.

Docker also has Docker Swarm, which can be thought of as a competitor in some ways. But Google will be a heavy supporter of their container format for a long time to come.

Full Disclosure: I work at Google on Kubernetes

-----


So Kubernetes complements Docker, but how does it complement it?

I had tested Docker just for fun, thinking that maybe I could work it into the way I work, and sure, it is a super tool for development (far better than virtual machines), but deploying was kind of nightmarish. From what I understood, Docker wasn't at the time ready to be a deployment tool.

Does Kubernetes fix or extend Docker in this way?

-----


Think of them as different layers. If you're a front end web dev, it's sort of like SASS vs CSS: the former is a layer on top of the latter that makes it more powerful/convenient/easier to use.

At the bottom of the stack (most low level) is the Docker runtime. It knows how to run containers on the local machine. It can link them together, manage volumes, etc but at the core it is a single machine system. (That's probably why Docker, Inc has developed their proprietary orchestration tools like swarm).

Layered on top of that are container-native OSes like CoreOS. CoreOS provides facilities for running distributed containers on multiple physical nodes. It handles things like replication and restarting failed containers (actually fleet "units"). This is a huge improvement over vanilla Docker, but it's still pretty low level. If you want to run a real production application with dependencies it can be tedious. For example, linking together containers that run on different nodes. How does container A find container B (which may be running on any one of N nodes)? To solve this you have to do things like the Ambassador Pattern[1]. Any complex application deployment involves essentially building discovery and dependency management from scratch.

Layered on top of this is Kubernetes (it runs on CoreOS but also Ubuntu and others). As said elsewhere in this post, k8s provides an opinionated workflow that allows you to build distributed application deployments without the pain of implementing things like low-level discovery. You describe your application in terms of containers, ports, and services and k8s takes care of spawning them, managing replica count (restarting/migrating if necessary) and discovery (via DNS or environment variables).

One of the very convenient things about k8s (unlike vanilla Docker) is that all containers within a pod can find each other via localhost, so you don't have to maintain tedious webs of container links. In general it takes the world of containerization from "Cool technology, but good luck migrating production over" to "I think we could do this!".

1. https://coreos.com/blog/docker-dynamic-ambassador-powered-by...
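A minimal sketch of the "describe your application" workflow mentioned above, assuming the v1 API; the app and image names are placeholders:

```shell
# Write a minimal pod manifest (hypothetical app/image names), then submit it.
cat > myapp-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: myapp
  labels:
    app: myapp
spec:
  containers:
  - name: myapp
    image: myapp:latest
    ports:
    - containerPort: 8080
EOF
# With a working cluster, you would then run:
#   kubectl create -f myapp-pod.yaml
```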

-----


was about to write a response, but bkeroack did a perfect job with the above :)

Full disclosure: I work at Google on Kubernetes

-----


I have a docker image I want to run "myapp". So I tell Kubernetes "run me 5 instances of the image 'myapp', and expose port 8080"

Kubernetes' job is to start, monitor, and load-balance those Docker containers.

Docker's job is to run each container.
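Sketched as kubectl commands (hypothetical image name; assumes a configured cluster and the 2015-era CLI, where `kubectl run` created a replication controller):

```shell
# Schedule 5 replicas of the image and expose port 8080 behind a service.
kubectl run myapp --image=myapp --replicas=5 --port=8080
kubectl expose rc myapp --port=8080
```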

-----


I have good news for you :)

https://blog.kismatic.com/running-rkt-on-kubernetes/

Full Disclosure: I work at Google on Kubernetes

-----


Yes, you can use rkt for Docker images, but I want to be able to use ACIs, which doesn't seem possible yet [0].

But awesome that there's at least some support! :D

[0] - https://github.com/GoogleCloudPlatform/kubernetes/issues/720...

-----


What are you looking for in API management? There's key management, DDoS protection, etc.

-----
