Hacker News | josh2600's comments

Approximately 100 open positions for all kinds of things (software/hardware engineering, supply chain, operations, sales, everything!!!) at Daqri. LA, Sunnyvale and Dublin offices.

We're building Augmented Reality tools that industrial organizations use to provide context to workflows. In particular, we're designing and deploying a Smart Helmet that will help industrial teams work smarter.

Check out open positions here: http://daqri.com/careers/

Looks great.

This product looks awesome!

The title of this article would suggest that there is a delta between where we are now and where we are after the bill. Is there a material difference in the way NSA will behave after the bill? Are surveillance powers any weaker if they can still query CDRs from operators?

Isn't this, on some level, a subsidy to ATT, Verizon, T-Mobile and Sprint?


Well, before this bill (i.e., until yesterday), bulk collection of phone data was illegal. Now it is legal.


At least now we get to see the invoice. Don't imagine that Room 641A was set up for free.


I actually wouldn't be surprised if it were.

"Hey $ATT_EXECUTIVE - see what happened to [Qwest's CEO] Joe Nacchio? Good. Now go splice me some fiber or you'll be joining him."


I would be surprised. Both Ma Bell and the clandestine service are always looking for ways to move more money in more directions. A typical scenario: from NSA to some secret shell company, from there to ATT for the project, from there to a favored systems integrator to run the project, from there to the agent-in-charge's nephew for unspecified services, from there back to the shell company, etc.


The penalty would probably be worse if a commercial company did work for free for the government. It's a big no-no. You can't even give government employees things like lunch during a working lunch meeting unless it's a low enough amount and they provide some way for the govies to optionally pay.


Two words: burning platform.


There was just no reason to write a memo like that. It destroyed their business, almost singlehandedly, and no, I'm not exaggerating. The consequences of that memo were insane: Nokia's handset sales died all over the world all at once.


We don't have more than quarter-level visibility. All we know is that Q1 2011 sales crashed and Elop wrote his memo. Which is the cause and which the consequence?

In Q4 2010, Nokia stuffed the channel with outdated products. In a market that was growing accustomed to iPhone and Android, Nokia tried to sell consumers products based on the first touch edition of Symbian S60 -- the same software that was considered outdated when it first shipped in Nokia 5800 years earlier. (Because of the way Nokia's product development pipeline worked, the mid-range phones they released in late 2010 contained software that was several years old.)

Isn't it possible that Nokia's Q1 2011 sales crashed simply because the channel was full of Symbian phones that just were not selling? As CEO, Elop would have visibility into that when he wrote his infamous memo.

A point against the "evil memo destroyed sales" theory is that purchasers at large operators don't turn on a dime. When they stopped buying Nokia's phones in Q1 2011, the decision was already made earlier.


The fallout from that memo was instantaneous and widespread. I worked in wireless at the time, and everyone I knew at all of the carriers began to pull Nokia inventory. This was true across many markets.

Even if the burning platform concept was correct, strategically shifting the company and then announcing the change in a measured, considered fashion would've been much better for their cash flows than releasing it in a companywide email.

Purchasers at large operators don't turn on a dime, usually, but I would argue that the CEO of a handset manufacturer declaring, in viscerally graphic terms, that their entire existing lineup of products is going to be discontinued is one of the things that might spur such a quick change.


> There was just no reason to write a memo like that

There was a very good reason to write it. Internally Nokia was delusional and confused, and desperately needed a wake-up call. Nothing less would have sufficed.


Nokia was delusional internally, but can you really make an argument that what they did was reasonable? Is there any world where it was better for them to choose Windows over Android?

That only happened because of Elop's previous Microsoft connections and it was clearly the wrong decision for the shareholders of the company.


"By every measure that the industry uses, Nokia Symbian smartphone sales grew from Q3 of 2010 to Q4 of 2010, literally the last full quarter before Elop released his Burning Platforms memo."

Q4 sales were higher than Q3 sales? I'm, uh... how about year-over-year comparisons? A seasonally-adjusted smoothed trendline? Anything OTHER than comparing sales figures in one quarter where there's Christmas and one quarter where there isn't?
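To see why, here are made-up numbers (not Nokia's actual figures) showing how a Christmas quarter can post quarter-over-quarter growth while sales are actually down year over year:

```python
# Hypothetical quarterly unit sales for a seasonal business.
sales = {"2009Q4": 120, "2010Q3": 90, "2010Q4": 100}

# Quarter-over-quarter: Q4 vs the quarter right before it.
qoq = (sales["2010Q4"] - sales["2010Q3"]) / sales["2010Q3"]

# Year-over-year: Q4 vs the same quarter a year earlier.
yoy = (sales["2010Q4"] - sales["2009Q4"]) / sales["2009Q4"]

print(f"QoQ: {qoq:+.0%}")  # +11% -- looks like growth
print(f"YoY: {yoy:+.0%}")  # -17% -- the Christmas bump masked a decline
```

Same data, opposite conclusions, which is exactly why comparing a Christmas quarter to a non-Christmas quarter proves nothing.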


Do you see fleet existing in a Kubernetes centric universe? If so, in what role?


This is really cool. I want to see more of this.

Oh man, I just went to the home page. Tons of cool stuff! This is great.

I could see something like this being integrated with Codecademy as a way of learning after you finish the intro lessons, sorta like how chess masters watch a lot of chess games to learn how the game works.


I mean, it's not decades away...

If you want a < 10 ms ping, you need to:

1) Be within 10 ms round trip of the target datacenter (light in fiber covers roughly 1,000 km, about 620 miles, in the ~5 ms you get each way).

2) Rip out the generic TCP/IP handlers in all of the hardware along the route and replace them with highly optimized, workload-specific IP handlers (you'll want to do this in the kernel, or in an ASIC if you're really serious about speed).

3) Own all of the fiber along the route.

Basically, if you can eliminate all of the finger pointing and just own the network end to end, you could probably implement this today; it's just that no one is at that scale except maybe Google (note: I have no idea how Google runs their internal network, but if I had to guess, this is probably what they're doing).

Edit: You can get this now, it's just hard and not worth the cost or effort to do (since no one will bear the cost of building a network like this for gaming, or really for any application I can think of... there just isn't a need for this kind of network performance).
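For the distance in point 1, a quick back-of-the-envelope check (assuming light in fiber travels at roughly 2/3 c, about 200 km per millisecond, and remembering that a ping is a round trip):

```python
# Propagation-only distance budget for a < 10 ms ping.
FIBER_KM_PER_MS = 200          # light in glass moves at ~2/3 c
RTT_BUDGET_MS = 10             # a ping is a round trip
ONE_WAY_MS = RTT_BUDGET_MS / 2

max_fiber_km = ONE_WAY_MS * FIBER_KM_PER_MS
max_fiber_miles = max_fiber_km * 0.621

print(max_fiber_km)            # 1000.0 km of fiber, one way
print(round(max_fiber_miles))  # 621 miles, before any switching/serialization delay
```

And that's propagation alone; every router hop, queue, and serialization step eats into the same 10 ms, which is what points 2 and 3 are about.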


The things I hear from the Unikernel crowd sound a lot like the stuff I hear from Docker, although I note that part of where Amir seems to be going is where Docker came from[0]. I personally think that immutability is an important component of building modern distributed systems (things that are in production should be as immutable as possible; we've even experimented with reverting machines to initial state on a rolling basis, for example). If you have stateless services, it makes sense to constantly bring up boxes that are at the initial state as memory leaks or other errors compound over time.

At Terminal we do this by having RAM-perfect snapshots (think VMware-style snapshots, but without a hypervisor) and rolling new instances from an initial state. The snapshotting works by taking the RAM state, CPU cache and disk state at a given moment and committing it to disk for later restoration. Once you have the primitive of being able to treat machine state like a file system, you get a lot of properties that you might not otherwise have access to (like being able to bring up machines with state in the time it takes to read the RAM from disk).
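The rolling-refresh idea reduces to a small loop. Everything below (`Snapshot`, `Instance`, the drift counter) is a hypothetical illustration of the pattern, not Terminal's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Snapshot:
    """Hypothetical stand-in for a captured RAM + CPU + disk state."""
    name: str

@dataclass
class Instance:
    snapshot: Snapshot
    drift: int = 0  # stand-in for leaks/errors accumulated since restore

def roll(fleet, golden, max_drift=3):
    """Replace any instance that has drifted too far with a fresh one
    restored from the golden (initial-state) snapshot."""
    return [inst if inst.drift < max_drift else Instance(golden)
            for inst in fleet]

golden = Snapshot("initial")
fleet = [Instance(golden, drift=d) for d in (0, 2, 5)]
fleet = roll(fleet, golden)
assert all(inst.drift < 3 for inst in fleet)  # the drifted box was recycled
```

The design point is that the restore is cheap enough (read the RAM image from disk) that recycling instances on a rolling basis costs almost nothing.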

I am overall quite bullish on Unikernels, but I do think there's quite a bit of distance to cover between where we are now and where companies will feel comfortable trusting their infrastructure to MirageOS.

I am particularly interested in running MirageOS without a hypervisor, but I understand why that's not yet possible (please do correct me if I'm wrong). If MirageOS is running in a container somewhere, I'd like to see it.



Given what you've described, I think you might be interested to look further at Irmin [1]. It's not quite ready for prime-time but certainly stable enough to kick the tires.

Regarding commercial uptake, the nice thing about the library approach is that companies get to pick and choose the components they want (without even going 'full Unikernel'). For example, I'm aware of the cohttp library having commercial users. The real issue is legacy code, but I mostly sidestep that in what I write (we're well aware of it, though).

Interesting that you mention MirageOS in containers. I don't see any reason the two can't be compatible, but it would be good to hear more about what you'd like to achieve (or how you'd expect it to work in terms of workflow).

[1] http://openmirage.org/blog/introducing-irmin (I think it's time we put together an overview page with all the links).


So the overall goal is to reduce the footprint of each user's machine/application as much as possible within a larger aggregated pool of resources. I see unikernels as one potential way of reducing the amount of storage/memory each user or application needs.

With containers, you can do RAM deduplication under some circumstances, and you can get a lot higher resource utilization doing that, but I think we can always do better, and so that's kinda why I have my eye on unikernels (also because unikernel seems like a reasonable way of squeezing more performance out of systems in some cases).
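The same-page-merging idea behind RAM deduplication (KSM in the Linux kernel does this for real, with copy-on-write) can be sketched as a toy frame counter:

```python
from collections import Counter

def dedup_savings(pages):
    """Count how many physical page frames same-content merging would
    save. `pages` is a list of page contents (bytes)."""
    unique = Counter(pages)
    # one physical frame per distinct content; duplicates share it
    return len(pages) - len(unique)

# Three containers booted from the same image share most of their pages.
container_pages = [b"libc"] * 3 + [b"app-a", b"app-b", b"app-c"]
print(dedup_savings(container_pages))  # 2 frames saved out of 6
```

The more homogeneous the fleet (same base image, same libraries), the bigger the savings, which is why dense multi-tenant container hosts benefit so much.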


I am working on rump kernels as unikernels in userspace (contact details in profile). We are looking at running Mirage on rump too. Running without a hypervisor is not much different, and there is a good snapshotting option.


Out of curiosity, what's your underlying runtime system, if it's not a type 1 or type 2 hypervisor?


At Terminal we're attempting to deliver a VMware-like experience without a hypervisor. The only way to do this that I know of is containers without a hypervisor[0]. I think it works pretty well; check it out at terminal.com if you'd like.

Let me know if you'd like more clarification. We currently have live migration working without a hypervisor in production, but we haven't built out a lot of the extra tooling that VMware has (particularly around alerting and monitoring). We'll get around to that eventually.


Edit: there are lots of benefits you get from having a hypervisor (like being able to run different kernels) but there are big performance penalties from virtualizing the kernel. If you can get the process isolation benefits of hypervisors but without the slow boot times and performance hits, that's probably what you want for most applications.


It turns out I sorta-kinda agree with you about the tradeoffs: http://www.livestream.com/pivotallabs/video?clipId=pla_98503...


Yes, it is Angular 1.x, and as we all know the world is not static. It's fair to say that learning Angular 1.x is still probably a valuable skill. I do think that abstracting away the nuances of JS is actually important in facilitating learning, but I could see how that might hamper your ability to apply this learning to the real world.

Do you think that the concealing of complexity is harmful in the long run? I certainly don't. I think there will be some shock when you try to go straight from Codecademy to the real world, but it's probably less of a shock than going from zero to real-world coding.

In the long run, I can easily see Codecademy building a suite of courses that get progressively less and less "magical" until there are no longer any training wheels. That's going to be quite compelling if they can make it work.

Full disclosure: Terminal powers Codecademy's production and staging workloads.


Terminal.com is hiring.

We work on next-generation virtualization technology.

We are a hard-working, fast-moving startup based in San Francisco and Palo Alto. We value highly-optimized, correct code.

We are building the next generation of computing that is cloud-first and web-first. We build technology for clouds, and are deep into scientific computing, network routing, HTML5 web development, kernel hacking, virtualization, and other trends in software technology.


JVM developer here that is interested in learning C++, Rust and Go (among others)... any interest?


Feel free to send a resume to the email address in my profile.

We want people that are willing to work hard and dig into difficult problems. We're pretty serious about engineering over here, so if you're prepared to work hard, we're interested. There is a high standard for engineers on our team, but if you make it in I'm sure you'll enjoy it quite a bit. Best of luck!


Terminal is a very effective substrate for hosting containers, so we think a lot about which of the container mechanisms will win over the long term.

My personal opinion is that the winner will be the group that successfully gets Enterprises to change their workload design. I also don't think that Rocket is necessarily a superior format to Docker, but I think they're both dealing with the recognition that any big change in Enterprise behavior represents an opportunity for value capture.

There's a real question as to where any of these abstraction layers fit in if Docker wins (and there's some possibility Docker is going to win). If that's the case, CoreOS doesn't want to look back a few years down the road and wish they'd been working on a container format.

It's 2015. One of the battlegrounds for enterprise dollars is containers. It's going to be a delightful thing to watch.

It's also worth noting that many companies have their own cgroups implementations which are neither Docker nor Rocket based. I rather like the position of dispassionate observer in this war (at Terminal we run all of the containers, and also apps without containers).


> It's also worth noting that many companies have their own cgroups implementations which are neither Docker nor Rocket based.

See for example Garden (née Warden)[0][1], which is the basic building block of Cloud Foundry.

[0] https://github.com/cloudfoundry-incubator/garden [1] http://blog.pivotal.io/cloud-foundry-pivotal/features/cloud-...


