Hacker News | dgallagher's comments

I'd like to see C++ support added to Swift, similar to how Objective-C++ works with Objective-C. There are some C++ frameworks which are nice to use in Obj-C, like Box2D. Currently, to get them working with Swift, you have to write an Obj-C or C wrapper around the C++ framework and then import that into Swift.


Also true of using SQLite without Core Data in Swift.


On a semi-related topic, NPR put up a podcast earlier today talking about simplicity vs. complexity in pop songs (4:29): http://www.npr.org/blogs/allsongs/2015/04/24/401925095/all-s...



That's an early VR demo of TxK, a Tempest remake (fast-paced twitch shooter) being ported from the PS Vita. I haven't played it myself, but those who have say great things about it. It's an example of a non-simulator game. What's remarkable is that Jeff (one of the devs) can't see stereoscopically at all.

Most of TxK is played facing towards a web, without lots of head turning. Certain games like this will benefit simply from being in 3D, along with VR's total-immersion effect.


This AnandTech overview of nVidia's G-Sync is worth reading (meshes a bit with what Carmack mentioned about CRT/LCD refresh rates in that talk): http://www.anandtech.com/show/7582/nvidia-gsync-review

It's a proprietary nVidia technology that essentially does reverse V-Sync. Instead of having the video card render a frame and wait for the monitor to be ready to draw it like normal V-Sync, the monitor waits for the video card to hand it a finished frame before drawing, keeping the old frame on-screen as long as needed. The article goes into a little more detail; they take advantage of the VBLANK interval (legacy from the CRT days) to get the display to act like this.
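The latency difference is easy to see with a toy model (plain Python, purely illustrative; assumes a fixed 60 Hz panel and ignores scanout time itself):

```python
import math

REFRESH_HZ = 60
FRAME_MS = 1000 / REFRESH_HZ  # one fixed refresh interval, ~16.67 ms

def vsync_display_ms(render_done_ms):
    """Classic V-Sync: a finished frame waits for the next fixed
    refresh boundary before the monitor scans it out."""
    return math.ceil(render_done_ms / FRAME_MS) * FRAME_MS

def gsync_display_ms(render_done_ms):
    """G-Sync (roughly): the monitor holds the old frame in VBLANK
    until the GPU hands it a finished one, then draws immediately."""
    return render_done_ms

# A frame that takes 20 ms just misses the 16.67 ms boundary, so
# V-Sync delays it almost a whole extra refresh; G-Sync doesn't.
print(vsync_display_ms(20))  # ~33.3
print(gsync_display_ms(20))  # 20
```

The stutter people notice with V-Sync comes from that quantization: frame times that hover around a refresh boundary keep flipping between one and two refresh intervals of latency.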


Does anyone have experience using Configuration Management software in a heterogeneous environment? For example, I've seen large environments running Windows 2008/2008R2/2012/2012R2 plus various flavors and versions of Linux, including Ubuntu Server, CentOS, SUSE, etc. What's the pretty? What's the ugly?

I understand consolidation and standardization of operating systems is usually the best state to be in, but for a lot of larger companies running legacy software it's not economically feasible.


We are very heterogeneous--something like a 60/40 Windows/Linux split.

Traditional Windows folks don't really use configuration management or even have any clue about it. Or at least that's my impression. I'm a Linux guy and have been fighting a one-man battle to CM-ize our infrastructure. I have no interest in using Microsoft's DSC on the Windows side (their brand-new CM-like solution in PowerShell) and something else on the Linux side, and since I'm a Python developer I gravitated to Salt.

I love SaltStack (no real experience with Ansible). Although it supports Windows in a sense, it's very rough around the edges. Many modules will fail or have weird edge cases on Windows. I've gotten to the point where the only module I really trust to work 100% of the time is cmd.run (which executes arbitrary shell commands). That said, it's been a total win so far. I've almost completely replaced ad hoc Windows server provisioning with version controlled, documented Salt states. It's glorious.
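The win of version-controlled states over ad hoc provisioning scripts is idempotence: a state describes an end result, and applying it twice is safe. A toy sketch of that pattern (plain Python, not Salt's actual API; `ensure_line` is a made-up name):

```python
import os
import tempfile

def ensure_line(path, line):
    """Toy idempotent 'state': make sure `line` is present in the
    file at `path`. Returns True if it changed anything, False if
    the system was already in the desired state."""
    existing = []
    if os.path.exists(path):
        with open(path) as f:
            existing = f.read().splitlines()
    if line in existing:
        return False  # desired state already holds; do nothing
    with open(path, "a") as f:
        f.write(line + "\n")
    return True

# Applying the same state twice changes the system at most once:
cfg = os.path.join(tempfile.mkdtemp(), "app.conf")
print(ensure_line(cfg, "log_level = info"))  # True  (changed)
print(ensure_line(cfg, "log_level = info"))  # False (no-op)
```

An ad hoc script that blindly appends to the file would corrupt it on the second run; that difference is most of why CM beats a pile of shell scripts.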


Mm, I'd say you're right on about some things, but slightly off the mark on others. Traditional Windows folks certainly know at least some things about CM, or rather, CM-like functionality. WMI/WDS and friends are surprisingly robust when it comes to things like provisioning and patching, and PowerShell has been (and I say this as a primarily Linux weenie) a breath of fresh air in the Windows ecosystem, although I can't speak for its capability specifically as a CM utility. What I'd say is true is that Windows folks don't typically know about Linux CM, and vice versa. (At least, I certainly didn't know squat about Windows CM when I started working in a heterogeneous system.)

We made a similar choice to yours, going with Salt for certain functionality (because, as you found, of the weird edge cases and fragility of Salt on Windows), but at the root of things you use the tool that works well for the system. In some situations that means living in a bipartisan world (WDS for Windows deployment, Spacewalk for Linux), or looking for a solution that plays well in the sandbox with both, which is a bit rarer, a la Salt.

I'm sure there are people who solved this problem way more elegantly, but for being pretty damn understaffed and new to devops when we started, it worked surprisingly well by the end of things :)


>Traditional Windows folks don't really use configuration management or even have any clue about it.

That's a tad unfair. I could just as easily say the same thing about some of the Linux admins I've worked with (and interviewed), but that's not taking the discussion down a constructive road.

CM/DSC adoption is about awareness of the technologies available. There are a lot of admins out there, regardless of OS expertise, who've never heard of it, full stop. I learned about it 12 years ago while working as a developer in the banking sector, but using eye-wateringly expensive tooling from the likes of IBM and CA.

We have a 65/35 Windows/Linux environment. For years I've wanted to "CM-ize" our environments, but we have two different silos of scripts and tomfoolery that get stuff done, and a lot of friction points because of it. One of the problems with CM tooling such as Chef, Puppet, Ansible and Salt has been the lack of sane support for Windows. Puppet seems to be getting better at it than the other three contenders, for example by handling reboots sensibly [0] (and you know how Windows loves its reboots, in the right order, after some MSI or MSU has executed).

There is also a somewhat blinkered world view with regards to Windows, i.e. "yuk, Windows, not touching that", and at the risk of offending some, it's snobbery and cargo-cultism. A lot of the young folks around here have probably never tried modern Windows server management; it ain't that bad these days. If you can be bothered to learn bash and all this clever stuff on Unix, you can get a handle on Windows config management with PowerShell, which is very bloody good now.

The result is that we have silos of C/VBScript and PowerShell code that build Windows environments in their own special Windows way, because tools such as Chef, Ansible et al. and their respective development teams previously didn't (sometimes rightly, but mostly wrongly) see any value in Windows support.

I speak as a platform agnostic devops person who has to live in both worlds and has supported Windows and Linux/Unix for longer than most of you have been alive :)

[0]: https://forge.puppetlabs.com/puppetlabs/reboot


Here's a blog about Ansible windows support for those interested: http://www.ansible.com/blog/windows-is-coming

1.7 comes out this week, and we're going to continue to improve it in 1.8.


I'm eagerly awaiting the SSL cert setup becoming more streamlined, and maybe encapsulated if possible.

I could hack away at the PowerShell that MS makes available, but if you guys are going to put work into it, I'll wait even more eagerly for it.


We recently updated the docs to point to a new setup script you might not have seen yet - https://github.com/ansible/ansible/blob/devel/examples/scrip...

But yeah, stop by the -project or -devel list if you have questions or ideas for it, that would be great!


I work for a cloud service provider, and we use Chef in a heterogeneous environment: several flavors of Linux, and Windows 2003-2012 (both 32- and 64-bit). The pretty is that Chef supports Windows very well, and the mature community cookbooks have good support for Windows as well. The ugly is that it makes testing more complex, but things like ChefSpec and ServerSpec + Test Kitchen and Jenkins make it possible to release robust code.

The other CM software may have good Windows support as well, but I don't have any direct experience with it. Either way, the testing is the more critical component here, no matter what CM platform you choose.


ChefSpec and Test-Kitchen are really awesome. I tend to see Chef as a framework for automating infrastructure, not as a scripting language/environment to define resources.

Chef pays off in large-scale infra or highly dynamic environments, but chef-solo is still a bit lame (I use knife-solo for that [1]). So most people seem to start with no-devops, shell scripts, or Puppet/Ansible… later they will understand why there are more complex/flexible solutions out there.

It also depends on the background of the DevOps people. Coming from software engineering, you're probably familiar with concepts like DRY and YAGNI and the principles of clean, robust code. However, when your team consists of people with an admin background, they probably have no such experience and will write very bad code, especially in less strict scripting languages. They are probably happier and more productive with strict configuration files (e.g. YAML), but in the end they need to start programming…

[1] https://github.com/matschaffer/knife-solo


This is the sort of question that needs a blog post to answer, IMO.

I have not had enough time with any of these tools to speak to the pretty, but I can speak to the ugly. The chief issues with these tools on Windows are package management, overall speed, and community focus on not-Windows.

Package management is the worst, IMO, and it stems from Windows and the majority of its 'software universe' being commercial. Software is expected to install on many editions of Windows; it is not common to see edition-specific packages for anything not otherwise edition-specific. Software can be packaged and installed many different ways, some of which do not support unattended installation. It's not always clear whether a package is installed at all. It's usually difficult to repackage software that doesn't work the way you want it to, and even if it's easy to do, you probably can't redistribute the result.

So yeah, in general, package management is the ugly.


In theory Puppet would be good for the Linux servers at least because it lets you declare things in an abstract way that can hinge on variables like distro, release, etc.

In practice the Puppet language is only tolerable to the extent that it provides (or helps you create) abstractions for everything, and now you have two problems as they say.
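The kind of abstraction being described can be sketched in plain Python (purely illustrative; the distro and package names below are examples, not an authoritative mapping):

```python
# The same logical resource ("web server") resolves to different
# concrete package names depending on the distro, which is roughly
# what Puppet's facts plus conditionals buy you.
PACKAGE_MAP = {
    "ubuntu": "apache2",
    "debian": "apache2",
    "centos": "httpd",
    "suse": "apache2",
}

def package_for(distro):
    """Resolve the 'web server' resource to a concrete package
    name for a given distro; unknown distros fail loudly."""
    try:
        return PACKAGE_MAP[distro.lower()]
    except KeyError:
        raise ValueError("no mapping for distro %r" % distro)

print(package_for("CentOS"))  # httpd
print(package_for("Ubuntu"))  # apache2
```

The "two problems" part is that every such mapping is one more abstraction layer you now maintain alongside the thing it abstracts.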


What if Apple introduced workstation-class ARM another way: a really powerful iPad/iPhone which could sit in a dock with keyboard/mouse/monitor and run both iOS and ARM OS X?

x86 computers could continue to exist for high-end users, but typical users might be content with a hybrid tablet/computer. As ARM increased in power, x86 might disappear entirely.

Your comments on Windows compatibility/virtualization are spot on. Cloud streaming (Citrix/XenApp) can help in some situations here. In a few years virtually all major apps will probably be cloud/browser-based (Microsoft Office, Adobe Suite, streaming games/apps, etc.), and Windows on Mac might not be as important then. It's possible Microsoft might release ARM Windows too (besides RT). If ARM gets popular in the datacenter, you may see a Windows Server ARM edition; they've done this in the past with Itanium.


I think that it would be more likely that they'd just drop OS X in that scenario, and just have a beefed-up version of iOS with better keyboard and touchpad support.

The number of users who would actually benefit from OS X vs iOS, but would not be ticked off at an inability to dual boot Windows or run any legacy applications, is very small indeed.

iOS for ARM and OS X for x86 seems likely to remain in the future, but I do think that Apple could do a netbook or dockable tablet running ARM/iOS if they really thought there was demand. I'm not sure there is, but if the product was good enough they have shown in the past an ability to manufacture demand where it didn't exist previously...


What you're describing (a mobile device that docks into a workstation environment and powers the KVM) sounds like the dream mobile product designers have been having for well over a decade. I remember Canonical recently coming up with a concept of a phone that would do this (see: Ubuntu Edge[0]). That being said, the roadmap implied by iOS 8 and Yosemite strongly suggests that they're just going to bridge these gaps over the Internet. Google is also pretty clearly going in this direction, and given the ubiquity of Internet access (especially compared to available KVM terminals), I think this is the direction we're all going to go in.

0: http://en.wikipedia.org/wiki/Ubuntu_Edge


It's been done several, dare I say many, times. Badly, of course. I saw one at Fry's where the base even had a more powerful CPU, and when you docked it, the tablet was just the monitor... but you could access the tablet functionality while docked through something that looked like your typical television's "Picture in Picture" feature. There was also a phone, a while back, that would dock into a video/keyboard in a laptop form factor. (Many phones these days, I am given to understand, have HDMI out, so this is largely a mechanical engineering problem. Well, that and making the software usable on both screens.)

So yeah, if it is, in fact, a useful form factor (and I have my doubts), it's in the perfect place in the technology curve for Apple: the tech is all there, but nobody has implemented anything usable.


One big thing Zocalo has over Dropbox is cheaper pricing. Dropbox for Business is $15/mo per user [1] (up to 30% cheaper if you pay in full for a year), whereas Zocalo is $5/mo per user [2]. That'll put downward pressure on Dropbox's corporate pricing unless they have much better service, or much better features, than Amazon.

[1] https://www.dropbox.com/business/buy

[2] https://aws.amazon.com/zocalo/pricing/
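Spelled out per user per year (a rough sketch using the list prices above; it assumes the "up to 30% cheaper" Dropbox figure applies to the whole prepaid year):

```python
# Back-of-envelope annual per-user cost comparison.
DROPBOX_MONTHLY = 15.0   # $/user/month
ZOCALO_MONTHLY = 5.0     # $/user/month
ANNUAL_DISCOUNT = 0.30   # assumed best-case Dropbox annual discount

dropbox_yearly = DROPBOX_MONTHLY * 12
dropbox_yearly_prepaid = dropbox_yearly * (1 - ANNUAL_DISCOUNT)
zocalo_yearly = ZOCALO_MONTHLY * 12

print(dropbox_yearly)          # 180.0
print(dropbox_yearly_prepaid)  # ~126
print(zocalo_yearly)           # 60.0
```

Even at Dropbox's best-case discount, that's still roughly double Zocalo's price per seat.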


I'm a dropbox customer. I'll be staying with them for the foreseeable future, but I'm glad to see competition might bring me more space and/or cheaper pricing.


Must be because Dropbox still uses Amazon services to store client files.


Location: Marlborough, Massachusetts

Remote: Yes, though I much prefer to work in-office.

Willing to relocate: Yes. Very interested in relocating to the San Francisco area.

Technologies: Objective-C, Python, VMware, Windows Server 2008/2012, Ubuntu Server, iOS/OSX Development

Resume: Looking either for a sysadmin or programming role: http://dave-gallagher.net/pics/Software%20Engineer%20-%20Dav...

Email: dave@dave-gallagher.net


> I've heard of the phobia method, as your body eventually exhausts its adrenaline stores and you're able to address the issue more rationally instead of under physiological duress.

If you're talking about Adrenal Fatigue, then that's pseudoscience not backed by the medical community[1] (though you will find plenty of "solutions" for it on the internet, for your money of course ;) ).

[1] http://en.wikipedia.org/wiki/Adrenal_fatigue


It's quicker sometimes.

This morning I made some oatmeal. I took a 1/2 measuring cup, scooped out some oatmeal, shook it quickly to remove the overflow, threw it in a pot, and dropped the measuring cup in my sink. It took around 5-10 seconds.

If I had to do the same by weight, I would have had to get my scale out, put a bowl on it, zero the scale, slowly pour oatmeal into the bowl until it reached the desired weight, throw that into the pot, put the bowl in my sink, and put the scale away. That would probably run around 15-20 seconds. And if I poured too much into the bowl by accident, it would take a lot longer to correct than briefly shaking a pre-sized 1/2 measuring cup.

Other times a scale is much easier to use too; it just depends on the context.


You would put the pot directly on the scale, surely. No need for an intermediate container.


You'd put a pot of boiling water on a scale? Wouldn't that risk damaging it?


I don't make oatmeal, so I don't know the exact procedure, but if one is bringing the water to a boil before putting in the oats, then the simple solution is just to use your serving bowl — the one you will eat the cooked meal from — instead. I do that with pasta, for example.


The extra 10 seconds are an acceptable cost when it comes to trying to figure out how to add 3/4 of a cup of margarine.


