Rust has two types called Rc and Arc that have this behavior. When you clone an Rc pointer, the internal reference count increments, and as each clone disappears, the reference count decrements. (Arc works like Rc, but it uses atomic increment and decrement operations, so you can share it across threads.)
... but most of the time you don't need them, because the single-ownership (plus multiple read-only borrowers) model built into the type system already does what you need. There is no 'naked delete' in Rust; the compiler implicitly inserts deallocations at the point where the owner of a piece of memory lets it go out of scope.
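A minimal sketch of the counting behavior described above, using the standard library's `Rc::strong_count` to observe the count as clones are created and dropped:

```rust
use std::rc::Rc;

fn main() {
    let a = Rc::new(String::from("shared"));
    assert_eq!(Rc::strong_count(&a), 1);

    let b = Rc::clone(&a); // cloning the pointer increments the count
    assert_eq!(Rc::strong_count(&a), 2);

    drop(b); // dropping a clone decrements it
    assert_eq!(Rc::strong_count(&a), 1);

    println!("count = {}", Rc::strong_count(&a));
    // when `a` goes out of scope here, the count reaches zero and the
    // String is freed -- no explicit delete anywhere in the program
}
```

The same code works with `std::sync::Arc` substituted for `Rc`, with the increments and decrements performed atomically.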
That was probably just a random student who learned some fun stuff in Security class and slept through the Ethics lesson. I can't speak for UMich, but security research at my university (NC State) has a very strict "don't attack civilians" policy.
> hosting a .com commercial site
First off, .com sites are not necessarily commercial. Second, this isn't a commercial site, it's an informational page about a recently discovered TLS vulnerability.
In the first case I read you as saying it's OK to commit a crime against a civilian in the United States as long as [the person didn't mean to]. In the second case, I read you as saying that since not all .COM domains are used for commercial purposes, and since this one seems to be information-only at the moment, our tax dollars (which help universities across the United States run) can be used to fund whatever .COM sites students feel inclined to register, for whatever reason they feel is justified.
Funnily, this is the reason I adore Slim so much. In the PHP world, it gives me exactly what I require in a framework, and nothing else. Slapping it on top of a well-defined class hierarchy is a simple exercise. I like it so much that I'm porting it to Hack/HHVM!
It seems to me that the author's system was in an incredibly unstable state to begin with. He's running a Debian prerelease with an unsupported filesystem, that had gone without updates for months at a time. I have a hard time seeing how changing the init system would not cause problems in a setup like that.
Agreed. I run Debian Testing and was able to upgrade ~5 machines without a hitch. Yes, I do have some odd behavior with the brightness key (but only when a second display is plugged in...), but I'm running Testing so I'm happy to trade an occasional annoyance for more recent packages.
The biggest problem I've had with systemd (I now have about 200 machines running it/Jessie) is that I tend to blame everything on systemd... DHCP problems on boot! Must be systemd! (It wasn't...)
Yup, let's face it, Red Hat has bet the farm on systemd and the facilities/programs that go with it, so it will have to work, and work well, for 10+ years minimum.
RHEL7 and the clones have systemd v208. Debian Testing just had v215 arrive. Freeze is quite soon, so it looks like 215 will be the systemd version that ships, so the packagers will have to backport security fixes to it for the life of Jessie (say 2 years or so). Just wondering how much work that will be if systemd continues to iterate at the speed it is doing...
PS: I'm idly wondering why so many 'machines' (desktop clients? workstations?) are running an unstable Debian...
It's worth mentioning that Debian Testing is not called 'stable' because, in their own words, there are occasional significant breakages. It's also worth noting that right now Debian Testing is being prepped for a freeze to become Debian Stable in less than a month, and it's absolutely chaotic as everyone applies patches all over the place to get them in before the freeze.
After the freeze there'll be two months to sort out critical bugs like the ones in the article, but this possibly wasn't the best time to do a big upgrade of an edge-case system. Well, unless you want to find and report bugs :)
> it's absolutely chaotic as everyone is applying patches all over the place to get them in before the freeze.
No, it's not. That said, if you want to run Debian testing, do yourself a favour and also install apt-listbugs -- it hooks into apt and alerts you if the update you're installing has any bugs filed against it.
I recently moved my laptop from Debian/stable w/backports to Jessie (testing) -- and it's been fine (had to do a reinstall anyway due to shifting around partitions, because I needed more space for windows so I could create an installer for windows 8.1 -- didn't have enough free space to ... don't get me started).
I don't understand why people would choose a non rolling-release distro (e.g. Debian) over a rolling-release distro (e.g. Arch), because the former comes with the implicit guarantee that every few months you have to update EVERYTHING all at once. Seems like a needless risk and pain.
Genuinely curious here. There must be an upside to this model, can someone tell me what it is?
I switched from a rolling release distribution (Chakra) to Debian because I got tired of the rolling release system constantly breaking things. If you fall too far behind on updates in Chakra, the maintainers basically tell you you're screwed. I finally complained about it in their forums and was politely told to go away.
My computer is just a tool. I just want it to work so I can get things done. Debian has been good about that in the past and enjoys the support of a very large, dedicated, active developer community.
Maybe Arch is better than Chakra at managing their updates (although Chakra was basically a modified Arch distribution), but my experience with Chakra left a really bad taste in my mouth for the rolling release approach.
FWIW, Debian's "update EVERYTHING" every few months never broke things as badly for me as Chakra's frequent smaller updates did.
Yeah, Debian stable is basically what I'm on now. (Sort of. It's complicated.)
If I can be unabashedly honest for a moment, I think the problem with Linux advocacy isn't the software any more, it's the advocates. The advocacy seems to be coming from a lot of people that are always after the latest, newest, greatest thing, people who are actually uncomfortable when features aren't changing all the time. They tend to be very vocal about recommending anything that comes with frequent updates, whether it's browsers or Linux distributions.
But that's not what the larger untapped home market wants. They just want things that work, and they want things not to change on them all the time. They only have enough space in their daily lives to learn new features every once in a while; any more than that, and they get frustrated. I hear this from our customers all the time, and as I get busier, I'm really starting to understand their point of view better. (As an example, we've recently been seeing more of our customers switching back to Internet Explorer on Windows 8 systems.)
So as far as signposting goes, I think boring, stable, slow-moving distributions like Debian stable should be the default recommendation, and then Arch or similar recommended as the really cool cutting edge thing for hobbyists and early adopters. That is, Linux by default should be marketed as stable and safe and not changing all the time, with the option of making it a bit more fun.
We almost opted to recommend Linux to a bunch of our customers who were moving off of Windows XP this Summer, but the feedback wasn't great on a couple of experiments and finally we decided we were too small to really afford the support costs. It was a big missed opportunity though and I'm going to take another look at doing it next year.
I run a rolling distro (Debian Testing) on personal-use machines (laptop; work development desktop; personal server).
I run fixed release distros on production servers/devices. It's important to be able to install a new package on a machine and not have a hundred dependencies need upgrade, break or conflict. And lots of packages are available in backports when you really do need a newer kernel on Wheezy (https://packages.debian.org/wheezy-backports/kernel-image-3....).
Not when you have a few hundred servers. Then, you want consistency. Consistency between servers, between datacenters and between pre-production and production environments.
Now, you can get there with your own repos and a rolling distro. Or you can accept a test/upgrade cycle for a fixed release distro every half year or every year. I personally think the latter is less error prone.
I am unfamiliar with Manjaro and Chakra. My experience with Arch is limited to my Cubox-i. It's a nice distro, with an excellent community. I liken it to Gentoo in its glory days.
Having defined my limitation in the observation, I'd just like to point out that, when it comes to distribution stability, for large server deployments, there is safety in numbers. Should a problem occur in a "stable" package, the odds that you are the first one to find the error are smaller with popular server distros (RHEL, Centos, Debian) than with less popular distributions. It's not a statement regarding the intrinsic quality of the distribution. It is a statement regarding the overall quality of the distribution + installed base.
All in all, for a distribution to dislodge entrenched players, for this use case, it will have to be an order of magnitude more stable.
Using a fixed release only requires that you apply the update once every few years, so you can plan for it. You can do it at a quiet time when you are available to fix any problems. With a rolling release you have no idea if something is going to break at an inconvenient time.
Debian Testing branch is always rolling except for a couple weeks/months before a freeze for a release.
I suppose if you really needed to keep rolling you could temporarily switch to unstable.
"Seems like a needless risk and pain."
And the purpose of the freeze is to iron all that out so it doesn't happen.
(Edited to add: I'm sad seeing OP get downvoted. His post history shows he's an Arch guy and likely genuinely doesn't understand release tagging. As a Debian user since '97, I'm not surprised that there are both people who don't know the peculiar arcana of release tagging and people who are experts at it, so his confusion adds a little value to the conversation. A down arrow should mean the comment decreased the net worth of the conversation, not an assessment that OP got a technical test question wrong; people are learning things from his mistake. And I had to edit this about ten times to phrase it correctly.)
I don't know much about Debian. I may have slightly mischaracterized things because I don't understand. That's why I asked.
I am getting the impression that Debian Testing isn't really used in the sense of, "Be a nice volunteer and run this thing that has problems so we can fix them," which is what (to me) is implied by the name "testing." Rather, it seems to be "here is a (mostly) rolling release version of Debian, if that's what you want." In other words, it almost seems like "Debian Rolling" might be a more apt name, in a sense.
The thing is, as a user of Arch Linux (without a ton of other experience), I just don't have problems with a rolling release. For me, things don't break, and it just works. So it feels to me like Debian's whole release philosophy is based around the idea that things have to break all the time and be fixed carefully, and that just doesn't sync up with my experience. So I'm trying to figure out what is missing from my worldview.
Thanks for defending me. I think there might be somebody (maybe multiple people) who doesn't like me and just downvotes all my comments, but I don't have any hard evidence.
I think (could be wrong) that Debian Sid is the nearest to your idea of Debian Rolling. Debian Testing is a testbed for the next release, especially when the freeze happens (and just before freeze when people are trying to get patches in).
You generally want to test changes to your environment before rolling them out, and you generally want to keep disruptions in production as minimal as possible. That's why. It's a need that makes Red Hat alone a billion dollar company.
If you're going to wait a few months to update, you are much better off on an actual non-rolling release distro than Arch. Arch frequently does not test upgrades for packages more than several versions out of date, whereas distros with proper releases will test upgrading from the previous stable version.
Debian testing, however, isn't a proper released distro -- it's sort of a mishmash between a perpetual beta and a rolling release. Debian stable has proper releases, and Ubuntu was started largely as a result of people who wanted more frequent Debian releases.
To be clear, the whole idea with Debian stable is that you get security patches all the time, but no new (or deprecated) features/APIs. So you update every night, but you upgrade only when a new release comes out.
Compared to that testing does both: typically similar frequency of security-related patches (but not guaranteed!) as stable -- and also migrations of new packages from unstable as soon as they "settle down" (and in "reasonable" sets, so that dependencies work).
So, say you want a backported fix for the Shellshock bug in bash 4.2 without upgrading to bash 4.3, which could break something that depends on 4.2 behaviour (something other than an exploit for Shellshock, that is).
(Now, bash is pretty stable, so it may not be the best example -- but the point remains.)
If you're running testing, in addition to apt-listbugs, you want to have a look at "aptitude safe-upgrade/upgrade" vs "aptitude dist-upgrade" (or apt-get upgrade vs dist-upgrade). A dist-upgrade can be a little more invasive, and typically warrants more vigilance than a mere "safe-upgrade". I don't think I can remember a "safe-upgrade" ever breaking anything in my ~14 years of using Debian. It's pretty safe to script and run automatically, unless you have very strict policies on uptime/predictability.
I run Debian sid, and I upgrade almost every day, with a few exceptions (like "during or immediately prior to travel"). Doing so lets me catch issues before others hit them, file bug reports, obtain the latest fixes and improvements, and participate in the community that shapes future architectural decisions for the distribution.
> implicit guarantee of "you have to change EVERYTHING all at once every few months"
This is a strange conclusion to draw. What makes you think that debian/stable requires updating everything all at once every few months? To be honest this does not even sound like a fair characterization of running unstable.
Funny. I upgraded my ZFS laptop to Utopic with systemd, and everything is more or less fine. There are a few residual issues - I need to install the ZFS targets since zfs-mount doesn't quite have the right mount order (but that's been a problem since well before systemd; hence using zfs-mountall).
But otherwise everything else basically just worked.
Oddly enough, back when the cylindrical Mac Pro was first announced, a coworker and I discussed how exactly one would rack them. We came up with something very similar, but slightly asymmetric, and designed to hook into a standard 19" rack.
(This was prompted by the question of what Apple's Web site runs on if they don't make servers any more.)
Flashing reds are timed entirely based on how long it takes an "average" person to cross. As a relatively tall young adult, I walk much faster than the "average," so I can frequently make it across by walking even if it's been flashing red for a few seconds. Furthermore, the "Don't Walk" sign becoming solid is timed to correspond to the yellow lights appearing for the parallel traffic, so the opposing traffic won't start moving until a few seconds after the "Don't Walk" signs appear.
(Disclaimer: This is based on Raleigh, NC's traffic lights. I don't know what level this is standardized at.)
My lightning talk - "Traversing the Montreal Métro with Python" - was accepted, and I'm super-excited!
But also I'm super-excited about the rest of the schedule. There are 36 talks that I'm really interested in seeing, and I don't think that's even physically possible. So I guess I have to decide which ones I'll have to settle for watching on pyvideo.org.
The "moderators on StackExchange" shot was good, but I really take offense to the "PyCon 2013" one. The reason stuff went wrong at that conference was because of the actions of one attendee, who specifically did not go through the process the conference organizers set up for dealing with reports of harassment.