No prizes for guessing what it was this time.
In other cases it's probably because your new accounts are breaking the site guidelines. If you're sincere about wanting to use HN as intended, you're welcome to email firstname.lastname@example.org and we'll look into it for you and be happy to help. But could you please not create accounts to break the site guidelines with?
Edmonton has awful winters, but it has a beautiful river valley, decent arts scene, and good local culture. Summers are nice there. I hated much about it when I lived there in my teens and early 20s but enjoy going back to visit now.
(It's nice to see that all North American countries are equally bad at making websites; practically a tradition at this point!)
Not Rust-level capabilities, but pretty good.
And you answered your own question.
If you aim for performance, "pretty good" doesn't cut it.
Rust still looks like Rust and it’s still safe when you have performant code.
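To make that concrete, here's a minimal sketch (mine, made up for illustration, not from any benchmark): idiomatic, safe iterator code that typically compiles down to the same tight loop you'd hand-write in C:

    // Safe, idiomatic Rust: no unsafe, no raw pointers. The iterator
    // chain is compiled away, usually into a tight (often vectorized) loop.
    fn dot(a: &[f32], b: &[f32]) -> f32 {
        a.iter().zip(b).map(|(x, y)| x * y).sum()
    }

    fn main() {
        let a = [1.0f32, 2.0, 3.0];
        let b = [4.0f32, 5.0, 6.0];
        println!("{}", dot(&a, &b)); // prints 32
    }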
There's nothing sad about it. You pick the right tool for the job. Swift is not designed to be a reference tool in performance-sensitive applications, thus you pick the tools that are.
There is no need to shoehorn the wrong tool just because it's popular or fancy. A tool is just a way for technicians to express their skills, and just because someone is skilled at developing front-ends doesn't mean they're competent at developing performant systems-level applications. Specialization matters, not only in tools.
Why? It's a drop-in replacement for Objective-C, which allowed you to dip into C and C++ code in the same code file, even in one function.
Now Swift is faster for most higher-level scenarios a front-end developer deals with, but it's slower than the C-level performance Objective-C allowed.
C++ is an excellent example of the perils of developing a tool that's good at everything, because the cognitive load to do anything with it is simply not manageable.
Picking the right tool for the job is always the solution.
It does in the overwhelming majority of real cases.
But the reason companies as large as Apple care so much about performance is that at their scale a 10% difference can easily mean 100,000 physical servers. So they do go to insane lengths to avoid "pretty good".
"Performance" isn't an absolute, and, going by the fact C++ is apparently acceptable in some "performance" codebases, there's more to it than directly controlling every single RAM allocation and making every single use of memory take maximal advantage of cache. You can't even get that in C without deep knowledge of internal compiler details.
Rust gives no finer control than C does, overall, but it installs some guard rails to make the language less accidentally unsafe. That's proof that "performance" isn't the primary goal with this codebase; if it were, it would be re-written in assembly, and the guard rails be damned.
So Swift isn't immediately out, unless profiling deems it so.
So I think they're "performance-oriented Swift code". I'm not familiar with Swift, so sorry if I'm wrong.
Depending on your choice of compiler, a more legitimate concern with C is maybe the lack of computed goto. See here about the nonzero cost of missing this feature: https://eli.thegreenplace.net/2012/07/12/computed-goto-for-e...
But that's not what people usually mean by zero-cost, right? They mean that, assuming you do use the same algorithm, the abstracted version is just as fast and memory-efficient as the specialized version.
And that is a bar that C++ at least clearly meets, and, by the looks of it, Rust is in practice much closer than C. Other languages have various features that address this need too; it's not that rare. C really is particularly poor here, but even C has macros, and although they're horrible in terms of usability, people nevertheless manage to do some pretty nifty stuff with them.
Still, it's fair to say C is pretty bad at zero-cost abstractions, and that widely used alternatives support them a lot better (even C++ alone is a solid chunk of the market, more than C - https://redmonk.com/sogrady/2020/02/28/language-rankings-1-2...). Implementing a custom quicksort in C is a pain - enough pain that I expect most hand-tuned special-case quicksorts are likely slower than the general versions available for C++, simply because people aren't going to bother doing a good job; it's too much work. And that's just quicksort; the same goes for lots and lots of other things too. Reimplementing everything from scratch is just too much pain - in practice people are likely to not bother, and just pay the costs.
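To make the "same algorithm, abstracted vs. specialized" point concrete, here's a minimal Rust sketch (hypothetical, not from the thread): a generic sort is monomorphized per comparator type, so the closure can be inlined, whereas C's qsort pays an indirect call through a function pointer on every comparison:

    // `sort_unstable_by` is generic over the comparator's type; this
    // closure is its own zero-sized type, so the call is specialized
    // (monomorphized) and the comparison can be inlined. The
    // abstraction itself costs nothing at runtime.
    fn main() {
        let mut v = vec![3.2f64, 1.0, 2.5, 0.7];
        v.sort_unstable_by(|a, b| a.partial_cmp(b).unwrap());
        println!("{:?}", v); // [0.7, 1.0, 2.5, 3.2]
    }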
So while "zero-cost" abstractions are a slightly nebulous goal, clearly C in particular is worse at providing those than all of its low level competitors and wannabe competitors (C++, Rust, D, Nim, etc).
Lots of other languages make it way easier than C to write "fused quicksort". Although it doesn't pay much regard to programming language theory, Jai is another interesting language in this area, made by a guy who's mostly concerned with being able to express the fully specialized/fused version of everything without creating extra work for the programmer.
It supports a few modes of operation, including codegen and preprocessor tricks. It’s clearly maintainable, given its age and continued use.
Your third complaint is that C with preprocessor macros is ugly. That’s subjective.
I think C++ is more elegant than C + macros, but the GTK people clearly disagree.
I see companies adopting Rust internally. Instead of hiring "Rust developers" they just add Rust to the stack and let their devs learn it. For example, Cloudflare writes most new code in Rust, but Rust is barely mentioned in the job postings.
I think you might be over-stating this. We do write a bunch of Rust though!
That depends on where you live (if you are looking for a local job, of course).
But sure, Rust is a pretty new language with a relatively high entry threshold.
Is it worth learning? Yes, if you are planning to do systems/relatively low-level stuff in the future.
However, once I signed away my rights, the experience at Google and Apple was quite different. At Apple, I waited months, with multiple follow-up pings, to get approval from a lawyer for a one-line trivial patch for an OSS project. I had to give an argument that my contribution provided a direct business benefit to Apple, and generating goodwill in the community was explicitly listed as a reason that is not valid. I couldn't contribute to any Google-run OSS projects either (some issue with the CLA, not sure of the blame, TBH).
In contrast, at Google you are encouraged to contribute, don't need any approval for normal OSS projects, and I have easily gotten approval to release two personal projects.
It gets better: during the Windows 8 launch and for the life of Windows Phone 8/10 we were actively encouraged to build our own apps for their respective App Stores (cynically this was in part to boost app count numbers, but also to make us motivated to dogfood the platform, provide feedback, etc). IIRC we were expressly permitted to use company hardware too - just not during core business hours. That said, I openly worked on my never-released clone of “Lose Your Marbles” during my downtime in our teamroom office during the day - right in front of our team’s Partner-level director, who didn’t bat an eyelid...
Apple seems to have inherited Microsoft's previous institutional paranoia about open-source software: the legal dept is concerned that if an employee casually browses GitHub and is inspired by some GPLv3 code they could rewrite themselves for use in a proprietary Apple product, then the lawyers consider that Apple product possibly tainted (as rewriting code or being "inspired" by other code still legally counts as a derivative work, even if absolutely nothing was copied and pasted).
Microsoft lost that attitude around the same time Satya became CEO - I was at Microsoft when the transition happened and it was a refreshing change of policy that really came top-down. Of course we still had annual SBC training to remind us to always verify the licenses of any third-party or open-source components we use (and GPL is generally a big no-no, of course, without special dispensation from LCA) but the idea that a product's legal status could be tainted by casual browsing went by the wayside. I think a lot of the change came from a wave of sanity at the C-level when they realised the company was not actually being actively destroyed by meritorious - and meritless - open-source copyright violation lawsuits, and the few incidents that did occur (like the Windows ISO-to-USB tool debacle) were overblown with minimal impact to the company.
But Apple's paranoia isn't just about legal matters; it's also out of concern that if Apple-watchers know who works for Apple and monitor their GitHub accounts, then they'd be able to see which areas of technology interest those people, which may in turn drop hints about what Apple is working on (e.g. if we suddenly see a bunch of Apple folks starring and forking LiDAR libraries for private use, that hints Apple is working on a LiDAR product... which they announced this week that they are: https://www.theverge.com/2020/3/18/21185959/ipad-pro-lidar-s... )
Now, as someone who believes they'd otherwise make a great contribution to Apple as a valuable SE asset (yes, I'm being self-aggrandizing), this policy of theirs is a huge deal-killer for me. Not just because I own some popular GitHub repos with GPL licenses, but because I also have popular (and profitable) side-projects that only use a few hours of my time each month - and Apple is just as paranoid about those as it is about open-source projects as vectors for leaking company movements, even unintentionally.
Heh - I remember shortly after I did NEO at MSFT and filled out the "prior inventions" paperwork for my side-projects, my team lead admonished me for wasting his time looking at my dumb Google AdWords-supported websites - though he did agree the definition of "prior invention" was too vague.
(Footnote: if you're a recruiter or hiring-manager at Apple and you're reading this, and you agree that my side activities won't be a problem, please reply to this comment - I'd love to work on WebKit or Darwin at Apple :D!)
To be clear, Safari is closed source, while WebKit is worked on mostly in the open. XNU is semi-frequently released as source dumps.
Like an engineering blog and open-source projects? I would be surprised if a company can restrict employees' hobbies outside of working time.
This sounds like it’s doing packet processing in software, which doesn’t seem scalable, especially for the traffic volume I would assume Apple handles. Anyone have a clue what kind of traffic volume and bandwidth we’re looking at here?
Granted, I might be overestimating the requirements given the industry I work in (service provider routing).
Not sure if this makes your point compelling. At Google's scale, if you were able to reduce global CPU usage by 0.1%, that would probably be a massive win.
I’m sure Google and others have evaluated this, but it’s just kind of surprising that they opt to do per-packet processing in software.
Disclaimer: I write router software at Cisco.
Regarding your points: I am not sure I completely follow.
Firstly, as far as I know, Google does not make its own switching or routing ASICs.
Secondly, virtually all switching and routing ASICs are highly programmable. So if you need a custom protocol, you can implement it using the vendor’s microcode (e.g., P4). In other words, you are not limited to the protocols that the vendor has implemented.
Given the above, I don’t see what kind of technical requirements Google has that would disqualify the use of routing ASICs.
Giving credit where it is due, Intel's role in the development of the latest PCIe 4.0 and upcoming PCIe 5.0 specs, and of course the latest CPUs, makes it less and less appealing to consider putting up with custom h/w + s/w, IMHO.
With DPDK, for example, doing 10G/core with 64-byte packet sizes is considered pretty standard these days. And if you don't want to roll your own, you always have companies like 6WIND to help you (and of course they share the code as well).
Facebook/Google etc. have, by now, probably at least a decade of experience running their own SDN-enabled networks. Facebook even has FBOSS to commoditize switches from white-label manufacturers...
Even if you utilize every last cycle of CPU, the price/bps and energy/bit will still be way higher than almost any switching or routing ASIC on the market.
But from just the bandwidth and latency perspective, a custom ASIC makes more sense to me.
From the article:
> Based on a custom implementation of IPsec
Seems pretty clear.
What could possibly go wrong.
Sure, poor implementations can lead to problems, but the same holds true for just about anything.
As for how these offices exist; sometimes the team lead is senior enough to move the project there because they want it to be there, and other times a company is acquired that has offices there and there was no reason to move them. (I think Google got their NYC office by buying DoubleClick, for example.)
It's cool, technically, but maybe a little off-topic.
What is stopping people from downloading a D compiler and writing software in it right now?
People forget that there's a big world out there. We're all doing our thing.
All we seem to be doing is make more layers of gray goo and leaky abstractions that we then plumb together to attempt to make reliable systems. It still doesn't feel right to me.
You must be living in some alternative universe. What I typically come across, even in fairly young companies, is tech debt ridden, buggy, endless cakes of layer upon layer of mostly cruft with a spiffy looking front thrown over the whole. Pretty icing on a crappy cake. As soon as you lift that top layer the illusion crumbles. I'd be happy to bet that most companies that run services out there would not survive a hostile audit of their code and systems.
Considering copyleft is arguably more restrictive than not, it seems incredibly unfair, and untrue, to say it's monetized.
Whether it was previously seems irrelevant now.
What's going on, HN?
Fifteen years ago Apple underwent a similar endeavor to convert all of its C++ code to Objective-C or even C. Apple isn't a leader in this decision-space; they're just a victim of this meme.
Please ignore the naysayers. HN is diverse in degree and area of knowledge, and some people even think Electron is the only cross-platform GUI framework.
Had D gone with deterministic ref counting like Swift and Vala, it would've been much more popular, I think. Such memory management keeps the syntax cute :) while not sacrificing determinism.
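For anyone unfamiliar with the idea, deterministic ref counting looks like this with Rust's Rc (a minimal sketch of my own; the point is the same for Swift's ARC): the count changes at exact, predictable points, and the value is freed the moment the last handle goes away:

    use std::rc::Rc;

    fn main() {
        let a = Rc::new(String::from("shared"));
        {
            let b = Rc::clone(&a); // count goes 1 -> 2, right here
            println!("{} {}", b, Rc::strong_count(&a)); // shared 2
        } // `b` dropped here: count is deterministically back to 1
        println!("{}", Rc::strong_count(&a)); // 1
    } // `a` dropped here; the String is freed at this exact point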
D doesn't change the syntax for O/B.
While I appreciate the kind words, I am not looking for compassion. D must stand entirely on its own merits. D has already succeeded quite spectacularly:
1. we have a wonderful D community which has been utterly indispensable
2. D is part of the GNU Compiler Collection
3. D has been accepted into the HOPL conference, which is quite an honor
A couple things we are particularly proud of:
4. D has been used by companies to make a LOT of money. They've told me that D is their secret weapon. :-)
5. contributing to the D community has enabled many people to advance into well paying and high status careers
6. D innovations regularly find their way into other languages
TDPL sits next to K&R on my shelf.
The response clearly is reasonable, but no reasonable person would say that to someone's face. So I don't think Marta _moreno really knew about Walter.
You will find a handful of people who either relocated during the last year or were hired there.
This includes people like Jeff Davey, Steven Bromling, Tony Gong, Derek Hausauer and Lucas Wagner who are brilliant.
I was just down around the LA area, and I'll gladly take -40 winters for the rest of my life rather than deal with that traffic and air pollution.
You'll be shocked.
Swift still hasn't really escaped its niche of being a language for building clients for the Apple ecosystem, with trade-offs chosen for developers who are just building those clients (rather than, say, implementing a safer OpenSSL).
For example, even Rust's web framework story is more mature and benchmark competitive than Swift's. Go to https://www.techempower.com/benchmarks/ and filter for just Rust and Swift.
I say all this sadly, as someone who builds iOS apps. I would find it very weird if someone were using Swift, of all languages, for infrastructure projects.
I wouldn’t build network software in Swift.
This rewrite isn’t only related to the current situation on Linux, but also to Swift’s current performance characteristics. Lack of full Linux support is just one more hindrance before you can even consider Swift for this type of work.
Why have a half-way solution of porting to modern C++ when you can just write it in Rust?
If I happened to live in the area and was looking for a job I would definitely consider applying.
Edit: "only" 50x according to TIOBE.
Didn't mind the experience at all, except for a few niggles which actually have nothing to do with the language itself and more with some of the community attitudes. Some of the evangelists are... less optimal.
IMO, Rust shares the same inner simplicity that C has, but combined with the ability to build and use bulletproof higher-level abstractions. The same way I roughly knew what machine instructions my C code produced, I know what machine instructions my Rust code produces.
It is the equivalent of C++, not C. You can write code for which you cannot guess the machine code as soon as you use any of the abstractions, just like in C++.
And, unlike most other languages I've seen, you can't just sit down and write a "bad" Rust program and then refine your abilities; the compiler won't let you do the "bad" things, so you have to get everything right from the word go.
If you are a competent C programmer - the kind that "doesn't make" memory errors - you have to be manually keeping track of lifetimes and borrows.
Rust takes that mental effort and offloads it to the compiler.
Lifetimes and the borrow checker will likely be jarring for people coming from languages where you don't need to worry about memory allocation and ownership, but if you are coming from C or C++, you will likely find the borrow checker a relief (I did).
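As a concrete example of the compiler refusing a "bad" program (a deliberate sketch; it is rejected at compile time, which is the point), here's a dangling reference that a C compiler would happily accept:

    // rustc rejects this outright; the equivalent C compiles and dangles.
    fn main() {
        let r;
        {
            let x = 5;
            r = &x; // error[E0597]: `x` does not live long enough
        }
        println!("{}", r); // `r` would dangle here
    }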
> the compiler won't let you do the "bad" things, so you have to get everything right from the word go.
And it's wonderful! Finding and fixing problems at compile time is the fastest and cheapest way to do it, and is also a productive way to refine your abilities.
Which is why I think it would be easier to learn: you get immediate feedback.
I expect I'll run into more friction once I get to any point where I want to make more dynamically interlinked data structures, though that might also push my designs into directions that use fewer pointers and more indexes or other indirect references to start with (I'm not sure of that yet)—and I really like the idea that if I can pare some of it down to a working “core” data structure semantic that involves pointer juggling, I can put carefully constructed hard-packed blocks of the pointer juggling in their own unsafe zone and then not be able to accidentally break the invariants outside of that.
Which, again, is almost exactly the same thing I'd tend to do in C with inline wrapper functions and visibility stuff, and making sure I compute all the intermediary results in one phase and putting all the hard mutations in a second, unfailable (except possibly for hard crashes due to null pointers or such) phase so that if something goes wrong the data doesn't wind up crashing down to a persistent broken state from mid-flight!
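For what it's worth, the "more indexes, fewer pointers" direction mentioned above tends to look something like this in Rust (a made-up sketch; all names hypothetical):

    // Links are stored as indices into a backing Vec instead of
    // pointers, which keeps the whole link graph in safe code.
    struct Node {
        value: i32,
        next: Option<usize>, // index into `nodes`, not a pointer
    }

    struct List {
        nodes: Vec<Node>,
        head: Option<usize>,
    }

    impl List {
        fn push_front(&mut self, value: i32) {
            let idx = self.nodes.len();
            self.nodes.push(Node { value, next: self.head });
            self.head = Some(idx);
        }
    }

    fn main() {
        let mut list = List { nodes: Vec::new(), head: None };
        list.push_front(2);
        list.push_front(1);
        let mut cur = list.head;
        while let Some(i) = cur { // walk the list by following indices
            print!("{} ", list.nodes[i].value);
            cur = list.nodes[i].next;
        }
        println!(); // prints: 1 2
    }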
Heck, I've even done the newtype pattern (single-element structs for safe reinterpretation/bounding of an underlying type) in C before!
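In Rust the newtype pattern is equally cheap; a minimal sketch (made-up names), where the wrapper compiles away completely:

    // Single-field tuple structs give the same underlying f64 two
    // distinct, non-interchangeable identities at zero runtime cost.
    struct Meters(f64);
    struct Feet(f64);

    fn to_feet(m: Meters) -> Feet {
        Feet(m.0 * 3.28084)
    }

    fn main() {
        let h = Meters(100.0);
        // to_feet(Feet(100.0)); // would be a compile-time type error
        println!("{}", to_feet(h).0); // 328.084
    }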
I've described the way I write C as “as though I were hand-cranking the front end of a strict, low-level Haskell or Caml compiler in my head”. Rust is the closest I've seen thus far to moving a big chunk of that part of me into the digital part of the compiler. So I'm guessing my taste wasn't exactly uncommon.
In a sub-thread comparing language features of two languages to each other to compare their overall complexity, I think it pays to be precise.
I would imagine that many people are going to apply.
I would have loved to write Rust at Apple, if I ever got the chance to. And Canada is already on my list of countries I'd like to live in anyway.
Here's a better one: https://www.reddit.com/r/rust/comments/27jvdt/internet_archa...
In other cases, they look at your academic and work history, and may ask you to take FE and/or PE exams and/or an ethics exam and may ask you to complete specific engineering courses. It's actually possible to obtain accreditation through work experience alone (10 years IIRC).
You can even take the FE and PE exams in Canada for this purpose (and for Canadian grads that want to work in the US). Note that if you get an engineering degree in Canada, you don't have to take the FE exam or equivalent to start as an engineer in training, because the engineering schools themselves are accredited with very similar curriculum.
Personal anecdote: I have a Canadian engineering degree but I personally had no problem using work experience abroad as part of my four years experience towards my P. Eng.
I'd encourage you not to discount professional bodies as irrelevant, or incapable of becoming more relevant; our industry could benefit from them in many ways - perhaps most frequently on-topic on HN being the ethical and whistle-blowing aspects. There's also plenty of professional development and networking, and that only improves as more people get involved from different (or rather, one's own specific) areas.
One Canadian university's Comp Sci dept. started offering a Software Engineering program at one point, and it ended up in a lawsuit, the outcome of which is... complicated.
Canada is, I feel, very... a lot of red tape.
The traditional term "engineer" isn't even a very good description of software dev anyway.
Software dev people should just make a new word up and abandon "Engineer".
 - https://www.canadianbusiness.com/lists-and-rankings/best-job...
For engineering disciplines in the railroad industry such as structural, mechanical and electrical, a P. Eng. is indeed required.
Sure, you have to have at minimum a bachelor's from an accredited Canadian university, pass an ethics exam, and have supervised experience (or prove that you obtained the equivalent) to join in the first place, but if you don't pay the yearly fee you can only call yourself a "holder of an engineering degree".
I studied a degree called "Software Engineering", which was 4 years with first-class honors. It is accredited by, and I'm a member of, the Australian Institute of Engineers.
The only way for me to get accredited in Canada is 2 years of relevant work experience with an accredited Engineer in my field.
There are no Software Engineers in Canada.
The entire Google Canada workforce just had our titles changed from 'software engineer' to 'software developer', and we were asked to update our LinkedIn profiles to reflect this as well.
Protected titles seem to have differing meanings in many places. Doctor is a pretty universally understood one; yet we throw around "Architect" pretty willy nilly and it's a protected title in many more countries than Canada.
Yet somehow Canada gets special consideration and other countries do not.
For context: I work for a French company, and we have many Architects despite that being a protected term. But we do not have Engineers in Canada (even if they have MSc CompSci engineering degrees).
You know, not that long ago in Alberta, you could use "Software Engineer" without a P.Eng license.
Then the engineering society (APEGA) sued someone (Raymond Merhej) who did that. But Raymond won in the courts.
So APEGA appealed. But Raymond won the appeal in the courts too.
So APEGA lobbied the Alberta Government to change the laws.
And APEGA won.
An example from Quebec: SNC-Lavalin affair (https://en.wikipedia.org/wiki/SNC-Lavalin_affair).
And from Alberta: "Kenney’s United Conservatives were elected last April on a promise to focus on oil and gas and bring jobs back to Alberta by reducing the corporate income tax rate and red tape" (Alberta government files red ink budget with focus on oil and gas, https://canada.constructconnect.com/joc/news/government/2020... )
Back in the 90s I ended up having to come out here to Ontario to get work in software development, not just because there wasn't really any work in Edmonton but because companies there weren't even going to look at someone without a CS or engineering degree. Every once in a while I muse about moving back there, and I take a look around at job postings and think that even with my 8 years as a SWE at Google (and a 20-year career in dev generally) I might have a hard time landing a job there; lots of postings heavily emphasize the academic angle and are obviously trying to pull people straight out of university.
It's a more conservative business culture in some ways.
This is absolutely false. For example, please review the "Software Engineering Experience Review Guideline" for Saskatchewan. This was approved back in 2014.
The vast majority of us (definitely > 90% of my class) have moved to the US after graduation, where we can pretty much use whatever title we want (e.g., my official title is software engineer at my company), but from what I know of the few people left in Canada, their roles are officially "programmer" or "developer". I've actually heard stories of P.Engs messaging people in my program on LinkedIn to tell them to change their job titles to not include the word engineer. I'm not really sure what the repercussions are.
Just for completeness: it’s OK to use the title “B. Eng.”, which means you have a Bachelor’s in engineering, without implying anything about being a professional engineer.
Are you saying it's a racist town, or are you yourself objecting to its current demographics?
That last part is key. Nothing like seeing code in a current language written like it's mainframe code. I've seen some really poorly ported code (and written plenty). The learning curve is pretty steep but it's great work if you enjoy it.
For me, my favorite part of working in software is the learning: new tools, languages, platforms, systems, not to mention domain knowledge. In my career I've worked in government, banking, aerospace, e-learning, e-commerce, and many things in between. What other career offers such a great opportunity for continuous learning?
That doesn't say the job is ONLY converting code. It says that code was converted, and new code will be in Rust.