
I think the point you're missing is that senior engineers at Meta do not just write code; some of them hardly write code at all. They design systems and move complex projects forward. That's closer to a TPM/PM/management skill set, on top of the excellent technical ability to write the code itself. In certain projects, especially in infra, writing code is usually the last part (and often the least complex one) of your average large project driven by senior ICs, and the code is often written by engineers across teams/orgs, not necessarily the senior engineer themselves. So yeah, deep language knowledge, while an excellent skill, is not what defines a senior engineer (at least at Meta, and I presume at similar FAANG companies).


> I think the point you're missing is that senior engineers at Meta do not just write code; some of them hardly write code at all. They design systems and move complex projects forward

I think you're getting onto a different topic. Those people aren't engineers anymore, right? They're more like technical managers, or architects.

> So yeah, deep language knowledge, while an excellent skill, is not what defines a senior engineer (at least at Meta, and I presume at similar FAANG companies).

I agree with this, but it's not what the others are talking about. The topic right now is Python developers. The wider-ranging role you're describing here isn't a person with the title "Senior Python software developer." Meta likely doesn't have that title.

Additionally, if a company works purely within the Python web-development ecosystem, the "architecture" never actually gets too complex. Most of this stuff is trivial too; I'd give it an additional year to learn.

If you're doing stuff with shared memory and ultra-low-latency applications, then it's a completely different scenario, but you wouldn't be working in Python if you had that requirement.


Curious why you think there are no benefits to QUIC?


QUIC is purpose-built to circumvent middleboxes. Enterprise networking benefits greatly from the security and accountability such middleboxes provide. Properly securing a network against traffic designed to circumvent your network security is costlier and more complex.


Some say "security and accountability" others say launch a MITM attack to spy on employees.


Depends on the company, obviously. I don't care if employees do whatever on work computers; I care about protecting my network and my users.

In practice, all this effort to bypass middleboxes does not help my users browse securely. It makes it harder to block ads, it makes it harder to detect phishing sites and other threats, and that's about it.


I mean, that's a trend that's been ongoing in both the HTTP and TLS worlds for a while now, independent of QUIC. Similar complaints arose with the announcement of HTTP/2 being TLS-only, if I recall, and TLS 1.3 would probably already be widely deployed right now if not for some deployed middleboxes causing connection failures in the wild (which I don't entirely blame on them; after all, bugs happen...)


If you simply obey the standard, everything works perfectly well. The only standards-compliant way to build an HTTPS middlebox is to fasten a compliant client and a compliant server together: when a corporate user tries to visit google.com, the middlebox's server side presents them with a certificate for google.com from your corporate CA, while its client side connects to the real google.com. The flow is naturally plaintext inside your middlebox, and you can do whatever you want with it.

If you build it that way, then when TLS 1.3 comes along, nothing breaks: your middlebox's client connects to google.com and says it only knows TLS 1.2, and google.com is OK with that. When the corporate user runs Chrome, Chrome connects to the middlebox's server, which says it only knows TLS 1.2; that's fine with Chrome too. The middlebox continues to work exactly as before, unchanged.
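To make that "two compliant endpoints fastened together" design concrete, here's a minimal sketch in Python. It's illustrative only: the certificate filenames and the hard-coded origin are hypothetical placeholders, and a real middlebox would mint per-site certificates from the corporate CA on the fly. The point is that the user-facing server and the origin-facing client are two independent TLS state machines, so each side negotiates its own protocol version.

    # Minimal sketch of a standards-compliant intercepting proxy:
    # one compliant TLS server facing the corporate user, one
    # compliant TLS client facing the real origin, plaintext between.
    # Filenames and the hard-coded origin are illustrative placeholders.
    import socket
    import ssl
    import threading

    ORIGIN = "google.com"

    def pump(src, dst):
        """Copy plaintext one way until either side closes."""
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)

    def handle(raw_client):
        # User-facing side: terminate TLS with a certificate for the
        # origin signed by the corporate CA (which user machines trust).
        srv_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
        srv_ctx.load_cert_chain("corp-google.pem", "corp-google.key")
        inside = srv_ctx.wrap_socket(raw_client, server_side=True)

        # Origin-facing side: a completely separate, fully validated
        # TLS connection. Version negotiation here is independent of
        # whatever the user's browser negotiated with us above.
        cli_ctx = ssl.create_default_context()
        outside = cli_ctx.wrap_socket(
            socket.create_connection((ORIGIN, 443)),
            server_hostname=ORIGIN)

        # The flow is plaintext inside the proxy: inspect/filter here.
        threading.Thread(target=pump, args=(inside, outside),
                         daemon=True).start()
        pump(outside, inside)

    listener = socket.socket()
    listener.bind(("0.0.0.0", 443))
    listener.listen()
    while True:
        conn, _ = listener.accept()
        threading.Thread(target=handle, args=(conn,), daemon=True).start()

With this split, pinning the user-facing side to TLS 1.2 is one line (srv_ctx.maximum_version = ssl.TLSVersion.TLSv1_2), and each half keeps negotiating correctly no matter what new versions appear on the other side.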

So what happened in the real world? Well, of course it's cheaper to just sidestep the standards. Sure, doing so fatally compromises security, but who cares? So you don't actually have a separate client and server wired together; you wire pieces of the two connections into each other so you can scrimp on hardware overhead and reduce your BOM while still charging full price for the product.

The nice shiny Chrome says: "Hi google.com, I'm just continuing that earlier TLS 1.2 conversation we had, you remember; also, I know about Fly Casual This Is Really TLS 1.3, and here are some random numbers I am mentioning."

The middlebox pretends to be hip to the teen slang: sure, pass this along, we'll pretend we know what "Fly Casual This Is Really TLS 1.3" means, maybe we can ask another parent later. And it sends that on to the real google.com.

Google.com says "Hi there, I remember your previous connection, wink wink, and I know Fly Casual This Is Really TLS 1.3 too," and then everything goes dark, because it's encrypted.

The middlebox figures: well, I guess this really was a previous connection, so I must have already decided whether connecting to Google was OK. No need to worry that it mysteriously went dark before I could make a decision. And it takes itself out of the loop.

Or worse, the middlebox now tries to join in on both conversations, even though it passed along those "Fly Casual This Is Really TLS 1.3" messages without actually knowing TLS 1.3, so nothing works.


There are just multiple opinions on remote work. The simple answer is that there is no right or wrong; it just depends on each individual case.

There are people who would benefit from remote work: they might like living close to the mountains with no commute, and they can still stay very productive while fully remote.

There are also people who don't care about mountains, live close to work, and are more productive working in the office among other people.

Both groups are right in what they value most, and as long as there are no detrimental effects on the business/employer, both seem like good options.


Facebook has been managing distributed teams for a long time. Nothing new for them.


Probably licensing. You can do whatever you want with BSD-licensed code, not so much with GPL.


For internal purposes like these, they're pretty much indistinguishable. Remember that the GPL only compels you to distribute the source to whoever you gave the binaries to, not to everyone else. And distribution inside your own organization doesn't count.


These Netflix boxes get shipped to ISPs, so Netflix may regard that as distribution (legally it's perhaps unclear).


Interesting read, thank you!

Have you considered moving into userland? E.g., DPDK/netmap/etc. plus an extracted BSD stack?


A team from Cambridge wrote an interesting paper for SIGCOMM 2017, "Disk|Crypt|Net: rethinking the stack for high performance video streaming". The main gain they get out of netmap et al. is tighter latencies, which allows them to make more effective use of DDIO (I/O caching in L3) to free up memory bandwidth. However, their results are on synthetic workloads, and the general feeling is that we would not see nearly as much of a benefit in the "real world".


Interesting. Usually, moving networking operations into userland lets your application "own" the NICs and the stack, reducing lock contention in the packet path to virtually zero.

My impression was that profiling userland applications is easier too, but I haven't done any serious kernel profiling so I might be wrong.

The hardest part is, of course, ripping the stack out and keeping it up to date with the mainline kernel afterwards if you need TCP.


There is a port of the FreeBSD network stack called libuinet, so that's not an unsolved problem, but it would need a bit more care and feeding. There are other issues: you need to fetch content off disks, then move it between address spaces, then you need buffer types to pass around, and then, and then... At some point you are reinventing a lot of wheels for the same goal: saturating some hardware limit like CPU, memory bandwidth, bus bandwidth, or drive bandwidth and latency. The FreeBSD kernel works well for all of this and is proven.

Userspace networking is a really big win for packet processing, which doesn't have any of the above concerns. On FreeBSD, with netmap and VALE, you can chain things together in really interesting ways, still using kernel networking where it's advantageous and userland networking where that's advantageous.


> general feeling is that we would not see nearly as much of a benefit in the "real world".

Wouldn't any benefit be interesting? Or is the engineering overhead just too expensive?


Agreed. No knowledge of Python. How am I supposed to know the specifics of x.append() vs. x = y within the same function?
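For anyone who hasn't hit this particular interview question: it's presumably the classic mutation-vs-rebinding gotcha. x.append() mutates the list object both names share, while x = y merely rebinds a local name. A quick sketch:

    def mutate(x):
        x.append(4)   # mutates the shared list object; the caller sees it

    def rebind(x, y):
        x = y         # rebinds the local name x only; caller unaffected

    nums = [1, 2, 3]
    mutate(nums)
    print(nums)       # [1, 2, 3, 4]

    rebind(nums, [9, 9])
    print(nums)       # still [1, 2, 3, 4]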


I think AMT needs to be provisioned on the target computer to be exploitable.


There is also a local, non-root user exploit to provision AMT, described in Intel's announcement.


Yes. To be honest, Intel's announcement was a little dry and lacking in detail.


But to find out whether it is, I need to go into the BIOS, right? Isn't it easier to just try connecting to the port?


I keep reading that if you "disable AMT" in the BIOS (on my Lenovo, anyway), it just resets the settings. A complete WTF, but that's what I've been reading.


Yeah, maybe, but probing the port is less conclusive in the general case. E.g., if there are firewalls or other networking devices in the middle that deny access to that port, it would look disabled when it actually isn't.
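If you do want to probe anyway, AMT conventionally listens on TCP ports 16992-16995. Here's a quick sketch (the target address is a placeholder), with the caveat above baked in: a closed or filtered result proves nothing.

    # Probe the conventional Intel AMT ports. A refusal or timeout is
    # NOT conclusive: a firewall in the path can make an enabled AMT
    # look disabled.
    import socket

    TARGET = "192.168.1.10"  # placeholder: the machine to check

    for port in (16992, 16993, 16994, 16995):
        try:
            with socket.create_connection((TARGET, port), timeout=2):
                print(f"port {port} open: AMT may be provisioned")
        except OSError:
            print(f"port {port} closed or filtered (inconclusive)")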


+1 for "reject" fruit and vegetables. We would go to farmer's markets and ask for "seconds", which they usually have in abundance. It's much cheaper, perfectly good and most of the times more ripe/ready to be eaten now, hence very tasty.


Vendor? Sounds too good to be true.

