Python at Netflix (medium.com)
440 points by luord 48 days ago | 237 comments



> "Python is the industry standard for all of the major applications we use to create Animated and VFX content, so it goes without saying that we are using it very heavily..."

Interesting... very curious to see Netflix using tools from the VFX industry, Shotgun & Nuke to name a few. I wish they would expand more on this.


I used to work for a major vfx house that did a lot of work on Netflix original shows

there will be a certain amount of sharing needed between vendors, and so Netflix will need to have some vfx expertise in-house to coordinate that (similar to how a film will have a vfx supervisor that works for the film production company, who liaises with supervisors at each vendor company)

but it would be very sensible for them to have an in-house asset library (which may include work written in python, not just images or textures) so that if they want to switch vfx vendor between seasons of a show it's easy to do so

in general, python rules in the vfx world. there's some c++ too. Sadly, because it's been this way for so long, many houses are still stuck in a python 2 world with huge legacy codebases


Netflix produces content too


A lot of it, which needs compositing, post-production, encoding, etc. Python is great for pipeline code to script the compiled libraries that do the number crunching.

Some of it is licensed/farmed out to places like DreamWorks however (new Voltron/She-ra), which is mostly a Java/Spring shop, to my knowledge.


Do you know what the Netflix culture is regarding remote work?

I've been looking for a job for the first time in 10 years of Python, and Netflix is one of the rare big companies that I still have respect for.

https://jobs.netflix.com/ has several offers I would be a good fit for, but I'm not ready to relocate to the US.


We do have an office in Paris: https://jobs.netflix.com/locations/paris-france

Netflix is generally opposed to remote work, but some teams may make case by case exceptions (especially in Open Connect, the organization which manages the CDN). I'd encourage you to speak to a recruiter.


How come the Open Connect team is more receptive to remote work? Harder to find people to work on FreeBSD and low level C code?


They're also working with a ton of remote and disparate organizations (ISPs), so locality is probably not as important.


Maybe because their hardware architecture is distributed.


Thank you. Will do.


As someone who has worked on fully remote game dev volunteer projects for nearly 10 years, it's unbelievable how much the cultural inertia of physical presence has prevented companies from making the rational move of eliminating all possible office space in favor of remote work and skype meetings. So much money wasted on rent, security, and upkeep and so much time (and even lives) lost to commuting when there isn't a single compelling reason programmers need to cohabitate a physical office to get their job done.


Communication between team members is a huge issue in remote work.

It works if everyone is on site. It also works if everyone is remote.

But when some are at the office and some aren't, communication and information sharing becomes a lot harder. Mostly it's a process and tool issue, but humans would still rather just turn around and ask the team than spend a minute writing their issue on Slack/whatever.


It's true. I worked at a company that was 100% remote when it started and kept the remote culture as it grew from 4 to 25 people, but one city in particular ended up having enough people that they decided to rent space in a big shared office, coworking-style. Ultimately, people who worked in that space ended up being more connected to what's going on, due to physical proximity. You wander past a hallway conversation and you end up joining, or at least knowing about it. You go to lunch with your coworkers and happen to get a little work talk in. You ask questions more easily from someone who's right across the desk from you because you can read social cues about how "interruptible" they are. Jokes come across better in person than via messaging. There are lots of subtle ways in which in-person interaction is just inherently different than remote, and the office people end up tighter.


It's not even remote vs non-remote. My employer had many medium sized offices of 100-200 people, and we merged with a company that had one single 5000+ person campus. That was a major culture clash for sure in terms of documentation and processes


doist?


But it's not actually hard, managers just aren't trained on how to deal with the split. Leadership can demonstrate how to manage both, and if there are still problems, you can just mix up the office seating.

Even in remote-heavy orgs, communication issues remain if the culture or process is shit. If you have 10 different Jira sites, 4 forms of communication, and documents in 15 different systems, it's going to suck no matter where people work.


Simplistic answer but it is that simple: just don't do that.


I like remote work to an extent, but it's not that simple, and whether programmers "need" something or what "can" happen is really not the best point to argue, because for example you don't "need" anything more than C if you require portable code to do anything that pretty much any other general purpose language does. I've also done these volunteer projects. They work because timelines are pretty up in the air, there's often little or no funding to be concerned with, everyone is working on it because they want to, and they're only working when they are well-motivated to do so. There are many more constraints in a proper company and job. Remote communication doesn't even come close to the interaction of a roughly synchronized office environment. Of course, there are downsides to the traditional office setup too, but in eliminating those concerns you introduce more problems presented by the remote setup. I know people who even have the option of remote work and choose not to, and I can empathize with that sentiment.


Even though I would absolutely prefer to work remotely, I have to disagree with you.

I find that there are significant benefits to working in the same physical location.

It's just way easier to go to someone's desk and ask them something than to have to set up a video chat. Human interaction is just naturally an in person thing. So much is lost if you can only video chat. Sure it may not affect the work directly but I think it really affects a team's dynamic. Lots of great ideas happen just from random office interactions, whether it be lunch or just talking over coffee. These kinds of things are really hard to reproduce in a remote environment.

That's not to say there aren't reasons working remotely is beneficial. I just don't think it's fair to just say the only reason companies opt for physical proximity is cultural.


> It's just way easier to go to someone's desk and ask them something than to have to set up a video chat

That's not a benefit, that's a downside. Interrupting someone while they're working kills productivity. You're far better asking in chat and having someone you're not interrupting help.


I'm sorry that you feel that way. I consider being able to help another person to be a huge privilege.

What you describe sounds like it's putting one person's progress over another's. Sure, a mentor may be disrupted, but by unblocking someone else they're still increasing the overall productivity of the team, so it's a net gain.

> You're far better asking in chat and having someone you're not interrupting help.

This only works when your question is general enough that many people can help you


Not every programmer does his/her best work remotely, communicating through slack/skype meetings. People have different personality traits. The remote devs in your volunteer projects were probably self-selecting.

Also, some dev positions require extensive communication and that is easier in person.


Companies invest a lot in their employees. They don't want them to leave. In many managers' eyes, physical presence in the office makes it much easier to gauge employee satisfaction and whether they may be out interviewing. Also, it seems that in many organizations still, the employees that arrive to work earliest, are most punctual, most consistent, most organized, best dressed are viewed as more competent and are favored.


> So much money wasted on rent,

Although this now translates into a cost for me: needing an extra room in the house for an office.


I do fine using my normal desk. Open AWS and Skype at the start of the day, do work, close them at 5, that's it. Bring a laptop to a different corner of the house or a library/coffee shop if you can't adapt to using your personal space for work. Working outside is nice when the weather permits.


If I didn't work from home, I'd not have a room for an office. Working from home means I need an extra room realistically.


> I do fine using my normal desk.

Not everyone has space in their apartment for a "normal desk".

> Bring a laptop to a different corner of the house or a library/coffee shop if you can't adapt to using your personal space for work.

Not every coffee shop likes having people lounging around for 8 or more hours a day, making minimal purchase. Aside from the fact that most are too noisy to concentrate, have shit wifi, bathrooms that are frequently occupied / barely work, etc.


I get it personally. Remote requires better communication skills, and additional tooling and management to work around time differences.

Besides, I know many people that are not as productive at home.


Did I hear leftovers?


Especially when their stuff runs on AWS.


What does that have to do with remote work? I commute to work every day, but my team's code doesn't run on a CPU under my desk.


Systems administration used to be done in-house.


> but I'm not ready to relocate to the US

Even if you wanted to, it's nearly impossible to immigrate to the US as a skilled worker legally.

I have a friend who is a really talented programmer (and a really diligent worker) who studied Computer Science at some of the best universities in Europe: École Centrale Paris and the EPFL (École Polytechnique Fédérale de Lausanne) in Switzerland. One of his biggest desires for a while was to move to the US, and live and work in New York. He spent more than 2 years trying to move to the US, and then gave up. He had even hesitated to buy too much furniture while in France, because he was anticipating being able to move to the US.

The only viable option to move to the US for most people in your (and his) situation would be the H-1B visa, which is extraordinarily difficult to get, and requires a miracle to obtain. You need to: (1) have a bachelor's degree that's related to the field you want to work in -- e.g. a Computer Science degree for software development work, (2) find a company that's willing to offer you a job that pays at least what Americans make doing the same job, (3) the company must be willing to wait 7-8 months, (4) you need to win a lottery in which your odds of success are about 1 in 3.

(1) and (2) are not too difficult. (3) is quite difficult -- the majority of companies do not want to wait 7-8 months for you to join. And (4) is completely up to chance, and out of your control. (By the way, this lottery is conducted at the beginning of April every year, and you can only start working in October.)

My friend fulfilled criteria (1), (2), and the difficult (3). But he did not get selected in the H-1B lottery two years in a row.

The US is essentially a closed country when it comes to skilled immigration. Most of the immigrants who get green cards here are family members, refugees/asylees, and diversity lottery winners. Getting even temporary permission to work in the U.S. as an educated/skilled person requires a miracle from God.


(1) You don't need to have a degree to apply for an H1B as long as you can prove 3 years of experience for each year of college. So, if you are a developer with 12 years of experience, that makes up for the lack of a 4-year degree. You can also combine years of college with experience to complete that requirement. Say, 2 years of college plus 6 years of experience.

(2) the salary must follow the prevailing wages in the area the employee is gonna work. That's not really an absurd thing, especially considering that the market rates are usually higher than the prevailing wages.

(3) what I have seen about this point is that companies will usually find a way around that if they're really willing to hire you. Either work remotely as a contractor until the visa is stamped and you can move or something else. It's hard but not impossible.

(4) is really up to chance and I've had friends who were and were not selected.

I'd say it's hard but no miracles needed.


> Even if you wanted to, it's nearly impossible to immigrate to the US as a skilled worker legally.

Nearly impossible? Come on, this is a gross exaggeration. Get a job at a FAANG or unicorn in Europe, work there for 1 - 2 years, transfer to the US on an L1 visa, then apply for H1B each year until you get it. I know several people who have done this. Hell, Microsoft opened an office in Vancouver specifically to move failed H1B's there until they can get an L1.


You forgot other loopholes: 1. Get F1-visa and do MS. 2. Marry a US citizen 3. use a J1 visa?


> 3. use a J1 visa?

I've addressed this in this sibling comment: https://news.ycombinator.com/item?id=19789172

> 1. Get F1-visa

People that study here also have to go through the H-1B lottery, like everyone else. Graduates with MS/PhD have a slightly higher probability of winning the H-1B lottery, but it's like 0.5 for US MS/PhD versus 0.3 for everyone else. You are still forced to go through a lottery. (One thing is you get to work in the US for a 1-3 years with an F-1 visa under a program called OPT.)

There are many graduates with Masters and PhD degrees from American universities (incl. highly-ranked ones), who have well-paying jobs (under F-1 OPT), who lose the H-1B lottery, and as a result, employers are forced to fire them, and they have to leave the country (under the threat of forcible deportation and being banned from the US). This is the reality of US immigration today, and it's been like this for a long time. For example, here's a Harvard Crimson article from back in 2007 arguing for increasing or eliminating the H-1B limit: https://www.thecrimson.com/article/2007/4/9/raise-the-h-1b-c...

> 2. Marry a US citizen

Marrying someone for the purpose of immigration is considered "immigration fraud", and if they find out, you will be deported and permanently banned from the US. If you became a U.S. citizen through such a marriage, your citizenship can and will be revoked, and you could even be rendered stateless as a result. If the currently conservative U.S. Supreme Court overturns Zadvydas v. Davis[1], you could spend the rest of your life in prison.

[1] https://en.wikipedia.org/wiki/Zadvydas_v._Davis


> Marrying someone for the purpose of immigration is considered "immigration fraud"

No, it's not; Marrying someone for the purpose of evading immigration laws is. There is a difference; even if immigration is sought as a benefit of marriage, if there is genuine intent to enter into and maintain a bona fide marital relationship and that is maintained throughout the conditional period associated with immigration by marriage, there is no fraud.


You're right -- I'm aware of the distinction; I should have used more precise language. I meant to refer to someone entering into a fake marriage. Anyways, my bad -- I've gotten a bit imprecise/sloppy with my writing lately -- I'm definitely going to try to be more precise in the future (and make it a habit to be precise).

I personally am very against fake marriages (in my own life). I want my marriage to be genuine, and life-long. There was a post on HN back in 2015 about William Han's experience with the US immigration system: https://news.ycombinator.com/item?id=9764564 (article: https://www.vox.com/2015/6/23/8823349/immigration-system-bro... )

Some of the parts of his article, especially regarding marriage as an immigration pathway, are sad but enlightening (emphasis mine):

> Years spent as a student do not count. Neither do years on a work visa unless your employer is willing to sponsor your green card. Marrying an American works, as a thousand films and television shows have taught us, because it allows a change of status to permanent resident. But if you wish to follow the rules, as I do, then it must be a bona fide marriage. And if you take important personal decisions such as marriage seriously, then you may not wish to have their timing dictated by Homeland Security.

> I've already talked about my friends who think becoming a citizen is as easy as going to a government office and signing some papers. But even people whose job it is to understand the system often don't see how broken it is. At one firm where I worked, an HR manager told me to "just get married." Marriage solely for the sake of a green card is, of course, illegal — it is fraud upon the federal government. A bona fide marriage is fine, but that depends on you finding the right person and having your relationship progress according to Homeland Security's timeline. Once again there is the humiliating feeling that your life is not your own: the government may now effectively dictate when you get married.


You can join EU office of the company and get transferred via L1.


If you're European from a "good" country it's pretty easy to get an H1-B with a master / eng degree to work for a SV company.


Your country of citizenship or birth plays zero role in terms of getting an H-1B visa. It neither helps nor hurts. And it's certainly not "pretty easy" to get an H-1B in any sense.

For example, right now, if you wanted to work in the U.S. the earliest date you could start working would be October 1, 2020. (That's because this year's lottery is already over, you'll have to try your shot with the April 2020 lottery, and you can only start working 6 months after that at the earliest.)

There might be racial prejudices in terms of getting a job, where being a European helps, but even that's extremely unlikely in Silicon Valley. Most U.S. tech companies are concentrated in progressive (Democratic party voting) regions -- and most people in these regions are emphatically not racists, and many often go the extra mile and make a conscious effort to deal with any subconscious racism in their minds. So don't expect being European or white to help that much.

The government certainly doesn't care what country you're from, and what your skin color is, when it comes to approving H-1Bs -- at least there hasn't been any evidence to the contrary. The vast majority of H-1B visas go to people from India and China, of which the majority go to people from India.

----

Also regarding L-1 visas: Getting a job at a FAANG or a large multi-national company with offices in the U.S. and Europe/elsewhere that's willing to relocate you is not easy. If you're a valuable contributing member of your team, I doubt your manager would be jumping with joy at the idea of you being transferred to the US.

Getting an L-1 visa requires not just a degree and 1 year of work experience with the company, it also requires "specialized knowledge". Look up the definition. There have been denials on the basis of this "specialized knowledge" requirement.

Furthermore, it's unpleasant to be in the U.S. on the L-1 because you have no way to change jobs (unlike the H-1B) and you lose legal status as soon as you're fired. I had a family member who, after many years in the US on the L-1A (managerial) visa, had to suddenly pack up and leave the US. The company held off on applying for an EB-1 green card until the L-1A was nearing its 7-year limit, and then for unrelated financial reasons this company shut down overnight. Everyone was laid off, including US staff.

So you need a company that's nice enough to apply for your H-1B every year (knowing that it'll give you the freedom to change jobs as you wish), and one that's willing to relocate (at a loss to the team in your original country).


This is not how it works, if you have a master's degree and want to work for Google you will most likely get your H1-B visa.

https://www.immi-usa.com/h1b-masters-quota/

If you don't what they do is they ask you to work for them elsewhere for 1 year ( London / Switzerland ) then move you back to the US with an L1 visa.


> This is not how it works, if you have a master's degree and want to work for Google you will most likely get your H1-B visa.

> https://www.immi-usa.com/h1b-masters-quota/

The Master's degree needs to be from a US university - which is pure genius, as this way a prospective immigrant needs to pay a lot of money (US degrees are not cheap) for the right to have a better chance of getting in - even if they already have an MSc, or could get one for free in their country of origin. America's basically found a way to profit on people wanting to move there. Of course, it's not new, with indentured servitude being common in previous centuries, but it's cool (in a creepy way) to see it still being alive.


> if you have a master's degree ... you will most likely get your H1-B visa

It's "most likely", if your definition of "most likely" is 51%.

Let's calculate the probability:

• In 2018, there were 95,885 applicants with a Masters degree or higher[1]. Out of 190,098 total applicants.

• There are 20,000 spots available for U.S. Master's degree holders, and 65,000 spots for everyone. (Ignoring the fact that there's a reservation of 1,400 for Chile and 5,400 for Singapore.)

• Your probability of rejection in the masters lottery is 1-(20000/95885) = 0.7914

• Your probability of rejection in the general lottery is 1-(65000/(190098-20000)) = 0.6179

• These are independent events; the probability of being rejected in both is: 0.6179 x 0.7914 = 0.489

• If you flip that, your probability of being selected in the lottery with a U.S. Master's degree is 0.51, ie. 51%.

[1] https://redbus2us.com/h1b-historical-data-lottery-vs-85k-quo...
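
If you want to sanity-check the arithmetic, it's a few lines of Python (same figures and caps as quoted above):

    # H-1B selection odds for a US Master's holder, using the 2018 figures cited above
    masters_applicants = 95885
    total_applicants = 190098
    masters_cap = 20000
    general_cap = 65000

    p_reject_masters = 1 - masters_cap / masters_applicants                # ~0.7914
    p_reject_general = 1 - general_cap / (total_applicants - masters_cap)  # ~0.6179

    p_selected = 1 - p_reject_masters * p_reject_general
    print(round(p_selected, 2))  # 0.51, i.e. about 51%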


I know at least 3 French people who managed to make it work. They don't even work for a FAANG, and only one of them is technically better than I am.

I'm an optimist :)


Could you ask them on what visa they moved to the U.S.? I know a few French people in New York. But they're on J-1 visas -- which are very short in duration (~1 year), and truly temporary -- you are required to have plans to return to your "home country" after that period. It offers no practical path towards permanent residence.

The only visas that don't prohibit immigrant intent (i.e. desire to immigrate + company applying for permanent residence for you) are the H-1B visa, L-1 visa, and O-1 visa. For anyone looking to permanently move to the U.S. on the basis of skills, those are the only options. (I didn't mention the L-1 and O-1 in my comment above because they're even more narrowly granted.)

> only of of them is technically better

Technical skill doesn't really matter with the H-1B visa; only salary really does. Even going to a top university doesn't mean anything. The lottery doesn't discriminate.

There are foreign students who graduate from the best universities in the U.S. like Harvard, Yale, MIT, etc, often with advanced degrees (Master's / PhD), with good high-paying jobs, who get deported from the US because they did not win the H-1B lottery.

The Harvard Crimson complained about this even back in 2007: https://www.thecrimson.com/article/2007/4/9/raise-the-h-1b-c...


I'll check with them if I work in their team again.


Their job site is a big /dev/null however. I spent several years dutifully submitting for numerous jobs with twenty years experience in internet and VFX in Hollywood and didn't even get an "other direction" form-letter for my trouble. (They don't date their posts so you have no idea which are active.)

Netflix is in biking distance, but instead I work for a company on the east coast, 4000km away, waking at the crack of dawn.


The only way to get hired at competitive companies is via referral - if you live in the Bay Area you probably know someone or know someone who knows someone at Netflix.

Trying any other way is a lottery - and if you didn’t go to MIT/Stanford/CalTech/Harvard then your odds are pretty bad.

The other way is already working at a famous company and having another famous company send you a recruitment email via LinkedIn, but that typically requires you to have done the referral way first.


The FAANGs support diversity. You can go to any of MIT, Stanford, CalTech, or Harvard.


I know this was just an off the cuff response, but it's a little more nuanced than that.

They don't really care where you went or even if you have a degree, but there are so many people applying through the front door that that ends up being a filter everyone uses. There are obviously excellent people outside of that, but it's harder to find them through the noise.

This is another reason I think Lambda School is great - they're actually attacking this problem too: http://zalberico.com/essay/2019/04/08/lambda-school.html


It's partially an off the cuff response, but like all good off the cuff responses, it's meant to cause the reader to think a little bit deeper about the message. I'm glad that you didn't find it beneath you to respond to it, as it was not intended to be dismissive, nor was it thoughtless.

Many of the FAANGs have pushed for diversity in hiring. I don't have a problem with that. I think an ideal workplace would hire people regardless of their circumstances of birth and lived experience as long as they are qualified to do the job and they, in fact, do a good job there. That's actually the crux of my problem.

When companies have pushed to remove names, genders, and races from resumes before the hiring process because they find that people are treated differently in hiring due to unconscious bias in that process then it smacks of hypocrisy to then blanket filter incoming resumes unless you went to a short list of approved schools or have a buddy on the inside juicing your chances. That sounds like the complete opposite of trying to be diverse in recruitment. I don't think that Lambda School is that great of a response to this - it just adds one more "acceptable" school to the list.

In fact, to a cynic, it sounds like virtue signalling of the highest order to please both Wall Street and the public.


Their goal is to hire the best candidates as quickly as possible and given the amount of applications it makes sense for them to focus attention somewhere.

Currently highly selective schools and other famous companies are the easiest filter. Then they don’t want to have unconscious bias from that point on - I don’t think it’s virtue signaling, it’s more just pragmatism.

The reason I think Lambda School is cool is that they're actually focused on the first piece of this, scaling up to give opportunity to capable people the current system ignores and eventually leveraging their reputation to get people interviews. Right now it's extremely unlikely to get accepted into MIT or Stanford, but Lambda School is incentivized not to be that exclusive - if you're capable of the work, they want to be able to scale to admit you.


I was not hired via referral. They reached out because they were using a node module I had written. Not sure how common that is, but tbh I think any general characterization of the hiring practices of such a large org is bound to be wrong in many ways.


That’s great, I’d put that in the lottery category.

I was also not hired by referral, but got lucky they liked my resume in some resume book at RPI.

For the places I applied online my interview response rate from companies was poor (or an instant rejection).


It was more a comment on the lack of feedback rather than hiring practices, which as mentioned I have no exposure to.


> The other way is already working at a famous company and having another famous company send you a recruitment email via linked in, but that typically requires you to have done the referral way first.

Just as a single data point, the last clause here isn't true in my experience. I worked at Google for a few years some years ago, and I get a regular flow of recruiter spam from all the big companies for Bay Area roles, and have never had any friction turning them into interviews.


Yeah, but you had to work at google first.

I wasn’t clear, but I meant in order to already be working at a famous company you probably already had to figure out the referral piece at least once.


Ah I see, I misunderstood the final clause as stating that you need a referral _as_ an ex-Googler etc. Thanks for clarifying!


Thanks for saying this. This is so clearly observable that I'm surprised people still don't realize that's simply The Way the World Works


I should probably clarify that it's actually the main way to get an interview (not hired) - you still have to pass whatever the interview tests are and a referral doesn't really help you there.

It's one of the main benefits of living in the bay area, you build out a network of friends working at different companies which adds a layer of job security and ability to interview more easily at interesting places.


You're right of course. Unfortunately Netflix is newish to the content business and all my contacts are at more mature companies.


Yeah that’s tough, you might have better luck cold emailing their recruiters directly with questions about roles.


Rejection is not a problem to me. The world is a big place, and it moves fast.


Bueller? It's not a problem of hurt feelings, rather a huge waste of time.


And the local girl may not want you. And the local club may not have the culture you want. And the local market may not hold your favorite products. And the local schools may not be right for your children.

There is always something. It's not possible to solve every problem, I prefer to choose my battles. Getting polite rejection letters is not important to me.


You've now replied twice addressing the least important part of my post.


Important to you, not to me.


Least important. Now that I think of it, I've seen some of your passive-aggressive replies here before. Perhaps a maturation phase is needed before applying to notable companies, now that you've shown yourself as difficult to communicate with in a public forum.

My intent wasn't to offend, rather set expectations for would-be applicants. Of course, things could be entirely different in Paris vs. Hollywood, so grain of salt and all that.


Your comment is very arrogant in itself. Also, I'm mature enough not to type those kinds of comments with my real identity, or police myself at work. HN is merely a fun forum, not a career.


I was told it was not an option, when I discussed an engineering IC role with them.


They don't do it.


Thanks. Working at Netflix?


No, but I have a friend who used to work there as an engineering manager 2-ish years ago. I did hit him up with the same question and he told me that's definitely not part of their culture.


Interesting. I had it in my head that Netflix was a pure Java shop. Was I wrong in the first place, or has something changed?


I mainly write js but our team maintains a scala app as well. It is much more of a polyglot company with a history of java (I have only been here 1.5y so this is from hearsay and my experience, not a definitive source).


A sufficiently large company uses basically every language at some layer.


When I interviewed there a couple years ago (decided to stay in phx), it was a polyglot culture... the team I was interviewing with was largely node/js. I'm not surprised to see a lot of Python, Java or anything else that leverages existing tooling. For that matter, I wouldn't be surprised by custom C++, Go or Rust for some pieces.


If you read the article, they don't talk about the "online" services / APIs, which are probably mostly in Java.


I believe there is a fair bit of Kotlin used there too (based on some @netflix.com addresses in Kotlin libraries like Strikt)


Mainly a Java shop. They published Node and even some C stuff back then in their blog posts.


From what I understand from their talk on how they (don't) do devops, they adopt a polyglot architecture and teams use different languages.


Different teams using different languages doesn't mean they don't do DevOps.


I read that as "in a talk on how they don't do devops they also happened to talk about how their teams are polyglots".


We're everything!

JS is heavily used, as is Groovy, Java, Python, C and C++, etc.

Depends on the project and the team. We're not about limiting people, but the choice should be justifiable (like... Brainfuck is probably not ok)



Java or JavaScript? Some of the replies to this post indicate JS, others Java, while they're not at all related to each other...


At Cloudflare we use their Python-based Lemur (https://github.com/Netflix/lemur) application to issue (on some days) 1M SSL certificates.


I am wondering if PyPy is used somewhere for workloads that need performance.


Probably not, more likely to call out to their numerous Java projects and C/C++ video tools.


Does anyone have more info about this or know of similar projects?

> We lean on many of the statistical and mathematical libraries (numpy, scipy, ruptures, pandas) to help automate the analysis of 1000s of related signals when our alerting systems indicate problems. We’ve developed a time series correlation system...


There's https://prometheus.io/ which you're probably familiar with


What would you like to know?


The correlations part specifically, what does it do, what’s the modelling approach, and is there more info about it anywhere?


Initially we wrote the library to help us answer the question of what change may be causing impact to our customers' experiences. Out of the billions of time series metrics we have, we knew there were about 10,000 or so likely candidates (for this initial use case), and we wanted to reduce that set as far as possible and either (1) find /the/ candidate for the problem or (2) produce a short list of things for the humans to look into. In order to get to the point where we could ensure a high likelihood of apropos correlations, we needed to do some work on the signals first.

First we detect if any of the possible candidate signals were born or died in the interesting time period. We use these time points to reduce the window we'll use to pass to the correlation functions. We can also detect any changepoints in the time series and apply similar logic. Once we've determined the best window bounds for each candidate signal, we use pearson and spearman correlation functions to get a score for the pair of signals -- the initial signal that started the inquiry and the candidate signal using the determined time window.

The code is about 98% data preparation, signal analysis, and window determination and about 2% correlation work.

(I've tried to summarize quite a bit, let me know if you'd like clarifications or have other questions.)
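
For anyone curious what that looks like in practice, here's a heavily simplified sketch of the window-then-correlate idea using ruptures and scipy -- the function, series names, and parameters are made up, and the real code does far more preparation than this:

    import numpy as np
    import ruptures as rpt
    from scipy.stats import pearsonr, spearmanr

    def score_candidate(trigger, candidate):
        """Correlate a candidate signal against the signal that started the inquiry."""
        trigger = np.asarray(trigger, dtype=float)
        candidate = np.asarray(candidate, dtype=float)

        # Detect changepoints in the candidate to narrow the comparison window.
        breakpoints = rpt.Pelt(model="rbf").fit(candidate).predict(pen=10)
        start = breakpoints[0] if len(breakpoints) > 1 else 0

        # Score the windowed pair with both Pearson and Spearman.
        t, c = trigger[start:], candidate[start:]
        return pearsonr(t, c)[0], spearmanr(t, c)[0]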


Cool, I’ll ping the folks responsible and have them reply here soon.


Push or pull?


I fail to understand the use of python in a distributed environment while the language has such poor concurrency support (on top of the lack of a type system). You can make your application HA, but they are obviously not trying to squeeze out every CPU cycle.


> I fail to understand the use of python in a distributed environment while the language has such poor concurrency support (on top of the lack of a type system). You can make your application HA, but they are obviously not trying to squeeze out every CPU cycle.

You're conflating several things that are orthogonal imo. A system can be distributed without concurrency. A concurrent system need not be distributed. Either kind of thing can be built with or without a specific kind of type system. And CPU efficiency has nothing to do specifically with any of the previous things.

To expand on that: distributed systems are quite often constructed from simple single-threaded processes, and where concurrency is needed it is probably more often achieved through multi-processing than multi-threading. A single-threaded event-dispatched request-response service probably describes a big chunk of all the stuff running in distributed environments today. In a lot of these cases the workloads are i/o bound, and instructions per cycle is not even close to the top of the list of concerns. There are a lot of reasons why python fits into that world very well.


> A system can be distributed without concurrency. A concurrent system need not be distributed.

The only justification for such a design I can think of is HA. I.e. you have a very light workload that does not saturate a single machine, but nonetheless you create a distributed system with two machines for HA. Such a light workload is probably not often the case at Netflix.


> The only justification for such design I can think of is HA.

I'm not trying to justify any particular design. I'm just saying that concurrency and distributed processing are different things. And I'm no expert, but I continue to be confused as to what saturation, whether of cpu or i/o - you don't specify - has to do with concurrency? If I have for example n http servers all running one thread and handling enough traffic that they are all at 100 percent CPU that is by some minimal definition a distributed system, but none of the processes in it are concurrent. If the http servers read and write a database then the database is almost certainly concurrent, so then you have a distributed system that has both concurrent and non-concurrent processes collaborating.


> If I have for example n http servers all running one thread and handling enough traffic that they are all at 100 percent CPU that is by some minimal definition a distributed system, but none of the processes in it are concurrent.

Concurrency is broader than just threads in a process. In your example, the whole problem is concurrent.


Yes of course you're correct. Any system of that kind is concurrent across its architecture. But the current context being considered is that of concurrency as a programming paradigm, which was established by the OP's observations on python's lack of concurrency which sparked the exchange. In the comment you quoted above the statement "none of the processes are concurrent" was meant to make the context clearer.


> I fail to understand the use of python in a distributed environment while the language has such poor concurrency support

Because it's a distributed environment probably is exactly why. Python has (arguably) great concurrency support apart from Multi-threading.

https://www.youtube.com/watch?v=MCs5OvhV9S4

So if you need concurrency in the context of a single thread, then Python's GIL is a non-starter. But a distributed environment is not likely one of those.

Edit: I should amend concurrency in a single thread to: concurrency in a single thread that is compute gated... since coroutines can give you pseudo concurrency in a single thread provided your workload has blocking steps like IO or TCP calls.


In fact it's precisely Python's deficiency in multithreading that led to it having one of the best ecosystems for every other form of concurrency, like green threads and multiprocess applications.


If you’re doing (data analysis|simulations|Image processing) you can offload computation to numpy, which releases the GIL. This allows nice multicore speedups with python and threading.

The same holds for various CPU intensive standard library functions implemented in C.

The GIL issue is real, but posts like this one confused me for years. Please, don’t exaggerate GIL issues.
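
To illustrate (a toy example, assuming arrays large enough that BLAS time dominates): plain threads really can give a multicore speedup here, because the matmul releases the GIL.

    import numpy as np
    from concurrent.futures import ThreadPoolExecutor

    matrices = [np.random.rand(2000, 2000) for _ in range(8)]

    def square(m):
        # The heavy lifting happens in BLAS with the GIL released,
        # so these threads can actually run in parallel.
        return m @ m

    with ThreadPoolExecutor(max_workers=4) as pool:
        results = list(pool.map(square, matrices))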


Not everything is amenable to numpy and it's pretty easy to make performance worse by throwing numpy at every problem. For example, if your array contains Python objects that are part of an operation, you've likely just introduced a significant performance regression. Worse, there's no way to detect these regressions except to have performance tests. Please, don't understate GIL issues.


Why would you make a NumPy array of objects? Use a list until it makes sense to create an array.


Because you’re using Numpy via an intermediate library like pandas and you have composite data in one column?


Ah. Pandas is the problem. Unfortunately, you need to understand how its features are implemented to use it well. Still, my main trouble with Pandas is unnecessary memory bloat, not compute inefficiency.

That caveat is somewhat true for all programming abstractions, but well-designed interfaces make the more efficient techniques more obvious and beautiful, while the inefficient or risky techniques are made esoteric and ugly.


Pandas isn't the problem, the problem is assuming that "$LIBRARY releases the GIL so things will be fast!". It's a pennywise, pound-foolish approach to performance. Someone will write a function assuming the user is only going to pass in a list of ints and someone else will extend that function to take a list of Tuple[str, int] or something and all of a sudden your program has a difficult to debug performance regression.

In general, the "just rewrite the slow parts in C!" motto is terrible advice because it's unlikely that it will actually make your code appreciably slower, and if it does, it's very likely to be defeated unexpectedly as soon as requirements change. Using FFI to make things faster can work, but only if you've really considered your problem and you're quite sure you can safely predict relevant changes to requirements.


> Someone will write a function assuming the user is only going to pass in a list of ints and someone else will extend that function to take a list of Tuple[str, int]

There are plenty of pitfalls in leaky abstractions. Establishing that fast numeric calculations only work with specific numeric types seems to help.

One thing you seem to be encountering, that I've seen a few times, is that people don't realize NumPy and core Python are almost orthogonal. The best practices for each are nearly opposite. I try to make it clear when I'm switching from one to the other by explaining the performance optimization (broadly) in comments.

Regardless, any function that receives a ``list`` of ints will need to convert to an ndarray if it wants NumPy speed. If the function interface is modified later, I think it's fair to expect the editor to understand why.


> There are plenty of pitfalls in leaky abstractions

Sure, but this is a _massive_ pitfall. It's an optimization that can trivially make your code slower than the naive Python implementation, all due to a leaky abstraction.

> Regardless, any function that receives a ``list`` of ints will need to convert to an ndarray if it wants NumPy speed. If the function interface is modified later, I think it's fair to expect the editor to understand why.

Yeah, that was a toy example. In practice, the scenario was similar except the function called to a third party library that used Numpy under the hood. We introduced a ton of complexity to use this third party library on the grounds that "it will make things fast" instead of the naive list implementation, and the very next sprint we needed to update it such that it became 10X slower than the naive Python implementation.

That's the starkest example, but there have been others and there would have been many more if we didn't have the stark example to point to.

The current slogan is "just use Python; you can always make things fast with Numpy/native code!", but it should be "use Python if you have a deep understanding of how Numpy (or whatever native library you're using) makes things fast such that you can count on that invariant to hold even as your requirements change" or some such.


I have mixed feelings about your conclusion. On one hand I don't want to discourage newbies from using Python. On the other, I enjoy that my expertise is valuable.

It seems reasonable that different parts of the code are appropriate for modification by engineers of differing skills.


Even if your app is IO bound, Python's concurrency is painful. Because it's not statically typed, it's too easy to forget an `await` (causing your program to get a Promise[Foo] when you meant to get a Foo) or to overburden your event loop, and such things are difficult to debug (we've had several production outages because of this class of bugs). Never mind the papercuts that come about from dealing with the sync/async dichotomy.


Both problems have built-in debug solutions in recent versions of python. The event loop will literally print out all the un-awaited coroutines when it exits, and you can enable debug on the event loop and have it print out every time a coroutine takes longer than a configurable amount of time.
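
Concretely, the knobs I'm referring to are asyncio's debug mode and slow_callback_duration, nothing exotic:

    import asyncio

    async def main():
        ...  # application code

    loop = asyncio.new_event_loop()
    # Debug mode adds creation tracebacks to "coroutine was never awaited" warnings
    # and logs callbacks that block the loop longer than slow_callback_duration.
    loop.set_debug(True)
    loop.slow_callback_duration = 0.1  # seconds; 0.1 is the default
    loop.run_until_complete(main())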


> The event loop will literally print out all the un-awaited coroutines when it exits

IIRC, I've only ever seen "unawaited coroutine found" (or similar) errors; I've never seen anything that points to a specific unawaited coroutine. In either case, a bug in prod is still many times worse than compile time type error.

> you can enable debug on the event loop and have it print out every time a coroutine takes longer than a configurable amount of time

I don't run my production servers in debug mode, and even when I do manage to find the problem, I have limited options for solving it. Usually it amounts to refactoring out the offending code into a separate process or service.

An extreme counterpoint is a language like Go which

1) Is roughly 100X faster in single-threaded, CPU-bound execution anyway

2) Allows for additional optimizations that simply aren't possible in Python (mostly involving reduced allocations and improved cache coherence)

3) Has a runtime that balances CPU load across all available cores

This isn't a "shit on Python" post; only that concurrency really isn't Python's strong suit (yet).


These are not really an issue in vfx production and other things Python is used for.


It’s a problem for lots of things Python is used for, but maybe not vfx (whatever that is).


They are using it for things it’s good at, for others they use java. So this subthread is largely a waste of time.


Who is “they”? What is your point?


They is Netflix, and other post-production oriented users. You know, what this article and discussion is about?


It's not at all obvious that "they" refers to "netflix and other post-production oriented users", and your argument is a tautology "Python is good at the things that Python is good at". Obviously. The rest of us are debating what those things are or are not.


The subject is well-trodden, there's not much to debate. Python is not good at threading, but works well in multiprocessing situations. Netflix is using it in the latter situation, and not the former. Async is unlikely to be a use case either.


> The subject is well-trodden, there's not much to debate

And yet we see the same incorrect information trotted out over and over again.


"(yet)", but then it's still a dynamic language (no typing, not dynamic as in hip). The reason for python is mainly:

- it's easy to learn

- we've got numpy

If I were to pick a language to build significant infrastructure with, I wouldn't choose python.


The missing await is a very common fault indeed; they should have used another keyword like 'not_await' for that scenario to make the decision explicit. PyCharm at least will warn you if you call an awaitable without 'await' and without assigning it to a variable. If you assign it to a variable and pass it into another function that doesn't expect an awaitable, it's up to you to have added sufficient type annotations and run your code through some static checker like mypy. Running python at scale without mypy is kind of doomed to fail to begin with.
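
To make that concrete, with annotations a forgotten await shows up as a type error instead of a runtime surprise (a made-up example):

    async def fetch_user_id() -> int:
        ...

    async def main() -> None:
        uid: int = fetch_user_id()    # mypy: incompatible types
                                      # (got "Coroutine[Any, Any, int]", expected "int")
        uid = await fetch_user_id()   # correct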


And that even with async you're still bound by the GIL


Huh? With async, there's typically only ever 1 thread running, so there's no contention for the GIL.


Just because you don't see or understand doesn't mean their usage is null and void. Does distributed imply a need to "squeeze out every cycle" at the cost of productivity? If they find that they are CPU bound, or need more efficiency then it's likely they'll move to another platform. If not, why make the move, or lose productivity?

Personally, I've worked with some HPC apps in pharma. We found quite a mix of CPU and IO-bound challenges when we actually profiled and looked closely at what was slowing up the apps. Contrary to the original belief, rewriting and improving the CPUs wouldn't have helped much.


Some parts of the process are obviously not ones where it makes sense to squeeze out every CPU cycle. Sometimes programmer time (both in development and maintenance) is your scarce resource (either because of total numbers, or people with specific knowledge), and so it is far better to optimize for that.

In most cases (Python, Ruby, etc...) it is possible to find your hot loops and replace them with C/C++/Rust/etc... code. So you can really focus on those small areas where that would make an actual difference.

Additionally, in some places where you need concurrency you can split up the task into parts that have little to do with each other, and there Python is a great glue language to manage calling other executables that do the actual work.


To make a distributed application, you need some sort of orchestration. For example, a Redis job queue.

And at that point, you can just code your application as a single thread, and run one per core on the machine.

And the Python lack of types "issue" has always been a bit overblown in my opinion. If it's not immediately obvious what something is, then you need to either comment more, or come up with better variable names. And if you really, really feel the need for a type system, Python natively supports that now.


> I fail to understand the use of python in a distributed environment while the language has such poor concurrency support (on top of the lack of a type system).

Well, without knowing the exact issues people are trying to solve, of course you won't understand the motivation behind them.

Reading their post, it seems that they are not using Python for serving, but mainly for long-running daemon processes. Concurrency is probably not something people care about in such a situation, nor is squeezing CPU perf. In fact such a workload probably wants to maintain a low-key footprint.


Re: Type system -- at Gigantum we very aggressively enforce all classes and methods in the core libraries must be fully typed using mypy. The depth and expressiveness of mypy rivals that of any other strongly typed language.


> poor concurrency support

Poor intra-process parallelism, yes. Python is pretty capable when it comes to concurrency.


Interesting to see a touch of Flask used for some internal APIs.


People talk a lot about complicated stacks but Python Flask + some basic HTML/CSS/Vanilla JS can solve a LOT of problems.

This is especially true inside big orgs that have lots of silos and need "Rosetta Stones" that translate between the silos.
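
For example, an internal "Rosetta Stone" endpoint is often just this plus a static page of vanilla JS that calls it (lookup_fields here is a stand-in for whatever mapping logic you need):

    from flask import Flask, jsonify

    app = Flask(__name__)

    def lookup_fields(silo_id):
        ...  # hypothetical helper: query a database or another silo's API

    @app.route("/api/translate/<silo_id>")
    def translate(silo_id):
        # Hand the other silo's mapping back as JSON.
        return jsonify({"silo": silo_id, "fields": lookup_fields(silo_id)})

    if __name__ == "__main__":
        app.run(debug=True)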


I had actually looked at Flask-RESTPlus for a relatively recent project before deciding to use FastAPI.


You made the right choice.


Surprised they use Flask.

I'm working on my first larger, industrial-strength REST API in Python and I've found the Django Rest Framework to be more suited once you get to that level of complexity.


Django works great if you're buying into the Django ecosystem. But if you're standing up a simple REST API, I've found Flask is much more flexible. Either approach will get you to where you're going, but I find Flask gets out of your way and lets you do your thing, while you have to find the Django Way of doing what you need to do.

This may be biased (I've been a Flask user since 0.7), and perhaps I'm stuck in my ways, but I've found I either need to find a supported and blessed Django plugin for the thing I need, or build something much more complex than I'd need in Flask.


For typical apps I think that Django is far superior to Flask because every Flask project ends up recreating a lot of Django's core functionality; but in a more time consuming and less standardized way.

For a company as specialized as Netflix, they probably are doing so much customization it doesn't matter much which framework they started with.


Probably whoever made it just preferred Flask.

django-rest-framework is nice because it's maintained well and it's consistent with the rest of Django, but plenty of people just prefer the Flask way of doing things, even if it's a scrappier set of tools in some ways.


I'm using DRF in a project and don't really like the code/magic I end up with. IIRC I was happier with Flask and SQL... (But of course, for the bits that map properly into its model, DRF is awesome)


Yeah. You can always get into what you do/don't like about a framework, but the same can be said of Flask.

FWIW, I don't like DRF's reliance on serializers that do more than serialization.
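
For example, a serializer like this validates, serializes, and also owns persistence (Order is a stand-in model) -- that last part is what feels like "more than serialization" to me:

    from rest_framework import serializers

    class OrderSerializer(serializers.Serializer):
        sku = serializers.CharField()
        quantity = serializers.IntegerField(min_value=1)

        def create(self, validated_data):
            # Not just shaping JSON: the serializer also knows how to persist the object.
            return Order.objects.create(**validated_data)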


Metaflow looks like a superhero solution for ML/DL. Hope they open source it one day


>The ability to drop into a bpython shell and improvise has saved the day more than once.

what do you really do here - connect to the flask instance and route requests manually ?


Our python code does not handle user requests directly. Our team uses python to control the control plane of traffic — we flip dns records, control cross-region proxying, re-steer cdn-based reverse proxying and scaling of the hundreds of micro services that power the Netflix experience. We’ve improvised various custom traffic distribution patterns, operated outside of our normal traffic-shifting workflows, and written/run quick scripts to modify scaling fleetwide.


hi - so im actually curious about the infrastructure that allows you to connect bpython to it. Is it a rq process that you connect to, etc ?

I would love to setup an architecture like this that lets me connect to stuff through a CLI. Also, I'm assuming you are on kubernetes (or something). How does this bpython business work through all those layers ?


We ssh into a box in production, run bpython and import the libraries ( some environmental stuff, boto3, our own code, etc). At that point, there isn’t much difference from a bpython session and the regular application code... Ssh’ing into a production box does involve connecting through a bastion, and knowing which box you want to connect to, but that isn’t too hard with spinnaker.
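
So a session might look roughly like this (completely hypothetical commands, just to illustrate the "import your libraries and poke at things" workflow, not our actual tooling):

    # inside bpython on a production box
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")
    groups = asg.describe_auto_scaling_groups()["AutoScalingGroups"]
    for g in groups:
        print(g["AutoScalingGroupName"], g["DesiredCapacity"])

    # if needed, nudge a fleet's capacity by hand
    asg.set_desired_capacity(AutoScalingGroupName="some-service-v042", DesiredCapacity=30)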


pretty cool. We have struggled with doing this in Flask, because the whole codebase starts getting peppered with flask_context everywhere. So importing some libraries and code starts spawning flask.

Not sure if you use different frameworks that dont have this issue.


Also who uses anything but ptpython!


Wasn't Netflix a Node.js shop two years ago when Node was popular, and now when Python is the most popular they are a Python shop? :P (sarcasm) It's good to use many languages as it favors a micro-service architecture.


At a company the size of netflix, I would imagine they're using all sorts of languages. I have a friend that works there and he primarily codes in c and/or c++.


Python at Netflix 2013 edition: https://medium.com/netflix-techblog/python-at-netflix-86b602...

Though it is outside my area of expertise, Node usage continues to increase as well. Different use-cases.


Is Python the most popular language now? I'm not trying to argue, I'm trying to see how out-of-the-loop I am nowadays.


I've been resisting Python for years, but it just keeps getting more popular (and support and more/better libraries come along with that), so I might just stop fighting using it for local scripts and small GUI projects.


Why would you resist it?


It would be nice if the language was typed. Easier for overall maintenance, and refactoring for big projects.


Type hinting is a standard feature since Python 3.6.

https://docs.python.org/3/library/typing.html

The thing about python, as overall language philosophy, is convention over enforcement.

If you want your team to have a particular style and use certain features, then you should agree to do it and then use automated tools (like pep8, flake) to check that code complies with those rules, but it is up to you how you want to write the code.


Python 3 is typed. Not strongly typed but the point remains. Python is used in a lot of domains and I feel the strides the language has made over the past decade have made it very competitive and a worthy addition to any programmer's toolbox.


Python is strongly typed. It is not statically typed.


There is a way: http://mypy-lang.org/

I've used Python a fair bit for a variety of semi-permanent or long-lived scripts. I would have liked to have types as well; I only discovered mypy after those projects.
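
To give a flavour of what it catches, a made-up example:

    from typing import List

    def label(items: List[str]) -> str:
        return ", ".join(items)

    label("abc")  # runs "fine" but silently joins characters;
                  # mypy flags the argument as an incompatible type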


It has a very nice static typing add-on in mypy. Highly recommended for a couple of years now. It's technically optional, but your CI process should take care of that.


You should look into type hinting and MyPy


I don't really think that it's a good idea for GUIs.


PyQt is one of the better cross-platform GUI libraries for higher-level languages. Obviously all the issues that apply to cross-platform GUI libraries still apply, though.
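
For what it's worth, a minimal PyQt5 window needs very little ceremony (assuming PyQt5 is installed):

    import sys
    from PyQt5.QtWidgets import QApplication, QLabel

    app = QApplication(sys.argv)
    label = QLabel("Hello from PyQt")
    label.show()
    sys.exit(app.exec_())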


It has gained a lot of popularity and is used across disciplines pretty heavily. Apart from the typical CS uses, it's used a lot by scientists and engineers because it's pretty easy to pick up, and the available resources have snowballed to the point that someone has probably already solved your problem, so all you have to do is import antigravity [0].

[0]: https://xkcd.com/353/


Python has seen an increase because most ML is done in Python and ML is a popular topic right now.

"most popular" is hard to measure or even define.



I think it's the popular learner language (UK schools seem to all use it now), which would probably correlate with higher search frequency.


Could be it.

Another measure could be looking at commits on github, but that also has its own biases.


Depends on how you measure most popular. I think I saw a post here saying that Python now gets more new questions on SO than JS, which might be a good enough proxy?


I suppose Python is so popular in the ML and data-science spaces now that it makes some sense. I don't know a lot about those spaces, and the little bit I've played with neural networks I've done with Clojure because I'm hipster.

Amazing how much of a pure-functional bubble I live in now. I should probably branch out more.


The university I went to has now switched from Java to Python for introductory computer science, so this type of change may influence the increase in popularity as well.

I remember being a bit baffled that they used Java and not Python as an introductory language. Python is a very obvious choice for beginners.

Lol, I remember the first lesson.

    class HelloWorld {
      public static void main(String[] args) {
        System.out.println("Hello, world");
      }
    }
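
Compared to the Python equivalent:

    print("Hello, world")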


That was the first piece of code I learned, too. It's a small miracle I somehow still managed to fall in love with programming that semester.


My first language was C++ (which I hate now), then I learned Haskell, and then I went to college and learned Java, and I remember honestly feeling that the "hello world" that they teach must be somewhat of an "academic version" of things; there's no way anyone would write code that verbose that accomplishes that little.

Now I have a job where I do Java about 30-40% of the time, and I've become horrified to learn that the school version of Java is the easy version. Having to create three different files to set up a database connection never ceases to depress me.


A lot of universities are teaching using python now.


It's good to use many languages as it favors a micro-service architecture.

I'd argue the converse: one advantage of a micro-service architecture is that it lets you use the best language for any particular task (where best could be "what your team is most comfortable with" or "has the most extensive library support" or something else).


I'd say a "shop" implies less than 150 employees. So there are many "shops" inside Netflix.


Now to get the Apple TV app to use more than the bottom third of the screen for its letterbox-sized navigation, by shrinking the massive "thumbnail"/trailer at the top down to a third of the screen, so I can actually find something good to watch...


I'd like to contribute to the Netflix code base by offering what is obviously a missing but much-needed piece of code:

def enableAutoPlay( flag ): annoyingAutoPlay = flag return


I don't think that's valid syntax, so it likely won't make it in their repo


Drats! We were so close!


Just remove the "return"


Still won't work, you need to declare that variable as global, otherwise you just create a local variable and set it to false.


Netflix does have people from MySpace, which could explain the annoying autoplay feature, but the reality is Netflix has a mature A/B testing process. If autoplay is permanently on, then it's because A/B testing wills it so. ¯\_(ツ)_/¯


Maybe they noticed people watching more content with autoplay on. Wait...


TVs should also have this feature. When you switch to a new channel, it should be displayed in a paused state and you should press a button on the remote to start playing.

Auto-play doesn't make sense anywhere: streaming apps, YouTube, or TVs. Only crazy people expect video to auto-play in a video app.


That would make it super annoying for your average channel surfer.


Afaik you can disable autoplay in the options?


At least with the web UI, you can mute the auto-playing video and subsequent auto-playing videos will remain muted.


You can disable automatic playback of the next video but can't disable autoplay previews.


At least as an option hidden somewhere.


"You've read 11 stories this months, lets make things official".

Medium is trying to look like NY Times, minus any kind of effort to actually write the stories.


Why can't I read their articles through Pocket?


Reason #8,201 not to model your org after FAANG: writing custom software just to operate your site is like building your own tools to till your farm. I suppose if you hire world-class toolmakers you'll have some very good tools, but that's not quite the point of the farm, is it?


If you hire world-class toolmakers to make the best tools for your farm, then your farm has a strategic advantage.

Really good tools can make a big difference.

You really do need world-class, though - the trade-off of going custom means your tools really do need to be a lot better than the standard ones.


Most of these use cases would be better served with a type safe language.


You think you're being downvoted because people like dynamic typing very much.

You're being downvoted because:

A) You left a comment with a strong opinion without backing it up, while dismissing other people's experience.

B) Python has in fact been a type safe language for a while now - https://docs.python.org/3/library/typing.html - you have pluggable types, à la Bracha.


Python's type annotation mechanism is somewhere between an afterthought and a joke. Nobody uses it and nothing is built with it in mind; for code that interacts with third-party libraries, successfully discovering and importing the class names you need for annotations can be anywhere from needlessly complicated to utterly impossible. Making matters worse, you need even more extra libraries for the type annotations to actually do anything.


I beg to differ, I've seen it adopted in quite a few places by large companies. I've rarely seen it used in startups though. My evidence is anecdotal (so is yours though).

I definitely agree that it's not seeing nearly as much adoption as let's say... TypeScript in the JavaScript ecosystem though.

Facebook even wrote a package to generate types automatically for existing code https://github.com/Instagram/MonkeyType .


Re: startups, it's more useful in a large, mature code-base than in smaller projects still in the design-churn phase.


FWIW, we use mypy in Cirq (https://github.com/quantumlib/Cirq) and we haven't reached version 1.0 yet.


It's important to introduce it while it's still feasible to do so. Otherwise the architecture of the code will get in the way: defining classes deep within module hierarchies for objects that get returned upward is painful.


I’m speaking from the personal experience of using it myself. I don’t doubt that some companies have large internal Python codebases with extensive type annotations. If you’re at one of those companies, maybe the experience of using those annotations rises from third-rate to second-rate. It’s never going to be comparable to a statically typed language, or maybe even a language designed from the ground up with optional static typing in mind.

And that’s fine! Python is probably the best language in its niche. Type annotations just don’t do enough to break out of that niche. Annotated Python with mypy is a terrible statically-typed language. Python generally is an excellent dynamically-typed language.


A pessimistic view; Python has always been strongly typed. Annotation is relatively new, so it will take a while to spread. But there is utility today for big projects, with tools like PyCharm, for example.


> Python has in fact been a type safe language for a while now

I like the idea of type annotations, but they do not make it a type safe language, because the types are not checked; they are more like comments.
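
For example, this runs without a peep from the interpreter:

    def stream_quality(bitrate: int) -> str:
        return f"{bitrate} kbps"

    stream_quality("not a number")  # wrong type, no runtime error;
                                    # only an external checker like mypy objects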

Python will remain a dynamically typed language, and the authors have no desire to ever make type hints mandatory, even by convention.[1]

[1]: https://www.python.org/dev/peps/pep-0484/#non-goals


This is only true if you don't use a type checker. If you do use a type checker...types are checked.

It's still not really a sound type system, though. Many safe languages have some kind of escape hatch for the type system, but the escape hatches are very close at hand in Python's typing and you see them used several orders of magnitude more often.
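
The escape hatches in question, for anyone who hasn't met them:

    from typing import Any, cast

    def loose(x: Any) -> int:   # Any silences the checker entirely
        return x

    value = cast(int, "definitely not an int")  # cast() is a no-op at runtime

    # per-line opt-out:
    result: int = "oops"  # type: ignore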


-Wall -Werror is also optional in GCC...

Not that I like it, but just to point out that a lot of languages require a properly set-up configuration and CI infrastructure to make them safe and useful. mypy should be considered the equivalent of -Wall -Werror, and anyone who doesn't run it is frankly a bloody idiot.


Your entire reason for calling people who don't do what you do "bloody idiots" stems from your opinion that static typing is superior.

It isn't; you're just contributing to a flame war that has been going on since computers have been a thing.


> Python has in fact been a type safe language for a while now

Well it has a typing mechanism, which is better than nothing, but has the mechanism been shown to be safe? C has types too, but I'm not sure that it would be described as type safe.


C is weakly typed. Python is strongly typed. That's orthogonal to 'having types'.


There is no agreed-upon definition of weakly and strongly typed; It is a nonsense word. Any Haskell programmer would laugh at someone who calls Python strongly typed.


> There is no agreed-upon definition of weakly and strongly typed; It is a nonsense word.

Sure there is. Strong typing means a language doesn't permit implicit type coercions.

> Any Haskell programmer would laugh at someone who calls Python strongly typed.

I'm sure lots of programmers are not particularly educated on the topic; it doesn't mean the word is 'nonsense'.
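
Concretely, by that definition:

    >>> "4" + 2
    Traceback (most recent call last):
      ...
    TypeError: can only concatenate str (not "int") to str

Whereas weakly typed languages coerce instead: "4" + 2 is "42" in JavaScript, and '4' + 2 is 54 in C (the char is silently promoted to its integer code).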


Showing a typing mechanism to be safe is a very hard thing to do. There are a few methodologies I'm familiar with. What most people think about - formal proof - doesn't work for semi-obvious reasons (as you can easily reduce "is this program safe" to the halting problem).

The research I've seen is basically:

- Take a class room of independently selected subjects (read: CS undergrad students).

- Give them the same coding problem in multiple languages or in the same language with or without compile time type checking.

- Measure the number of defects said programs have and hopefully draw a conclusion about the safety of the language.

All the research I've read on the topic has been pretty poor (though maybe I'm just bad at finding it).

Indeed there is no such research for typed Python (although the pluggable-types research in general is great), but there is also no such research (that I've found) for Idris, ReasonML, Swift, Haskell, Rust, or any of the usual "suspects" for "good types".

Please enlighten me :]


I think you meant dynamic typing and not dynamic programming.


Yes thank you, good catch! Edited, I had a long day of dynamic programming (ironically in Python) and my brain is numb.


Yes, they are two very different things.


Yeah, it is a bit of a strange argument to use "type safe"; the better wording would probably have been "a statically typed language", which is more in line with what is getting more and more popular at the moment.


This isn't a very useful comment in this form. What aspects of these specific use cases would be better served by a type-safe language? Why is type safety the most important attribute of language choice in this scenario?


I read about a lot of data analysis, prototyping, configuration, monitoring, and deployment, for which Python is very, very well suited. So I don't follow how you got there.


I'm going to start a drinking game every time someone says that.


You'll want to get on a liver donor list now.


Will you start the game every time someone says that, or will your game be to drink every time someone says that? Honestly curious what your methodology is.


Drink every time someone says that.

See you at rehab!


It's been going on for sixty years (something the defenders of both paradigms often seem to ignore) and will never end, you'll die.


Question: if they would be better served, don't you think Netflix would go that route?

"Better" is contextually dependent. There's no silver bullet to anything we do in tech.


> don't you think Netflix would go that route?

Not necessarily. Python has a few huge advantages that aren't (directly) related to the quality or utility of the language. For example, hiring and training people to use Python is going to be easier and cheaper than hiring and training people to use C++. Additionally, as mentioned in the article, Python has a massive and healthy ecosystem of high quality packages.

And then there's the cost of porting an existing codebase to a new language...


Netflix is famous for paying top salaries in the industry; you will take a pay cut leaving them. Programmer cost is absolutely not a reason for anything done there.

https://www.salaryproject.com/salaries/company/netflix/senio...


Then one could argue Netflix is using the best tool for their purpose, so a type safe language wouldn't be 'better'.


You're supporting the parent's point that it's contextually dependent.


You may be the exception, but most people I've met who make this claim have no idea what they are talking about.

How would these cases be better served by a type safe language? What is type safety, in your opinion?


[flagged]


I didn't downvote the parent, but I think that doing so is justified because the comment doesn't add much to the conversation, whether you agree with it or not. To be a substantive comment, it should explain _why_ typed languages would be a better choice for these particular use cases.


At this point it should be common sense; there are arguments about this in practically every thread that mentions Python or JS.

As much as I hate dynamic typing, there are use cases where I don't care about it much. What makes Python truly an awful experience in any case is the completely asinine dependency management.


Haha, if you think dependency management is bad in Python, you should take a look at Go :O



