Questions (patrickcollison.com)
1106 points by tosh 17 days ago | 470 comments



> Why are there so many successful startups in Stockholm?

Several reasons. First, you need to recognize that any Sweden-based startup will, once it becomes known internationally, have a Stockholm-based office. So it's not really about a city of 1 million inhabitants; the true base is a country of 10 million. As an example, I believe Spotify opened their original offices in both Stockholm and Göteborg more or less simultaneously.

With that said, a commonly stated reason for why Sweden in general has such a high prevalence of tech startups comes from a bunch of fortuitous decisions in the 90s and 00s. In 1998 Sweden's government started a program that allowed employers to sell their employees computers under a tax-free scheme (the so-called Hem-PC-reformen https://sv.wikipedia.org/wiki/Hem-PC-reformen). This was extremely popular, and led almost every Swedish home to get an often extremely overpowered personal computer. Thus, practically everyone who was a kid in 1998-2006 (the rebate was cancelled in 2006) grew up with a computer. This gave Sweden a huge advantage over other countries in the early Internet revolution.

Sweden has also invested heavily in building a fiber network; you have access to gigabit Internet even in some extremely rural areas.

Another thing is that Sweden doesn't have the tradition of dubbing movies. That means kids are exposed to English from an early age. This leads to Swedish tech companies not being afraid of hiring talent globally and generally using English as their business language.

Finally, out of the 5 examples posted, one is Mojang, which is clearly an outlier. I'm not saying what Notch accomplished wasn't extremely impressive, but it was essentially a one-man operation, and probably shouldn't be held up as an example of a trend.


Good observations! You forgot one major thing. This goes for all of Finland, Sweden, Norway and Denmark. The winter is horribly dark and boring (unless you're super rich). Therefore most people turn inwards, staying indoors, thinking deeply about problems, spending endless afternoons and nights on things. Be it software development, game development, car tuning, car engine work, engineering, knitting or just reading loads of books.

I found this to be almost impossible to achieve when I moved from Norway to Australia. There I was outside hanging out with friends or just doing stuff on the beach or whatever. The deep focus was harder to achieve. Quality of life was insanely better there, yet somehow I missed the ability to sit down and be productive in some narrow topic.


I believe I read somewhere that this is also the reason why so many Russians (and people born in the USSR) are great mathematicians and chess players. Math and chess are both indoor activities and also absolutely apolitical, which is a huge plus in an authoritarian regime.


>> Math and chess are both indoor activities and also absolutely apolitical, which is a huge plus in an authoritarian regime.

The history of chess is definitely very political, especially inside the Soviet Union. It was seen as a proxy for intelligence, and if Soviet players were better than Americans, it was their way of demonstrating superiority in yet another domain.


True, but ultra-authoritarian regimes will politicize almost everything imaginable.


I read somewhere that it was one of the only ways to distinguish themselves and to travel.

The other major thing (and I would argue more important than the others) is Sweden's wealth and social safety net.

Wealth is one of the most important factors for someone to found a startup. You aren't going out and starting a company if you have to work 80 plus hours a week to live.

Just look at the clueless opinions of so many tech founders about how "you shouldn't pay yourself a salary at first after you take investment money". Oh really? So we should just buy groceries and pay rent from the trust fund mommy and daddy gave us?


I agree with your first 2 paragraphs, but the third is overly simplistic (or potentially just not understanding the trade-off at play).

The scenario is that (roughly) 90% of the hypergrowth-style startups which raise a first round will fail to achieve enough momentum to successfully raise a second. And due to the business model choices they've committed to, failing to raise at that point is equivalent to going out of business. A first raise is often too small to be able to do everything they'd like, so they decide that it's in their best interest (long term) to forgo a salary where possible, in order to buy a few extra months of progress before they're forced to start shopping for investment again.

Of course, folks give advice based on their own experience, but it doesn't always generalise to folks in different situations, which is potentially why that would seem like silly advice in (what I assume is) your scenario.

(minor edits toward the end for clarification)


You are completely missing his third point. For people without wealth (or a social safety net which provides the resources you need to survive), working for free isn't an option. You have to eat and have a roof over your head, which means taking a salary if you are working full-time on your startup. It's not a strategic decision.


I must have explained it poorly. The strategic decision happens earlier, when you decide which idea to commit to. Certain ideas are both a) incapable of generating early profits and b) dependent on reaching uncertain milestones to unlock the next stage of crucial funding.

So if you don't have the personal finances to deal with that situation, then you make a different decision further upstream, in the past, by choosing to work on an idea which has either easier funding targets (allowing an early salary), or which can generate early profits (allowing an early salary), or which has lower development requirements (allowing you to work on it alongside another paying job).

The decision is about idea selection, where I agree with you: "If you need the money, then pick an idea which allows you to pay yourself quickly." But if you've chosen to play a different game (typical hypergrowth VC stuff), and you want to maximise your odds of winning at that particular game, then it's generally ideal to buy yourself more months of business runway instead of more months of personal runway.

Perhaps still not clear, and I know it's an emotionally loaded topic, but hopefully that makes sense.


I think you make my point, which is that unless you have some wealth already, family wealth or otherwise, your options for a startup are much more limited than for those who do have money.

The point is that you shouldn't take a regular market salary. Software people can live on way less than what the market currently offers, so if you pay yourself the competitive rate you're burning a lot of your runway pointlessly.


> Good observations! You forgot one major thing. This goes for all of Finland, Sweden, Norway and Denmark. The winter is horribly dark and boring (unless you're super rich). Therefore most people turn inwards, staying indoors, thinking deeply about problems, spending endless afternoons and nights on things. Be it software development, game development, car tuning, car engine work, engineering, knitting or just reading loads of books.

Wonder if that's also a factor in the prevalence of tech/programmers/hackers in Russia. I don't think there's a startup scene there, but there definitely seem to be more technology experts coming from the area in general.


As an anecdote, a lot of programmers and heavy gamers I knew in high school lived on remote islands and took a water taxi over to ours for school. They couldn't do after school activities without sleeping over at someone's place. Their islands had even less to do on them than ours.

Soft forms of isolation certainly seem to lead to solitary activities.


One of the vanishingly few things the Soviets did right was to emphasize STEM education. This seems to have stuck in Russia. And I’ve heard the same was true across the eastern bloc.


Hmmm interesting observation about the long dark winters. Edinburgh similarly has a disproportionately successful tech industry (it's the same latitude as Moscow) and Estonia has also done very well in tech (for many other reasons but that might be a contributing factor).


Winter is not horribly dark and boring in Scandinavia unless you are super rich. It’s equally dark and boring for everyone. And virtually everybody in Scandinavia can afford to travel south to sunnier places for a few weeks, if that’s what they want.

I’d say proficiency in English and quite wealthy populations are what matter.


That's not true. The super rich are taking luxury vacations and spa trips.


Finnish travel agencies offer a week-long trip to the Canary Islands for €500; you don't need to be rich to save for that once a year. Most cities have spas too.


I don't think it is that relevant. You can be happy in Sweden and miserable in Malta. The funny thing is that people tend to say the same thing about SV, but because of the nice weather. Not to say that dark afternoons can't be special, but it is generally everything else (like having something relevant to work on) that makes them so. And that doesn't necessarily lose its value if it is sunny instead.


This is a plausible story, but that doesn't mean it's true. Anecdotally I felt more productive living in a warmer place than when living in Scandinavia. I think it completely depends on your situation and an actual study is needed to prove it one way or the other.

I also recall Jared Diamond debunking this as a theory for why "the west" got ahead in his book Guns, Germs, and Steel.


The same is basically true of Seattle. Well, it's not so dark (the sun is out til 4:30) and the winters are quite mild, but it's wet enough to keep everyone focused.


the most boring weather is reflected in the most boring software


Seattle's weather is hardly boring. And there is plenty of software going on here. Are Facebook, Google, Apple, Unreal, Unity, and Oculus really that boring?


it's always raining in windows?


But then if that's true, you wouldn't get lots of devs in the Bay Area, California. Lots of great weather here.


How many devs have you met that grew up in the Bay Area?


I know a handful. Is it really surprising?


Also a hypothesized reason for why Canadians are unreasonably well represented in esports.


Don't forget Iceland, with its "Christmas Book Flood" (https://www.npr.org/2012/12/25/167537939/literary-iceland-re...).

(disclaimer: I am an American and don't know what I'm talking about. I have visited Norway but it was summer.)


On the other hand, here on the other side of the Atlantic ... Silicon Valley is not only at a temperate latitude but also downwind (jetstreamwise) of an ocean; placing it in the middle of one of our continent's few bastions of reliably non-shitty weather. And there isn't much tech-related stuff happening on the north coast of Alaska or the northern regions of Canada, either.

So I daresay there's more to this than just the Arctic Circle.


Then how do you explain Silicon Valley? Outdoor life is great here. You have a good point, but it may not be as important as you think.


How many engineers have you met that grew up in Silicon Valley?


Because when Silicon Valley established its dominance, access to computing resources, digital networks, tech talent, and venture capital were by far the crucial factors. All those things were much rarer back then, and startups more costly, especially to get off the ground. Given today's low costs and ubiquitous computing and internet access (and the trends in this direction even 20 years ago), it makes sense that other, softer factors can now come into play.


Israel doesn't have long dark winters and yet they're the pioneers of innovation and start-up culture.


I'm an Israeli and it comes from a few factors:

1. Israelis basically have no choice. If you want to make a good income, working hard in a technology startup (we have very few big tech companies) is one of the very few good options.

2. The army: at a very young age, a decent percentage of Israelis who join the army lead in high-value, high-risk situations. That creates a sense of responsibility and strong ambition at a relatively early age.

The army is also a place where a lot of new tech is being developed, so people get exposure, and often in roles of major responsibility.

3. The Jewish people have lived among other peoples, in very hostile conditions, often forced to do banking (loans) and commerce at times when most people did agriculture. That forces a certain entrepreneurial spirit, and possibly higher intelligence (also witnessed by the higher rate of genetic illnesses in Ashkenazi Jews). That, plus a culture that always focused on learning (religiously).


> 2. The army: at a very young age, a decent percentage of Israelis who join the army lead in high-value, high-risk situations. That creates a sense of responsibility and strong ambition at a relatively early age.

Summed up in one word: discipline. The glamorous myth of startups is just that, a myth. Some outliers go from zero to hero overnight, but most require slogging away day in and day out. Motivation can only last so long, and after it's gone all you have left is discipline. Military writings both current and historic routinely speak of discipline being the single most important aspect of achieving a goal.


>and possibly higher intelligence

Interesting to think what would happen if this were said about any other country.


Obviously this discussion is a minefield. But there is some merit in asserting some relations there. In The Netherlands, Iranian and Afghani immigrants are often of more privileged descent than Moroccan and Turkish immigrants, because of the reasons for their migration. Moroccans and Turks generally migrated for manual labor, whereas Iranians and Afghans usually fled religious oppression.

Of course this does not mean Dutch Iranians are smarter than Dutch Moroccans, but it could mean there might be more smart/entrepreneurial Iranians than Moroccans in The Netherlands.

These statistics might die out very quickly though; for example, Turks are often already second or third generation, so many have been born into the privilege of The Netherlands.

Perhaps similarly, the Second World War cost us a huge percentage of Jewish people; being privileged meant having a larger chance of survival through moving to less dangerous countries (like the U.S.).

It doesn't mean Jewish people are smarter; it might mean, though, that you could find a few more smart Jews in their population. Of course the Second World War was a long time ago, so the effect might already be gone.


The problem is that people (Jews and non-Jews, Israelis and non-Israelis) really believe and expect it.

I'm probably of average intelligence, but I've had it expressed to me by people who hardly know me that they expect that I'm brilliant or something because I'm an Ashkenazi Jew. It's one of those stereotypes that get attached to any race, such as Russians drinking vodka or Argentinians eating meat. I'm sure that there are sober Russians, vegetarians in Argentina, and there's me!


It's not about nationality: Ashkenazi Jews in general -- Israeli or not -- tend to be smarter. ~20% of Nobel prize winners are Jews compared to less than 1% of the global population. So either there's a Jewish conspiracy (which some people believe...) or Jews tend to be smarter.

Some people would say that this is cultural. I doubt it -- I think it's genetic. Intelligence is a physical attribute determined by genes, just like every other physical attribute. It's really no different from the observation that Kenyans and Ethiopians win most marathons.

In general, we should expect different traits to be exhibited at different frequencies by different populations that were reproductively isolated in the past. This doesn't mean prejudice is okay. Not all Jews are smart, and not all Kenyans are going to win marathons. We can acknowledge these correlations without behaving in a prejudiced way toward individuals.

Most people would rather not talk about this. And that's certainly my rule of thumb for in-person conversations. But, hey, we're on the internet.


> So either there's a Jewish conspiracy (which some people believe...) or Jews tend to be smarter.

You're assuming that intelligence is the determining factor in winning a Nobel prize, which seems spurious at best. Work ethic and training, particularly early in life, seem to me like they'd be better predictors. I'd posit that Jewish culture is better at nurturing intelligence before I'd conclude that there's some ethnic superiority going on.


You used the term "ethnic superiority". That's you making a value judgement about intelligence. It has absolutely nothing to do with what I wrote. Personally, I don't think intelligence is a good proxy for "value of a human being".

While there isn't a bullet-proof case, there is quite a bit of evidence for an ethnicity-intelligence correlation.


You left off a key word when you took the term "ethnic superiority" out of context. I prefaced it with the word "some" to indicate that I was considering only a single dimension, intelligence. I said nothing about the "value of a human being," so don't put words in my mouth.

The only value judgement I was making is that higher intelligence is superior to lower intelligence and that Nobel prizes are an extremely poor proxy for intelligence.


> It's really no different from the observation that Kenyans and Ethiopians win most marathons.

But don't most Kenyan and Ethiopian marathon winners also grow up in Kenya or Ethiopia? I.e., how do we discount, at the very least, environmental factors (including, for instance, diet), even if you doubt cultural ones?


> But don't most Kenyan and Ethiopian marathon winners also grow up in Kenya or Ethiopia?

No. There is a hugely disproportionate number of Americans and Britons of Somali, Kenyan, and Ethiopian origin who excel in world-class distance running (Abdirahman, Farah, and many more, including a big new wave of Somali-American runners after them).


Interesting. That still wouldn't completely rule out environmental/cultural factors though, including diet and so on (e.g. maybe eating teff is good for long-distance runners).


I guess you've heard of The Bell Curve and the related controversy (check Sam Harris podcast with Charles Murray)?

FWIW I don't think you're necessarily wrong about intelligence, but from what I remember The Gene (Siddhartha Mukherjee) does have a few passages contradicting this theory, partly by questioning the validity of IQ tests.

Could it also not be that Jews are just more motivated to get into STEM fields and perform well, for cultural reasons or otherwise?

Similarly, Malcolm Gladwell mentions a theory about why "Asians are smarter" which according to him may be related to hard and smart work leading to bigger rice harvests, and other factors (see here https://www.cs.unh.edu/~sbhatia/outliers/outliers.pdf).

While I have zero interest in the cultural implications and all the moral panics surrounding these issues, I don't think intelligence being mostly down to genetics is a proven fact. I find it gets extremely complicated very quickly.

Genes are hugely influenced by their environmental (cultural) triggers and those should not be ignored.


Jews were forced into banking because they were forbidden to do agriculture or keep cattle back in Europe. Banking was considered dirty, so Christians didn't want to do it. Jews did what they were allowed to do - banking, philosophy, medicine, science - and they became masters of the craft.


I thought it was also about usury not being allowed (for Christians, Jews, or Muslims) intrareligiously but only extrareligiously, i.e. Jews could loan money to Christians, but Christians couldn't loan money to Christians.


Yes. That's the usual story indeed. I'm not sure where the concept that banking was 'dirty' came from - I suspect that's mixing up banking with lending. Lending at interest was subject to religious prohibition for all but Jews, who interpreted the Torah in a way that forbade lending only to other Jews. Thus Jewish people were not "forced" into banking because they were banned from agriculture (lol, how would that even work?) but rather became bankers by default because it was highly profitable and others wouldn't do it for religious reasons.

This is the main reason for the ancient historical stereotypes linking Judaism and money/wealth/power/etc. And why names like Goldstein (gold stone) are considered Jewish names.


>> and possibly higher intelligence

I've heard that a lot of times from mostly uneducated people. All nations have equal levels of intelligence. The differences might be in access to education, environment, common wealth, and social inequality.


This seems like a dogmatic response rather than a reasoned one. All other aspects of human beings vary by region to some degree; why would intellect be an exception?


Source?

Edit: I would argue to the contrary, that there is an appreciable difference in average IQ between some countries, which expresses itself, among other things, in GDP and quality of life. Perhaps the nature of this difference is based on things like nutrition, environment quality, pre-natal screening and care, so improvements in all these things will decrease the gap. If it happens that after all of that there is still a certain gap, who cares? As long as the citizens live in peace and relative prosperity.


> All nations have equal level of intelligence

How do you know this?


By far the biggest factor is the over $130B of aid the U.S. has provided to Israel, which directly nurtured the high technology industry.[1]

This is also true for the U.S. itself; Silicon Valley is largely a creation of Pentagon investment (see: DARPA).

[1] https://www.aljazeera.com/indepth/interactive/2018/03/unders...


As someone who has worked with teams from Israel, I'd like to hear more about this if you have good sources.

In my experience, it's been the opposite. They like to play office politics, focus on keeping work off their plate, and they're not really team players.

I may just have a bad sample of Israeli tech workers, though.


Start-up Nation: The Story of Israel's Economic Miracle makes a good case. [0]

[0] https://en.wikipedia.org/wiki/Start-up_Nation


I have also experienced this - in addition, rampant nationalism is an issue when dealing with Israeli companies, in my experience (5 different customers, similar problems..)

Where I think Israeli tech startups get most of their mojo is desperation. Their culture involves so much struggle and effort... and I think the reaction to it at a personal level results in the laziness, non-team-playing, and political problems.


I didn't comment on how good or bad they are to work with, but on the assumption that weather conditions can affect productivity.


Just because something is a factor doesn't mean there can't be other factors that lead to a similar outcome.


Well, their summers are kinda long and hot.


One counterargument against that indoor scenario would be The Valley™. Unless you consider fog.


> Quality of life was insanely better there

Just because of sun? Did you try UV?


> Quality of life was insanely better there,

> yet somehow I missed the possibility to sit down and be productive in some narrow topic.

Do you mean that you preferred the life you led in Australia to the one in Norway? If not, how do you mean that quality of life was insanely better?


>> I found this to be almost impossible to achieve when I moved from Norway to Australia.

> Do you mean that you preferred the life you led in Australia to the one in Denmark?

Calling a Norwegian a Dane is about as popular as saying a Canadian is from the USA or calling an American British, I guess ;-)


Ah, a simple mistake :)


That's what I guessed :-)

(Seems I've offended someone else, though I have no idea why.)


Why wouldn't they prefer it?


Nothing beats sitting in front of a computer in the dark.


I can't stand it. Feels like staring at a light bulb. The only thing that makes it passable for me is Night Mode in F.lux on OSX. Nothing else works. I keep a Mid-2012 MacBook Pro around just so I can have a computer I can actually use in a dark room.

Even then it's not ideal. The lowest brightness setting still isn't low enough.

In my twenties I could happily destroy my circadian rhythm and sit in front of a screen til 4-5 in the morning. At 35 that shit's gotten real old.


I never implied they didn't prefer it, so I wonder why you seem to think so.


Well, he literally called it "insanely better", and you felt the need to double-check whether they do prefer it.

That's enough implying in my books.

(Besides, if we're to play this game, I never said that _you_ "implied they didn't prefer it". I just asked, "why wouldn't they prefer it").


I previously read someone's take on this in relation to people feeling secure enough to take risks here (not just in Stockholm but Sweden as a whole, I guess). While Notch's success may be an outlier, the process of trying to do your own thing isn't an outlier here. I'm not sure at what point in development Notch left King, but in general there are quite a few people here who just try to go their own way (sometimes completely outside of their previous industry even). It helps when you don't have to live in fear of being completely penniless should you fail, I think.

Purely anecdotally, I am not part of a startup, but do a lot of hobby work in coffee shops around Stockholm. I'm always seeing/hearing people in local cafes discussing some new startup or project they are getting off the ground. I've also seen first hand many people get maybe a year or two of funding and start up their own game studios, for example. Of course not all of them end up being successful, but many people feel secure/comfortable enough to try. I think the volume of these attempts and people trying to do something new also helps drive up the number of "hits" overall, to be added to these kinds of lists.


I think the social safety net is definitely a part of it, yes. But after working for a couple years at different startups in Stockholm I can say that another big factor is that most Swedish founders are either rich themselves, or their parents are, or they are very well connected. They don’t see themselves this way of course, they think they are building everything from scratch, but this does tend to be the case from my experience.


I think it is becoming more common. In the early days it seems like there was more of a mix of rich dropouts and poor enthusiasts. Today, especially with the housing market, it seems like there are a lot more 'fake' companies. It used to be that if you came from a wealthy family and didn't know what to do, you started a public relations, event or media company to pretend that you were doing something. Today that is a "technology startup". Most of these are second- or third-tier companies in the sense of global reach, while much of the success of Swedish startups comes from ending up being first-tier companies. Maybe part of that was that Sweden sort of experienced the dot-com bubble, which meant that there was less room for phonies for a while.


Definitely, having a social safety net that allows for moving on from failures is probably a huge factor.


But that safety net doesn't extend to startup founders, does it? AFAIR, in Norway you need to have worked in a company as a regular employee for at least one or two years before you can claim unemployment benefits. Health insurance is always there, and so are basic social benefits (but the unemployment benefit is the only one you can live on).


I think it can be hard for people to appreciate the differences because the truly different things are those one takes for granted. To not make this too long I will go directly to what I think has made the difference:

A relatively egalitarian society with a lot of trust between people and towards the government has made progress relatively effortless. When people don't feel the need to guard their own position because they might end up getting screwed there often isn't any good reason to be against development.

A relatively large amount of excess time, security and knowledge that enable people to do something else. Startups are ultimately about harnessing excess potential. If all the value is captured by some other industry or the housing market, there won't be much left over for startups.

Various cultural factors enabled by those things. Like being able to be independent (with the help of the government). Not being afraid to leave value on the table. An overall sort of generous, or at least non-petty, society.

If you look at factors like these, you can actually draw a lot of parallels to somewhere like SV compared to the rest of the world. SV is to at least some degree a recreation of academic life at its best. Which is the area of US society that would be most similar to Sweden.


> When people don't feel the need to guard their own position because they might end up getting screwed...

I’m not sure this is too applicable to SV, lol, nor academic life in the US.


It's interesting that quite possibly the most important condition didn't get mentioned.

Sweden's social safety net is among the strongest in the world. If the floor on failure is that you still have access to food, housing and healthcare then quitting a full-time job to start a company becomes much more possible.

More people starting companies, more attempts at big targets, more outlier successes.


maybe the opposite is true too - SV/the US has more innovation because there's absolutely nothing to lose, and you can't expect to eat or get medical care anyway, because US.


Also education and childcare


Also, Sweden was the first European country that was connected to the U.S. Internet, or more specifically to NSFNET [1][2]. This was because the adoption of TCP/IP protocols for wide area networks came earlier in Sweden than in other parts of Europe, for various serendipitous reasons. A funny anecdote is that Switzerland could have beaten Sweden to being connected first, but they were delayed because they had to renumber all their networks at CERN that were already using TCP/IP.

Initially it was just non-commercial Internet for academic institutions, but lots of students were exposed to the technology and the infrastructure early on.

[1]. https://en.wikipedia.org/wiki/History_of_the_Internet_in_Swe...

[2]. https://en.wikipedia.org/wiki/Internet#History


One other reason is that taxation of income gets extremely high (60%) when the salary rises above ~$7000/month. This makes people feel that it is impossible to get rich by working for someone else in Sweden, which in turn pushes productive people into startups.


That could only apply to founders who can sell parts of their company, surely. Employee number 1 would still face the same tax ceiling, and you need employees to build a company. The ratio of founders:employees will still be tiny even in Sweden.


No, since stock is offered to early employees, which is taxed way lower.


Most of the things you mention also apply to The Netherlands, except possibly the tax-free computer scheme (not sure about that one). As far as I can tell we're not really all that noteworthy startup-wise. Assuming that's true, any ideas what might be the differentiating factor?

From what I understand, both Holland and Sweden are culturally similar enough that I can't immediately think of other factors that make 'us' more hostile to startups.


It may just be me, but I have a feeling that Dutch startups have a tendency to focus on the Netherlands first, and either don’t feel the need to go global or just fail to do so.

Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.


> It may just be me, but I have a feeling that Dutch startups have a tendency to focus on the Netherlands first, and either don’t feel the need to go global or just fail to do so.

That leaves the more interesting question: why is The Netherlands different from Sweden in this regard? My impression was that The Netherlands, if anything, is more internationally-focused.

> Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.

Does Sweden pay better, taking into account cost of living? And if so, why?

(not disagreeing, btw, just more questions)


That's an interesting question, especially taking into account that I know of and have worked with many computer scientists from Holland, but not from Sweden. You would think that Holland would have the expertise to form many startups.

Obviously, there is the business side in addition to the computer science side. I don't know much about that with respect to Holland.

But one possibility is that Dutch computer scientists tend to leave Holland to work. That might explain why I know of so many as an American (e.g. I worked with Guido van Rossum, and Werner Vogels advised my master's project).


That is a hard one. The only thing I could come up with off the top of my head is that Sweden is a bit more consensus-oriented, which could be quite good for creating commercial successes. For example, I would say that the Netherlands has a stronger electronic music scene, but the big names are still a bit less commercial than Avicii and Swedish House Mafia. Maybe that is because between the Netherlands and its neighboring countries you have a pretty big market, while for Sweden it is Sweden and then the world.


Interesting. Generally, in conversations I've had before, the consensus-don't-stand-out aspect of Dutch (and perhaps moreso Swedish) culture is considered a bad environment for startups.

I don't think our electronic music scene is less commercial though.


The Netherlands also had a rebate for personal computers, called the "PC privé plan" (Private PC plan), where bosses could give their employees tax-free computers up to a maximum of about 2300 euros at the start, which was cut to 1500 euros near the end of the measure. This lasted from 1997 until 2004.


I remember that in the late 90s/early 2000s Swedes and Scandinavians always had these super-low-ping internet connections. Combine that with a long winter and you have a lot of tech-experienced youngsters who spend a lot of time on projects (in this case gaming). They did (and still do) extremely well in esports in relation to the size of the population.


> As an example, I believe Spotify opened their original offices in both Stockholm and Göteborg more or less simultaneously.

Without disputing the technical correctness, Spotify's Göteborg office was literally a single desk at a shared office for the first three years of its existence.


I think Stockholm does have more startups than the rest of Sweden, though, so there is probably more to it. The article understates the size of Stockholm a bit: as measured by urban area, it’s a city of 2.5 million.


Not only that, but within a one-hour drive are Västerås and Uppsala, big cities on Sweden's scale.

The former hosts ABB and has tons of robotics related research, the latter has one of the major universities. The startup scene is smaller in Västerås than in Uppsala though.

Uppsala produced Klarna and Skype, Västerås produced Pingdom, to name some.


> Another thing is that Sweden doesn't have the tradition of dubbing movies

Here in Belgium, in the Flemish part of the country we don't dub movies, but in the French part we do. The levels of English are massively different; I'm not sure if this is the main reason.


That's interesting about the reform. I was a kid during those years and I recall us buying our computers through my dad's work.


> Is Bloom's "Two Sigma" phenomenon real? ... one-on-one tutoring using mastery learning led to a two sigma(!) improvement in student performance.

When I tutored, I found students often had some misunderstanding, somewhere. So my task was to listen, to find that misunderstanding, so I could correct it. This "teaching" is listening, more than talking. The idea is they are lost, but to know what direction they need, I first must know where they are.

Correcting a misunderstanding without this guidance can be very difficult, and might only happen serendipitously, years later... assuming they continue with study. Which an unidentified misunderstanding can prevent.

Recently, I'm seeing the other side, while self-learning some maths. I can see how much one-on-one tutoring would help clear up misunderstandings. Instead, I'm using the strategy of insisting on starting from the basics, chasing down each detail as much as I can, using online resources, and working out proofs for myself. Each step is a journey in itself...

Luckily, I have enough skill, confidence, motivation and time. By working it out myself, I think I'm also gaining a depth of understanding I could not get from a tutor's guidance.

But it sure would be a lot more efficient!

[ PS I haven't yet read the two pdf's in the question ]


I tend to believe in some of Stephen Krashen's notions of language acquisition. Specifically that there is a difference between learning (being able to remember and repeat something) and acquisition (being able to use it fluently). Also that acquisition comes from comprehension. I also believe that language acquisition is no different from the acquisition of any other skill. Many people don't agree with these ideas, but I'm laying them out as my assumptions before I start :-)

With that in mind, one of the interesting findings in language acquisition studies is that when free reading (reading things for pleasure), it takes 95% comprehension of the text in order to acquire new grammar and vocabulary in context (quite a bit higher than most people imagine -- which is one of the reasons people advance a lot more slowly than they might otherwise).

With that, just like your experience, the key to teaching is to ensure that the student comprehends at least 95% of what you are saying. The only way to ensure this is by constantly testing their comprehension with a two way dialog. Once a very high level of comprehension is reached, and once enough repetition happens to remember the thing, you will acquire the knowledge.

It is incredibly difficult to do this unless you are teaching 1:1. There is a special technique called "circling" that you can use to teach language to a larger number of students, and it worked extremely well for me. I still can't effectively do it for more than about 10 or 15 students, though. Think about a 45-minute class: if I have 15 students, then each student gets 3 minutes of my time. It's not actually surprising that classes of 30 or 40 are basically impossible.

Quick note: I'm no longer teaching, in case it is unclear from the above.


There are other ways of making sure the student understood the concept. Recently I played the game 'The Witness'. The whole game is about learning new puzzle rules, and yet there is not a single dialog within the game or even text explaining those rules.

I am not saying that their technique is the most efficient (e.g., adding hints would undoubtedly increase the efficiency, but also ruin the game experience), just that there are other methods of making sure a student understands a concept. You don't necessarily need the one-on-one conversations. Those conversations are mostly useful to round up incomplete teaching material (again, I am not saying that creating perfect teaching material is easy).


You had a one on one conversation with Jon Blow's creation. This is not an argument against one on one. More like an argument for low-key AI tutors.


While I like your way of thinking, I don't think the argument applies. The game itself doesn't possess any kind of AI and is somewhat static, more like a sudoku book: There are lots of puzzles, but you know it when you have solved one.

The one-on-one tutor idea is that you have a master who sees the mistakes the student makes and gives him an exercise to target precisely the misconception the student might have in his head.

The Witness, on the other hand, doesn't possess such intelligence. Instead, it is a carefully crafted series of puzzles which slowly broaden the possible moves. Most of the time the next puzzle requires you to learn a new part of the rules. Sometimes you had assumed that part anyway, and the puzzles are easy. But sometimes you have to find the misconception in your head and replace it with something correct, which makes the puzzle harder.

So one concept includes an intelligent observer while the other is more like a perfected text book.


> it takes 95% comprehension of the text in order to acquire new grammar and vocabulary

Strongly disagree. I'm learning Chinese via YouTube. I comprehend 15%, but I pick up new patterns and words all the time.


That's interesting. I'm definitely interested to understand what you are doing. Are you watching Chinese videos or Chinese language instruction videos? Are you able to use the language fluently?


I find it hard to maintain motivation and attention if I'm not getting at least ~30%. This applies to both movies and real-life conversations in another language. And, now I think about it, it also applies to English-language materials that require specific technical background, e.g. academic papers.


Any good sources you recommend for learning Chinese on YouTube? I’m just getting started with HelloChinese and Fluent Forever - watching video seems too deep for now.



Not YouTube, but Chinese Pod is really great.


I think one-on-one tutoring is something where the current AI movement could make a difference. Everybody seems to be obsessed with building really cool stuff, but our teaching system is quite obsolete and could get better by adding smart systems.

That said, I must add that I am referring to teaching humans who are 12 years and older (IMHO young kids require physical interaction if you want to avoid psychological conditions).


I agree, generally; however, I believe the low-hanging fruit is in assistive tools for teachers.

The commenter above made a good point about the value of removing barriers to learning as a primary asset of a good teacher. People tend to focus on content knowledge/curriculum as the mark of good teaching, but removing barriers is the real, difficult work. Tools that assist the instructor in understanding their students, their students’ knowledge, and their learning behaviors would be valuable. Don’t focus on content delivery. Focus on making in-class assessment more frequent and trustable. Focus on tools that help an instructor understand thirty students as they might understand five.


I am working to solve this by creating a conversation bot based interface that can make personalized learning viable. If you are interested for updates you can sign up here: https://tinyletter.com/primerlabs/

P.S.: I was supposed to launch a month back, but a lot of rewrites made it difficult. For now, I can say I will be launching soon.


As a teacher -- yes! More of a life coach, someone who rewards you for good decisions, eating well, homework, being kind etc


Tutoring is often the only time that a student will sit down for a dedicated amount of time and study without distractions. As much as I want to believe in my own value as a tutor (and I do think there is some value), I will admit that a fair portion of the benefit is just having someone force the student to study.


> Recently, I'm seeing the other side, while self-learning some maths. I can see how much one-on-one tutoring would help clear up misunderstandings. Instead, I'm using the strategy of insisting on starting from the basics, chasing down each detail as much as I can, using online resources, and working out proofs for myself. Each step is a journey in itself...

I did this for a while and gave up the self-learning aspect and went to university to study math part-time. I have the utmost respect for anyone who has the patience to push through it on their own. Some things I can learn on my own but higher maths I couldn't--at least with any degree of efficiency.


I think listening is also valuable because verbalizing a problem often brings up misconceptions or helps internalize the idea, and thus leads to a solution, maybe with a nudge in the right direction if that's expected in a dialog, without having to go the full way.

I've had that experience sometimes when preparing a question to ask online. Sometimes the answer becomes clear when trying to see it from the listener's perspective, or researching the problem space to phrase the question properly yields unexpected results.


> The Empire State Building was built in 410 days

At least one reason is that we have substantially different safety regulations, since we're no longer accepting of deaths on a project like that. 5 people died on that project. 11 died building the Golden Gate. The original Bay Bridge? 24.

They actually had a rule of thumb at the time: 1 death for every $1M spent on a project[1]. Any metric like that would be absolutely unacceptable today.

[1] - https://www.npr.org/2012/05/27/153778083/75-years-later-buil...


Death rates are probably better than absolute number of deaths for comparison. Here are death rates per 1000 workers of some well known construction projects:

  80    Transcontinental Railroad
  80    Suez Canal
  50    Brooklyn Bridge
  17.46 World Trade Center
   6.4  Sydney Harbor Bridge
   4.47 Hoover Dam
   3.37 San Francisco Bay Bridge
   3.33 Eiffel Tower
   2.67 Titanic
   2.5  Sears Tower
   1.47 Empire State Building
   1.17 Trans-Alaska Pipeline System
   0.75 City Center Las Vegas
   0    Chrysler Building
Topping all of those by a large amount are the Panama Canal and the Burma-Siam railway, which I did not include because I don't have numbers--they are literally off the chart. I mean that literally: on the bar chart I'm getting the numbers from [1], the bars for those are so big they are clipped and the numbers are not visible.

[1] https://www.forconstructionpros.com/blogs/construction-toolb...
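
If you want to sanity-check these per-1000 figures, the conversion is just deaths / workers * 1000. A minimal sketch in Python, assuming a workforce of roughly 3,400 for the Empire State Building and 10,000 for the Golden Gate (both worker counts are illustrative guesses, not numbers from the chart):

  # Convert absolute deaths to deaths per 1000 workers.
  # Worker counts are illustrative assumptions, not figures from the linked chart.
  projects = {
      "Empire State Building": {"deaths": 5, "workers": 3400},    # 5/3400*1000 ~= 1.47
      "Golden Gate Bridge":    {"deaths": 11, "workers": 10000},  # assumed workforce
  }

  for name, p in projects.items():
      rate = p["deaths"] / p["workers"] * 1000
      print(f"{name}: {rate:.2f} deaths per 1000 workers")

With the assumed Empire State Building workforce, this reproduces the 1.47 figure in the table above.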


Are you on mobile? Or something else might be breaking formatting for you. The chart linked does have numbers for the Panama Canal (408.12) and Burma-Siam Railway (385.45).


40% of workers died building the Panama Canal? That's nuts.


The history of the canal itself is pretty interesting. It was an effort started by the French, who lost too many workers to tropical diseases (specifically malaria). Then the Americans took over, and one of the reasons they succeeded was they made sure to take preventative measures to protect workers from malaria.


No, I'm on Firefox 63.0b9 on MacOS.

Looks like I zoomed in (cmd-+) a few times to make the numbers more readable, and went past what the page's CSS could handle.


Vancouver’s “Second Narrows” bridge was built in the late 50s and completed in 3 years. Midway through the project a major accident killed 19 people. It was the result of an engineering error.

In the mid 90s the bridge name was changed. It is now generally referred to as the “Ironworkers memorial bridge”. Anyone crossing that bridge since is constantly reminded of that engineering error.

“Better safe than sorry” is a rising belief.

Once this truism becomes generally accepted, as it generally is, it takes precedence over other considerations. Since “safe” means more than physical safety, practically any human action is subject to exponentially increasing levels of scrutiny. It takes time to come to an agreement that everything is safe. In big projects it takes a lot of time. On bigger projects it might never come.


> “Better safe than sorry” is a rising belief.

I think this is a cowardly and inaccurate belief, and I agree it is what is driving changes across all areas.

It is the institutional flaw of democracies and free markets. People (demand) are capable of acting irrationally and emotionally, leading to a reduction of individual and public good.


"I think this is a cowardly and inaccurate belief"

Throughout history you find that the people who believe this are not the ones whose safety is sacrificed in pursuit of some grand project...


You’re assuming a lot.

I’m assuming your OP thinks “better safe than sorry” is stupid because it doesn’t actually help you make decisions.

I would much rather drive on a bridge built by someone whose motto is “do it right.”

This isn’t the Apollo program. We know how to make safe bridges. It’s not a matter of being cautious and talking to every stakeholder. It’s a matter of hiring actual engineers,* letting them work, believing them, and then giving them the resources to monitor construction properly.

It means sometimes rejecting a lot of bad material and holding the project up for 8 months.

It does not mean “lean safe” and cross your fingers.

* meaning licensed engineers. The word engineer has I guess been made meaningless by thousands of people writing code and calling it “engineering”. It used to mean someone had completed training as an engineer.


I believe you are either arguing in bad faith, or not giving the post you are replying to enough credit.

Better safe than sorry MEANS do it right. It means that human life is more valuable than material wealth.

If you are hiring amateurs to build your bridge, or you are “crossing your fingers”, then you are not being “better safe than sorry” — in the sense that what you are doing cannot be an implementation of “better safe than sorry” that can be reconciled with the broader cultural context in which people use the phrase and discuss it.


> Better safe than sorry MEANS do it right. It means that human life is more valuable than material wealth.

This is not accurate on both accounts.

First, material wealth can be converted into quality of life in multiple different ways. Consuming additional millions to push death rates down by a few percentage points is taking away from other places, ostensibly hurting others.

Second, better safe than sorry does not mean do it right. Mistakes will happen, and acting like nothing will ever go wrong is a fool's errand. Planning for failure is a significant portion of project management and engineering in general. The goal is 0 mistakes, but severe negative overreactions in response to failure can have a net negative impact.


A net negative impact on whom? A net positive impact on whom? How do you quantify this impact?


> The word engineer has I guess been made meaningless by thousands of people writing code and calling it “engineering”.

A friend of mine was just hired as a project engineer at a construction company. He was confused since he has no engineering experience. Turns out, at this company, it means you are training to be a project manager.


“Do it right” is what we want, “better safe than sorry” is what we usually get. “Better safe than sorry” in practice usually ends up looking like a tool for acquiring broad agreement with stakeholders and spreading accountability. I have seen many “better safe than sorry” initiatives that result in complete garbage that loses the purpose, spirit and original intent of its mission.


There are plenty of places, such as western European countries and Japan, that have strong safety regulations and still manage to complete large-scale infrastructure projects quickly.

Another way to look at this is to ask why the United States seems incapable of even maintaining existing projects. For example, look at the current state of the New York City subway.

I suspect slowness in building new infrastructure, and poor maintenance of existing infrastructure, have the same root cause: lack of political will.

American voters don't expect their governments to be good at this kind of thing. European voters would vote politicians out of office if their transit systems got as bad as the NY subway has become. It would be seen as a failure to execute one of the basic duties of government.


>> American voters don't expect their governments to be good at this kind of thing. European voters would vote politicians out of office if their transit systems got as bad as the NY subway has become. It would be seen as a failure to execute one of the basic duties of government.

Americans often vote those people out of office and their replacements are equally useless or worse.


Unfortunately that's often not accurate. The vast majority of the time the incumbent wins. We vote by party here, and if the incumbent is our party's rep we will vote for them regardless of track record, as we assume the other party is worse.

We have created a system where results often don't matter as long as there is the correct capital letter next to your name on the ballot.

Compounding this further is that politicians know they can count on your vote, but they rely on the money of industry and lobbyists to campaign. Thus the very industry that is supposed to be fixing the problem under contract is able to overcharge and take longer than agreed, because our politicians rely on them for funding.

https://www.opensecrets.org/overview/reelect.php


Look up Berlin's BER airport for a European project that's a complete disaster.


Which is mostly due to safety concerns; the fire regulations are not met by BER due to complete chaos in planning the construction.

Nowadays, big projects in the West are far more complex since they have to meet more demands and more stakeholders are involved. In authoritarian countries this is not so much a problem; the new airport in Istanbul was built very fast, but concerns from citizens are not respected, etc.


Most american voters have a phobia of government.


This will sound horrible, but is it rational? E.g. say you are building a huge hospital, and due to the above it will take 4 years longer at 2x the cost, so basically you could lose X lives due to the hospital not being there and due to the increased cost of care.


Yes, it is rational. We should live in a society where the expected human sacrifice of a construction project is 0.

Pure utilitarianism leads to outcomes that are clearly out of step with almost everyone's moral codes. For example you could kill someone and take their organs to save the lives of 4-5 people. Is it rational that we're not allowed to do that? Why do some people get to keep 2 kidneys when there are others with none?

This is solved at least somewhat by using 'rule utilitarianism' instead of 'act utilitarianism'. Society is better off as a whole if we adhere to rules such as protection of the human body or safety regulations when constructing buildings.


> For example you could kill someone and take their organs to save the lives of 4-5 people. Is it rational that we're not allowed to do that? Why do some people get to keep 2 kidneys when there are others with none?

There was a pretty good short story, "Dibs" by Brian Plante, about that published in the April 2004 issue of Analog Science Fiction and Fact.

Everyone was required to be registered in the transplant matching system. Occasionally you'd receive a letter telling you that someone was going to die soon unless they got one of your organs. That person now had dibs on that organ, and if anything happened to you before they died they got it.

Usually you would get another letter a couple weeks or so later telling you that the person no longer had dibs, which generally meant that they had died.

Sometimes, though, you'd get a second dibs letter while you already had one organ under dibs.

And sometimes you'd get a dibs letter when you already had two organs under dibs...meaning if you died now it would save three lives. At that point you were required to report in and your organs were taken to save those three other lives.

The story concerned someone who worked for the transplant matching agency who got a second dibs letter and was quite worried. He illegally used his insider access to find out who the people were who had dibs on him, and started digging around to try to convince himself they were terrible people who didn't deserve to survive to justify illegally interfering, if I recall correctly (I only read the story once, when it was in the current issue).

I don't remember what happened after that. I just remember thinking that it was an interesting story and explored some interesting issues.


Do you believe we should make the national speed limit 25? If not, you're accepting that people will die needlessly, and that the value of a human life is not, in fact, infinite.


The value of a human life is not infinite, but that doesn't mean it isn't worth more than a certain amount of time spent on a construction project. The people who make executive choices about construction projects should not decide that it is acceptable for x people to die on this project in exchange for y fewer months of construction time. Accidents happen, but we should not plan to trade lives.

Consider if by sacrificing someone on an altar you could magically cause several months of construction work to happen overnight. That would still be murder.


Sorry, let me clarify: I meant my question literally. May I ask what your answer is?


FWIW, I would [make the national speed limit 25] if I could. (With some hopefully obvious qualifications.)

The current traffic system is an insane "death and mayhem lottery" that we force ourselves to play, without respect for youth, age, or anything else.

The current interest and action towards bike-friendly cities is a symptom, I think, of a healing of societies' psyches. We have been pretty brutal to each other since the Younger Dryas, and it's only recently that we've started to calm down and think about what we really want our civilization to be like.


Ok, so going from 50 to 25 will reduce some deaths. That's right. But then we can reduce to 20, which will reduce even more deaths. Then 15, 10... Where do we stop?


Fair point, and I have two answers for you.

The real thing I would advocate (if this weren't a beautiful Sunday afternoon, calling me from my keyboard) is a design for traffic that began from the premise of three (or four) interconnected but separate networks, one each for pedestrians, bikes, and rail, and maybe one network of specialized freeways for trucks and buses. Personal cars would be a luxury (unless you live in the country) that few would need (rather than a cornerstone of our economy) with rentals taking up the slack for vacations and such.

But if you're interested in this sort of thing, don't bother with my blathering, go read Christopher Alexander.

My other answer is really just an invitation to a kind of thought experiment: what if we really did restrict ourselves to just walking, biking, and trains? How would civilization look in that alternate reality?


And what is the real human cost when we factor in the number of human lives wasted sitting in traffic at stupidly low speeds?

If decreasing your speed from 100km/h to 50km/h gives you a 1% lower chance of dying in a road traffic accident, but you spend an additional 2% of your life stuck in traffic, is that a win?
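
For what it's worth, here is that back-of-envelope comparison as a tiny C snippet. Every number in it is made up purely for illustration (the 1%/2% figures from above plus an assumed remaining life expectancy and assumed life-years lost per fatality); the only point is that the two quantities land on the same order of magnitude, so the answer genuinely hinges on the real statistics.

  /* Back-of-envelope with made-up numbers: expected life-years saved by a
     lower crash risk vs. extra years spent sitting in traffic. */
  #include <stdio.h>

  int main(void) {
      double crash_risk_reduction = 0.01;  /* 1% lower lifetime chance of dying in a crash */
      double years_lost_if_killed = 40.0;  /* assumed average life-years lost per fatality */
      double traffic_fraction     = 0.02;  /* 2% more of remaining life spent in traffic */
      double remaining_years      = 50.0;  /* assumed remaining life expectancy */

      double expected_years_saved   = crash_risk_reduction * years_lost_if_killed;  /* 0.4 */
      double extra_years_in_traffic = traffic_fraction * remaining_years;           /* 1.0 */

      printf("expected life-years saved:    %.2f\n", expected_years_saved);
      printf("extra years spent in traffic: %.2f\n", extra_years_in_traffic);
      return 0;
  }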


Has this been tried? I would expect more speeding and maybe even more fatalities.


Not if there were an ironclad law that cars have to be manufactured to be physically incapable of exceeding that speed.


That is essentially how "vision zero" works, but with the right speed for the right conditions.

https://youtu.be/7kIFegy4fII?t=1478


Is this really the argument where you want to plant your stake in the ground and call it absurd? Because the modern debate in the tech community is "should people be allowed to operate motor vehicles at all?"


Even autonomous vehicles will always have nonzero fatality rates. Letting them go 50mph will lead to more human deaths than capping them at 25.


lol, I hear people frame the self-driving car discussion that way regularly, but it is just wrong. No one is going to be making human-driven vehicles illegal.

They may be more like classic cars than regular vehicles at some point, though.


Nah, you'll just see the insurance cost spike to the point where driving is a weekend activity for rich weirdos. Give it a generation and it will be as strange and morally suspect as smoking.


> it will be as strange and morally suspect as smoking.

You could have picked a better example. I see tons of young people smoke and wonder what the hell is wrong with them, whether they just haven't been paying attention for the last 40 years, and then I realize they weren't paying attention because they weren't there to begin with. So the tobacco industry can work its nasty charms on them with abandon, because there are new potential suckers born every day.


Smoking is a weird analogy. I'd expect a better one to be something like "give it a generation and it will be as strange and morally suspect as a horse and buggy".

Or alternatively, maybe as strange as a motorcycle or vintage MG.


I wouldn't be surprised to see an increasing number of (express-type) roads, or perhaps dedicated lanes, where human drivers are not allowed once self-driving capabilities become the norm. (Yes, I realise that's an assumption.)

I think people underestimate the cultural impact that self-driving vehicles will have - imagine a whole generation or two after self-driving vehicles are generally available - how many people will bother learning to drive? I think it might become more of a job-specific skill than a general 'adult life' skill as it is now in most places.


First it will be like knowing how to drive a stick-shift. Then it'll be like owning a sports car. Then it'll be like owning a horse or a boat.


I think you are absolutely right about some limited circumstances that make them the only legal option. But the analogy that I keep making is to classic cars. A lot of them don't have the safety features that we expect today. It isn't uncommon for their owners to say things like "I'm only safe on roads that existed in 1960". It is obviously an exaggeration, but the point is that even today there are plenty of cars that are legal to operate but probably wouldn't be anyone's preference on a busy 70 MPH interstate.

At some point, human driven cars become novelties, just like that. There is no reason to ban them, but as you suggest, maybe there will be some HOV-like lanes where they don't really have access. Or even some time constraints (not during rush hour on some key roads, not in lower manhattan, etc).


Your "pure utilitarianism" is a complete straw man.

Killing someone to take their organs to save 5 lives is utilitarianism taken only to first-order effects. We live in a world that does not terminate after one time step, so we have to consider Nth-order effects to calculate utility.

For example, the second order effect is other humans' moral judgements. "How horrible, he murdered that man" is a valid moral reaction to have, and this is disutility that must be accounted for in a "pure utilitarian" world view. Third order effects may be the social disorder that results from allowing such ad hoc killings as means to an end, and so on.

The only thing preventing pure utilitarianism from being viable is a lack of compute power, and "rule utilitarianism" is a poor heuristic-based approach for philosophers without patience ;)


Great comment, I've never had a good way of putting that same thought.


> We should live in a society where the expected human sacrifice of a construction project should be 0.

No, but that should be the ideal which you should strive to move towards, when practicable. But you can't ever actually get there, and shouldn't try infinitely hard either.


Yes, but only up to a point. If all the rules and safety checks inflate the cost and timeline of projects exponentially then barely anything would get built.

The question, as always, is: "Where do we draw the line?".


This doesn't seem to be true because things do get built with high safety today.


It might be more palatable if the benefits accrued to the people assuming the risks.

Reliably the people absorbing the risks capture almost none of the added value.


Yes, it does sound horrible. Not because of the objective, but because it is so uninformed and low-effort that it is really painful.

Yes, it's a great idea. Killing one person will allow us to double the speed. How exactly, I wonder, but this seems like the kind of project where asking questions is strictly forbidden.


This has nothing to do with specifically killing people; it has to do with creating a regulatory environment where very few entities can compete and comply with regulation. I am not sure anyone can easily attribute the improved stats to that burdensome regulation, as it could just be general tech and process improvements.


The premise that people will die if the hospital takes longer to build is false. The same work can be done in another building.


You make a reasonable point: health and safety bullshit can go too far at times, but equally, exploitation often goes further, in that the effort to save (or pocket) costs overshadows the good will. Very difficult to quantify.


China built their national high-speed rail network in remarkable time: less time to cover their entire country than it will take California to build one high-speed rail line. The difference seems to be that autocracy gets shit done.


Sure, they didn't have to worry about property rights, environmental impact studies, etc. They just did it.

We built the first interstate highways pretty quickly too, and part of why is we just plotted a line and set the bulldozers to work. Nobody worried about habitat destruction, erosion, endangered species, etc.


Interestingly the author cites Hong Kong in another question as a city that should be replicated. It's a city where at least 10 (and likely a great deal more#) workers died and 234-600 were injured in the last few years building a bridge of questionable utility to other local cities.

# 10 are confirmed dead on the HK portion, the death toll on the Chinese portion is unknown.

There's little question that lax worker safety and weak labor laws can contribute to faster economic growth, but I'm not sure that's something we should be trying to replicate.

https://en.wikipedia.org/wiki/Hong_Kong%E2%80%93Zhuhai%E2%80...

http://www.ejinsight.com/20170413-how-many-more-people-have-...


I'd be surprised if safety is a significant contributor. Assuming it is, why haven't we gained, in almost a hundred years, enough efficiency to counterbalance it?


> Any metric like that would be absolutely unacceptable today.

Except for car deaths. Over one million a year seems to be acceptable.


"Why are certain things getting so much more expensive?"

Because we are lying about inflation. It's not a conspiracy, just a mutually agreed-to delusion. By pretending inflation is lower than it is, the poor feel like they are standing still instead of sinking ...though instinctively they know. And the people just keeping up get to feel richer. Since technology and efficiency improve, the people staying in the same place have cooler things.

If inflation were reported correctly, average people would see their paychecks dropping as wealth and power consolidate elsewhere. There is no interest in creating alarm around this fact. Instead the public is distracted by social drama, and political discourse is consumed by things that do not affect the real shift in power.


Do you have any evidence that “we are lying about inflation”?

Unless you do, I think those industries are getting more expensive due to Baumol's Cost Disease:

https://en.m.wikipedia.org/wiki/Baumol%27s_cost_disease



ShadowStats inflation is literally the CPI minus a constant. The guy making it is completely laughing at you -- he doesn't have a methodology to calculate inflation; he's just publishing numbers people want to hear.


I have a different take. Healthcare, education, and construction costs are all regulated by governments. Unlike companies, governments are systems of people who thrive on complexity. More complexity = more work, more power, more benefits (direct or indirect).

To give you an example from construction: a company producing a complex part is "giving away" 3-day trainings in fancy locations, with all hotels & meals paid by the company. As a government employee who gets a fixed salary no matter the results, would you prefer using a product which gives you these free trainings (a free mini-holiday), or using a cheap part which does the same thing, minus the fancy training sessions?

I think the root cause is having some people in charge of other people's money, without clear responsibilities (vs. the evaluations which happen in private companies, with the possibility of getting fired anytime if you perform poorly on result-oriented KPIs), AND having a monopoly. You can't simply start another healthcare or education system without complying with all the existing complex regulations & processes.


From a US perspective, these are all markets that depend on expensive skilled US labor. Many things like manufacturing have shifted to labor markets that are cheaper. How much is due to currency differences, cost of living differences, etc. vs. employee safety or anti-pollution regulations ... I don’t know.


My dad was a programmer and I'm a programmer. I don't think I'll ever make a salary as high as he did[1], and that's in absolute dollars. Yet, my quality of life is definitely better. I take vacations all over the world. I can look up any fact I want instantly on the internet. I work from home 3 days a week. I've easily cured maladies that were huge nuisances when he was my age.

You're right that macroeconomic indicators don't seem to tell the story, but I can't square my experience with your claim.

1. He has a Master of Physics, I don't have a degree.


How much was his salary? I would think a lot of programmers these days make more in absolute dollars than their parents would have.


Not all children are as smart or productive as their parents.


Inflation/devaluation has always been that game. The question is why it’s been so uneven.

I think all of the areas Patrick mentions are places the government has decided are “multipliers,” and has directed spending or subsidy. I vaguely recall a discussion in undergrad macro that it was always an advantage to be at the place inflation is injected into the system. We’ve had these input points for 60+ years.

Tech is a mixed bag, but housing, construction, medicine, and education are all prime places for social “investment.”


I have read that the cost of education is going up because unlike other parts of the economy, which benefit greatly from automation, with education you still have to hire these costly human teachers.

So as the efficiency of everything else goes up, the cost of those things is depressed. But teachers cost the same. So, relatively, in an inflationary environment, they become more expensive.


So why are other things not getting much more expensive? Why are things not getting more expensive at the same rate? Monetary inflation affects all goods more or less uniformly, but the price of milk hasn't skyrocketed as much as the cost of education or housing or healthcare.


> Monetary inflation affects all goods more or less uniformly

Money being created is like pouring water into a pool - it creates ripples outwards. Eventually, if you stop, yes the surface of the pool will become calm and the pool will be higher. But whilst you're pouring, the volumes are not even.

These days, when the government creates money it doesn't put that money into everyone's bank account overnight. When was the last time you got a cheque from the government labelled "new money"?

Instead the central bank engages in various forms of manipulation, like via the "QE" programmes that involved asset purchases. So, the prices of certain financial assets go up. They also purchase a lot of government bonds, or that money eventually makes its way into corporate debt. And what do governments do with this money, well, they often spend it on things like subsidising mortgages, or subsidising private banks (via bailouts), or healthcare, or education, or paying a large staff of government workers, or buying military hardware, etc.

So you go look at what's gone up in price very fast over the years and hey, look at that, it's the stuff near the centre of the pool. Things that governments tend to subsidise a lot or things that people feel they have to buy regardless of cost, like education, healthcare, homes, etc. The money pouring into the system ends up stacking up in a few places, it's not evenly distributed.


> Instead the central bank engages in various forms of manipulation, like via the "QE" programmes that involved asset purchases. So, the prices of certain financial assets go up.

The last round of quantitative easing by the Federal Reserve, QE3, ended in 2014. (QE1 began in 2008.) If these price effects for health care and housing began in 2008 and ended in 2014, it would make sense to blame QE, but they didn't, so it doesn't.

Price effects that are associated with government vs. private expenditures are not the same as inflation. That's just governments being bad (perhaps intentionally bad?) at spending taxpayer money efficiently. However, when it comes to health care in particular, that just isn't the case either--Medicare and Medicaid pay much lower prices for medical procedures than private insurance companies do.


The banking system was creating money in various forms outside of QE. That was just a well known mechanism.

The main way it's created is through loans, many of which end up in housing. And rampant speculation on apparently ever-rising house prices is where the financial crisis started. Ripples expanding ever outwards ...


Because monetary policy is only one of a multitude of factors that influence inflation. Regulations, subsidies, tax laws, trade agreements, labor laws, immigration policy are just some of the other things that have wildly disjointed effects on industries, markets and prices.


You're confusing inflation with rising real prices. Inflation doesn't discriminate -- the price of everything rises by the same proportion.


He's not confusing it.

> the price of everything rises by the same proportion

Due to regulation, that's simply not the universal case - e.g. rent control. The economy is conceptual, but prices are concrete, leading to some ironic situations.


Inflation is when the currency itself loses market value. There's no such thing as inflation that only affects some goods and not others. The prices of some goods do fluctuate relative to each other, but that's not inflation!


Do taxes matter? Identical products in the US and France carry far higher prices in France, even when adjusting for currency. Milk, for example. iPhones. Clothing especially. Washing machines, furniture.

Disposable income in France is vastly lower than that of the US. Is that inflation? Or tax policy?


> Why are programming environments still so primitive?

Because we as an industry made a strategic decision in the late 20th century to value run-time efficiency over all other quality metrics, a decision which has manifested itself in the primacy of C and its derivatives. Everything else has been sacrificed in the name of run time efficiency, including, notably, security. Development convenience was also among the collateral damage.

> Why can't I debug a function without restarting my program?

Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.


>> Why can't I debug a function without restarting my program?

> Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.

There is no technical reason why it shouldn't be possible in C, if you are willing to do without many optimizations. One approach is to make a function stub that simply jumps into the currently loaded implementation. A more efficient but more convoluted way is to patch the address of the currently loaded implementation at all call sites.

The problem is that in general you can't simply replace a function without either restarting the process or risking crashing it. In general, functions have some implementation-dependent context that is accumulated in the running process, and a different implementation does the accumulation in a different way. I'm not a lisper, but there is no way this is different in Lisp. (And it is not because in C you often use global data; "global" vs. "local" is only a syntactic distinction anyway.)

If you're willing to risk crashing your process that's okay. It's often a fine choice. And you can do it in C. The easiest way to implement approach #1 is to load DLLs / shared objects.
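
For what it's worth, here is a minimal sketch of that shared-object approach on Linux. The library name (libimpl.so) and the function it exports (greet) are made up for illustration; the idea is just that every call re-resolves the symbol through dlsym, so rebuilding the .so and reloading it swaps the implementation without restarting the process. The same caveat as above applies: any state accumulated by the old implementation is your problem.

  /* Minimal hot-reload sketch (Linux). Build the implementation separately as
     libimpl.so, then compile this with: cc main.c -ldl */
  #include <stdio.h>
  #include <dlfcn.h>

  typedef void (*greet_fn)(void);

  static void *handle;

  static greet_fn load_greet(void) {
      if (handle) dlclose(handle);               /* drop the old implementation */
      handle = dlopen("./libimpl.so", RTLD_NOW); /* hypothetical library name */
      if (!handle) { fprintf(stderr, "%s\n", dlerror()); return NULL; }
      return (greet_fn)dlsym(handle, "greet");   /* hypothetical exported function */
  }

  int main(void) {
      for (;;) {
          greet_fn greet = load_greet();         /* re-resolve on every iteration */
          if (greet) greet();
          printf("edit and rebuild libimpl.so, then press Enter to reload (Ctrl-D quits)\n");
          if (getchar() == EOF) break;
      }
      return 0;
  }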


There is no technical limitation that makes it impossible in C; however, the raw details of how C is typically implemented make it entirely impractical.

A super-generalized description of a C function might be: check if a register contains a positive value, and if so, jump to address 0x42, which is a memory offset marking the beginning of another function. It's near impossible to "swap out" what lies at 0x42, since that was defined at compile time and is included in a monolithic executable.

Looking at more dynamic languages, like C#, Java or Lisp, they run on a virtual or abstracted machine instead of raw registers. This means that a similarly defined function will instead jump to whatever matches the requirement. That means we could have a lookup table that says we should jump to a symbol S42, and based on what we have loaded in memory, S42 resides at 0x42. Essentially, all functions are function pointers, and we can change the value that resides at that memory address in order to swap out the implementation of any function that maintains the same signature as the intended function. This is why one can make trivial changes to C# in Visual Studio while stopped at a breakpoint and have those changes applied to the running program: instead of jumping to 0x42, we jump to 0x84 by "hot-swapping" the value of the pointer we're about to jump into.

Obviously this isn't entirely the truth, there are a lot more nuances and it's a fair bit more complicated than this, but the idea should hold water.
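
To make the lookup-table idea concrete, here is a minimal sketch in plain C (function names made up for illustration): callers only ever go through a mutable function-pointer "slot", so repointing the slot at runtime is the moral equivalent of the S42 entry moving from 0x42 to 0x84.

  /* All calls go through a mutable function-pointer slot, so the implementation
     behind it can be swapped while the program keeps running. */
  #include <stdio.h>

  static int add_v1(int a, int b) { return a + b; }
  static int add_v2(int a, int b) { return a + b + 1000; }  /* the "patched" version */

  /* The lookup-table entry: callers only ever know about this slot. */
  static int (*add_slot)(int, int) = add_v1;

  int main(void) {
      printf("%d\n", add_slot(1, 2));  /* 3: old implementation */
      add_slot = add_v2;               /* hot swap, e.g. triggered by a debugger or loader */
      printf("%d\n", add_slot(1, 2));  /* 1003: new implementation, no restart */
      return 0;
  }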


Your point is moot. C doesn't "run on raw registers". That's just common practice, but it has nothing to do with C per se. You could just as easily run it on C#'s virtual machine.

Furthermore, it doesn't matter whether you are running on a virtual machine or on bare metal. What matters is if you have turned on optimizations (such as hardcoding of function addresses, or even code inlining) that remove flexibility (such as hot swapping of code). Visual Studio can hot-edit C code as well.

And as I stated, it is pretty easy and common practice to hot swap object code through DLLs or shared objects even on less high-tech platforms. It's easily done with function pointers (as you described) and a simple API (GetProcAddress() on Windows, dlsym() on Linux). Why shouldn't it be possible in C?

Virtual Machines bring portable executables and nothing more, I think. Well, maybe a slightly nicer platform for co-existence and interoperability of multiple programming languages (but then again, there is considerable lock-in to the platform).


> There is no technical limitation why it is impossible in C,

Yes, there is. You can't trace the stack in portable C, so you can't build a proper garbage collector.


Debugging a function and GC are independent things. Also, you can easily add tracing information since you will know how the C code is being run. There is no good reason not to.

There are easy ways to build a GC in portable C as well, of course, if less performant.

As you will know, in portable C you cannot even implement a system call. So what?


> Debugging a function and GC are independent things.

Yes, but redefining a function and GC are not.

> There are easy ways to build a GC in portable C as well, of course, if less performant.

No, because you can't walk the stack. Also pointer aliasing.

[UPDATE] I just realized that I was wrong about redefining functions in C. Not only is it possible, you can actually do it with currently available technology using dynamically linked libraries. But I have never seen anyone actually do this (which is why it didn't occur to me).


> But I have never seen anyone actually do this (which is why it didn't occur to me).

I wrote about this in my two earlier comments. This is very old technology. And very commonly used. I think most plugin systems wrap dynamically linked libraries.

This is also an easy way to redefine functions without needing GC. Under the hood, it is implemented in the loader's way: virtual memory pages mapped as read-only and executable (see mmap() or mprotect() for example).


I agree regarding the symptoms, but am less convinced that run-time efficiency is the sole (or perhaps even major) cause. If it were, I'd argue that we'd see less usage of, e.g., Python.

I don't know the true cause -- I wish I did. But I do see a trend towards an ever more "hands-off" style of software development: large volumes of automated (especially unit) tests in preference to interactive approaches (or hybrids, like running subsets of tests via a REPL), and running test instances on remote servers (often via extra layers of indirection, such as CI servers) rather than on your own machine. I'd love to see a resurgence of interactivity, but if anything it seems to be going against recent trends.

Edit: The subjective impression I've got is that doing things the indirect, hands-off way is being presented as somehow more "professional" while hands-on interactivity is the realm of the self-taught and hackers. Do others see this? What can be done to change these perceptions?


This decision was made and became deeply entrenched long before Python came along. Python is pretty good, but even it is constrained by the decisions that went into the design of C, since Python is implemented in C. Its underlying data structures are C data structures. Its memory management is C memory management. The GIL is there because C.


> The GIL is there because C.

That's not at all true. There are very, very few platforms/libraries/languages that both a) allow you to manage concurrent access to complex, arbitrary data structures in a sane, predictable, and not-dangerous way and b) provide the capabilities (including performance) necessary to implement a general-purpose programming language on top of them. Even fewer of those tools existed when Python came about.

Was it technically possible to make a GIL-free Python or a Python not based in C? Sure. But it wasn't in any way likely, or a reasonable-at-the-time decision. If you look into the history of the GIL things will make more sense; it has next to nothing to do with the implementation language.


I'm not saying that trading the GIL for the benefits of C was not a reasonable tradeoff. Nonetheless, it is simply a fact that the GIL is needed because of the constraints imposed by choosing to work in C.


Citation needed. “Because there existed languages with different concurrency models” doesn’t really explain why the choice of C for Python’s implementation necessitated a GIL (there are plenty of languages and tools built on C with non-GIL-ish concurrency systems, and many languages/tools built on not-C that don’t expose the concurrency features of their underlying language at all). Look into how and why the GIL was added; it has much more to do with not wanting to reimplement the language or break compatibility than with what the language was implemented in.


> it has much more to do with not wanting to reimplement the language/break compatibility

I don't know what "language/break compatibility" means.

The GIL is needed because Python's reference-counted memory management system is not thread-safe, and this can't be fixed without compromising either portability or performance. It's really that simple.
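
A minimal sketch of that point in C (this is not CPython's actual code; the counter and iteration counts are made up for illustration): an unsynchronized refcount increment is a read-modify-write that loses updates under threads, and a single global lock -- which is essentially all the GIL is, mechanically -- serializes it.

  /* Two threads bump a shared refcount. Without a lock, increments are lost;
     with one global "GIL-style" mutex, the count is exact. cc gil.c -lpthread */
  #include <stdio.h>
  #include <pthread.h>

  static long refcnt = 0;
  static pthread_mutex_t gil = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg) {
      int use_lock = *(int *)arg;
      for (int i = 0; i < 1000000; i++) {
          if (use_lock) pthread_mutex_lock(&gil);
          refcnt++;                              /* not atomic without the lock */
          if (use_lock) pthread_mutex_unlock(&gil);
      }
      return NULL;
  }

  static long run(int use_lock) {
      pthread_t t1, t2;
      refcnt = 0;
      pthread_create(&t1, NULL, worker, &use_lock);
      pthread_create(&t2, NULL, worker, &use_lock);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      return refcnt;
  }

  int main(void) {
      printf("without lock: %ld (usually less than 2000000)\n", run(0));
      printf("with lock:    %ld (always 2000000)\n", run(1));
      return 0;
  }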


> The GIL is needed because Python's reference-counted memory management system is not thread-safe.

That is correct. What does that have to do with C? Thread-unsafe code exists in all languages. The GIL's lock is itself a pthread mutex and condvar underneath, and equivalent constructs exist in all (to my knowledge) modern threaded programming environments.

"not wanting to reimplement the language/break compatibility" is a reference to the successful efforts that have been made to remove the GIL in CPython. Those efforts have not (yet) moved towards merging into mainline CPython because they require a) lots of reimplementation work and added complexity in the language core, and b) would very likely break the majority of compiled extension modules.

I think that's additional evidence that the GIL isn't a C problem; they removed it, in C, without fighting or otherwise working around the language.


> That is correct. What does that have to do with C? Thread-unsafe code exists in all languages.

Yes, that's true. But thread-unsafe GC does not exist in all languages. When GC is provided natively by the language it can be implemented much more safely and efficiently than if you try to shoehorn it in afterwards.

> the successful efforts that have been made to remove the GIL in CPython

That's news to me. Reference?


https://youtu.be/P3AyI_u66Bw

Or search “gilectomy” on LWN. Or check out stackless etc.; it turns out removing the GIL technically is historically one of the easier parts of removing the GIL entirely.

It sounds like your main problem is with Python’s GC model. Reference counting doesn’t have to be thread-unsafe, but in scripting/interpreted-ish languages that want their threading story to involve seamless sharing of state between threads at will (i.e. no special syntax unless you want it; it’s on you not to blow off your foot), like Python and Ruby, a GIL or equivalent is the norm. Sometimes it’s not as intrusive as Python’s, but it does seem like a necessary (or at least very likely) implementation pattern for languages that want to provide that seamlessness. You can have thread-safe reference-counted GC in a traditional scripting language, but that tends to come with a much less automatic threading/concurrency API. Perl 5 is an example of that category, and it is implemented in C.


> a GIL or equivalent is the norm

Exactly. And why do you think that is? And in particular, why do you think it is the norm when it is decidedly NOT the norm for Common Lisp and Scheme? The norm for those languages is completely seamless native threading with no special syntax.


It seems like you're arguing in favor of a different language rather than a different implementation of Python. If you wanted to implement, say, the Python class/'object' type in a thread-safe way that didn't rely on a GIL and still freed memory according to approximately the same GC promises as the Python interpreter, I suspect you'd end up implementing something GIL-like in Scheme or Lisp (though my experience extends only to some Clojure and Racket in school, so I may be unaware of the capabilities of some other dialects).

If you wanted to implement a language that looked a little like Python but had your favored Lisp's GC semantics and data structures, I'm sure you could. But it wouldn't be Python.

That's without getting into the significant speed tradeoffs--you can make these languages fast, and I get the impression that there has been a ton of progress there in the last decade. But when Python was being created, and when it had to deal with the implications of concurrency in its chosen semantics? Not so. As I originally said: was it theoretically possible to build Python or equivalent on top of a non-C platform at the time? Sure. But I doubt that would have saved it from tradeoffs at least as severe as the GIL, and it definitely would not have been the pragmatic choice--"let's build it on $lisp_dialect and then spend time optimizing that dialect's runtime to be fast enough for us to put an interpreted scripting language on top of it" seems an unlikely strategy.


> If you wanted to implement, say, the Python class/'object' type in a thread-safe way that didn't rely on a GIL and still freed memory according to approximately the same GC promises as the Python interpreter, I suspect you'd end up implementing something GIL-like in Scheme or Lisp

Nope. CLOS+MOP, which exists natively in nearly all CL implementations, is a superset of Python's object/class functionality.


The GIL is not there because C. It's there because of refcounting and Python guaranteeing that many nontrivial operations are thread safe.


Yes, but refcounting is there because it is impossible to build a proper GC in portable C.


Can you elaborate on this?


You can't trace the stack in portable C.


> The GIL is there because C.

Oh please.


Sometimes I do work this way in the REPL - usually when I have little confidence that I even understand how to fit together a new library or API call. But if you spend more time in the REPL building a data structure and interactively, iteratively testing your function, then when it is all finished, all you have checked into source control is a function, not the tests. You then have to write the tests. In a lot of cases it's faster to just write the tests and interact with the code through those. In some language environments this is not exclusive of using a REPL - in Haskell, for instance, it is pretty easy to interactively run a module and tests saved to disk in the REPL.


Perhaps if great interactive development and debugging tools were more pervasive, there would be less demand for lots of tests written at the same time as the actual function.

(The fine-grained "unit" stuff, anyway. System/integration tests come with different trade-offs).


> Development convenience was also among the collateral damage.

Don't you find that the web & JavaScript are pretty much a straight denial of your argument?

They're "secure" (meaning we let anyone's javascript code just run in our browsers, even embedded in other people's code, and seriously expect no ill effects)

They're extremely inconvenient to develop with. Especially compared to those "run-time above all else" environments you mention. For one, you need to know 5-6 languages to use the web.

> Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.

I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs. On the web, you can't, because every call immediately gets you into overcomplicated minified libraries that you can't change anyway, assuming it doesn't go into a remote call entirely.

And there are environments that go further. .NET not only lets you debug any C function you run within it; you can even modify its code from within the debugger and "replay" to the same point. I believe there are a few more proprietary compilers that support that functionality too.


> Don't you find that web & javascript are pretty much a straight denial of your argument ?

You should probably direct that question at Patrick because his original question was kind of based on the premise that the answer to your question is "no".

My personal opinion? No. OMG no. Javascript and HTML are both poorly designed re-inventions of tiny little corners of Lisp. In that regard they are improvements over C. But no. OMG no.

> Even without debug symbols you mostly can do this for C, C++, ... programs

No, you can't. You can grovel around on the stack and muck with the data structures, but you can't redefine a function, or redefine a class, or change a class's inheritance structure without restarting your program. In Common Lisp you can do all of these things.


Can you/someone explain for this noob why you can't redefine a function, etc.?

I'm not a programmer, so I'm imagining you hook in to the call that addresses the function (like modify a jump instruction), overwrite any registers that need changing, nullify caches, so the program runs new code -- this I think is how some hacks work?

Ultimately couldn't you just write NOP to addresses used for a function?

Is it something structural about C/C++ that stops this, like the ownership of memory? (I'm assuming a superuser can tell the computer to ignore that the addresses written to are reserved for the program being modified.)

How does the computer know that you pushed a different long jump into a particular address and stop working, rather than keep on processing instructions?

Apologies if I've misunderstood, please be gentle.


It's a good question, and I'm a little disappointed no one has answered this yet.

There is no reason you couldn't redefine a function in C. It's really more of a cultural constraint than a technical one. It's a little tricky, but it could be done. It just so happens that C programmers don't think in terms of interactive programming because C started out as a non-interactive language, and so it has remained mostly a non-interactive language. Lisp, by way of contrast, was interactive and dynamic on day 1, and it has stayed that way. In the early days that made C substantially faster than Lisp, but that performance gap has mostly closed.

However, there are some things that Lisp does that are actually impossible in C. The two most notable ones are garbage collection and tail recursion. It's impossible to write a proper GC in portable C because there is no way to walk the stack, and there is no way to compile a tail-call unless you put the entire program inside the body of a single function.
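
To illustrate the tail-call point with a made-up example: portable C gives no guarantee that a self-call in tail position becomes a jump, so deep recursion can overflow the stack, and the standard workaround is to convert the recursion into a loop by hand -- the transformation a Scheme compiler is required to do for you (and most Lisp compilers do).

  #include <stdio.h>

  /* Tail-recursive version: nothing in portable C requires the compiler to
     turn this call into a jump, so a very large n can blow the stack. */
  static long fact_rec(long n, long acc) {
      if (n == 0) return acc;
      return fact_rec(n - 1, acc * n);   /* tail call; elimination not guaranteed */
  }

  /* The manual workaround: the same computation as an explicit loop. */
  static long fact_loop(long n) {
      long acc = 1;
      while (n != 0) { acc *= n; n -= 1; }
      return acc;
  }

  int main(void) {
      printf("%ld %ld\n", fact_rec(10, 1), fact_loop(10));  /* 3628800 3628800 */
      return 0;
  }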


> On the web, you can't, because every call immediately gets you into overcomplicated minified libraries that you can't change anyway, assuming it doesn't go into a remote call entirely.

This is true if you go to a production website and try to start debugging. But it's untrue for any modern development environment. The minification comes later and even then, source maps are first class supported in browsers, mapping the minified code to the source.

It's funny. In my experience the web debug tools are some of the best of any language/environment I've experienced.


No, just no. Web debugging is worse than anything I've ever seen - maybe a bit better than cmd-line debugging with gdb, but not much.

Sourcemaps supposedly work, though I have never seen them actually work in practice. And since Babel is apparently indispensable, you can't be entirely sure that conditionals and statements in your source haven't been optimized away in the transpilation.

Routinely one sets breakpoints in JS files that are not hit by Chrome dev tools, and symbols that should be in scope don't want to be defined. It's a mess.

I suppose if you know how to structure the rube goldberg machine correctly, web dev can be productive. But it's so hard, and the hardness is of the tedious yak-shaving variety. I just hate it and want to fire up Visual Studio and write some .NET apps with tools that just work instead.


"Rube Goldberg" is the best description I've heard for a JS dev stack. So true. Its so awful. But its so much better than it used to be and I pray it continues to improve.

I hear what you're saying. They can be really finicky. I've had very good luck using it all cleanly without issue. I especially love binding vscode to an open browser so that I use vscode for all the inspection, breakpoints, etc.

But I also experienced your lamentations. It took a long time for me to get it all working. Sourcemaps were so unreliable years ago. Now they just seem to work, having learned all the painful lessons about configuration.

There still aren't any sane defaults and the ground won't stop shifting. But now that I have it working, it works great.


> They're extremely inconvenient to develop with. Especially compared to those "run-time above all else" environments you mention. For one, you need to know 5-6 languages to use the web.

I'm not sure how this is a denial of lisper's argument. You've picked an environment which you admit has major flaws, but those flaws are independent of the issue at hand.

I've worked in runtime-perf-above-anything programming environments which required the use of many different languages even for the simplest program. It's terrible there, too. That has nothing to do with dynamic languages. In fact, due to the ease of creating DSLs, most of the dynamic languages I've used allow one to get by with using fewer languages.

> On the web, you can't [do this other thing you can do in Lisp], because ...

Indeed, you've picked the one modern dynamic environment which lacks most of the features lisper is talking about. That's not an argument against having those features. I think it's mostly an observation that this particular environment picked a different attribute (runtime security) to optimize for above all else. You'll note that JS-style security is fundamentally incompatible with many of the concepts in OP's original question.

> I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs.

Can you give an example? I've never heard of a C++ system that let you redefine classes at runtime without debug symbols. I can't imagine how it would work. How would you even inspect the classes at runtime to find out what you're redefining?


> They're "secure" (meaning we let anyone's javascript code just run in our browsers, even embedded in other people's code, and seriously expect no ill effects)

I don’t expect that. Use after free vuln + heap spray + shellcode = Ill effects


Maybe the reason for this has more to do with the type of development structure that helps a language get scale.

The rich powerful development environments you've described exist primarily in proprietary, integrated environments. If you want to integrate the editor, debugger, OS, and language, it helps to be able to coordinate the design of all those components.

On the other hand, languages that have gotten to huge popular scale have typically been more open in their specification and implementation process. Perhaps this is because the creators of breakthrough tools that drive language adoption (like web frameworks or data science kits) prefer these tools, or because the sorts of conditions that lead to the creation of such tools are inherently marginalized ones. In other words, if you're a happy iOS developer using the lovely integrated world of Xcode and Swift, you're not going to spot a dramatically underserved development niche.


Both Scheme and Common Lisp have been open standards since their inception. And both have excellent IDEs available, both commercial and open-source (Clozure CL for Common Lisp and Racket for Scheme).

I did my masters thesis in 1986 on Coral Common Lisp on a Macintosh Plus with 800k floppies and 1 MB of RAM. It had an IDE that would still be competitive today, indeed is in some ways still superior to anything available today. All this was possible because it was Common Lisp and not C. The language design really is a huge factor.

(Coral has today evolved into Clozure Common Lisp, which is completely free and open (Apache license). You really should try it.)


Lisps are great. Why do you think they haven't been at the forefront of any big trends in application development? Things like the web, mobile apps, data science ...

My guess at an argument here is that the languages popular in the 80s drove the curriculum design of most Computer Science education, and the relative absence of the Lisps (to today) makes the languages seem less approachable to practicing programmers than they really should be.

On the other hand, you can find a lot of Lisp's influence in something like Python (though obviously with many differences both superficial and deep). So in that case, why are Python IDEs so much worse than what you'd see in Lisp? (And is that even the case? Maybe there's just more Python devs and thus more IDEs and like anything, most are crap; but if there one or two great ones then does Lisp really have an advantage there?)


> You should try it some time.

Worth noting that Patrick Collison was/is a Lisp user. While in high school, he won a fairly prestigious Irish national science-fair-type contest (the “Young Scientist”) with an AI bot written in some Lisp dialect.

IIRC he also contributed some patches to one of the major Lisp projects.


Yes, I know. He was signed up for Google's Summer of Code (I forget the exact year) to do a CL project and I was going to be his mentor. But before it started he quit and founded Stripe instead. :-)


Shocked he mentions Visual Studio Code but not Visual Studio itself.

The .net CLR has a lot of the features that he would want to enable the kind of interactive debugging (variable hovers, REPL) he talks about, and Visual Studio itself supports a lot of them.

Personally going from doing C# development in VS2k15 to doing golang development in VSCode feels like going back in time.


Visual studio has a lot of power, especially if we consider Intellitrace, but damn is it hard to use that power. I really think the main problem here is UX and not technology.


The main problem of Visual Studio is that it doesn't work on free operating systems.


No. I mean, this might be a problem for some people; that it doesn't work on OS X (not free) is a problem for me. But a bigger problem is that even on Windows its more advanced features are often too hard to be usable except in very special situations.


Well, I have narrowed the statement to be about free OSes because there is a version of Visual Studio for MacOS as far as I've heard, and because whatever works on Linux/FreeBSD usually works on MacOS too. I mean it should be cross-platform in the way VSCode is.

I don't use Linux for ideological reasons; I use it because it works a lot faster (especially when using PyCharm - the difference is drastic) and more reliably than Windows, and gives me an "almost-MacOS-and-much-more" experience on my PC.

Modern apps targeting professionals and enthusiasts should be cross-platform (run on Windows, Linux and MacOS) and not force you to choose from just one or two major OSes.


The version for OS X is just a rebranded Xamarin Studio, nothing like the real Visual Studio at all. I'm surprised the former doesn't run under Linux already; there is no good reason it couldn't.

For the real professional apps, the app is more important than the OS, so you choose whatever OS will run your app. Need to use Adobe CS, well, OS X or Windows is it, it could have been Linux and graphic artists would just swallow and use it, since they need to use CS. Likewise, a tool targeted at Windows app development would be fairly weird running on non-Windows.


> Because we as an industry made a strategic decision in the late 20th century to value run-time efficiency over all other quality metrics

Except if that were the case, we wouldn't have so much bloated framework code and so many towers of abstraction dragging runtime efficiency to the bottom.


One reason (or maybe the common excuse) is often "we need to have full control". That's a reason most managers can get behind.

So that's a common reason slowing every shift to a higher-level language or library, to tools that automatically create stuff for you (GUIs, optimized code from DSLs), to using a platform controlled by another company (one important channel for newer tech, with some strong UX benefits over open source), to trusting open source.


> One reason(or maybe the common excuse) is often "we need to have full control". That's a reason most managers can get behind.

On modern machines, this excuse is even better: full control can easily mean a 100x difference in speed. Most importantly, how do contemporary Lisps deal with arrays of unboxed structs or primitives?


Um, why is everyone forgetting Java?


Don't forget Smalltalk has these capabilities as well.


> Why can't I debug a function without restarting my program?

Because you use C or one of its derivatives

It's doable in Java, which is, I think, a derivative of C.


> What's the successor to the book? And how could books be improved?

> Books are great (unless you're Socrates). We now have magic ink. As an artifact for effecting the transmission of knowledge (rather than a source of entertainment), how can the book be improved? How can we help authors understand how well their work is doing in practice? (Which parts are readers confused by or stumbling over or skipping?) How can we follow shared annotations by the people we admire? Being limited in our years on the earth, how can we incentivize brevity? Is there any way to facilitate user-suggested improvements?

The great thing about books is that no matter how long they've been sitting around, it's easy to take one off the shelf and read it. The cultural infrastructure of written language has been around much longer (and been much more stable) than the computational infrastructure you'd need to have your "magic ink" still work in 1000 years. At some point we need to start treating computers and software more seriously if we want to have things like this.


I love books, don't get me wrong, but they do age. This is especially true of books that are specialty related. Take a book from the 70s on computers, on medicine or psychology, on prehistory, on history, etc. They are all likely to be significantly outdated compared to their modern counterparts. There is much to be said for the modern way of information transference: it is always most up to date.


Whilst your core point about books aging is absolutely true, I think that's mostly the mass-market publications designed to ease entry, rather than actual papers.

Similarly, anything written about an implementation is short-lived. The Idiot's Guide to Windows 98 is less useful today than it was in 1998 and is getting less useful with time.

Lovelace, Babbage, Turing, McCarthy et al are still seminal. But these are academic papers that focus on what is possible to compute and how one might construct an implementation.

There are some interesting edge cases too. Is the Gang of Four's Design Patterns still relevant? It's not embarrassing, but it's not as applicable as it used to be.


As an aside, what would be today's ideal intro to design patterns?

Now .. on topic ... I had a roomful of books collected from my youth. I cleaned out my room at my parents' house some years ago ... and pretty much everything got chucked. The only books that I kept were seminal books like Knuth, Cormen, Gang of Four, and the TCP/IP series (v6 kinda makes them out of date too). All my MFC books ... Java books ... pretty much all of it was out of date. I had an epiphany ... CS does not age well at all.

I no longer buy physical books ... I got a subscription to Safari and love it. Also consume tons of content on e-learning platforms. But .. I really miss real books.


I miss walking in a room of books.

My parents had bookshelves that filled walls that I built with my Dad before they moved. I, and most other people that visited the house, would spend a fair amount of time just looking at the books on the shelf. Comparing notes on what they'd read, and asking to borrow books. Looking at the books, and being reminded of the experience you had with them or wanted to have with them was a vitally important part of the process that I fear we've lost.


Seems like ROMs for emulators are becoming kind of book-like. As soon as a new platform reaches market relevance, there is a dash to make sure these old platforms are emulated.

However, ROM files specifically (as opposed to executables for retro computing platforms) seem to be distinctly less fickle about becoming reliably emulatable.

The reason is seemingly simple: the early ROM files were essentially an operating system, with all the software required to boot the system included in what now looks like one file.


There are very few people who want to read many books that are 1000 years old. The few books that interest more people can be converted to newer formats; the few people that want to read all old books will have to use extra tools to do so.

That said, I am not a particular fan of books. They take a lot of physical space, are heavy and age. So as long as we stick to reasonable formats (e.g., text-based, non-binary), it should not be too hard for future generations to use our books.

Using DRM, on the other hand, might make things complicated.


The Bible and the Koran would easily be on the world's best seller lists if they weren't excluded by default.

You might argue - and I would agree - that this is not necessarily a good thing as far as content goes.

But the point is that putting something into writing and giving it a tangible form on paper gives it an inherent stability and authority missing from digital media.

We tend of think of digital media as temporary, disposable, relatively low value simulacra of a Real Thing.

Digital media can be hacked, edited, deleted, and lost when the power goes off.

A copy of a book from hundreds or thousands of years ago is just going to sit there for some indefinite period. (Which actually depends on the quality of the paper - but in theory could be centuries.)

This is not about practical reproduction and storage technologies, it's about persistence and tangibility.

A book is a tangible object which has some independence from its surroundings. After printing, it's going to exist unless you destroy it. If you print many copies the contents are geographically distributed and it becomes very hard to destroy them all.

A file depends on complex infrastructure. If the power goes down, it's gone. If the file format becomes obsolete, it's gone. (This has actually happened to many video and audio formats.) If there's an EMP event, it's gone.

And it's not just a tangible difference, but a cultural one. We have a fundamentally different relationship with digital data than we do with tangible objects, and this influences the value we place on their cultural payload.


>If the file format becomes obsolete, it's gone. (This has actually happened to many video and audio formats.)

Any examples of video or audio files that are currently impossible to watch/listen to because knowledge of the file format, and all software capable of playing it was lost? If such a thing has happened, there are probably people interested in reverse engineering the format.


I can pick up some writing on physical media -- an Akkadian clay tablet -- and read it (if I have the knowledge) despite it being thousands of years old.

Things like laserdiscs, I can probably still buy equipment to read, but it's substantially different as I need the technology to read it.

Microfiche is quite good in this respect: you can easily read it even without the specific tech it was made for (using a magnifier, or projecting the image with a simple light source).

I wonder if you could make a crystal where, like a hologram, you can rotate the crystal a minute amount in order to project a different page (an idea I saw decades ago had a digital-clock-style projection from a crystal, used as a sundial -- pretty sure it was theoretical).

That way the information is relatively easy to discover, and with a simple light source you can get info out if it.


I keep hearing the fear of losing content because of obsolete file formats.

Then I think about Linear B, and I rest again.

https://en.wikipedia.org/wiki/Linear_B


There's no emulator that can run a Linear B parser. If a linear B dictionary ever existed, it was never mass produced. Linear B is older than the printing press, let alone the Internet. But now we do have those technologies, and "lots of copies makes stuff safe" is cheaper and easier than ever. I don't believe any mainstream digital format (i.e. popular enough to have a Wikipedia page) will be permanently lost unless there's a complete collapse of society, and then we'll have bigger problems to worry about.


In the modern world, a physical book is nothing more than a mere printout of a PDF file or a photocopy of an old edition. People print web pages all the time, but nobody in their right mind would think much about these printouts, let alone philosophize about their tangibility, endurance etc.

On the other hand, old books, with their high-quality paper, binding and letterpress print, do seem to have some kind of personality...


>There are very few people who want to read many books that are 1000 years old.

That's their loss. There are very few people who want to learn math too.


This isn't for everybody, but I would point to Rapid Serial Visual Presentation (RSVP), where words are presented sequentially in place to the reader. It has significantly changed the way I read.

I wrote my own implementation of RSVP, with eBook reader support, and it is now absolutely my preferred method of reading; I read at 1000 WPM. Normal books are still enjoyable, but they now feel tedious and slow.

The project is here:

https://github.com/GlanceApps/Glance-Android

(It needs some help being updated for recent versions of Android; please let me know if you'd like to be involved! It has a new back-end API in place already, and it just needs a few simple updates.)


Where I think this could be really useful is status messages. When a status message on some fixed layout display is too wide, the solution is often to have it scroll horizontally back and forth automatically, which can be very hard to read.

I think it would be more readable to put it in an area big enough for the longest word, and then RSVP through the message repeatedly.
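
A minimal sketch of that idea (C++; the status string, field width, and 300 ms timing here are just my assumptions for illustration):

  // status_rsvp.cpp - cycle a status message word by word in a field
  // sized to the longest word, instead of scrolling it horizontally.
  #include <algorithm>
  #include <chrono>
  #include <iostream>
  #include <sstream>
  #include <string>
  #include <thread>
  #include <vector>

  int main() {
      std::string status = "Firmware update in progress, do not power off";  // example message
      std::vector<std::string> words;
      std::istringstream iss(status);
      for (std::string w; iss >> w; ) words.push_back(w);

      size_t width = 0;                        // field only needs to fit the longest word
      for (const auto& w : words) width = std::max(width, w.size());

      for (int cycle = 0; cycle < 3; ++cycle)  // RSVP through the message repeatedly
          for (const auto& w : words) {
              std::cout << "\r" << std::string(width, ' ') << "\r" << w << std::flush;
              std::this_thread::sleep_for(std::chrono::milliseconds(300));
          }
      std::cout << "\n";
  }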

If anyone wants a simple way to play around with RSVP, here's a little quick and dirty command line reader I wrote a long time ago to play with this: https://pastebin.com/zfq2eW4n

Put it in reader.cpp and compile with:

  $ c++ reader.cpp
To use:

  $ ./a.out N < text
where N is the number of milliseconds delay between words. It will then do the RSVP thing. It should compile with no problem on Mac or Linux.

If a word (which is really just a string of non-whitespace surrounded by whitespace) ends with a period or comma, the delay is doubled for that word.

There's a commented-out check that sets a minimum line length. If you compile with that check enabled, it can put more than one word on a line to bring the line up to that minimum length.

PS: this aligns the words on their centers. To left-align them instead, change where it sets the variable "pad" to use a small constant rather than basing it on the length of the word. If the pad is the same for all words, it becomes an indent for left alignment instead of padding for centering.
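
If you'd rather not pull from pastebin, here's a minimal sketch of the same kind of loop, following the description above (the 40-character display width and line-clearing behaviour are my own assumptions, not necessarily what the pastebin version does):

  // rsvp_sketch.cpp - bare-bones RSVP reader: one word at a time, centered,
  // with the delay doubled for words ending in a period or comma.
  // Build: c++ rsvp_sketch.cpp    Run: ./a.out N < text
  #include <chrono>
  #include <cstdlib>
  #include <iostream>
  #include <string>
  #include <thread>

  int main(int argc, char** argv) {
      int delay_ms = (argc > 1) ? std::atoi(argv[1]) : 250;   // N ms between words
      const int width = 40;                                   // assumed display width
      std::string word;
      while (std::cin >> word) {          // a "word" is any whitespace-delimited token
          int pad = (width - (int)word.size()) / 2;           // center; use a constant to left-align
          if (pad < 0) pad = 0;
          std::cout << "\r" << std::string(width, ' ') << "\r"
                    << std::string(pad, ' ') << word << std::flush;
          int d = delay_ms;
          char last = word.back();
          if (last == '.' || last == ',') d *= 2;             // longer pause at punctuation
          std::this_thread::sleep_for(std::chrono::milliseconds(d));
      }
      std::cout << "\n";
      return 0;
  }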


This is really awesome. I think I played with your source a few years ago, trying to adapt it to work with Google Cardboard. My initial attempt failed, because my eyes would lose focus during each word transition. I decided I'd need to add a lightly textured background, which would be shown all the time, and which would fix the distance my eyes were focusing, and then lay the text on top of that. IIRC I gave up because I realised the 'right' way to do this was to use the Cardboard SDK, but that would mean also writing something to render the text into pixels (as the SDK only supported graphics).

BTW - The Google Play link in the repo doesn't work for me, and I don't see Glance in F-Droid. What's the easiest way for non-developers to get the APK?


I can't seem to find your app on the Android store, even through the link on your GitHub page. Are you sure it's working?


Also, the website linked in the repo seems to be down.


Isn’t recall way worse for books “read” this way?


Here is 1000wpm:

https://youtu.be/7i9fZvWyLfI?t=1m41s

I wouldn't recall a thing at this speed, nor at 600, which is shown just prior to the timestamp above.


A few things here:

This is a poor implementation of RSVP, as each word is being presented at the same speed. Longer words should be given longer presentation times, as should words with punctuation marks. The presentation of the words is also centered rather than aligned, which requires a saccade for each word, which defeats the whole point. It's also a difficult text to start out with, with no context.

Even still, I didn't have a problem reading and recalling this text, though I wouldn't recommend it for a beginner.


I made a similar app (iOS) which varies the display time by word length, punctuation, and each word's place in a list of the 100 most common words (under the assumption that common words contain less information, thus take less effort to read). To be honest, I'm not sure it works any better than one running at a constant speed. There seems to be a surprising lack of research in this area.

(https://itunes.apple.com/us/app/zipf/id1366685837?mt=8 if you're interested.)
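
Roughly, the per-word timing looks something like this sketch (the base time, multipliers, and common-word handling here are illustrative values I've made up, not the app's actual numbers):

  // Sketch of a variable per-word display time: a base duration scaled up for
  // longer words and trailing punctuation, and scaled down for very common words.
  #include <cctype>
  #include <string>
  #include <unordered_set>

  int display_ms(const std::string& word,
                 const std::unordered_set<std::string>& common_words) {
      double ms = 150.0;                               // assumed base time per word
      ms += 15.0 * word.size();                        // longer words get more time
      if (!word.empty() && std::ispunct((unsigned char)word.back()))
          ms *= 1.8;                                   // pause a bit longer at punctuation
      if (common_words.count(word))
          ms *= 0.7;                                   // common words carry less information
      return (int)ms;
  }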


> The presentation of the words is also centered rather than aligned, which requires a saccade for each word, which defeats the whole point.

Does it actually require a saccade?

Testing with a quick and dirty command line RSVP program I have, my speed and comprehension seem about the same with either centered or left-aligned text. But I'm mostly testing with fiction written for the average adult. The words are usually short enough that they are within the field of good visual acuity no matter where within them the focus is.

I've not done a comparison using text with a lot of long words.


No, I would say quite the opposite.

Reading this way is a skill that needs a small up-front investment, but the payoff is immense. The trick is not to try: just relax, pay attention, and let the words speak to you as if they are being narrated inside your head.

Because there are no constant micro-interruptions from page scrolling, ads, or even your eyes' own saccades, I find that my attention to the text is much, much better, and if I need to stop to ponder something I can just tap the screen to pause.

I also find that I am far, far more likely to finish an article/paper/chapter via Glance than via my browser. These days it's pretty rare that I'll actually finish an article online, but with Glance I'll almost always read the entire thing from start to finish.

I really, really recommend this skill, especially if you have a lot of time to kill on a mass-transit commute, or if you just want to read more.


Not sure I follow the logic; don't physical books suffer from similar downsides? Books degrade and libraries burn down. Seems a lot like bitrot and file mirrors disappearing.


Those downsides only apply to the book or software itself. My point was about the stuff you need to have apart from the book/software in order to read it. You need a computer to run software; all you need to read a book is an understanding of the language. It does get harder and harder to understand books over time, but not at the same rate that computers are changing (and language tends to stay relatively simple, anyway, because people have to be able to learn it).


I have thought about the difficulties of long-term storage for a while, and have come to the conclusion that digital mediums are inherently a poor choice for archiving and preserving data:

http://howicode.nateeag.com/data-preservation.html

As one of the more recent additions to that essay shows, I'm not the only one with that opinion:

https://partners.nytimes.com/library/magazine/millennium/m6/...

As I note in the essay, there are some technologies that work better than others for preservation, but digital's biggest weaknesses are inherent, I think.


>> > What's the successor to the book? And how could books be improved?

First, what's the purpose of books?

There's entertainment of course. But let's focus on books that teach. Their purpose is to let you access some knowledge, in a deep way.

On the other hand, computer systems are starting to fill that role, and the level of depth they can achieve is growing - on a good day, with the right query, Google may give you access to amazing content, content that may help you connect different concepts - based on your past searches - just like your brain does.

Another option: if it were easy for a book author to package her knowledge into a smart chatbot or an expert system, we would have hundreds of such advisors interactively chatting with us and correcting our mistakes. That would be an interesting replacement for the book.


>What's the successor to the book?

The second edition.

>And how could books be improved?

Until a different unpowered, human-readable medium has been proven over a longer period under more careless storage conditions, it looks like a third edition, if appropriate.

If it hasn't been printed it hasn't really been published as thoroughly as it could be, and if it hasn't been bound then it's not yet a real book. Up until recent decades the survival of unique knowledge was largely dependent on the number of copies printed and distributed, so popularity has had undue importance.

But don't let earlier editions become lost, or woe be upon you.


True. Case in point: the newly discovered 1,300-year-old book: http://www.openculture.com/2018/09/europes-oldest-intact-boo...


It seems like PDFs might still work in a hundred years. Just like zip files.

Fun fact: HN's been around for 10% of a century. That makes Arc one of the longer-lived programming languages.

Re: the ability to take books off the shelf and read them, Library Genesis has made a lot of progress in that area. http://libgen.io/


PDFs might still work - but will the storage devices they are on? There are good reasons to doubt this: tapes, CDs and DVDs age and degrade over time; plus the devices to read them only last so long, and may not be produced anymore at some point. Classic "spinning" hard drives degrade, too; and even ignoring this, will there still be computers supporting, say, SATA, in 50 years?

And then there is the trend toward onboard soldered flash in new devices, which adds further problems.

Not saying this is impossible to overcome, but "PDFs might still work" at best solves part of the problem, and only for part of the data one might want to preserve (PDF is great - but only for some types of data).


Cloud-based storage will never die. (And if it does, we'll have bigger problems on our hands than lost PDFs.)
