Several reasons. First, you need to recognize that any Sweden-based startup, once it becomes known internationally, will have a Stockholm-based office. So it's not a city of 1 million inhabitants but a country of 10 million that's the relevant number here. As an example, I believe Spotify opened its original offices in both Stockholm and Göteborg more or less simultaneously.
With that said, a commonly stated reason for why Sweden in general has such a high prevalence of tech startups comes from a bunch of fortuitous decisions in the 90s and 00s. In 1998 Sweden's government started a program that allowed employers to sell computers to their employees under a tax-free scheme (the so-called Hem-PC-reformen https://sv.wikipedia.org/wiki/Hem-PC-reformen). This was extremely popular, and led almost every Swedish home to get an often extremely overpowered personal computer. Thus, practically everyone who was a kid in 1998-2006 (the rebate was cancelled in 2006) grew up with a computer. This gave Sweden a huge advantage over other countries in the early Internet revolution.
Sweden has also invested heavily in building a fiber network, you have access to gigabit Internet even in some extremely rural areas.
Another thing is that Sweden doesn't have a tradition of dubbing movies, which means kids are exposed to English from an early age. This leads to Swedish tech companies not being afraid to hire talent globally and generally using English as their business language.
Finally, out of the 5 examples posted, one is Mojang, which is clearly an outlier. I'm not saying what Notch accomplished wasn't extremely impressive, but it was essentially a one-man operation, and probably shouldn't be held as an example of a trend.
I found this to be almost impossible to achieve when I moved from Norway to Australia. In Australia I was outside hanging out with friends or just doing stuff on the beach or whatever. The deep focus was harder to achieve. Quality of life was insanely better there, yet somehow I missed the possibility to sit down and be productive in some narrow topic.
The history of chess is definitely very political, especially inside the Soviet Union. It was seen as the proxy for intelligence, and if Soviet players were better than Americans, it was their way of demonstrating superiority in yet another domain.
Wealth is one of the most important factors for someone to found a startup. You aren't going out and starting a company if you have to work 80 plus hours a week to live.
Just look at the clueless opinions of so many tech founders about how "you shouldn't pay yourself a salary at first after you take investment money". Oh really? So we should just buy groceries and pay rent from the trust fund mommy and daddy gave us?
The scenario is that (roughly) 90% of the hypergrowth-style startups which raise a first round will fail to achieve enough momentum to successfully raise a second. And due to the business model choices they've committed to, failing to raise at that point is equivalent to going out of business. A first raise is often too small to do everything they'd like, so they decide that it's in their best interest (long term) to forgo a salary where possible, in order to buy a few extra months of progress before they're forced to start shopping for investment again.
Of course, folks give advice based on their own experience, but it doesn't always generalise to folks in different situations, which is potentially why that would seem like silly advice in (what I assume is) your scenario.
(minor edits toward the end for clarification)
So if you don't have the personal finances to deal with that situation, then you make a different decision further upstream, in the past, by choosing to work on an idea which has either easier funding targets (allowing an early salary), or which can generate early profits (allowing an early salary), or which has lower development requirements (allowing you to work on it alongside another paying job).
The decision is about idea selection, where I agree with you. "If you need the money, then pick an idea which allows you to pay yourself quickly." But if you've chosen to play a different game (typical hypergrowth VC stuff), and you want to maximise your odds of winning at that particular game, then it's generally ideal to buy yourself more months of business runway instead of more months of personal runway.
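As a back-of-the-envelope sketch of that runway tradeoff (all numbers here are hypothetical, just to make the shape of the decision concrete):

```python
# Hypothetical seed round and burn figures -- not from any real startup.
cash = 500_000                # money raised in the first round ($)
team_burn = 40_000            # monthly burn excluding founder salary ($)
founder_salary = 5_000        # monthly salary the founder could forgo ($)

runway_with_salary = cash / (team_burn + founder_salary)   # ~11.1 months
runway_without_salary = cash / team_burn                   # 12.5 months

extra_months = runway_without_salary - runway_with_salary
print(f"Extra runway from forgoing salary: {extra_months:.1f} months")
```

With these made-up numbers, skipping a $5k/month salary buys roughly an extra month and a half before the next raise, which is the whole argument in miniature: whether that month matters more than the personal hardship depends entirely on the founder's situation.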
Perhaps still not clear, and I know it's an emotionally loaded topic, but hopefully that makes sense.
Wonder if that's also a factor in the prevalence of tech/programmers/hackers in Russia. I don't think there's much of a startup scene there, but there definitely seem to be more technology experts coming from the area in general.
Soft forms of isolation certainly seem to lead to solitary activities.
I’d say proficiency in English and a fairly wealthy population are what matter.
I also recall Jared Diamond debunking this as a theory for why "the west" got ahead in his book Guns, Germs, and Steel.
(disclaimer: I am an American and don't know what I'm talking about. I have visited Norway but it was summer.)
So I daresay there's more to this than just the Arctic Circle.
1. Israelis basically have no choice. If you want to make a good income, working hard in a technology startup (we have very few big tech companies) is one of the very few good options.
2. The army: at a very young age, a decent percentage of Israelis who join the army lead in high-value, high-risk situations. That creates a sense of responsibility and strong ambition at a relatively early age.
The army is also a place where a lot of new tech is being developed, so people get exposure, and often in roles of major responsibility.
3. The Jewish people have lived among other peoples, in very hostile conditions, often forced into banking (loans) and commerce at times when most people did agriculture. That forces a certain entrepreneurial spirit, and possibly higher intelligence (also witnessed by the higher rate of genetic illnesses in Ashkenazi Jews). That, plus a culture that has always focused on learning (religiously).
Summed up in one word - discipline. The glamorous myth of startups is just that, a myth. Some outliers go from zero to hero overnight, but most require slogging away day in and day out. Motivation can only last so long, and after it's gone all you have left is discipline. Military writings both current and historic routinely speak of discipline being the single most important aspect of achieving a goal.
It's interesting to imagine what would happen if this were said about any other country.
Of course this does not mean Dutch Iranians are smarter than Dutch Moroccans, but it could mean there might be more smart/entrepreneurial Iranians than Moroccans in The Netherlands.
These statistics might die out very quickly though; for example, Turks are often already second or third generation, so many have been born into the privilege of The Netherlands.
Perhaps similarly, the Second World War cost us a huge percentage of Jewish people; being privileged meant having a larger chance of survival, through moving to less dangerous countries (like the U.S.).
It doesn't mean Jewish people are smarter; it might mean you could find a few more smart Jews in their population. Of course the Second World War was already a long time ago, so the effect might be gone by now.
I'm probably of average intelligence, but I've had it expressed to me by people who hardly know me that they expect I'm brilliant or something because I'm an Ashkenazi Jew. It's one of those stigmas that gets attached to any group, such as Russians drinking vodka or Argentinians eating meat. I'm sure that there are sober Russians, vegetarians in Argentina, and there's me!
Some people would say that this is cultural. I doubt it -- I think it's genetic. Intelligence is a physical attribute determined by genes, just like every other physical attribute. It's really no different from the observation that Kenyans and Ethiopians win most marathons.
In general, we should expect different traits to be exhibited at different frequencies by different populations that were reproductively isolated in the past. This doesn't mean prejudice is okay. Not all Jews are smart, and not all Kenyans are going to win marathons. We can acknowledge these correlations without behaving in a prejudiced way toward individuals.
Most people would rather not talk about this. And that's certainly my rule of thumb for in-person conversations. But, hey, we're on the internet.
You're assuming that intelligence is the determining factor in winning a Nobel prize, which seems spurious at best. Work ethic and training, particularly early in life seem, to me, like they'd be better predictors. I'd posit that Jewish culture is better at nurturing intelligence before I'd conclude that there's some ethnic superiority going on.
While there isn't a bullet-proof case, there is quite a bit of evidence for an ethnicity-intelligence correlation.
The only value judgement I was making is that higher intelligence is superior to lower intelligence, and that Nobel prizes are an extremely poor proxy for intelligence.
But don't most Kenyan and Ethiopian marathon winners also grow up in Kenya or Ethiopia? I.e. how do we discount at very least environmental factors (including, for instance, diet), even if you doubt cultural ones?
No. There is a hugely disproportionate number of Americans and Britons of Somali, Kenyan, and Ethiopian origin who excel in world-class distance running (Abdirahman, Farah, and many more, including a big new wave of Somali Americans since).
Fwiw I don't think you're necessarily wrong about intelligence but from what I remember The Gene (Siddhartha Mukherjee) does have a few passages contradicting this theory. Also by questioning the validity of IQ tests.
Could it not also be that Jews are just more motivated to get into STEM fields and perform well, for cultural reasons or otherwise?
Similarly, Malcolm Gladwell mentions a theory about why "Asians are smarter" which according to him may be related to hard and smart work leading to bigger rice harvests, and other factors (see here https://www.cs.unh.edu/~sbhatia/outliers/outliers.pdf).
While I have zero interest in the cultural implications and all the moral panics surrounding these issues, I don't think intelligence being mostly down to genetics is a proven fact. I find it gets extremely complicated very quickly.
Genes are hugely influenced by their environmental (cultural) triggers and those should not be ignored.
This is the main reason for the ancient historical stereotypes linking Judaism and money/wealth/power/etc. And why names like Goldstein (gold stone) are considered Jewish names.
I've heard that a lot, mostly from uneducated people. All nations have an equal level of intelligence. The difference might be access to education, environment, common wealth, and social inequality.
Edit: I would argue to the contrary, that there is an appreciable difference in average IQ between some countries, which expresses itself among other things in GDP and quality of life. Perhaps the nature of this difference is based on things like nutrition, environmental quality, and pre-natal screening and care, so improvements in all these things will decrease the gap. If it turns out that after all of that there is still a certain gap, who cares? As long as the citizens live in peace and relative prosperity.
How do you know this?
This is also true for the U.S. itself; Silicon Valley is largely a creation of Pentagon investment (see: DARPA).
In my experience, it's been the opposite. They like to play office politics, focus on keeping work off their plate, and they're not really team players.
I may just have a bad sample of Israel tech workers, though.
Where I think Israeli tech startups get most of their mojo is desperation. Their culture involves so much struggle and effort... and I think the reaction to it at a personal level results in the laziness, the non-team-playing, and the political problems.
Just because of sun? Did you try UV?
> yet somehow I missed the possibility to sit down and be productive in some narrow topic.
Do you mean that you preferred the life you led in Australia to the one in Norway? If not, how do you mean that quality of life was insanely better?
> Do you mean that you preferred the life you led in Australia to the one in Denmark?
Calling a Norwegian a Dane is about as popular as saying a Canadian is from USA or calling an American British I guess ;-)
(Seems I've offended someone else though, but I have no idea why.)
Even then it's not ideal. The lowest brightness setting still isn't low enough.
In my twenties I could happily destroy my circadian rhythm and sit in front of a screen til 4-5 in the morning. At 35 that shit's gotten real old.
That's enough implying in my books.
(Besides, if we're to play this game, I never said that _you_ "implied they didn't prefer it". I just asked, "why wouldn't they prefer it").
Purely anecdotally, I am not part of a startup, but do a lot of hobby work in coffee shops around Stockholm. I'm always seeing/hearing people in local cafes discussing some new startup or project they are getting off the ground. I've also seen first hand many people get maybe a year or two of funding and start up their own game studios, for example. Of course not all of them end up being successful, but many people feel secure/comfortable enough to try. I think the volume of these attempts and people trying to do something new also helps drive up the number of "hits" overall, to be added to these kinds of lists.
A relatively egalitarian society with a lot of trust between people and towards the government has made progress relatively effortless. When people don't feel the need to guard their own position because they might end up getting screwed there often isn't any good reason to be against development.
A relatively large amount of excess time, security, and knowledge that enables people to do something else. Startups are ultimately about harnessing excess potential. If all the value is captured by some other industry or the housing market, there won't be much left over for startups.
Various cultural factors enabled by those things. Like being able to be independent (with the help of the government). Not being afraid to leave value on the table. An overall sort of generous, or at least non-petty, society.
If you look at factors like these, you can actually draw a lot of parallels to somewhere like SV compared to the rest of the world. SV is to at least some degree a recreation of academic life at its best. Which is the area of US society that would be most similar to Sweden.
I’m not sure this is too applicable to SV, lol, nor academic life in the US.
Sweden's social safety net is among the strongest in the world. If the floor on failure is that you still have access to food, housing and healthcare then quitting a full-time job to start a company becomes much more possible.
More people starting companies, more attempts at big targets, more outlier successes.
Initially it was just non-commercial Internet for academic institutions, but lots of students were exposed to the technology and the infrastructure early on.
From what I understand, both Holland and Sweden are culturally similar enough that I can't immediately think of other factors that make 'us' more hostile to startups.
Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.
That leaves the more interesting question: why is The Netherlands different from Sweden in this regard? My impression was that The Netherlands, if anything, is more internationally-focused.
> Also, compensation. The Netherlands has a lot of good people, but they’ll leave for greener pastures.
Does Sweden pay better, taking into account cost of living? And if so, why?
(not disagreeing, btw, just more questions)
Obviously, there is the business side in addition to the computer science side. I don't know much about that with respect to Holland.
But one possibility is that Dutch computer scientists tend to leave Holland to work. Perhaps that explains why I know of so many as an American (e.g., I worked with Guido van Rossum, and Werner Vogels advised my master's project).
I don't think our electronic music scene is less commercial though.
Without disputing the technical correctness, Spotify's Göteborg office was literally a single desk at a shared office for the first three years of its existence.
The former hosts ABB and has tons of robotics related research, the latter has one of the major universities. The startup scene is smaller in Västerås than in Uppsala though.
Uppsala produced Klarna and Skype, Västerås produced Pingdom, to name some.
Here in Belgium, in the Flemish part of the country we don't dub movies but in the French part we do. The levels of English are massively different, not sure if this is the main reason.
When I tutored, I found students often had some misunderstanding, somewhere. So my task was to listen, to find that misunderstanding, so I could correct it. This "teaching" is listening, more than talking.
The idea is they are lost, but to know what direction they need, I first must know where they are.
To correct a misunderstanding without this guidance can be very difficult, and might only happen serendipitously, years later... assuming they continue with study. Which an unidentified misunderstanding can prevent.
Recently, I'm seeing the other side, while self-learning some maths. I can see how much one-on-one tutoring would help clear up misunderstandings. Instead, I'm using the strategy of insisting on starting from the basics, chasing down each detail as much as I can, using online resources, and working out proofs for myself. Each step is a journey in itself...
Luckily, I have enough skill, confidence, motivation and time. By working it out myself, I think I'm also gaining a depth of understanding I could not get from a tutor's guidance.
But it sure would be a lot more efficient!
[ PS I haven't yet read the two pdf's in the question ]
With that in mind, one of the interesting findings in language acquisition studies is that when reading freely (reading things for pleasure), a learner needs about 95% comprehension of the text in order to acquire new grammar and vocabulary in context (quite a bit higher than most people imagine -- which is one of the reasons people advance a lot more slowly than they might otherwise).
With that, just like your experience, the key to teaching is to ensure that the student comprehends at least 95% of what you are saying. The only way to ensure this is by constantly testing their comprehension with a two way dialog. Once a very high level of comprehension is reached, and once enough repetition happens to remember the thing, you will acquire the knowledge.
It is incredibly difficult to do this unless you are teaching 1:1. There is a special technique called "circling" that you can use to teach language to a larger number of students, and it worked extremely well for me. I still can't effectively do it for more than about 10 or 15 students, though. Think about it: in a 45-minute class with 15 students, each student gets 3 minutes of my time. It's not actually surprising that classes of 30 or 40 are basically impossible.
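The class-size arithmetic above can be sketched in a couple of lines (the even-split assumption is mine; real attention is never divided this neatly):

```python
def minutes_per_student(class_minutes: float, students: int) -> float:
    """Teacher attention per student, assuming it is divided evenly."""
    return class_minutes / students

print(minutes_per_student(45, 15))  # 3.0 minutes each -- workable for circling
print(minutes_per_student(45, 40))  # 1.125 minutes each -- effectively nothing
```

The point is just how fast per-student time collapses: doubling the class doesn't halve the quality of a two-way dialog, it mostly eliminates it.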
Quick note: I'm no longer teaching, in case it is unclear from the above.
I am not saying that their technique is the most efficient (e.g., adding hints would undoubtedly increase the efficiency, but also ruin the game experience), just that there are other methods of making sure a student understands a concept. You don't necessarily need the one-on-one conversations. Those conversations are mostly useful to round up incomplete teaching material (again, I am not saying that creating perfect teaching material is easy).
The one-on-one tutor idea is that you have a master who sees the mistakes the student makes and gives him an exercise to target precisely the misconception the student might have in his head.
The Witness, on the other hand, doesn't possess such intelligence. Instead, it is a carefully crafted series of puzzles which slowly broaden the possible moves. Most of the time every next puzzle requires you to learn a new part of the rules. Sometimes you assumed that part anyway, and the puzzles are easy. But sometimes you have to find out that misconception in your head and replace it with something correct which makes the puzzle harder.
So one concept includes an intelligent observer while the other is more like a perfected text book.
I strongly disagree. I'm learning Chinese via YouTube. I comprehend 15%, but I pick up new patterns and words all the time.
https://www.youtube.com/playlist?list=PLfAyWdGHnLdErys013Ysp... 走遍中国 Across China
https://hooktube.com/watch?v=slFB9JhOTxk Homeland Dreamland 远方的家 no subs = better!
That said, I must add that I am referring to teaching humans who are 12 years and older (IMHO young kids require physical interaction if you want to avoid psychological conditions).
The commenter above made a good point about the value of removing barriers to learning as a primary asset of a good teacher. People tend to focus on content knowledge/curriculum as the mark of good teaching, but removing barriers is the real, difficult work. Tools that help the instructor understand their students, their students' knowledge, and their learning behaviors would be valuable. Don’t focus on content delivery. Focus on making in-class assessment more frequent and trustable. Focus on tools that help an instructor understand thirty students as they might understand five.
P.S: I was supposed to launch a month back. But a lot of rewrites made it difficult. For now, I can say I will be launching soon.
I did this for a while and gave up the self-learning aspect and went to university to study math part-time. I have the utmost respect for anyone who has the patience to push through it on their own. Some things I can learn on my own but higher maths I couldn't--at least with any degree of efficiency.
I had that experience sometimes when preparing a question to ask on-line. Sometimes it becomes clear when trying to see it from the other direction of the listener, or researching the problem space to phrase the question properly yields unexpected results.
At least one reason is that we have substantially different safety regulations since we're not accepting of deaths on a project like that. 5 people died on that project. 11 died to build the Golden Gate. Original Bay Bridge? 24.
They actually had a rule of thumb at the time: 1 death for every $1M spent on a project. Any metric like that would be absolutely unacceptable today.
 - https://www.npr.org/2012/05/27/153778083/75-years-later-buil...
80 Transcontinental Railroad
80 Suez Canal
50 Brooklyn Bridge
17.46 World Trade Center
6.4 Sydney Harbor Bridge
4.47 Hoover Dam
3.37 San Francisco Bay Bridge
3.33 Eiffel Tower
2.5 Sears Tower
1.47 Empire State Building
1.17 Trans-Alaska Pipeline System
0.75 City Center Las Vegas
0 Chrysler Building
Looks like I zoomed in (cmd-+) a few times to make the numbers more readable, and went past what the page's CSS could handle.
In the mid 90s the bridge name was changed. It is now generally referred to as the “Ironworkers memorial bridge”. Anyone crossing that bridge since is constantly reminded of that engineering error.
“Better safe than sorry” is a rising belief.
Once this truism became generally accepted, as it now generally is, it takes precedence over other considerations. Since “safe” means more than physical safety, practically any human action is subject to exponentially increasing levels of scrutiny. It takes time to come to an agreement that everything is safe. In big projects it takes a lot of time. On bigger projects it might never come.
I think this is a cowardly and inaccurate belief, and agree it is what is driving changes across all areas.
It is the institutional flaw of democracies and free markets. People (demand) are capable of acting irrationally and emotionally leading to a reduction of individual and public good.
Throughout history you find that the people who believe this are not the ones whose safety is sacrificed in pursuit of some grand project...
I’m assuming the OP thinks “better safe than sorry” is stupid because it doesn’t actually help you make decisions.
I would much rather drive on a bridge built by someone whose motto is “do it right.”
This isn’t the Apollo program. We know how to make safe bridges. It’s not a matter of being cautious and talking to every stakeholder. It’s a matter of hiring actual engineers,* letting them work, believing them, and then giving them the resources to monitor construction properly.
It means sometimes rejecting a lot of bad material and holding the project up for 8 months.
It does not mean “lean safe” and cross your fingers.
* meaning licensed engineers. The word engineer has I guess been made meaningless by thousands of people writing code and calling it “engineering”. It used to mean someone had completed training as an engineer.
Better safe than sorry MEANS do it right. It means that human life is more valuable than material wealth.
If you are hiring amateurs to build your bridge, or you are “crossing your fingers”, then you are not being “better safe than sorry” — in the sense that what you are doing cannot be an implementation of “better safe than sorry” that can be reconciled with the broader cultural context in which people use the phrase and discuss it.
This is not accurate on either count.
First, material wealth can be converted into quality of life in multiple different ways. Consuming additional millions to push death rates down by a few percentage points is taking away from other places, ostensibly hurting others.
Second, better safe than sorry does not mean do it right. Mistakes will happen, and acting like nothing will ever go wrong is a fool's errand. Planning for failure is a significant portion of project management and engineering in general. The goal is zero mistakes, but severe overreactions in response to failure can have a net negative impact.
A friend of mine was just hired as a project engineer at a construction company. He was confused since he has no engineering experience. Turns out, at this company, it means you are training to be a project manager.
Another way to look at this is to ask why the United States seems incapable of even maintaining existing projects. For example, look at the current state of the New York City subway.
I suspect slowness in building new infrastructure, and poor maintenance of existing infrastructure, have the same root cause: lack of political will.
American voters don't expect their governments to be good at this kind of thing. European voters would vote politicians out of office if their transit systems got as bad as the NY subway has become. It would be seen as a failure to execute one of the basic duties of government.
Americans often vote those people out of office and their replacements are equally useless or worse.
We have created a system where results often don't matter as long as there is the correct capital letter next to your name on the ballot.
Compounding this further is that politicians know that they can count on your vote but they rely on the money of industry and lobbyists to campaign. Thus the very industry that is supposed to be fixing the problem under contract is able to overcharge and take longer than agreed becuase our politicians rely on them for funding.
Nowadays, big projects in the West are far more complex, since they have to meet more demands and more stakeholders are involved. In authoritarian countries this is not so much a problem; the new airport in Istanbul was built very fast, but concerns from citizens were not respected, etc.
Pure utilitarianism leads to outcomes that are clearly out of step with almost everyone's moral codes. For example you could kill someone and take their organs to save the lives of 4-5 people. Is it rational that we're not allowed to do that? Why do some people get to keep 2 kidneys when there are others with none?
This is solved at least somewhat by using 'rule utilitarianism' instead of 'act utilitarianism'. Society is better off as a whole if we adhere to rules such as protection of the human body or safety regulations when constructing buildings.
There was a pretty good short story, "Dibs" by Brian Plante, about that published in the April 2004 issue of Analog Science Fiction and Fact.
Everyone was required to be registered in the transplant matching system. Occasionally you'd receive a letter telling you that someone was going to die soon unless they got one of your organs. That person now had dibs on that organ, and if anything happened to you before they died they got it.
Usually you would get another letter a couple weeks or so later telling you that the person no longer had dibs, which generally meant that they had died.
Sometimes, though, you'd get a second dibs letter while you already had one organ under dibs.
And sometimes you'd get a dibs letter when you already had two organs under dibs...meaning if you died now it would save three lives. At that point you were required to report in and your organs were taken to save those three other lives.
The story concerned someone who worked for the transplant matching agency who got a second dibs letter and was quite worried. He illegally used his insider access to find out who the people were who had dibs on him, and started digging around to try to convince himself they were terrible people who didn't deserve to survive to justify illegally interfering, if I recall correctly (I only read the story once, when it was in the current issue).
I don't remember what happened after that. I just remember thinking that it was an interesting story and explored some interesting issues.
Consider if by sacrificing someone on an altar you could magically cause several months of construction work to happen overnight. That would still be murder.
The current traffic system is an insane "death and mayhem lottery" that we force ourselves to play, without respect for youth, age, or anything else.
The current interest and action towards bike-friendly cities is a symptom, I think, of a healing of societies' psyches. We have been pretty brutal to each other since the Younger Dryas, and it's only recently that we've started to calm down and think about what we really want our civilization to be like.
The real thing I would advocate (if this weren't a beautiful Sunday afternoon, calling me from my keyboard) is a design for traffic that began from the premise of three (or four) interconnected but separate networks, one each for pedestrians, bikes, and rail, and maybe one network of specialized freeways for trucks and buses. Personal cars would be a luxury (unless you live in the country) that few would need (rather than a cornerstone of our economy) with rentals taking up the slack for vacations and such.
But if you're interested in this sort of thing, don't bother with my blathering, go read Christopher Alexander.
My other answer is really just an invitation to a kind of thought experiment: what if we really did restrict ourselves to just walking, biking, and trains? How would civilization look in that alternate reality?
If decreasing your speed from 100km/h to 50km/h gives you a 1% lower chance of dying in a road traffic accident, but you spend an additional 2% of your life stuck in traffic, is that a win?
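That question has a rough expected-value shape to it. A minimal sketch, with everything expressed as fractions of remaining life and all figures hypothetical (including the assumption of how much life a fatal crash costs on average):

```python
# Hypothetical figures, just to make the tradeoff in the question computable.
p_death_reduction = 0.01      # absolute reduction in lifetime chance of dying
life_lost_if_death = 0.5      # assume a fatal crash costs half a lifetime on average
extra_time_in_traffic = 0.02  # fraction of life additionally spent in traffic

expected_life_saved = p_death_reduction * life_lost_if_death  # 0.005
net = expected_life_saved - extra_time_in_traffic             # negative => not a win
print(f"Net life-fraction gained by slowing down: {net:+.3f}")
```

Under these particular made-up numbers the slower speed loses, but flip the assumptions (say the risk reduction is 5% instead of 1%) and the sign flips too, which is exactly why the question doesn't have a universal answer.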
They may be more like classic cars than regular vehicles at some point, though.
You could have picked a better example. I see tons of young people smoke and wonder what the hell is wrong with them, whether they've been paying attention for the last 40 years, and then I realize they haven't, because they weren't there to begin with. So the tobacco industry can work its nasty charms on them with abandon, because there are new potential suckers born every day.
Or alternatively, maybe as strange as a motorcycle or vintage MG.
I think people underestimate the cultural impact that self-driving vehicles will have - imagine a whole generation or two after self-driving vehicles are generally available - how many people will bother learning to drive? I think it might become more of a job-specific skill than a general 'adult life' skill as it is now in most places.
At some point, human driven cars become novelties, just like that. There is no reason to ban them, but as you suggest, maybe there will be some HOV-like lanes where they don't really have access. Or even some time constraints (not during rush hour on some key roads, not in lower manhattan, etc).
Killing someone to take their organs to save 5 lives is utilitarianism to only first order effects. We live in a world that does not terminate after one time step, so we have to consider Nth order effects to calculate utility.
For example, the second order effect is other humans' moral judgements. "How horrible, he murdered that man" is a valid moral reaction to have, and this is disutility that must be accounted for in a "pure utilitarian" world view. Third order effects may be the social disorder that results from allowing such ad hoc killings as means to an end, and so on.
The only thing preventing pure utilitarianism from being viable is a lack of compute power, and "rule utilitarianism" is a poor heuristic based approach for philosophers without patience ;)
No, but that should be the ideal which you should strive to move towards, when practicable. But you can't ever actually get there, and shouldn't try infinitely hard either.
The question, as always, is: "Where do we draw the line?".
Reliably the people absorbing the risks capture almost none of the added value.
Yes, it's a great idea. Killing one person will allow us to double speed. How exactly I wonder, but this seems like the kind of project where asking questions is strictly forbidden.
We built the first interstate highways pretty quickly too, and part of why is we just plotted a line and set the bulldozers to work. Nobody worried about habitat destruction, erosion, endangered species, etc.
10 are confirmed dead on the HK portion; the death toll on the Chinese portion is unknown.
There's little question that lax worker safety and weak labor laws can contribute to faster economic growth, but I'm not sure that's something we should be trying to replicate.
Except for car deaths. Over one million a year seems to be acceptable.
Because we are lying about inflation. It's not a conspiracy, just a mutually agreed-to delusion. By pretending inflation is lower than it is, the poor feel like they are standing still instead of sinking ...though instinctively they know. And the people just keeping up get to feel richer. Since technology and efficiency improve, the people staying in the same place have cooler things.
If inflation were reported correctly, average earners would see their paychecks dropping as wealth and power consolidate elsewhere. There is no interest in creating alarm around this fact. Instead the public is distracted by social drama, and political discourse is consumed by things that do not affect the real shift in power.
Unless you do, I think those industries are getting more expensive due to Baumol's Cost Disease:
To give you an example from construction: a company producing a complex part is "giving away" a 3-day training in a fancy location, with all hotel and meals paid by the company. As a government employee who gets a fixed salary no matter the results, would you prefer using the product that gets you these free trainings (a free mini-holiday), or a cheap part which does the same thing, minus the fancy training sessions?
I think the root cause is having some people in charge of other people's money, without clear responsibilities (versus the evaluations that happen in private companies, with the possibility of getting fired at any time if you perform poorly on result-oriented KPIs), AND having a monopoly. You can't simply start another healthcare or education system without complying with all the existing complex regulations and processes.
You're right that macroeconomic indicators don't seem to tell the story, but I can't square my experience with your claim.
1. He has a Master's in Physics; I don't have a degree.
I think all of the areas Patrick mentions are places the government has decided are “multipliers,” and has directed spending or subsidy. I vaguely recall a discussion in undergrad macro that it was always an advantage to be at the place inflation is injected into the system. We’ve had these input points for 60+ years.
Tech is a mixed bag, but housing, construction, medicine, and education are all prime places for social “investment.”
So as the efficiency of everything else goes up, the cost of those things is depressed. But teachers cost the same. So relatively under an inflated environment, they become more expensive.
Money being created is like pouring water into a pool - it creates ripples outwards. Eventually, if you stop, yes the surface of the pool will become calm and the pool will be higher. But whilst you're pouring, the volumes are not even.
These days, when the government creates money it doesn't put that money into everyone's bank account overnight. When was the last time you got a cheque from the government labelled "new money"?
Instead the central bank engages in various forms of manipulation, like via the "QE" programmes that involved asset purchases. So, the prices of certain financial assets go up. They also purchase a lot of government bonds, or that money eventually makes its way into corporate debt. And what do governments do with this money, well, they often spend it on things like subsidising mortgages, or subsidising private banks (via bailouts), or healthcare, or education, or paying a large staff of government workers, or buying military hardware, etc.
So you go look at what's gone up in price very fast over the years and hey, look at that, it's the stuff near the centre of the pool. Things that governments tend to subsidise a lot or things that people feel they have to buy regardless of cost, like education, healthcare, homes, etc. The money pouring into the system ends up stacking up in a few places, it's not evenly distributed.
The last round of quantitative easing by the Federal Reserve, QE3, ended in 2014. (QE1 began in 2008.) If these price effects for health care and housing began in 2008 and ended in 2014, it would make sense to blame QE, but they didn't, so it doesn't.
Price effects that are associated with government vs. private expenditures are not the same as inflation. That's just governments being bad (perhaps intentionally bad?) at spending taxpayer money efficiently. However, when it comes to health care in particular, that just isn't the case either--Medicare and Medicaid pay much lower prices for medical procedures than private insurance companies do.
The main way it's created is through loans, many of which end up in housing. And rampant speculation on apparently ever-rising house prices is where the financial crisis started. Ripples expanding ever outwards ...
> the price of everything rises by the same proportion
Due to regulation, that's simply not universally the case - e.g. rent control. The economy is conceptual, but prices are concrete, which leads to some ironic situations.
Disposable income in France is vastly lower than that of the US. Is that inflation? Or tax policy?
Because we as an industry made a strategic decision in the late 20th century to value run-time efficiency over all other quality metrics, a decision which has manifested itself in the primacy of C and its derivatives. Everything else has been sacrificed in the name of run time efficiency, including, notably, security. Development convenience was also among the collateral damage.
> Why can't I debug a function without restarting my program?
Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.
> Because you use C or one of its derivatives instead of Lisp or one of its derivatives. In Common Lisp you can not only debug a function without restarting your program, you can redefine classes without restarting your program. It is truly awesome. You should try it some time.
There is no technical reason why it shouldn't be possible in C, if you are willing to do without many optimizations. One approach is to make a function stub that simply jumps into the currently loaded implementation. A more efficient but more convoluted way is to patch the address of the currently loaded implementation at all call sites.
The problem is that in general you can't simply replace a function without either restarting the process or risking a crash. In general, functions have some implementation-dependent context that accumulates in the running process, and a different implementation does the accumulation in a different way. I'm not a lisper, but there is no way this is different in LISP. (And it is not because in C you often use global data. "Global" vs. "local" is only a syntactic distinction anyway.)
If you're willing to risk crashing your process that's okay. It's often a fine choice. And you can do it in C. The easiest way to implement approach #1 is to load DLLs / shared objects.
A super generalized description of a C function might check whether a register contains a positive value. If so, it jumps to address 0x42, a fixed memory offset at which another function begins. It's nearly impossible to "swap out" what lies at 0x42, since that was decided at compile time and baked into a monolithic executable.
Looking at more dynamic languages, like C#, Java or LISP: they run on a virtual or abstracted machine instead of raw registers. A similarly defined function will instead jump to whatever currently matches the requirement. We could have a lookup table that says we should jump to symbol S42, and based on what we have loaded in memory, S42 resides at 0x42. Essentially, all functions are function pointers, and we can change the value that resides at that memory address in order to swap in any implementation that keeps the same signature as the intended function. This is why you can make trivial changes to C# in Visual Studio while stopped at a breakpoint and have those changes applied to the running program: instead of jumping to 0x42, we jump to 0x84 by "hotswapping" the value of the pointer we're about to jump through.
Obviously this isn't entirely the truth, there are a lot more nuances and it's a fair bit more complicated than this, but the idea should hold water.
Furthermore, it doesn't matter whether you are running on a virtual machine or on bare metal. What matters is if you have turned on optimizations (such as hardcoding of function addresses, or even code inlining) that remove flexibility (such as hot swapping of code). Visual Studio can hot-edit C code as well.
And as I stated, it is pretty easy and common practice to hot swap object code through DLLs or shared objects even on less high-tech platforms. It's easily done with function pointers (as you described) and a simple API (GetProcAddress() on Windows, dlsym() on Linux). Why shouldn't it be possible in C?
Virtual Machines bring portable executables and nothing more, I think. Well, maybe a slightly nicer platform for co-existence and interoperability of multiple programming languages (but then again, there is considerable lock-in to the platform).
Yes, there is. You can't trace the stack in portable C, so you can't build a proper garbage collector.
There are easy ways to build a GC in portable C as well, of course, if less performant.
As you will know, in portable C you cannot even implement a system call. So what?
Yes, but redefining a function and GC are not.
> There are easy ways to build a GC in portable C as well, of course, if less performant.
No, because you can't walk the stack. Also pointer aliasing.
[UPDATE] I just realized that I was wrong about redefining functions in C. Not only is it possible, you can actually do it with currently available technology using dynamically linked libraries. But I have never seen anyone actually do this (which is why it didn't occur to me).
I wrote about this in my two earlier comments. This is very old technology. And very commonly used. I think most plugin systems wrap dynamically linked libraries.
This is also an easy way to redefine functions without needing GC. Under the hood, it is implemented in the loader's way: virtual memory pages mapped as read-only and executable (see mmap() or mprotect() for example).
I don't know the true cause -- I wish I did. But I do see a trend towards an ever more "hands-off" style of software development: large volumes of automated (especially unit) tests in preference to interactive approaches (or hybrids, like running subsets of tests via a REPL), and running test instances on remote servers (often via extra layers of indirection, such as CI servers) rather than running on your own machine. I'd love to see a resurgence of interactivity, but if anything it seems to be going against recent trends.
Edit: The subjective impression I've got is that doing things the indirect, hands-off way is being presented as somehow more "professional" while hands-on interactivity is the realm of the self-taught and hackers. Do others see this? What can be done to change these perceptions?
That's not at all true. There are very, very few platforms/libraries/languages that both a) allow you to manage concurrent access to complex, arbitrary data structures in a sane, predictable, and not-dangerous way and b) provide the capabilities (including performance) necessary to implement a general-purpose programming language on top of them. Even fewer of those tools existed when Python came about.
Was it technically possible to make a GIL-free Python or a Python not based in C? Sure. But it wasn't in any way likely, or a reasonable-at-the-time decision. If you look into the history of the GIL things will make more sense; it has next to nothing to do with the implementation language.
I don't know what "language/break compatibility" means.
The GIL is needed because Python's reference-counted memory management system is not thread-safe, and this can't be fixed without compromising either portability or performance. It's really that simple.
That is correct. What does that have to do with C? Thread-unsafe code exists in all languages. The GIL's lock is itself a pthread mutex and condvar underneath, and equivalent constructs exist in all (to my knowledge) modern threaded programming environments.
"not wanting to reimplement the language/break compatibility" is a reference to the successful efforts that have been made to remove the GIL in CPython. Those efforts have not (yet) moved towards merging into mainline CPython because they require a) lots of reimplementation work and added complexity in the language core, and b) would very likely break the majority of compiled extension modules.
I think that's additional evidence that the GIL isn't a C problem; they removed it, in C, without fighting or otherwise working around the language.
Yes, that's true. But thread-unsafe GC does not exist in all languages. When GC is provided natively by the language it can be implemented much more safely and efficiently than if you try to shoehorn it in afterwards.
> the successful efforts that have been made to remove the GIL in CPython
That's news to me. Reference?
Or search “gilectomy” on LWN. Or check out Stackless etc.; it turns out that technically removing the GIL has historically been one of the easier parts of removing the GIL entirely.
It sounds like your main problem is with Python's GC model. Reference counting doesn't have to be thread-unsafe, but in scripting/interpreted-ish languages that want their threading story to involve seamless sharing of state between threads at will (i.e. no special syntax unless you want it; it's on you not to blow your foot off), like Python and Ruby, a GIL or equivalent is the norm.
Sometimes it’s not as intrusive as python’s, but it does seem like a necessary (or at least very likely) implementation pattern for languages that want to provide that seamlessness. You can have thread-safe reference counted GC in a traditional scripting language, but that tends to come with a much less automatic threading/concurrency API. Perl 5 is an example of that category, and it is implemented in C.
Exactly. And why do you think that is? And in particular, why do you think it is the norm when it is decidedly NOT the norm for Common Lisp and Scheme? The norm for those languages is completely seamless native threading with no special syntax.
If you wanted to implement a language that looked a little like Python but had your favored Lisp's GC semantics and data structures, I'm sure you could. But it wouldn't be Python.
That's without getting into the significant speed tradeoffs--you can make these languages fast, and I get the impression that there has been a ton of progress there in the last decade. But when Python was being created, and when it had to deal with the implications of concurrency in its chosen semantics? Not so. As I originally said: was it theoretically possible to build Python or equivalent on top of a non-C platform at the time? Sure. But I doubt that would have saved it from tradeoffs at least as severe as the GIL, and it definitely would not have been the pragmatic choice--"let's build it on $lisp_dialect and then spend time optimizing that dialect's runtime to be fast enough for us to put an interpreted scripting language on top of it" seems an unlikely strategy.
Nope. CLOS+MOP, which exists natively in nearly all CL implementations, is a superset of Python's object/class functionality.
(The fine-grained "unit" stuff, anyway. System/integration tests come with different trade-offs).
They're extremely inconvenient to develop with. Especially compared to those "run-time above all else" environments you mention. For one, you need to know 5-6 languages to use the web.
I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs. On the web, you can't, because every call immediately gets you into overcomplicated minified libraries that you can't change anyway, assuming it doesn't go into a remote call entirely.
And there are environments that go further. .NET not only lets you debug any C function you run within it, you can even modify its code from within the debugger and "replay" to the same point. I believe there are a few more proprietary compilers that support that functionality too.
You should probably direct that question at Patrick because his original question was kind of based on the premise that the answer to your question is "no".
> Even without debug symbols you mostly can do this for C, C++, ... programs
No, you can't. You can grovel around on the stack and muck with the data structures, but you can't redefine a function, or redefine a class, or change a class's inheritance structure without restarting your program. In Common Lisp you can do all of these things.
I'm not a programmer, so I'm imagining you hook into the call that addresses the function (like modifying a jump instruction), overwrite any registers that need changing, and invalidate caches, so the program runs new code -- this, I think, is how some hacks work?
Ultimately couldn't you just write NOP to addresses used for a function?
Is it something structural about C/C++ that stops this, like the ownership of memory (I'm assuming a superuser can tell the computer to ignore that addresses written to are reserved for the program being modified).
How does the computer know that you pushed a different long jump into a particular address, and why would it stop working rather than keep on processing instructions?
Apologies if I've misunderstood, please be gentle.
There is no reason you couldn't redefine a function in C. It's really more of a cultural constraint than a technical one. It's a little tricky, but it could be done. It just so happens that C programmers don't think in terms of interactive programming because C started out as a non-interactive language, and so it has remained mostly a non-interactive language. Lisp, by way of contrast, was interactive and dynamic on day 1, and it has stayed that way. In the early days that made C substantially faster than Lisp, but that performance gap has mostly closed.
However, there are some things that Lisp does that are actually impossible in C. The two most notable ones are garbage collection and tail recursion. It's impossible to write a proper GC in portable C because there is no way to walk the stack, and there is no way to compile a tail-call unless you put the entire program inside the body of a single function.
This is true if you go to a production website and try to start debugging. But it's untrue for any modern development environment. The minification comes later and even then, source maps are first class supported in browsers, mapping the minified code to the source.
It's funny. In my experience the web debug tools are some of the best of any language/environment I've experienced.
Sourcemaps supposedly work, though I have never seen this actually work in practice.
And since Babel is apparently indispensable, you can't be entirely sure that conditionals and statements in your source haven't been optimized away in the transpilation.
Routinely one sets breakpoints in JS files that are not hit by Chrome dev tools, and symbols that should be in scope don't want to be defined. It's a mess.
I suppose if you know how to structure the Rube Goldberg machine correctly, web dev can be productive. But it's so hard, and the hardness is of the tedious yak-shaving variety. I just hate it and want to fire up Visual Studio and write some .NET apps with tools that just work instead.
I hear what you're saying. They can be really finicky. I've had very good luck using it all cleanly without issue. I especially love binding vscode to an open browser so that I use vscode for all the inspection, breakpoints, etc.
But I also experienced your lamentations. It took a long time for me to get it all working. Sourcemaps were so unreliable years ago. Now they just seem to work, having learned all the painful lessons about configuration.
There still aren't any sane defaults and the ground won't stop shifting. But now that I have it working, it works great.
I'm not sure how this is a denial of lisper's argument. You've picked an environment which you admit has major flaws, but those flaws are independent of the issue at hand.
I've worked in runtime-perf-above-anything programming environments which required the use of many different languages even for the simplest program. It's terrible there, too. That has nothing to do with dynamic languages. In fact, due to the ease of creating DSLs, most of the dynamic languages I've used allow one to get by with using fewer languages.
> On the web, you can't [do this other thing you can do in Lisp], because ...
Indeed, you've picked the one modern dynamic environment which lacks most of the features lisper is talking about. That's not an argument against having those features. I think it's mostly an observation that this particular environment picked a different attribute (runtime security) to optimize for above all else. You'll note that JS-style security is fundamentally incompatible with many of the concepts in OP's original question.
> I think you'll find that pretty much any environment allows this. Even without debug symbols you mostly can do this for C, C++, ... programs.
Can you give an example? I've never heard of a C++ system that let you redefine classes at runtime without debug symbols. I can't imagine how it would work. How would you even inspect the classes at runtime to find out what you're redefining?
I don’t expect that. Use-after-free vuln + heap spray + shellcode = ill effects.
The rich powerful development environments you've described exist primarily in proprietary, integrated environments. If you want to integrate the editor, debugger, OS, and language, it helps to be able to coordinate the design of all those components.
On the other hand, languages that have gotten to huge popular scale have typically been more open in their specification and implementation process. Perhaps this is because the creators of breakthrough tools that drive language adoption (like web frameworks or data science kits) prefer these tools, or because the sorts of conditions that lead to creation of such tools are inherently marginalized ones. In other words, if you're a happy iOS developer using a lovely integrated world of xcode and swift, you're not going to spot a dramatically underserved development niche.
I did my masters thesis in 1986 on Coral Common Lisp on a Macintosh Plus with 800k floppies and 1 MB of RAM. It had an IDE that would still be competitive today, indeed is in some ways still superior to anything available today. All this was possible because it was Common Lisp and not C. The language design really is a huge factor.
(Coral has today evolved into Clozure Common Lisp, which is completely free and open (Apache license). You really should try it.)
My guess at an argument here is that the languages popular in the 80s drove the curriculum design of most Computer Science education, and the relative absence of the Lisps (to today) makes the languages seem less approachable to practicing programmers than they really should be.
On the other hand, you can find a lot of Lisp's influence in something like Python (though obviously with many differences both superficial and deep). So in that case, why are Python IDEs so much worse than what you'd see in Lisp? (And is that even the case? Maybe there's just more Python devs and thus more IDEs and like anything, most are crap; but if there one or two great ones then does Lisp really have an advantage there?)
Worth noting that Patrick Collison was/is a Lisp user. While in high school, he won a fairly prestigious Irish national science-fair-type contest (the “Young Scientist”) with an AI bot written in some Lisp dialect.
IIRC he also contributed some patches to one of the major Lisp projects.
The .NET CLR has a lot of the features he would want to enable the kind of interactive debugging (variable hovers, REPL) he talks about, and Visual Studio itself supports a lot of them.
Personally going from doing C# development in VS2k15 to doing golang development in VSCode feels like going back in time.
I don't use Linux for ideological reasons; I use it because it works a lot faster (especially when using PyCharm - the difference is drastic) and more reliably than Windows, and gives me an "almost-macOS-and-much-more" experience on my PC.
Modern apps targeting professionals and enthusiasts should be cross-platform (run on Windows, Linux and MacOS) and not force you to choose from just one or two major OSes.
For the real professional apps, the app is more important than the OS, so you choose whatever OS will run your app. Need to use Adobe CS? Well, OS X or Windows it is. It could have been Linux, and graphic artists would just swallow it and use it, since they need to use CS. Likewise, a tool targeted at Windows app development would be fairly weird running on non-Windows.
Except if that were the case, we wouldn't have so much bloated framework code and so many towers of abstraction dragging runtime efficiency to the bottom.
So that's a common reason slowing every shift: to a higher-level language or library, to tools that automatically create stuff for you (GUIs, optimized code from a DSL), to using a platform controlled by another company (an important channel for newer tech, with some strong UX benefits over open source), to trusting open source.
On modern machines, this excuse is even better. Full control can easily mean 100x difference in speed. Most importantly, how do contemporary lisps deal with arrays of unboxed structs or primitives?
Because you use C or one of its derivatives
It's doable in Java, which is, I think, a derivative of C.
> Books are great (unless you're Socrates). We now have magic ink. As an artifact for effecting the transmission of knowledge (rather than a source of entertainment), how can the book be improved? How can we help authors understand how well their work is doing in practice? (Which parts are readers confused by or stumbling over or skipping?) How can we follow shared annotations by the people we admire? Being limited in our years on the earth, how can we incentivize brevity? Is there any way to facilitate user-suggested improvements?
The great thing about books is that no matter how long they've been sitting around, it's easy to take one off the shelf and read it. The cultural infrastructure of written language has been around much longer (and been much more stable) than the computational infrastructure you'd need to have your "magic ink" still work in 1000 years. At some point we need to start treating computers and software more seriously if we want to have things like this.
Similarly, anything written about an implementation is short-lived. The Idiot's Guide to Windows 98 is less useful today than it was in 1998 and is getting less useful with time.
Lovelace, Babbage, Turing, McCarthy et al are still seminal. But these are academic papers that focus on what is possible to compute and how one might construct an implementation.
There are some interesting edge cases too. Is the Gang of Four's Design Patterns still relevant? It's not embarrassing, but it's not as applicable as it used to be.
Now .. on topic ... I had a roomful of books collected from my youth. I cleaned out my room at my parents' house some years ago ... and pretty much everything got chucked. The only books that I kept were seminal ones like Knuth, Cormen, Gang of Four, and the TCP/IP series (v6 kinda makes them out of date too). All my MFC books ... Java books ... pretty much all of it was out of date. I had an epiphany .. CS does not age well at all.
I no longer buy physical books ... I got a subscription to Safari and love it. Also consume tons of content on e-learning platforms. But .. I really miss real books.
My parents had wall-filling bookshelves, which I built with my Dad, before they moved. I, and most other people who visited the house, would spend a fair amount of time just looking at the books on the shelves, comparing notes on what we'd read, and asking to borrow books. Looking at the books, and being reminded of the experience you had with them or wanted to have with them, was a vitally important part of the process that I fear we've lost.
However, ROM files specifically (as opposed to executables for retro computing platforms) seem to be distinctly less fickle in becoming reliably emulatable.
The reason is seemingly simple: early ROM files were essentially an operating system, with all the software required to boot the system included in what now looks like one file.
That said, I am not a particular fan of books. They take a lot of physical space, are heavy and age. So as long as we stick to reasonable formats (e.g., text-based, non-binary), it should not be too hard for future generations to use our books.
Using DRM, on the other hand, might make things complicated.
You might argue - and I would agree - that this is not necessarily a good thing as far as content goes.
But the point is that putting something into writing and giving it a tangible form on paper gives it an inherent stability and authority missing from digital media.
We tend to think of digital media as temporary, disposable, relatively low-value simulacra of a Real Thing.
Digital media can be hacked, edited, deleted, and lost when the power goes off.
A copy of a book from hundreds or thousands of years ago is just going to sit there for some indefinite period. (Which actually depends on the quality of the paper - but in theory could be centuries.)
This is not about practical reproduction and storage technologies, it's about persistence and tangibility.
A book is a tangible object which has some independence from its surroundings. After printing, it's going to exist unless you destroy it. If you print many copies the contents are geographically distributed and it becomes very hard to destroy them all.
A file depends on complex infrastructure. If the power goes down, it's gone. If the file format becomes obsolete, it's gone. (This has actually happened to many video and audio formats.) If there's an EMP event, it's gone.
And it's not just a tangible difference, but a cultural one. We have a fundamentally different relationship with digital data than we do with tangible objects, and this influences the value we place on their cultural payload.
Any examples of video or audio files that are currently impossible to watch/listen to because knowledge of the file format, and all software capable of playing it was lost? If such a thing has happened, there are probably people interested in reverse engineering the format.
Things like laserdiscs I can probably still buy equipment to read, but the situation is substantially different in that I need dedicated technology to read them.
Microfiche is quite good in this respect: you can easily read it even without the specific tech it was made for (using a magnifier, or projecting the image with a simple light source).
I wonder if you could make a crystal where, like a hologram, you can rotate the crystal a minute amount in order to project a different page (an idea I saw decades ago had a digital-clock-style projection from a crystal, used as a sundial -- pretty sure it was theoretical).
That way the information is relatively easy to discover, and with a simple light source you can get the info out of it.
Then I think about Linear B, and I rest again.
On the other hand, old books, with their high-quality paper, binding and letter-press print, do seem to have some kind of personality...
That's their loss. There are very few people who want to learn math too.
I wrote my own implementation of RSVP, which has e-book reader support, and now it is absolutely my preferred method of reading; I read at 1000 WPM. Normal books are still enjoyable, but they now feel tedious and slow.
The project is here:
(It needs some help being updated for recent versions of Android, please let me know if you'd like to be involved! It has a new back-end API in place already and it just needs a few simple updates.)
I think it would be more readable to put it in an area big enough for the longest word, and then RSVP through the message repeatedly.
If anyone wants a simple way to play around with RSVP, here's a little quick and dirty command line reader I wrote a long time ago to play with this: https://pastebin.com/zfq2eW4n
Put it in reader.cpp and compile with:
$ c++ reader.cpp
$ ./a.out N < text
If a word (which is really just a string of non-whitespace surrounded by whitespace) ends with a period or comma, the delay is doubled for that word.
There's a commented out check that sets a minimum line length. If you compile that check it can put more than one word on a line to make the line at least a minimum length.
PS: this aligns the words on their centers. To change it to left aligning them change where it sets the variable "pad" to use a small integer instead of basing it on the length of the word. If it is the same for all words, it becomes an indent for left alignment instead of a pad for centering.
BTW - The Google Play link in the repo doesn't work for me, and I don't see Glance in F-Droid. What's the easiest way for non-developers to get the APK?
I wouldn't recall a thing at this speed, nor at 600 which is shown just prior to the time stamp above.
This is a poor implementation of RSVP, as each word is presented at the same speed. Longer words should be given longer presentation times, as should words with punctuation marks. The words are also centered rather than left-aligned, which requires a saccade for each word and defeats the whole point. It's also a difficult text to start with, with no context.
Even so, I didn't have a problem reading and recalling this text, though I wouldn't recommend it for a beginner.
(https://itunes.apple.com/us/app/zipf/id1366685837?mt=8 if you're interested.)
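A length-aware schedule like the one described above could be sketched as follows. The specific constants (5-character baseline, 20 ms per extra character, 1.5x and 2x pause multipliers) are my own guesses for illustration, not values from any particular reader:

```cpp
#include <string>

// Presentation time per word: start from a base delay for a target WPM,
// add time for each character beyond a typical word length, and lengthen
// further when the word carries a clause- or sentence-final mark.
int presentation_ms(const std::string& word, int wpm) {
    double ms = 60000.0 / wpm;
    if (word.size() > 5)
        ms += 20.0 * (word.size() - 5);   // longer words get longer exposure
    char last = word.empty() ? '\0' : word.back();
    if (last == ',' || last == ';')
        ms *= 1.5;                        // short pause at clause boundaries
    else if (last == '.' || last == '!' || last == '?')
        ms *= 2.0;                        // longer pause at sentence ends
    return static_cast<int>(ms);
}
```

At 300 WPM this gives 200 ms for a short word, 320 ms for an 11-letter word, and 400 ms for a short word ending a sentence.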
Does it actually require a saccade?
Testing with a quick and dirty command line RSVP program I have, my speed and comprehension seem about the same with either centered or left-aligned text. But I'm mostly testing with fiction written for the average adult. The words are usually short enough that they fall within the field of good visual acuity no matter where within them the focus is.
I've not done a comparison using text with a lot of long words.
Reading this way is a skill which needs a small up-front investment, but the payoff is immense. The trick is to not try: just relax, pay attention, and let the words speak to you as if they are being narrated inside your head.
Because there are no constant micro-interruptions from page scrolling, ads, or even from your eyes' own saccades, I find that my attention to the text is much, much better, and if I need to stop to ponder something I can just tap the screen to pause it.
I also find that I am far, far more likely to finish an article/paper/chapter via Glance than via my browser. These days it's pretty rare that I'll actually finish an article online, but with Glance I'll almost always read the entire thing from start to finish.
I really, really recommend this skill, especially if you have a lot of time to kill on a mass-transit commute, or if you just want to read more.
As one of the more recent additions to that essay shows, I'm not the only one with that opinion:
As I note in the essay, there are some technologies that work better than others for preservation, but digital's biggest weaknesses are inherent, I think.
First, what's the purpose of books?
There's entertainment of course. But let's focus on books that teach. Their purpose is to let you access some knowledge, in a deep way.
On the other hand, computer systems are starting to fill that role, and the level of depth they can achieve is growing. On a good day, with the right query, Google may give you access to amazing content, content that helps you connect different concepts, based on your past searches, just like your brain does.
Another option: if it were easy for a book author to package her knowledge in a smart chatbot or maybe an expert system, we could have hundreds of such advisors advising us, or just interactively chatting with us and correcting our mistakes. That would be an interesting replacement for the book.
The second edition.
And how could books be improved?
Until a different unpowered, human-readable medium is proven over a longer period under more careless storage conditions, it looks like a third edition, if appropriate.
If it hasn't been printed it hasn't really been published as thoroughly as it could be, and if it hasn't been bound then it's not yet a real book. Up until recent decades the survival of unique knowledge was largely dependent on the number of copies printed and distributed, so popularity has had undue importance.
But don't let earlier editions become lost, or woe unto you.
Fun fact: HN's been around for 10% of a century. That makes Arc one of the longer-lived programming languages.
Re: the ability to take books off the shelf and read them, Library Genesis has made a lot of progress in that area. http://libgen.io/
And then there is a trend to switch to onboard soldered flash in new devices which adds further problems.
Not saying this is impossible to overcome, but "PDF might still work" at best solves part of the problem, and only for part of the data one might want to preserve (PDF is great, but only for some types of data).