General and Surprising (paulgraham.com)
415 points by maguay 12 days ago | 196 comments





An unfortunate corollary is that it's very difficult to get people to listen to slightly different general ideas, because they feel like they already know them: there's not much new, or the difference seems negligible. That might be true of any particular case someone thinks of, but when the entire landscape is taken into account, the multiplier pg mentions applies and can change things dramatically. People just don't seem to be well wired to think of the general.

And for any general multiplier to work, people generally need to be on board.

For example: there is a difference between economic inequality and desperate economic inequality. The first is a mere difference in ownership and control. The second is /also/ a difference in ownership and control, but one that leaves a party or group bargaining against their physical well-being. Much is said about mere economic inequality but, by leaving out the desperation factor, the public conversation gets tied up in knots (e.g. the "Well, why don't we give everyone a million dollars?!" nonsense), and the real problem -- desperation and the conditions that lead to that state -- persists even though we have the tools to potentially tackle it. But any single example -- a single mother who has to take two minimum-wage jobs to feed her kids -- can be reduced away. She could just do x, y, or z, and her particular problem would be solved: she'd still be economically disadvantaged, but not desperate to the point of worrying about her kids starving. And the conversation ends. We are back to the inconvenience of a weak bargaining position, which is also described by the phrase 'economic inequality', and it all feels like the same well-trodden and religiously guarded ground.


> It's not true that there's nothing new under the sun. There are some domains where there's almost nothing new. But there's a big difference between nothing and almost nothing, when it's multiplied by the area under the sun.

I found that statement the most insightful of the essay. Helped me resolve some cognitive dissonance.

Whether there really are totally new things, and not just existing things in new packaging, depends on which level of abstraction you stop at when analyzing. For example, at a very high level you could say that the Internet as a whole is really just a very advanced/efficient printing press. (And automation like the printing press or the assembly line is really just efficiently organized production.)

That level of abstraction may be less than helpful, unless there are insights about where the internet is going to be gleaned from looking at printing-press technology. Their impacts on empowering the masses are similar, but each also enabled network effects that consolidated much power (newspapers then, Google/Facebook now), thereby effectively countering much of that empowerment.


I agree, this statement seems like the most interesting one in the essay.

It reminds me of expected value calculations: extremely low probability events of tremendously high impact could actually out-rate moderately probable events of moderate impact in terms of how concerned you should be with them. Assigning meaningful probabilities and impact values can be tricky though, which adds a whole layer of complexity and hand-waving to the problem.
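
To make that concrete, here's a toy calculation in Python; every probability and impact figure below is invented purely for illustration:

    # Expected impact = probability * impact (impact on an arbitrary "badness" scale).
    events = {
        "rare catastrophe": {"p": 1e-6, "impact": 1e9},
        "common nuisance":  {"p": 0.3,  "impact": 100.0},
    }

    for name, e in events.items():
        print(f"{name}: expected impact = {e['p'] * e['impact']:g}")

    # rare catastrophe: expected impact = 1000
    # common nuisance: expected impact = 30
    # The one-in-a-million event dominates, and the hand-waving lives entirely
    # in justifying the 1e-6 and the 1e9 in the first place.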

This is where discussions of nuclear war, strong AI, nazi-oriented political resurgences, geomagnetic storms, epidemics, etc. can really go off the rails, in my opinion: two people can separately evaluate the "worst-case scenario" as very different realities, assigning very different intuited impact values, and also very different probability values. Even if they both think about these things from an expected-value perspective, a discussion between them becomes at least a 2-variable linear equation, and the odds of even understanding each other are stretched, let alone of finding the same x and y and agreeing upon what that means for the actions we should take in response.

Graham's point here on the positive side though is a refreshing step outside of that domain. An idea recycled, if broadly applicable enough, only needs a hint of novelty. It gives me some hope for being able to come up with useful things.


I was thinking recently about idea cross-pollination, and that kind of hits the same gap. Most of the time, it doesn't really lead anywhere interesting. But every once in a while, you get that perfect combination of a few different domains that just explodes.

"the more general the ideas you're talking about, the less you should worry about repeating yourself"

A rule-of-thumb I've generally followed when brainstorming is "never write down the ideas". And if at all possible, try to forget them, good or bad. It inevitably forces me to re-think the ideas "from scratch". I've never been worried that an idea would escape me or that I would forget it, as I think the exercise of re-thinking ideas from scratch has helped me build a relatively solid mental model of the world. I feel that, with practice, this method of brainstorming has started bringing me to the solid / useful (general / surprising) ideas faster.

It has likely, over the years, added weeks' or possibly months' worth of time that I've spent brainstorming the same (general) ideas over and over.


As an aside, I can't say for sure how you use the term "brainstorming" here, but there is plenty of evidence that classic-style brainstorming in groups leads to fewer ideas, and less creative ones.

When I was forced by my boss to teach designers brainstorming, against my better judgement, I instead made this slideshow summarising my frustrations and what I would suggest people do instead:

https://docs.google.com/presentation/d/1EYyS4AtRhNa6i299R_RB...


Nice presentation. Something I've seen a lot over the decades is internal politics. One huge advantage of not using a brainstorming team is that 99.999999% of humanity is more creative, by sheer mass of brain power, than the team will be (six billion people vs. a team of ten), so almost all good ideas, for any problem, will be discovered outside the team.

If the team, as a group, is in charge of the brainstorming, 99.999999% of the good ideas will come from non-team individuals, who will be automatically shot down with "not your job", "work on your own team", and similar primate dominance games. If you brainstorm individually, that 99.999999% of good ideas has a better chance of getting implemented. Groups hate outsiders (and by extension, their ideas); without a boundary there is no group.

Another internal politics problem is reward. Teams exist to funnel all success upward and channel all failure downward. Everyone knows that if they actually have a solution, it's wasted on the team unless they're the leader. There are financial and career pressures for teams to fail so individuals can keep the fruits of their labor, so to speak. This can lead to very low team performance.


Thanks, although I personally feel it's more like a decent article disguised as a bad powerpoint (text overload) - works better when sharing it afterwards though.

I've been told it's better live when I combine it with barely-contained frustration - turns out to be a very relatable experience for many.

I have been lucky enough to never have been in an organisation with a dedicated brainstorming team, but your description definitely sounds like how it would play out in reality. Thanks for sharing, now I know it's a red flag.

If you combine that with brainstorming being misused for identifying problems (instead of solutions to known problems) like I mention early on, things probably get even worse.


> (instead of finding potential solutions to known problems)

You may like this quote from John Steinbeck's 'East of Eden':

> Our species is the only creative species, and it has only one creative instrument, the individual mind and spirit of man. Nothing was ever created by two men. There are no good collaborations, whether in music, in art, in poetry, in mathematics, in philosophy. Once the miracle of creation has taken place, the group can build and extend it, but the group never invents anything. The preciousness lies in the lonely mind of a man.


Brilliant quote, thanks for sharing.

I loved this! You should post it as a submission here, not just a comment.

I could do that. What's the etiquette for this?

"Show HN: [Title]" with an explanation in the comments?


That sounds good to me. I don't know a lot about the etiquette other than as long as the original title is intact and non-clickbaity it is fine.


The ideas in this presentation are great. Thanks for posting

Agreed, really stimulating and well put. Thanks for sharing.

Indeed, great presentation, thanks for sharing :)

Thanks, glad to hear I'm not the only one tired of sitting in unproductive brainstorming sessions for the wrong reasons.

Not that I think what I suggest is perfect either, but it seems to be a bit better aligned with how ideation works.

Hope it will be of use to you all! :)


I find group sessions generally aren't very productive either. I am strictly talking about brainstorming I do independently.

I figured - the way you described your process wouldn't fit the systematic brainstorming style :)

Anyway, I think I get what you're saying about not writing things down: you're forcing yourself to start from the ground up, so the result matches the problem better. Having said that, I wonder if this isn't more dependent on how you note things down and organise your thoughts, because, like others have mentioned here, I would have lost a lot of ideas if not for short scribbles.


I usually write down everything and then go back through and determine what's worthwhile. There's almost always gold that I never would have remembered had I not written it down.

There is a risk here that one might trick oneself into believing one has made good progress in developing an idea, when an attempt to write it down in some detail would expose the fuzziness of our internal thought processes and show the perceived rigor of the idea to be somewhat illusory.

I've found this phenomenon to reveal itself even more explicitly when attempting to explain some of my more complex thoughts in detail to others.

I think the benefit you are aiming for can be achieved by making an effort to revisit prior ideas with fresh perspectives and from different angles, and then later comparing and contrasting the different attempts.


I've found notes from years ago describing the thing I was thinking just a few months ago..

Makes me feel I should write them down, and revisit/expand on them properly..


I'm glad Paul is still writing essays. His output seems to have dropped off in frequency, and this essay hints at one reason for that. I hope he continues to write even if people disagree with his ideas or if they are repetitive.

On the same note, PG's "official" HN account hasn't been active for years:

https://news.ycombinator.com/threads?id=pg


One year ten months, to be precise.

His account now makes comments once every year or year and a half, but it hasn't been generally active for nearly 3.5 years (the last bunch of comments in a row in a short time period was 1264 days ago, as of this writing).

I posted a comment with a negative sentiment below, but I agree with you. A lot of his ideas and insights are valuable and thought-provoking, and I wouldn't want the tone of these comments to make him stop.

The essay itself is general and surprising, and the combination never loses its effectiveness.

Ideas keep evolving. There was nothing new about the idea of a search engine when Google came into existence in 1998.

In 1945, Dr. Vannevar Bush wrote an article titled As We May Think:

https://www.theatlantic.com/magazine/archive/1945/07/as-we-m...

> he urges that men of science should then turn to the massive task of making more accessible our bewildering store of knowledge.

In the 1960s, Gerard Salton created and developed Salton's Magic Automatic Retriever of Text (SMART). He also authored a book called A Theory of Indexing, detailing his early findings that search largely rests on relevancy algorithms. Here's a very interesting blog post titled Search Down Memory Lane by Tom Evslin, who worked with Salton on this project:

http://blog.tomevslin.com/2006/01/search_down_mem.html

Fast forward to 1990: a man named Alan Emtage created the first search engine, known as Archie, which retrieved files from a database by matching user queries with regular expressions. Following the growth of Archie's popularity, two similar search engines, Veronica and Jughead, were created, and they started indexing plain text files.

In 1993, the first bot, called the World Wide Web Wanderer, was created; it was later upgraded to capture active URLs and store them in a database. Then came ALIWEB (Archie-Like Indexing of the Web), which crawled the meta information of pages.

There are others, including WebCrawler (1994), Lycos (1994), AltaVista (1995), Excite (1995), Yahoo (1995), Dogpile (1996), and Ask Jeeves (1996).

This is just my superficial understanding of how the idea of search engines was born and has evolved over time. They were very different from Google, but they bear similarities to how data is processed and analyzed today.


- Facebook wasn't the first social network. People were connected, shared pictures, and had friends before.

- Amazon wasn't the first online store.

- AirBnB wasn't the first short-term rental website.

- Netflix wasn't the first online movie portal.

- Apple didn't make the first cell phone or even the first smart phone, or the first music player or the first home computer.

- Tesla didn't make the first electric car.

- etc.

Still, the one criticism that startups hear again and again is "You are late - it's already there".


Just to play devil's advocate, what was the first online store? I think Amazon was pretty damn close, having started in 1994 as a store that simply stocked a large selection of rare books. The shipping took weeks as I recall.

It was not an obvious idea at the time, simply because there weren't that many people on the web. It was around the time that SSL was being developed [1] -- so if you were much earlier you probably couldn't take payments securely.

There were probably mail order catalogs that had websites, but I think that is a different thing than a "web store".

Also, Netflix wasn't the first online movie portal, but I'm pretty sure they were the first people to make a business out of mailing DVDs! It's crazy that they started like that.

[1] https://en.wikipedia.org/wiki/Transport_Layer_Security#SSL_1...


> I think Amazon was pretty damn close, having started in 1994 as a store that simply stocked a large selection of rare books. The shipping took weeks as I recall.

Afaik, they didn't really carry any stock when they first started. They were re-selling mail order books.

A while back there was a documentary about Amazon's early years. The part that everyone always quotes is the thing about using doors as desks. (Which struck me as silly; doors are expensive compared to many large, flat surfaces.) Anyway, the part that really stuck with me was that they had one favorite mail-order place, except that shop had a ten-book minimum order. So Amazon would order the one book someone wanted and then nine copies of some obscure book about snails that was never in stock.


Yup I think I saw that same video on YouTube. They didn't have stock, so they were sort of like a meta-retailer. They presented an illusion of large selection.

I think the value was the interface and ease of searching, and not inventory and retail operations.


Vaguely pertinent, but ViaWeb was founded in 1995 by pg (and others). Amazon was apparently founded in 1994, but wasn't called Amazon until 1995 -- or at least that's what my cursory skim of Wikipedia suggests.

So, I don't know that it was the first or not, but Bezos was certainly not the only one thinking about it at the time.


CD Now beat Amazon by a few months: https://en.wikipedia.org/wiki/CDNow

But you're right, Amazon was basically there at the beginning of online retail. A better example is YouTube. Many online video sharing sites had started and died years before YouTube was a gleam in Janet Jackson's pasty: http://usatoday30.usatoday.com/tech/news/2006-10-11-youtube-...


I think YouTube hit the right combination of UI and network speed improvements. Before that, either the UI was glitchy or the average person's network speed was too slow to stream video usefully.

Also, I believe the only way to deliver that content to a wide variety of browsers at the time was Flash. In fact, Adobe/Macromedia may have added the video functionality only in the prior five years or so; before that, I think you couldn't even do it without making people install a plugin.

Remember this was well before Chrome came out, and Firefox was still nascent. So a lot of people were using IE.

I think Flash was already starting to fall out of fashion by then, but they realized there was a valuable technology deployed in it (video codecs).


Both true: Amazon was one of the first web stores, and Netflix was one of the first movie portals. Similarly, the iPod and iPhone weren't the first music players or smartphones, but they were the first ones that worked really well. Remember how you could choose between 20 different mp3 players, all quite cheap but usually of crappy quality and without a music store? Remember how you could buy several smartphones, like an Ericsson with a stylus or a BlackBerry with a keyboard?

Doing it right matters more than doing it first.


There's value in being first to market, but what those examples really show is that first isn't all it's cracked up to be, and that marketing and sales are crucial for success.

First to market is a tactic and potentially an advantage, but it isn't the end-all, be-all of whether or not you'll be successful.

I'd say all of those examples show that you need to be first to the actual market, not just first, or first to _some_ market.

Is there a distinction between "it's already here" and "it's already being made popular by someone else"?

That distinction could be demonstrated by market maturity, which could be further defined by how large the incumbents are and how saturated the market has become.

Google is a giant now, and has made the search engine market both mature and saturated (along with its competitors). Before Google came along, there were several other search engines, but the market wasn't at all saturated. It was much easier to enter that market with a well placed innovation in searching while the market was primed to grow even larger among a set of competitors who had not dominated it yet. The market was large enough that an innovative company would capture first time users and spread itself organically among them, eventually stealing them from other engines in the process.

Today, by contrast, the search engine market has become saturated. To start a new search engine, you'll need to steal users from an existing search engine, because there is virtually no one who doesn't already use one. As a basic concept, it's extremely difficult to differentiate on the core idea of a search engine. You will not beat Google just by offering a new technical innovation; you'd likely have to beat Google by adding search innovations to a novel platform that is emerging to challenge Google, before they capture that platform as well. That sort of combination would offer a new fertile market that hasn't been saturated. As an example, a search engine for applications would have been a legitimate threat, had Google not already developed it for Android and Chrome with Apple competing.


Right, but I was only disputing the OP's point about "it's already here" being an illegitimate concern about a startup company.

The future is a spiral of iterative improvement with occasional leaps up a few rungs on the spiral.

Google didn't make the first search engine, either.

What general ideas did we sow in the past 20-30 years that, through minor iterations, are coming to (surprising) fruition today?

Machine learning

Yup

You can trace the origins of the ANN to the 40s and 50s. Since then, statistical ML has come into vogue a few times, but not like today. Phones and other data sources (plus the algorithms to properly deal with all the data) have ushered in our current renaissance.


Huh. Why do you single out phones as a data source? That seems like a weird statement to make. Neural nets are in vogue because everyone has a phone?

Everybody has a camera in their pocket that takes pictures for free. That's increased the number of images that you can train a neural network on (and the number of images that you might want to classify with it) dramatically.

I remember when I was a kid, you had huge binders of photo albums, and the total number of pictures that a family might own numbered in the hundreds. Now I take that many photos on a week-long vacation, or sometimes just in ordinary life, and my cloud backup has maybe tens of thousands. Multiply that by a billion cell phones and that's a lot of images out there.


Data was never the issue for neural nets. It was the compute power required to run them. The most popular image recognition dataset today was mostly taken pre-smartphones.

When I was in school, studying ML, we were always able to beat NN performance with SVMs on the datasets available. It's only through larger training sets that NNs start to shine.

NNs will still be beaten by SVMs on simple datasets, even big ones. Where NNs shine is when the data has an underlying structure that can be exploited, like how convolutions take advantage of the 2-dimensional nature of images, or how RNNs take advantage of sequential data. Algorithmic improvements like dropout and proper weight initialization have also made them a bunch better than they used to be.

If you know what the structure is, then you could write a proper kernel and then SVMs are better again. NNs win when you don't know what the structure is and it's too complex to approximate with a standard kernel.

I'm not aware of anything like a convolutional SVM, but I'm not terribly familiar with them. As I understand it SVMs are fundamentally shallow learners, and if you try to hack them to make them deep and recurrent or convolutional, you just get weird NNs.

Shallow in the sense that SVMs are not layered, but the layering of a NN is only to enable the humble perceptron (logistic or sigmoid, usually) to model more complex functions. In contrast, the SVM doesn't need to layer, because the kernel can be arbitrarily complex.
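
A minimal sketch of that trade-off, assuming scikit-learn; the dataset and settings here are stand-ins for "data whose structure you happen to know":

    # When the structure is known (concentric circles), a kernel that matches it
    # lets a plain SVM compete with a small neural net that must learn it.
    from sklearn.datasets import make_circles
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPClassifier
    from sklearn.svm import SVC

    X, y = make_circles(n_samples=500, noise=0.1, factor=0.4, random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    svm = SVC(kernel="rbf").fit(X_tr, y_tr)  # kernel encodes the radial structure
    mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0).fit(X_tr, y_tr)

    print("SVM accuracy:", svm.score(X_te, y_te))
    print("MLP accuracy:", mlp.score(X_te, y_te))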

Machine learning and neural networks are in vogue because it's easier than ever before to collect data for them to be trained upon and process. Phones are a part of that for sure.

Meh, Duda and Hart came out in 1973. I could just be somewhat old and bitter.

Free unix.

Cryptocurrencies.

Military robotics.

Personal drones.

Augmented reality.

Freedom of information.

Pervasive media piracy.

Darknets.

State level hacking.

Technofascist dystopia in general.


Increasingly cheap wind, solar and battery hardware.

Smartphones (also enables vastly cheaper batteries and drone components).


The idea of a search engine wasn't new, but even today millions of linear algebra students are amazed when they find out how Google's PageRank works.

How does it work?

https://en.wikipedia.org/wiki/PageRank

Really neat, simple, iterative algorithm.
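
For anyone curious, here's a minimal power-iteration sketch on a made-up four-page graph (0.85 is the damping factor the original paper suggests):

    # PageRank by power iteration on a tiny hand-made link graph.
    import numpy as np

    links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}  # page -> pages it links to
    n, d = 4, 0.85                               # page count, damping factor

    # Column-stochastic matrix: M[j, i] = 1/outdegree(i) if page i links to page j.
    M = np.zeros((n, n))
    for i, outs in links.items():
        for j in outs:
            M[j, i] = 1.0 / len(outs)

    rank = np.full(n, 1.0 / n)  # start uniform
    for _ in range(100):        # iterate to (near) convergence
        rank = (1 - d) / n + d * M @ rank

    print(rank.round(3))  # page 2, which everyone links to, comes out on top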


Might a parallel be drawn to Kant here?

General and bland would be akin to "analytic a priori". All bachelors are unmarried, that sorta thing.

Specific and surprising would be like "synthetic a posteriori". Such and such plant is blue and grows in the Himalayas.

The real gold would be in the "synthetic a priori" findings. Blending the generalness of logic with the newness of empiricism. Like figuring out geometrical laws from observing nature. New information AND far-reaching implications.

Am I on to something here?

https://en.wikipedia.org/wiki/Analytic%E2%80%93synthetic_dis...


https://plato.stanford.edu/entries/analytic-synthetic/

"While some trivial a priori claims might be analytic in this sense, for Kant the seriously interesting ones were synthetic."


Synthetic a priori = getting epistemological alpha (to speak financese) :)

    [T]he more general the ideas you're talking about, the less you should worry
    about repeating yourself.
Organisms are built to train over and over to get better at things. Certain abilities are innate: a calf knows to stand up as soon as it's out of the womb, but to stand firmly it has to keep trying.

When it comes to an idea, I believe that the broader I present it, the easier it is to start a conversation, because others are more likely to show interest, and eventually you reach a focus point. However, I still repeat myself, because I wouldn't invent a whole new dialogue the next time I bring up the same broad idea with another person.

On the other hand, I feel that when developing a product (startup or not), the more general the idea is, the harder it is for me to explain it concretely. It's much like films having leading roles: the ideas presented by a film can be broad, but you don't want the camera rolling over 200 actors in an hour and a half, do you? Your message can be interesting and novel, but I am going to be lost.

But yeah, on HN, topics like "I quit FB", "security is hard", "Google is evil", etc. are common, but I still comment on them when I feel like it. My responses are usually similar, but I might add new thoughts, or rephrase what I had written before. I call this introspection. I believe we are not comfortable repeating ourselves, because we want to be interesting and original, but we still have the urge to try to perfect our speech the next time someone asks.


When I read stuff like this and reflect on the fact that this is about the peak quality of the philosophical discussion that is going on in our little niche of the world, I get slightly depressed. I get that it's hard to find people at the intersection of astounding writing and technical chops, but still, it makes me want to distance myself and just go back to reading the classics.

What community centered around professional skills do you think has better philosophical discussion? Sure, there exists better philosophical discussion elsewhere, and communities centered around academic disciplines can be more sophisticated because that is their raison d'etre. But when I think of professional communities (e.g., accounting, medicine, law, etc.), I can't think of any better except insofar as they bleed into academia (e.g., blogs by law professors, not practicing lawyers).

The medical field has an embarrassment of riches when it comes to talented essayists, journalists, and biographers. I've also read a few wonderful biographies written by restaurateurs. Our field unfortunately lags behind, I think partially because it's rather dry compared to other fields, and partially because SV ideology is pervasive, stifling, and indifferent to many swathes of the human experience.

This is interesting. Would you say that engineers and mathematicians also suffer the same embarrassment? If so, could all of this be a side-effect of being in the world of precision vs being in the world of creative trial and error (where every attempt at writing is judged on the interpretive aspect of it)?

I am saying: I get nasty ineloquent on days I'm zoned into coding. Almost like I'd rather not say than say anything I judge to be imprecise.

Like this post.


This is something I've noticed too. I tend (when I'm coding well) to write with many clauses, and also elaborate them with elaborate punctuation, [, ( ,- , etc...

ak, I believe you may have just written something moderately insightful. Anyone else?


I'm not sure if this is exactly the sort of insight you're looking for, but personally I find that long periods of coding and being "in the zone" tend to affect my speech in the same way. Essentially, I tend to try to be as precise as possible when speaking, often going out of the way on increasingly specific tangents or finding the most elaborate examples to demonstrate what I mean. These tangents and examples always come back to the main point (so I'm not just wandering off), but when considering conversations in retrospect I find that they tend to just muddy the point I was trying to make.

Written language and code can afford the verbosity, because you can always review the earlier points to see how they tie in, but speech should be short and succinct, IMO.

I believe this is similar to (but not exactly) what is meant by "circumstantial speech" [0].

That said, the wikipedia article seems to indicate that what I'm trying to describe is similar to circumstantial speech, but actually isn't:

"Some individuals with autistic tendencies may prefer highly precise speech, and this may seem circumstantial, but in fact it is a choice that posits that more details are necessary to communicate a precise meaning, and preempt more disastrous ambiguous communication."

Does anyone know of a more accurate term for this?

[0]: https://en.wikipedia.org/wiki/Circumstantial_speech


I've experienced this, and, per my other comment, I think it's a good example of the sort of thinking that is (analytically) philosophical, in contrast to the sorts of artistic prose m_fayer brought up.

I've also experienced this, especially when working from home for extended periods of time. Thankfully the compiler's application of rules isn't capricious, nor does the computer get annoyed when I repeatedly ask it tiny variations on the same question.

I don't necessarily disagree, but little of that falls under the heading of "philosophical discussion". Medicine is romantic and salient to laymen, so of course journalists are going to write more biographies and whatnot for them on that topic. I almost never find such pieces philosophically insightful, and they aren't competing with Paul Graham essays.

Examples of these riches, please?

I got something out of it. How about being constructive and share some writing you think is good, or some of your own writing?

A page of text is not meant to compete with the "classics", so if that's what you're looking for you shouldn't read this site at all.


I like HN's culture of replying with a call to action in response to complaints. I mostly wrote my comment with the hope that it might nudge all of us to think about our prose a slight bit more. (I know we're all mostly writing as a charity and do not have much incentive to make any improvement, other than maybe the creative joy or reputational benefits.) I definitely do not hope my comment will shift the trend of our community towards negativity over encouragement. But sometimes sharing your honest feelings can incentivise the writers to improve.

Eh, that's not what your tone indicated. It seemed like more of a middlebrow dismissal. But I'm glad you changed your attitude. It's a lot easier to criticize than to contribute.

Well, don't keep us in suspense - tell us of these classics that make concise points, and don't go on and on for hundreds of pages (I'm assuming you're talking about philosophers here)


Only slightly related, but there is a great book for your children: "What Do You Do with an Idea?"

https://www.amazon.com/What-Do-You-Idea/dp/1938298071


What should one do?

Paul Graham is running out of ideas these days. (Partly joking).

This matches a discovery of my own: life itself is a constant process of incremental trial-and-error build-up. Over a few billion years, you get something out of it (human beings playing Snapchat).

If you keep counting from 0 to an infinitely large number, at some numbers you'll "discover": Windows 7, Windows 7 Korean Language, Microsoft Office, OSX, and every game that humans designed and developed. Plus a bunch of other software from alternate realities that compiles on our machines. Cool.

The process humans follow is not much different (albeit one can argue that it is much more efficient than stupidly counting). "Everything" is already there. The new Audi A4? It was already "possible".

If there is one thing that will change the future, it is our discovery of the link between "abstract" and "real" concepts. They share so many important points that I'd not be surprised if "real" and "abstract" turned out to be the same thing/continuum. Like space and time.


That's an interesting thought. Just want to point out that complexity theory from Computer Science fits nicely into this, because it asks not what is "possible" (e.g. finding all programs by counting all the integers), but what is possible within limited resources, such as time.

A nice addition to the thought that, theoretically, even if you can find the Windows 10 source code by counting all integers, you can't actually do it within the lifetime of the universe without a smarter algorithm.
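
A back-of-envelope version of that, with deliberately generous made-up rates:

    # Enumerate every bitstring of just 100 bytes at a wildly generous
    # 10**18 candidates per second; all figures are rough.
    candidates = 2 ** (8 * 100)    # ~10**241 possible 100-byte strings
    seconds = candidates / 10**18
    universe_age = 4.35e17         # seconds since the Big Bang, roughly
    print(f"{seconds / universe_age:.1e} universe lifetimes")  # ~1.5e+205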


More like infinite resources. But is there a lifetime for the universe?

> keep counting

You can always find the successor of a given integer, but there's not really an analogous process for human discovery, where it's easy to go in circles. The process really is that different.

Are there theoretical situations where random searches are guaranteed to be complete on infinite spaces given "infinite time" (probability of finding solution approaches 1)?


Before you lie one thousand blocks of marble. Inside every one is a world-famous sculpture. Eventually you will find it...

A very academic/theoretical post. I'm not entirely sure of the point.

It might have been easier to understand if tied to real-world examples.


>that territory tends to be picked clean, precisely because those insights are so valuable

People often think that a field has been 'picked clean' and conclude that either no further progress is possible or that further discoveries are going to be rare and incremental.

e.g. "the future truths of physical science are to be looked for in the sixth place of decimals" -- Michelson, 1894

But there are always deep problems and this implies that unlimited progress is possible. The really big discoveries can happen at any time and are of an unpredictable nature, not least because they affect multiple fields. They leave behind plenty of smaller problems for everyone to pick up.


That's especially funny from the same Michelson who did the Michelson-Morley experiment that basically kicked off relativity.

The essay describes a 2x2 grid with axes of generality and surprise. Things that are surprising and general are valuable. Things that are general but unsurprising are platitudes, and things that are specific but surprising are gossip. The essay itself is a platitude.

I think "Repeating close variations on your usual theme unlocks far more value than you'd expect given minimal novelty value" is a surprising result. I utterly buy it.

The advice I give which has produced the single biggest deltas in outcomes is "Charge more." It is so simple that I could literally print it on T-shirts and wear it to any event which discusses pricing. People know it is my catchphrase and sometimes I get knowing laughter when I say it...

... and then a few minutes later they've agreed to try charging more, despite having an accurate model which suggests "Hah, I bet when we ask Patrick about our new pricing he is going to ask us what it is, think about it for less than five seconds, and then suggest charging more." They knew what I'd say before I even got in the room, but even the tiniest marginal connection to their own pricing grid / customers / data pushes them to actually try it.


> I think "Repeating close variations on your usual theme unlocks far more value than you'd expect given minimal novelty value" is a surprising result. I utterly buy it.

I was hoping for an example, but your example of repeating your catch phrase doesn't quite fit imo. There's no variation. Is there a variation of your catch phrase or message you're not mentioning here?


I think the variation at each point is that he says "charge more because (some detail about the askers customers/costs/business model)".

Are you actually surprised (isn't that just "practice makes perfect", a platitude), or is "more value than you'd expect" just telling you to be surprised?

It misses an important element that may seem obvious but isn't: truth. Some insights are indeed general and surprising but, when examined closely, turn out to be false.

That's what happened to many of the ideas presented by Daniel Kahneman in Thinking, fast and slow. He obviously aimed for "general and surprising" but accepted as solid results the outcome of lone, irreproducible experiments.

We (our current culture -- it wasn't always like that) love originality and "surprises", but as a rule, the more surprising the result, the more scrutiny it should withstand.


Circulating ideas where people disagree about whether or not they're true is an excellent way to discover things. Not just whether the particular idea is true, but why or why not, and where the uncertainty comes from.

In your example, perhaps the most lasting value of priming studies is a much greater understanding of how p-hacking actually misleads us. Everyone knew that it could in theory, but it was general and surprising that entire fields could have an apparent scientific consensus entirely based on p-hacking and file drawer effects.


Maybe we will learn... but maybe not. From Daniel Kahneman himself:

> there is a special irony in my mistake because the first paper that Amos Tversky and I published was about the belief in the “law of small numbers,” which allows researchers to trust the results of underpowered studies with unreasonably small samples. (...) Our article was written in 1969 and published in 1971, but I failed to internalize its message.

We simply want to believe.


Brutal but accurate summary! I get the feeling there is something he is not (or cannot) saying that prompted the essay.

Yeah, it has a bit of a vaguebooking feel!

I saw this essay as a process to come up with an interesting idea. He's even applying this process to the essay itself. Start with something general and iterate until you find something novel. Maybe he's there, maybe something is not quite right but it's another step in the gradient ascent random walk.

What you describe is the setup. The key point of the essay lies in the later part about the delta of novelty and how useful an insight can be in relation to its novelty and generality.

Which is a pretty banal conclusion that stems right from the setup.

You don't say why that is meant to make it a platitude. Why?

It's funny, a lot of my imposter syndrome comes from an anxiety about not having massive "deltas of novelty", as PG would call it.

I guess all I can do now is hope that whoever is listening can see the value in incremental change :)


Most probably your impostor syndrome comes from an anxiety. That's it. There's nothing more to it. It's unhealthy.

Stop being anxious, go out in the world and see what your ideas are worth.

- Capitalism would define their value on how much you'd earn from these ideas.

- Writers would value them based on how many readers buy your books.

- Developers would value them based on how many contributors your open-source projects have.

- Or die and keep your diary and maybe in 50 years you will turn out to be the genius who saw the future.

But still - none of that absolutely requires anyone to be anxious.


> Most probably your impostor syndrome comes from an anxiety. That's it. There's nothing more to it.

This implies that anxieties themselves have no causes.

> Stop being anxious

I'd like to see your advice on other things. "Stop being sick!", "Stop being afraid of spiders!", "Stop being introverted!"


Anxiety is caused by a form of ruminating about the future in which you focus on uncertainty. Get out there and eliminate all uncertainty and you will feel your anxiety disappear! Tell your boss you feel unappreciated, talk to strangers, and generally do things that you feel anxious about.

If your anxiety is so strong that you can't do those things, then get therapy! It changed my life, and a good therapist can change yours as well.


I think it's great that you are sharing your personal experience. This can help others. But saying "and a good therapist can change yours as well" is a dangerous thing. It worked for you but you can't know for sure if it will work for others.

Share your experience but don't tell others that your way will work for everyone.


I used "can" to mean that there is a possibility. I did not say "will" which removes all doubt.

"Can" means that he can do it. "May" would be a better word.

I don't see any connection between what I wrote and your reply.

In any case, you are wrong about the cause of anxiety. That might be a cause of it, but there are plenty of other causes. See in particular the "medical causes" section of http://www.mayoclinic.org/diseases-conditions/anxiety/sympto... It can be a symptom of an underlying illness.


The OP seemed to indicate a mild social anxiety to share their ideas, not a crippling, clinical anxiety that requires professional attention, so "stop being anxious" is a perfectly valid piece of advice. IMO the easiest way to get over imposter syndrome is to "fake it till you make it," which is another way of saying "stop being anxious and do it anyway."

[flagged]


Reminds me of Coach McGuirk from Home Movies when he has insomnia.

"Oh, you mean all I need to do to fall asleep, is just close my eyes and go to sleep?!"

In all seriousness, I mean, who doesn't have anxiety, right? If someone broke into your house, you'd start feeling anxious.

I focus on building things, on going outside and being happy.

And yet people still don't buy my product or offer me a job ^.^

If that doesn't make anyone start wondering what might be wrong, then I'd like to know more.


>If someone broke into your house, you'd start feeling anxious.

I'd start feeling murderous.


In some jurisdictions, that will get you jailed, whether right or wrong :)

Namely NYC and Boston, especially if you are using a gun.


Living in those jurisdictions would make me anxious then. ;p

Hahahahahahah I can't argue with you there.

It's a shame that a lot of these jurisdictions are also "where the jobs are".

Fortunately, we are much more likely to be affected by so many other dangerous things rather than home invasion that there's just many things to be anxious about ^.^


Same here; you constantly wonder if what you're thinking is truly unique.

I believe the key is figuring out when your view is unique. What I have been doing is asking people questions on their view of a problem or something I am thinking about and looking at the deltas there. I have not thought about it with the term delta. Maybe looking for deltas would be another addition to the toolkit when searching for insights.

I prefer to use the term contradictions. Sometimes that leads you to further insights.


And you spend most of your life trying to prove how special you are, instead of productively producing things and playing with the world that is also shaping them?

People tend to forget that ideas are worth nothing. It's not that hard to build a model of the world which would be parametrisable for any possible outcome.

Then those parameters would be defined by your experience, which requires you to get out of your mind and earn on the outside.


It seems my comment was not precise enough. The result is the following assumptions being made:

1. The thinking/insights/ideas are in pursuit of proving my own special nature
2. The thinking/insights/ideas are purely theories and have no practical application

Notice that the word "problem" was used. What is meant here is an actual problem that I am trying to solve, with actual practical implications.

Let me clarify what I mean, using concrete examples from the product development/startup/business space: "My customers are telling me their problem is X, but what they are doing, or how they are doing it, is contradictory. Why is this?"

"Two customers have completely different and contradictory views on what feature/problem is important to them, why is this?"

The actual purpose is to understand your customer better so you can cater for what they actually need.

This part is confusing, can you clarify what you mean:

"Then those parameters would be defined by your experience, which requires you to get out of your mind and earn on the outside"


Thanks for writing :)

Right now I would say that I have some insights that I feel are really unique and interesting, at least to me, and maybe a few friends.

However, they aren't monetizable to fans or marketable to employers. This has been fairly evident to me after a year of youtubing and twittering and twitching, etc.

So, even though I enjoy it, I really start to question if my ideas are actually that great, or if the addressable market is just not very large.

Good luck !!! :)


A couple of years ago I wrote a book about a new philosophy of science based on the following methodology:

- Obtain a large dataset of observations related to a phenomenon of interest (eg English text, bioimaging data, economic reports, car-mounted camera recordings, etc)

- Develop a theory of the phenomenon, or revise a previous theory

- Build the theory into a lossless data compression program

- Score the theory by invoking the compressor on the dataset, taking into account the size of the compressor itself; lower codelength is better.

- Adopt or discard the theory as appropriate and return to step #2.

I believe that this is a valid variant of the traditional scientific method, in which data compression takes the place of experimental prediction. In Graham's terms, this idea is a small delta of novelty, but since it refers to a predecessor idea of tremendous significance and generality, it could be very important. The book is available here:

https://arxiv.org/abs/1104.5466
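
For a flavor of the scoring step, here's a minimal sketch with off-the-shelf compressors standing in for theory-backed ones; the dataset path and the compressor-size figures are hypothetical placeholders:

    # Score a "theory" (here: a stock compressor) by total codelength.
    import bz2
    import zlib

    def codelength(compress, data, compressor_size):
        # Compressed size of the data plus the size of the compressor itself.
        return len(compress(data)) + compressor_size

    data = open("observations.bin", "rb").read()  # hypothetical dataset

    print("theory A:", codelength(zlib.compress, data, compressor_size=50_000))
    print("theory B:", codelength(bz2.compress, data, compressor_size=80_000))
    # Adopt whichever theory yields the lower total, then revise and repeat.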

I've spent a few years pursuing this methodology in the domain of English text. The result is a sentence parser, which works quite well, but was built completely without the use of hand-labelled training data (eg the Penn Treebank). You can check out the parser here:

http://ozoraresearch.com/crm/public/parseview/UserParseView....

I'm always happy to talk to people about these ideas, ping me at daniel dot burfoot at gmail if you're interested.


This is interesting. I don't have an intuition on how well this could work on different problems.

Do you think this approach would work for something, let's start simple, like ballistics neglecting rotation of the Earth and drag? If you fed in thousands of starting velocities and angles, and the resulting distance traveled, will this process end up with equations of motion that are as compact as can be found in first year physics?


I think the problem here is that you are still the one that needs to develop the equations of motion that will be tested for compression.

What this describes is a method for testing theories, not generating them. You still need to study the data to try and properly analyze it so you can understand the rules yourself, or at least be able to see the edges of them.

Now if you could find a way to generate moderately accurate equations automatically by analyzing the dataset, I think that would be a real leap forward. Is there a related mechanism to the one presented by the parent which could be used to generate theories?


Yes, it is a method for testing theories; it does not give you a procedure for developing the theories. It is agnostic about whether the theories are developed by humans or algorithms.

There is not going to be any universal and practical algorithm for automatically generating the theories, at least in the short term. But you can think about algorithms that apply to particular domains. For example in my field of text analysis, you can imagine techniques for automatically inferring grammar rules from a large quantity of text. Some such methods already exist, but have not been pursued extensively.

We could also develop algorithms for inferring equations of motion from astronomical and celestial observations, as the parent mentioned. The compression principle wouldn't give you the real Newtonian equations, but it would correctly identify them as being superior to other candidates. But this would be a bit of an academic exercise, since we already have the Newtonian equations and relativity. What's more interesting to me is an attempt to build modern astronomical theories (eg about black holes, the Big Bang, cosmic inflation, dark energy, etc) into a compressor that will apply to an astronomical data set like the Sloan Digital Sky Survey. My guess is that such an attempt would reveal important shortcomings in the theories, or perhaps highlight regions of space that confound the theories.


Interesting. I just read the abstract of your book; why do you think the size of the compressed code indicates the strength of a theory?

Regularities in data/predictability of data?


Simplifying drastically, fully general lossless data compression is impossible because of a simple No Free Lunch theorem. To achieve compression in practice, a compressor must exploit empirical regularities in the data. For example, PNG can achieve compression on most real world images because it exploits a regularity that they exhibit, which is that adjacent pixels tend to be highly correlated. So lossless data compression is about documenting and describing empirical regularities in real world observations; exactly the same as empirical science.
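
A quick sketch of that point, assuming numpy: a PNG-style delta filter turns correlated neighbors into a tiny, highly compressible alphabet, while pure noise stays incompressible:

    import zlib
    import numpy as np

    rng = np.random.default_rng(0)
    noise = rng.integers(0, 256, 100_000, dtype=np.uint8)   # no regularity
    steps = rng.integers(-2, 3, 100_000)
    smooth = np.cumsum(steps).astype(np.uint8)              # correlated neighbors

    delta = np.diff(smooth, prepend=smooth[:1])             # PNG "Sub"-style filter

    print(len(zlib.compress(noise.tobytes())))   # ~100k: nothing to exploit
    print(len(zlib.compress(smooth.tobytes())))  # shrinks somewhat
    print(len(zlib.compress(delta.tobytes())))   # shrinks a lot: 5 distinct bytes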

I think pg is falling prey to the whole post-hoc founder story phenomenon here. He sees so many startups, he hears so many stories about them, and he works hard to distill it into some kind of actionable and scalable investment strategy.

The problem is that the facts which led to the success of the business are not the same thing as the best insights a founder can come up with about why it succeeded. The latter is subject to an incredible amount of confirmation bias and the simple desire to tell a good story—in fact you have to do that to get investment.

In reality, novelty of ideas is meaningless, because the massively overwhelming majority of ideas in the world never get executed on. Even within contexts where people and companies have the means, most ideas—even good ones—don't get executed on because they conflict with other ideas and don't win the battle for resources.

What matters for an idea is execution and context. If you bring the right resources: skills, connections, go-to-market strategy, smaller supporting ideas, and it gels with the current zeitgeist of whatever market you are in, then it can get some traction. From there you learn and gain more insights which can be leveraged to scale up. You do this over and over again, and if you are lucky you can make an 8, 9, 10-figure company.

By the time all this happens you'll have some tremendous insights, but they will be specific to the context in which they were gleaned. The "generality" we observe of these insights is just a result of our pattern-matching and story-telling brains. The "surprisingness" comes after the fact to varying degrees based on existing preconceptions and perhaps how the world has changed over time. In other words, I don't think there is such a thing as a hugely valuable insight per se, rather this: we have insights, we believe in them to varying degrees, and based on how well it maps to our actions and their results, we later declare the "value" of the insight.


On the "surprising" part of this - Murray Davis had some related research and writing about being interesting: https://proseminarcrossnationalstudies.files.wordpress.com/2...

"An audience finds a proposition ‘interesting’ not because it tells them some truth they thought they already knew, but instead because it tells them some truth they thought they already knew was wrong."


Is this a stealth "I'm done with the small delta of novelness Arc already has and am stopping on it. If you can't appreciate that then you need to look harder" article? :-)

At the beginning:

The most valuable insights...

Toward the end:

So it's doubly important not to let yourself be discouraged by people who say there's not much new about something you've discovered. "Not much new" is a real achievement when you're talking about the most general ideas. Maybe if you keep going, you'll discover more.

I really feel like Paul is conflating valuable and novel here. When you create something that solves someone's problem, who cares whether the ideas behind it are novel or not?


This is a good hackers and painters juxtaposition: in STEM, it doesn't have to be novel as long as it is better. In art it doesn't have to be better as long as it's novel :-)

PG makes it all look so neat and tight; however, he fails to mention the number of serendipitous instances that drive insights and new ideas.

I think the essay is intended to describe one approach, not to provide a complete account of how you can get good ideas.

Well, that was certainly general.

...and not very surprising. Thus, platitude! :)

Indeed. The number of votes this has received is an amusing insight into how many people must upvote based on origin and not content. If this were published on some anonymous medium blog, it would be lucky to be +2.

It sounds like a clip from some kind of speech. There's some missing context. Or, there's hardly any content.

Like, I'm not sure what he's trying to get at.


Nothing new here. The messenger always plays an important role.

Not everything that is a platitude is necessarily bad. Reinforcing, clarifying or rediscovering knowledge certainly has value.

If Paul Graham writes an essay then that is relevant to Hacker News so it should be upvoted regardless of whether it is shit.


So...you are agreeing with me that this has been upvoted based on origin and not content, and you're just repeating what I said to reinforce the point?

Origin and context give extra meaning and weight to content. That seems trivially true to me. If Paul Graham writes this then that means the topic is on the mind of someone influential and important in our community, thus it is worthy of scrutiny and thought, thus worthy of an upvote.

Hacker News is not some game where people compete to create content with maximum merit. It is a source of new content relevant to this community.


Thou shalt not take PG's name in vain.

There's also a class of valuable insights that are general and not surprising, but rather obvious. For these, the challenge is that no one previously had the insight to observe and state what is obvious. Maybe the surprise for these isn't in the truth but rather that no one had stated them before.


F = ma is interesting because there are universal forces like electricity, gravitation and electromagnetism that we use in our daily life. Moreover, there is a marriage between maths and physics, since acceleration is the second derivative of position with respect to time. If you are a new Newton, you should create new concepts and develop new maths to make a surprising step forward. Looking for a word that covers everything from gossip to F=ma is like using a telescope to watch a worm. Concepts are about frameworks, and about using the right scale to focus them.

For me, general and surprising things come in the form of "X doesn't seem to work".

The reasoning goes like this: X is described as a good thing, and a lot of resources are poured into it. But there are documented failures of X - are they spurious, or caused by X not actually doing what is advertised?

More often than not the working answer is "it probably doesn't work", which is often "a thing you cannot say" due to general consensus, good feelings, and the above-mentioned resources poured into X.


Can you give examples of such things that probably don't work, but you can't say it if you don't want to be ostracized?

For example, Yugoslavia had what was described as the most progressive ethnic minorities policy in the world (regarding official languages use, representation of minorities, etc, etc), and then it violently fell apart in a series of ethnic conflicts.

This post is lacking in the latter.

> It's not true that there's nothing new under the sun. There are some domains where there's almost nothing new. But there's a big difference between nothing and almost nothing, when it's multiplied by the area under the sun.

PG, I see what you did there. The rest of the essay is just a setup for this single paragraph. This is your own small delta of novelty. Well done.


I'd say this post was "general, not surprising".

Too abstract to be useful.

My personal image for this topic is the fact that the general theory of relativity is more advanced than the special theory of relativity.

It reminds me that creating something more general is harder than creating something specialized for a specific problem. For programmers this seems obvious (see the sketch below), but for others it is counterintuitive.
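A minimal sketch of that intuition in code; the function names and the toy task are my own invention, not something the comment specifies:

  def sum_ints(xs):
      """Specialized: sum a list of ints. Easy to write, easy to verify."""
      total = 0
      for x in xs:
          total += x
      return total

  def fold(xs, op, initial):
      """General: combine elements with any binary op. The author now has
      to decide on an initial value, argument order, and what an empty
      input means; decisions the specialized version never had to face."""
      acc = initial
      for x in xs:
          acc = op(acc, x)
      return acc

  # Both give 6, but only fold also concatenates strings, multiplies, etc.
  assert sum_ints([1, 2, 3]) == fold([1, 2, 3], lambda a, b: a + b, 0) == 6
  assert fold(["a", "b"], lambda a, b: a + b, "") == "ab"

The general version subsumes the specialized one, but each extra degree of freedom is a decision its author had to get right up front.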


For anybody who has ever used some kind of math, it is obvious. But "counterintuitive" could be mathematics' middle name.

> It's not true that there's nothing new under the sun.

Philosophically this statement is false.

Mental-health-wise, it's a lot nicer to keep in mind than the truth of the reality you reside within.


Hm. Can something be philosophically false but factually correct?

Love is an example I can think of.

This guy pretends to be an intellectual but is the biggest wannabe I know. He tried to block the latest Steve Jobs film because it showed how much of an asshole Jobs was.

The guy, ironically enough, is living proof that capitalism doesn't reward talent.

There are also the platitudes that everybody knows but that nobody does anything about.... Oftentimes there is virtue in simply being unusually conscientious.

Diff eq-like behavior is everywhere: species population, popularity, technology, market saturation, economics, spring rates, etc.

(Malthus was partially ;) mistaken.)
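As one concrete instance (my own stock example, not one the comment gives), population growth and market saturation are both commonly modeled with the logistic equation:

  % Logistic growth: N(t) = population, r = growth rate, K = carrying capacity.
  \[
    \frac{dN}{dt} = r N \left( 1 - \frac{N}{K} \right),
    \qquad
    N(t) = \frac{K}{1 + \frac{K - N_0}{N_0}\, e^{-rt}}
  \]
  % Malthus's pure exponential dN/dt = rN is the K -> infinity limit,
  % which is one sense in which he was only partially mistaken.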


What are some examples of past insights someone has made by discovering something slightly surprising about the general world?

I think Nassim Nicholas Taleb's work epitomizes the "general yet surprising" territory.

Clearly Paul Graham is defending the Bodega vending machine here ;)

tl;dr if you have an idea that isn't very "new" then keep working on it until it is "more new"

There should be a twitter account that just summarizes rambling think pieces.


It's very resonant, thanks for sharing.

F=ma is surprising? We took two values and combined them to define a new one. You can define all sorts of new things this way; how is that surprising?

You're making a valid point in a not-so-nice way.

What he meant was that F=ma, combined with other formulas that make use of F, makes up a system that's surprisingly useful while remaining simple to use.

I imagine you already knew that.

If you're acting as an editor, you could phrase your point better. If you're merely nitpicking for things to 'gotcha' someone, I assure you that's not a smart long-term strategy :)


Paul Graham doing what he does best: pseudo-philosophical musings with no basis and without any kind of rigorous pass over them. Why is this gaining visibility? If anyone else had posted this, it'd never see the light of day on here.

I found it to be a nice little reminder of 'don't worry about being original, keep digging and maybe there'll be a gem there someday'.

The simple things are good to be reminded of, on a regular basis, that's why :)


From the gospel of Hacker News: the blog post. May the Founder be with you!

This is not true, as you often find surprising things that are not general which ARE NOT GOSSIP. To stay with the physics theme, the Schwarzschild solution of general relativity is a very special one, and I don't think anybody thinks that black holes are gossip.

And it is certainly not surprising that amateurs in general forget that F = ma is only valid if the mass does not change. The more general expression is F = dp/dt, where p is the momentum. But this, of course, is also only valid in inertial frames. It's not really important to the article, but it does kind of annoy me that he uses the most special case of an expression in an argument about it being general.
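For what it's worth, the one-line derivation behind that distinction goes like this (standard textbook material, sketched here by me rather than quoted from the article):

  % Product rule applied to p = m v:
  \[
    F = \frac{dp}{dt} = \frac{d(mv)}{dt}
      = m\,\frac{dv}{dt} + v\,\frac{dm}{dt}
      = ma + v\,\frac{dm}{dt}
  \]
  % The extra term vanishes only when dm/dt = 0, i.e. constant mass,
  % which is exactly when F = ma holds. A rocket burning fuel is the
  % classic variable-mass case (though treating rockets properly also
  % requires accounting for the momentum carried off by the exhaust).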

He could have actually made the point about generality by comparing this expression to the most general one for the force (in an accelerating frame of reference). That would also have shown why generality can quickly become infeasible in practice. If he knew how many approximations people make in the real world, not because they want to but because they HAVE TO, his worldview might be different.

I feel like he's trying to make a point about a very specific scenario but doesn't mention it explicitly. Instead he tries to be general and therefore fails to understand that his view doesn't actually apply in general.


Theoretical entities called "black holes" cannot exist in a finite universe (no matter what perspective changes are applied). The result (if I understand it correctly) applies to a universe in which there is a single massive body and the universe is eternal.

Hence, if time dilation exists (under increasing levels of gravity), a "black hole" cannot form in any finite time. Since there are many massive bodies in our universe, the solution is not applicable to the problem.

So, in that context, "black holes" ARE GOSSIP!!!!

Let the "GOSSIP" wars commence.

Other than that, I think you make some very salient points, especially about the approximations that we make and the validity of those approximations to the specific applications/situations being studied.


Yes, the current understanding is that general relativity breaks down inside of black holes and, to understand how it really works, we need a quantum theory of gravity.

However, I would not call it gossip since, by your reasoning, anything could be gossip then. I understand gossip as being something unimportant, but the Schwarzschild solution was a major milestone in the understanding of general relativity. Moreover, all sci-fi movies considering wormholes and such can be traced back to the usual visualization of the black hole distorting spacetime. Pretty consequential discovery, I'd say.


You miss the point that the Schwarzschild solution was for a very specific universe that does not in any way match ours. Hence, saying that the solution is applicable to our universe and WILL give rise to the theoretical entity known as a "black hole" is pushing the solution well beyond its actual applicability.

Much as I like good SF (and even mediocre SF), I believe there is enough evidence to say that our understanding is very incomplete and that GR (though it gives some good approximations in general) may be a complete furphy.

Our problem at this time is that any time conflicting evidence comes up, it tends to get either buried or ignored, or those bringing it up get attacked ad hominem.

If the evidence is obviously incorrect, then it should be a simple matter of showing exactly how it is incorrect. My observation of the actions of the proponents of GR is that they tend towards dogmatism and not discussion, ridicule and not rebuttal.

If odd things are found, then the prevailing theory (in this case GR) should be able to fairly handle these discrepancies.

There are no "stupid" questions.


He wrote "e.g. gossip", not "i.e. gossip."

You should have read the draft of this, instead of the people who did.

This genre of writing also falls neatly into one of three categories: obvious, unactionable, or wrong.

PG sometimes meanders into actual meaning. But the likes of Tim Ferriss or Seth Godin absolutely excel at this drivel.


>Thanks to Sam Altman, Patrick Collison, and Jessica Livingston for reading drafts of this.

This (a very basic 400-word essay/post) needed "drafts" and 3 people going through them?

I like some of PG's essays, but this one just has a couple of trivial insights.


The drafts were probably longer and worse. It's pretty normal to need a few people for feedback. Like the other commenter said, it's normal in writing.

True, but that is also exactly the point of what he was saying, so there is something poetic about how bland it was.

Yes, it's called writing.

Wooosh. That would be a relevant answer if I had asked about the practice of writing drafts and having friends go through them in general.

My point was different: several drafts and editors were needed for this tiny and banal result?

It's like seeing someone order a $10,000 AGA cooker, buy all kinds of spices, herbs and fresh ingredients, amass several KitchenAid appliances, get into an apron and a chef's hat, and then proceed to make... a grilled cheese sandwich.


PG lets his inner circle read all of his posts first, whether it's strictly necessary or not. Also, I thought it was commonly accepted that a lot of work goes into distilling simple expressions of ideas.

>Also, I thought it was commonly accepted that a lot of work goes into distilling simple expressions of ideas.

That's only for when you actually distill complex ideas into a simple expression, e.g. explaining a complex subject in a succinct, simple-to-understand way, like Feynman's lectures.

A mere simple expression of something banal to begin with shouldn't take lots of work, the same way writing "The sun is hot, we'd better not walk on it" shouldn't.


Maybe if the sandwich were going to be consumed by one person. But this "sandwich" will be read by thousands. Well worth the effort of three reviewers.

If you expect thousands, and have 3 reviewers, at least come up with a decent sandwich. Or, failing that, just don't show anything at all until you've got something good.

This isn't even a grilled cheese sandwich. It's not even toast. It's just a slice of slightly stale Wonderbread.

The fact that PG does this (even with longer essays, actually) bothers me to no end and always has. It's as if he doesn't have confidence in what he is saying, or as if he thinks that what he is saying has some earth-shattering implications or impact. [1] And it's not printed in a book; he could easily change and update it after reading what others said afterwards.

I can understand if he were to write an opinion piece for a major newspaper. Or perhaps having Jessica give it the once-over, or another person, with no need to acknowledge or thank them.

And it's certainly not representative of the world that the rest of us operate in when posting comments here on HN. You know: the downvotes and grayed-out comments when someone doesn't like what we have said or doesn't agree with it.

How about an essay on why he has people read his essays before posting them?

One other thing: I don't think honesty requires someone to even give thanks like he does. I think that detracts from the essay and doesn't add to it. There is no requirement to give acknowledgement in that fashion if you are getting help in that way. That assumes the help is minor; if the help is major, you have to question why someone is even writing essays.

Lastly, I think this sets a bad example for younger people on the way up. The reason? You have to learn from your mistakes and from the brutal honesty of people and how they react to what you say. While this is not important to PG (he has already made it), I think constantly having others second-guess what you do is not the path to being able to think on your feet.

[1] Like he is a world leader, or a corporate CEO, and has to tread carefully for fear of a bad outcome from his words.


I don't think you deserve the downvotes for expressing your opinion.

However, there's nothing wrong with having trusted friends read your work. In fact, it shows respect to your readers, especially if you have a megaphone as PG does. Thanking your editors is polite and that is something we could use more of.



