
"A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work. You have to start over with a working simple system."

– John Gall (1975) Systemantics: How Systems Really Work and How They Fail

https://en.wikipedia.org/wiki/John_Gall_(author)#Gall's_law


Your link has Jim O'Shaughnessy being interviewed by Barry Ritholtz (two respected folks in finance), and O'Shaughnessy later corrected himself:

* https://twitter.com/jposhaughnessy/status/115517108366392524...

While I do believe set-and-forget passive investing is best for the vast majority of people, last time I checked that Fidelity study does not actually exist, and the story is apocryphal (no one seems to be able to actually link to it).

If you ask Fidelity about it, they'll tell you it does not exist:

* https://www.morningstar.com/columns/rekenthaler-report/archi...


I like Dijkstra's quote: "The question of whether machines can think is about as relevant as the question of whether submarines can swim."

Interesting approach. Recently (in SwiftUI) I did something similar using Ramer–Douglas–Peucker to retain only significant points. Not as sophisticated, but took about half a day to implement. Render lag went from annoying to barely noticeable.
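For anyone curious, here's a minimal sketch of the algorithm (in Python for brevity; my SwiftUI version differs in the details, and the epsilon tolerance is something you tune per chart):

  import math

  def rdp(points, epsilon):
      """Ramer-Douglas-Peucker: keep only points that deviate more than
      epsilon from the straight line between the first and last point."""
      if len(points) < 3:
          return list(points)
      (x1, y1), (x2, y2) = points[0], points[-1]

      def dist(p):
          # perpendicular distance from p to the chord (first point -> last point)
          x0, y0 = p
          num = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1)
          return num / (math.hypot(x2 - x1, y2 - y1) or 1e-12)

      i, d = max(((i, dist(p)) for i, p in enumerate(points[1:-1], 1)),
                 key=lambda t: t[1])
      if d <= epsilon:
          return [points[0], points[-1]]  # nothing in between is significant
      # split at the farthest point and simplify both halves
      return rdp(points[:i + 1], epsilon)[:-1] + rdp(points[i:], epsilon)

  # e.g. simplified = rdp(samples, epsilon=0.5) before handing the points to the chart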

If you want to see how some very simple notation can greatly simplify some math, check out J. H. Conway's proof of Morley's theorem.

Background: Morley's theorem is a non-trivial theorem in planar Euclidean geometry, stated in 1899 (the first proof appeared 15 years later). The proofs are not easy. One can use complicated trigonometric identities to prove it. Even the "simple" proofs are sometimes quite involved.

Conway introduced some notation and almost trivialized it. The notation he introduced was just a* := a + 60, where a is an angle measured in degrees. No one would believe this notation could do anything good, but with it (and some other insight) Conway could explain the proof in just a few sentences! (One might think anyone who understands that the interior angles of a triangle always sum to 180° could have come up with this simple proof, but that just didn't happen for a hundred years, until Conway revealed it.)
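To give a flavour of why the notation carries so much weight (this is only the angle bookkeeping, not the full proof, which still needs the construction of the seven pieces; see the PDF below): write the triangle's angles as 3a, 3b, 3c. Then

  a + b + c = 60^{\circ}, \qquad x^{*} := x + 60^{\circ},

and every triangle in Conway's dissection gets the right angle sum automatically:

  a + b^{*} + c^{*} = (a + b + c) + 120^{\circ} = 180^{\circ}, \qquad
  a^{**} + b + c = (a + b + c) + 120^{\circ} = 180^{\circ}, \qquad
  0^{*} + 0^{*} + 0^{*} = 180^{\circ}.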

See pages 3-6 here: http://thewe.net/math/conway.pdf


I really like the idea behind μMon. It reminds me of when software was simpler. I remember using a program called "Everything" by voidtools. It was small but could search a lot of files quickly. Nowadays, some projects use big tools like Elasticsearch just to search a few things. Some even use PostgreSQL, a big database, for small tasks. I wish more software would keep things simple.

Here are a few insights on the RenTech process from Nick Patterson, a senior statistician there for a decade:

[...I joined a hedged fund, Renaissance Technologies, I'll make a comment about that. It's funny that I think the most important thing to do on data analysis is to do the simple things right. So, here's a kind of non-secret about what we did at renaissance: in my opinion, our most important statistical tool was simple regression with one target and one independent variable. It's the simplest statistical model you can imagine. Any reasonably smart high school student could do it. Now we have some of the smartest people around, working in our hedge fund, we have string theorists we recruited from Harvard, and they're doing simple regression. Is this stupid and pointless? Should we be hiring stupider people and paying them less? And the answer is no. And the reason is nobody tells you what the variables you should be regressing [are]. What's the target. Should you do a nonlinear transform before you regress? What's the source? Should you clean your data? Do you notice when your results are obviously rubbish? And so on. And the smarter you are the less likely you are to make a stupid mistake. And that's why I think you often need smart people who appear to be doing something technically very easy, but actually usually not so easy.]

[[at] my hedge fund, which was not a very big company, we had 7 Phd's just cleaning data and organizing the databases]

http://www.thetalkingmachines.com/episodes/ai-safety-and-leg...

quotes from ~30:06 & ~38:03
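For concreteness, simple regression with one target and one independent variable really is just this (a toy numpy sketch, nothing to do with their actual pipeline; the cleaning step is the part Patterson says actually matters):

  import numpy as np

  def simple_regression(x, y):
      """Ordinary least squares with one target and one independent
      variable: y ~ alpha + beta * x."""
      x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
      # "do the simple things right": drop missing or broken observations first
      ok = np.isfinite(x) & np.isfinite(y)
      x, y = x[ok], y[ok]
      beta = np.cov(x, y, bias=True)[0, 1] / np.var(x)
      alpha = y.mean() - beta * x.mean()
      return alpha, beta

  # the hard part is everything around this call: picking x and y, choosing a
  # transform, and noticing when the fit is obviously rubbish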


For anyone looking for actually free, no strings attached (and no subscription) camping, check https://freecampsites.net/ instead. It's a community wiki of free camp sites, usually on federal lands of various sorts (National Forests and BLM lands often have primitive campsites with fire rings and not much else). It's great for travel around national parks, especially. But please do leave no trace, pack out what you bring in.

Edit: I should add that much federally protected land is free to camp on, within certain limits that I can't remember offhand. Things like no more than X days within a month, must be farther than Y from a street or river, may or may not need a fire permit, etc. Even if a spot is undocumented and unlabeled on a map, you can typically just pull off the road and camp alongside it, perfectly legally. It's part of the land's intended use, though that's never really made clear to the public.

What this website provides isn't the land itself (which is paid for by taxpayers) but curation, so you can easily find places with a good view, cell reception, fire rings, minimal traffic and whatnot. A lot of national lands aren't exactly desirable to camp on even if you're totally within your rights to do so.


A student of mine did a rather good thesis on de-clouding. In the process she discovered the term "cloud repatriation" and a fair bit of literature on the movement to bring control of data and hardware back 'on-prem'.

She also noted that when you search on these terms, the main engines (all run by cloud service providers) return poor results, not congruent with the scale of the phenomenon. They are dominated by the opposite message, obviously heavily SEO'd up to the top, plus shill pieces pushing cloud services but presented as "critical". Dig deep if you want to find the real scale of the "anti-cloud" issues.

Her main conclusion was very interesting, though: the big issue is not finance, reliability, or control, but de-skilling.

As companies move their operations out to the cloud, the real loss is not the disappearance of hardware from the premises but the loss of skill-sets. Later they don't even know who to hire, or how to write the JD to bring people back in.

A good example was the broadcast industry. Entire branches of that industry doing post-production, colour, transcoding, and whatnot moved it all out to AWS. After price hikes, they wanted to go back to running their own services. But they can't, because nobody knows how to set up and run that any longer - especially specialist things like buffers and transcoders. I mean, try finding a sysadmin who can just do simple tasks like set up and properly configure a mail server these days.


> AlphaDev uncovered new sorting algorithms that led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.

As someone who knows a thing or two about sorting... bullshit. No new algorithms were uncovered, and the work here did not lead to the claimed improvements.

They found a sequence of assembly that saves... one MOV. That's it. And it's not even novel, it's simply an unrolled insertion sort on three elements. That their patch for libc++ is 70% faster for small inputs is only due to the library not having an efficient implementation with a *branchless* sorting network beforehand. Those are not novel either, they already exist, made by humans.
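For context, a sorting network for three elements is just three fixed compare-exchange steps. A rough Python sketch of the idea (min/max stand in here for the branchless conditional moves a real implementation compiles to; this is not the libc++ code itself):

  def sort3(a, b, c):
      """Sorting network for three elements: three fixed compare-exchanges,
      with no data-dependent branching in the control flow."""
      a, b = min(a, b), max(a, b)   # compare-exchange positions (0, 1)
      b, c = min(b, c), max(b, c)   # compare-exchange positions (1, 2)
      a, b = min(a, b), max(a, b)   # compare-exchange positions (0, 1) again
      return a, b, c

  assert sort3(3, 1, 2) == (1, 2, 3)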

> By open sourcing our new sorting algorithms in the main C++ library, millions of developers and companies around the world now use it on AI applications across industries from cloud computing and online shopping to supply chain management. This is the first change to this part of the sorting library in over a decade and the first time an algorithm designed through reinforcement learning has been added to this library. We see this as an important stepping stone for using AI to optimise the world’s code, one algorithm at a time.

I'm happy for the researchers that the reinforcement learning approach worked, and that it gave good code. But the paper and surrounding press release are self-aggrandizing in both their results and impact. That this is the first change to 'this part' of the sorting routine in a decade is also just completely cherry-picked. For example, I would say that my 2014 report (and ignored patch) of the fact that the libc++ sorting routine was QUADRATIC (https://bugs.llvm.org/show_bug.cgi?id=20837) finally being fixed in late 2021 (https://reviews.llvm.org/D113413) is quite the notable change. If anything, it shows that there wasn't a particularly active development schedule on the libc++ sorting routine over the past decade.


The idea that SQLite shouldn't be used for web applications is about a decade out-of-date at this point.

But... actual guidance as to how to use it there is still pretty thin on the ground!

Short version: use WAL mode (which is not the default). Only send writes from a single process (maybe via a queue). Run your own load tests before you go live. Don't use it if you're going to want to horizontally scale to handle more than 1,000 requests/second or so (though vertically scaling will probably work really well).
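For example, a minimal sketch using Python's sqlite3 module (the pragma values are reasonable starting points rather than gospel, and the events table is just for illustration):

  import sqlite3

  conn = sqlite3.connect("app.db")
  conn.execute("PRAGMA journal_mode=WAL")    # readers no longer block the single writer
  conn.execute("PRAGMA synchronous=NORMAL")  # common pairing with WAL
  conn.execute("PRAGMA busy_timeout=5000")   # wait up to 5s instead of failing on a locked db

  conn.execute("CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, body TEXT)")

  # keep all writes on one connection/process; reads can come from many
  with conn:
      conn.execute("INSERT INTO events (body) VALUES (?)", ("hello",))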

I'd love to see more useful written material about this. I hope to provide more myself at some point.

Here are some notes I wrote a few months ago: https://simonwillison.net/2022/Oct/23/datasette-gunicorn/


I would argue LaTeX is the wrong tool for a resume. A resume, like a poster or graphical magazine article, is all about visual presentation and layout. This requires pixel-perfect control over where things are, and somewhat detailed ways to coordinate positions of things. All of these are things LaTeX is bad at[1].

For visual presentation, I would suggest desktop publishing software is the right tool for the job. I personally like Scribus because it's open source. Something like this is trivial with desktop publishing and would be really time consuming with LaTeX: https://i.xkqr.org/screenshot_2023-01-24T19:21:25.png

[1]: The strength of LaTeX is in consistently typesetting long texts well – it's nearly unbeatable in that area.


I read recently about Francis Bacon's scientific method. It was not the modern method: it focused more on creating lists of contrasting cases and on inductive reasoning. Honestly, I didn't entirely understand it. Robert Hooke had a method, too, which he described as a philosophical "superstructure."

Some variety in how knowledge is automatically/systematically developed seems important.


Linear Algebra Done Right by Sheldon Axler is indeed a good book if you are looking for a rigorous, proof-based book to learn linear algebra.

Here [1] you can find Sheldon Axler himself explaining the topics of the book on his YouTube channel! How wonderful is that!

Here [2] you can find the solutions to the exercises in the book.

These lectures [3] might help as well; among the books this course follows is Linear Algebra Done Right.

Good luck learning the subject of linear algebra; you'll have fun doing so.

[1] https://www.youtube.com/playlist?list=PLGAnmvB9m7zOBVCZBUUmS...

[2] http://linearalgebras.com/

[3] http://nptel.ac.in/courses/111106051/


From the Shameless Self-Promotion Dep't:

My book https://www.albertcory.io/inventing-the-future has a chapter set in 1979 or so, at the Dutch Goose, where AI Winter 1.0 is fully underway:

===== So, Grant brought up one more thing before they left the topic of work. “What do you guys think about artificial intelligence? Is that ever going to go anywhere?”

Porter chuckled, “There’s a paper from a few years back called ‘Artificial Intelligence Meets Natural Stupidity.’ You might want to read that.” The other guys laughed.

Patrick announced, “I’ll never forget his opening line, ‘As a field, artificial intelligence has always been on the border of respectability, and therefore on the border of crackpottery.’”

Porter sat back with a wide grin, “I think they crossed that border several years ago. That’s why they can’t get any more funding.”

Ray said, “Hey, the big breakthrough is only 10 years away. And always will be!”

Grant had the strong impression from their approving looks that they weren’t impressed with AI. He wasn’t ready to give up quite yet, “Didn’t they do some cool stuff, like Blocks World? I loved how you could say ‘pick up a big red block,’ and it did it.”

Ray interrupted, “Winograd’s around all the time. You could talk to him.”


This thing is absurd. I work in ML research and I use my MacBook Air M1 to train LLMs from anywhere, but I would never let the training jobs touch my local machine. Even if they'd run locally (most of them wouldn't, even on a 3080 Ti), they would ruin my battery life. So I simply connect to a GPU server via VPN and ssh + VS Code Remote. Only students or beginner enthusiasts might think that training this stuff on the go would actually need a beefy laptop.

10 Simple Rules for a Supportive Lab Environment: https://direct.mit.edu/jocn/article/35/1/44/113591/10-Simple...

> These rules were written and voted on collaboratively, by the students and mentees of Professor Mark Stokes, who inspired this piece.


When I was quite a bit younger I went through a Wendy Carlos obsession, and spent a lot of time with Switched On Bach, Clockwork Orange, and some of Wendy’s other synth works.

Much later on, after my interests had wandered a long way from analog synth stuff, I went through a Bossa Nova phase, and at some point discovered João Gilberto's 1973 self-titled album, which soon became one of my favourite albums of all time. It's an absolutely hypnotic, beautifully stripped back yet harmonically complex distillation of the bossa sound.

Only after having listened to it surely hundreds of times did I read up on the recording and discover that Carlos was the sound engineer!


I would also require reading on the major OSS search engines, e.g. Lucene, Solr and then Elasticsearch.

https://en.wikipedia.org/wiki/Apache_Lucene

https://en.wikipedia.org/wiki/Apache_Solr

https://en.wikipedia.org/wiki/Elasticsearch

those who don't know history will repeat it, etc.

then maybe the newer stuff like https://typesense.org/

to be clear I don't know these things, but that's what I would do, and I'd happily read fly.io-style blog posts drip-feeding out knowledge over time


I dipped into a couple of the links. I can recommend The 'Shamanification' of the Tech CEO (https://www.wired.com/story/health-business-deprivation-tech...)

The author, an anthropologist, argues that the fads we see tech CEOs indulging in today (nootropics, intermittent fasting, weird diets) are similar to the practices used by shamans in hunter-gatherer societies to show how special they are, making them credible interfaces with the supernatural powers they claim to channel.


The 2-minute brute-force explanation of Fourier transforms in a recent Veritasium video was my "aha!" moment.

https://www.youtube.com/watch?v=nmgFG7PUHfo&t=464s
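The brute-force framing is essentially "correlate the signal with each candidate frequency and see which ones give a large sum". A minimal numpy sketch of that idea (the sample rate and test frequencies are arbitrary choices for the example):

  import numpy as np

  def brute_force_dft(signal, sample_rate, test_freqs):
      """Correlate the signal with a complex sinusoid at each test frequency;
      a large magnitude means that frequency is strongly present."""
      t = np.arange(len(signal)) / sample_rate
      return {f: abs(np.sum(signal * np.exp(-2j * np.pi * f * t)))
              for f in test_freqs}

  # a pure 5 Hz sine shows a clear peak at 5 Hz and almost nothing elsewhere
  sr = 1000
  t = np.arange(sr) / sr
  spectrum = brute_force_dft(np.sin(2 * np.pi * 5 * t), sr, range(1, 11))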


"the former CFTC Acting Chair is FTX’s Head of Policy and Regulatory Strategy and a former CFTC attorney to a former CFTC Chair is FTX’s General Counsel."

So that's how they did it!


I'm a software engineer, but these have been instrumental in my success in a way no coding book can compare to (though John Ousterhout's "A Philosophy of Software Design" would have, had it come out earlier in my life).

Personal time/task management - The classic, Getting Things Done (https://www.amazon.com/Getting-Things-Done-Stress-Free-Produ...). The power this has on people cannot be overstated. Turns out that most of how life is conducted is rife with forgetfulness, decision paralysis, prioritization mistakes, and massive motivation issues. This book gives you specific workflows to cut through these in a magical way.

Personal Knowledge Management - The equally classic, How to Take Smart Notes (https://www.amazon.com/How-Take-Smart-Notes-Technique/dp/398...). Where GTD (above) does this for well-defined tasks/work, this book does it for open-ended work, giving you an amazing workflow for "Thinking by Writing", which is frankly a superpower. This lets you see things your friends/colleagues simply won't, lets you deconstruct your feelings better, learn new/deeper subjects faster, and connect thoughts in a way that produces real insight.

For Product/Business Management, Gojko Adzic's "Impact Mapping" (https://www.amazon.com/Impact-Mapping-software-products-proj...) feels like it could make nearly every software team/business 10x better just from reading it. I've personally watched as enormous portions of my life were spent on things that barely moved the needle for companies, or merely didn't keep the metric from rising. So many projects are taken on faith that if you work on X, X will improve, without ever measuring, or asking if you could have accomplished that with less. The world looks insane afterward.


According to Kenji, you don't even need boiling water to cook pasta:

https://www.seriouseats.com/how-to-cook-pasta-salt-water-boi...

https://www.seriouseats.com/tips-for-better-easier-pasta

"Want to get even more crazy efficient? Try soaking pasta in water as your prep your other ingredients. By the time your sauce is ready, you can just add the pasta (along with some of its soaking liquid) to the pan and, with a minute or so of cooking, your meal will be ready to go. One less pot to clean and a pretty satisfying magic trick to boot. Still skeptical? Here's the science to back it all up. The one exception here is fresh pasta—because it contains eggs, it needs to cook in boiling water to set properly."


Deming gets the credit, but in Japan it was Homer Sarasohn[1][2] who did the foundational work[3]. In fact, he handed the reins off to Deming.

One particularly interesting thing is that, at least once upon a time, Canon didn't even bother inspecting their copiers and printers, because the variance was so low it was a waste of time[4].

[1] https://en.wikipedia.org/wiki/Homer_Sarasohn

[2] https://honoringhomer.net/

[3] https://apnews.com/article/e2908339e16a48dc8efe37d77a905daa

[4] http://qualiticien.over-blog.com/article-679016.html


Peter Drucker & W. Edwards Deming. These are the two best management thinkers of the 20th century, and they are largely ignored today.

Here's your chance to engage in some information arbitrage and profit.

Drucker wrote A LOT. Assumption from your question: you have yet to gain much, if any, idea about what good management is. If so, The Daily Drucker [1] is a good, easily digestible entry point to the entire body of Peter Drucker's work.

Deming came from a statistics, systems, and manufacturing background, which, at a glance, makes his work seem like it's from a different world. That couldn't be more wrong. His principles are broadly applicable. Toyota and much of Japanese industry post-WWII learned from Deming and built their businesses on his principles. The easiest place to start with him is his 14 Points [2].

Read a bit. Compare it to what you have seen or not seen in your work experience. Read more.

I spend a lot of time thinking about management and the more I learn, the less convinced I am that there has been anything truly new since these two thinkers.

[1] https://www.amazon.com/Daily-Drucker-Insight-Motivation-Gett...

[2] https://www.deming.org/theman/theories/fourteenpoints


If you read nothing else, read Edwards Deming. He clearly outlines what the management philosophy of modern creative jobs must be like:

- Focus on what matters.

- An organisation needs an aim, a meaningful goal, that's not just "do what we have always done except better".

- Don't invest in shiny things because they are shiny – investments basically cost you the interest rate on a loan, rain or shine, even on Sundays.

- You can't inspect quality into a product or process.

- When your inspectors fix minor quality problems themselves instead of rejecting the product, the upstream process learns that minor quality problems are acceptable.

- Quality starts at the upstream process – this applies also to the upstream process.

- Making and then fixing quality problems can be a huge part of an organisation's expenses, but of course there's no line item on the books for "mistakes", so that huge cost gets absorbed as the cost of doing business.

- The organisation hierarchy doesn't matter – how the work flows between people is what matters.

- Management by numeric goals results in cheating, information hiding, and bad decisions to meet the quota at any cost.

- Management by results ends up wasting a lot of effort trying to correct for statistical noise.

- To improve outcomes, you must improve the process.

- Don't compare yourself to arbitrary goals – estimate what the current process costs you and what you could earn with an improved process, then find out if that's worth the investment.

- To get a sense of the process, use statistical process control charts (see the sketch at the end of this comment).

- Unless there's something truly extraordinary going on, don't judge individuals for their performance.

- Individual performance is almost always random noise compared to team performance.

- Judge team performance, and reward fairly based on that.

- In the rare case where individual performance is consistently beating team performance and you have statistical evidence of this, see it as a learning opportunity: if the rest of the team did what this person did, its performance would be insanely improved – find out what that is.

- Team performance is the outcome of manager performance.

- The point is not to be right, but to be learning.

- Humans have a right to take joy and pride in their work, free from fear of grading or ranking.

- When optimising an organisation, parts of the organisation might have to operate at a loss – trying to operate every part of an organisation at profit easily deoptimises the organisation.

- Avoid unnecessary paperwork or approval procedures, rely instead on statistics and sampling to ensure the system is stable.

- Well, technically, if inspection is really, really cheap (which it rarely is – consider also delay costs!) or mistakes really, really expensive (which they sometimes are, like for code that hits production), 100 % inspection makes sense.

- Putting out fires or fixing problems with clear, assignable causes is not process improvement – it's simply putting the process back where it should have been to begin with.

- Help people understand the greater context: how is the product used, who uses it, what do they think, how is the company making money, and so on, and so on.

- Leaders should be skilled in the jobs of the people they lead.

- How do you know whether you're doing your job well? How do other people in your organisation know?

- Everyone thinks it's obvious what success or failure would look like, but when probed for details, it turns out no two persons have the same definition. Ensure everyone agrees on how a test for success is performed.

- If you find signal in the data, first verify the validity of the data. Look first for errors in measurement.

- Plan, Do, Check, Act. Use the Shewhart cycle to learn!

- Once a process is in statistical control, a sample drawn from that process gives you no information whatsoever that you didn't already have. A sample from a process in statistical control is by definition a random number to you.

As you can see, there's a very wide variety of important things one can learn from Deming. Everyone should do it.
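Regarding the control-chart point above, here's a minimal Python sketch of the idea (an individuals chart; a textbook version would estimate sigma from the moving range rather than the plain standard deviation, and the numbers are made up):

  import statistics

  def control_limits(samples):
      """Shewhart-style individuals chart: points outside mean +/- 3 sigma are
      treated as special causes; everything inside is common-cause noise."""
      mean = statistics.fmean(samples)
      sigma = statistics.pstdev(samples)
      ucl, lcl = mean + 3 * sigma, mean - 3 * sigma
      flagged = [x for x in samples if not lcl <= x <= ucl]
      return ucl, lcl, flagged

  # e.g. weekly defect counts: only the 55 falls outside the limits and deserves
  # a special-cause investigation; the week-to-week wiggle does not
  ucl, lcl, flagged = control_limits([12, 9, 11, 10, 13, 8, 11, 10, 12, 9, 55, 11])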


W. Edwards Deming observed "Nothing happens without personal transformation."

I think it's this personal transformation that's one of the hardest things about a startup. You start out wanting to change the world and end up at 3AM wondering what's gone wrong, realizing that it's you who has to change first.

Barry Moltz wrote a great book, "You Need to Be a Little Crazy", that addresses the emotional roller coaster every entrepreneur faces, observing: "Entrepreneurs start businesses because... they have no choice. Passion and energy drive them on good days and sustain them on bad days." Some references follow:

http://www.amazon.com/You-Need-Be-Little-Crazy/dp/079318018X...

http://www.barrymoltz.com/

http://www.skmurphy.com/blog/2006/12/27/you-need-to-be-a-lit...


> On Monday, August 26, 1991 at 6:12:08 PM UTC+12, Linus Benedict Torvalds wrote:

> Hello everybody out there using minix -

> I'm doing a (free) operating system (just a hobby, won't be big and professional like gnu) for 386(486) AT clones. This has been brewing since april, and is starting to get ready.


Yes. It was so frustratingly broken I had to stop reading and do something about it. Running this in the (Chrome) browser console fixes it for me:

  // remove the page's first registered "wheel" listener (the scroll hijacking);
  // getEventListeners() is available only in the DevTools console
  window.removeEventListener('wheel', getEventListeners(window)['wheel'][0].listener);
