Hacker News | pradn's comments

They have direct feedback that many people return them because they're too heavy, and yet… It's just Apple being stubborn. I guess it's not a big enough problem for them, and they don't care about losing the market. One must laugh.

Is AI-driven clean room implementation a wild west at the moment? I suppose there haven't yet been any cases to test this out in real life?

He came to give a lecture at UT Austin, where I did my undergrad. I had a chance to ask him a question: "what's the story behind inventing QuickSort?". He said something simple, like "first I thought of MergeSort, and then I thought of QuickSort" - as if it were just natural thought. He came across as a kind and humble person. Glad to have met one of the greats of the field!

Happy to meet you. I was there and I remember that question being asked. I think it was 2010.

If I remember correctly he had two immediate ideas, his first was bubble sort, the second turned out to be quicksort.

He was already very frail by then, yet his clarity of mind was undiminished. What came across in that talk, in addition to his technical material, was his humor and warmth.


That's right - it was bubble sort first. Absolutely - frail, yet sharp. I'm happy to hear several of us didn't forget this encounter with him.

I remember this vividly! I believe he said that he thought of _Bubble Sort_ first, but that it was too slow, so he came up with QuickSort next.

Good to hear from you after a while, Gaurav (I think?!).

He discusses this and his sixpence wager here: https://youtu.be/pJgKYn0lcno

(Source: TFA)


Haha I was there too. I remember he made thinking clearly seem so simple. What a humble man.

If I remember correctly, his talk was about how the world of science (the pure pursuit of truth) and the world of engineering (the practical application of solutions under constraints) had to learn from each other.


I'm glad you remember it as well! I didn't think to see if there was a recording or something of this talk, until now. It looks like the text of the talk was published here: https://www.cs.utexas.edu/~EWD/DijkstraMemorialLectures/Tony...

And the talk wasn't a random talk, but a memorial talk for Dijkstra: "The 2010 Edsger W. Dijkstra Memorial Lecture". I forgot this aspect as well!


The problem can be complex, which sometimes means the solution needs to be complex. But often you can solve a complex problem in simple ways. There are many ways to do that:

a) finding a new theoretical frame that simplifies the space of solutions and helps people think through it in a principled way

b) finding ways to use existing abstractions, that others may not have been able to find

c) using non-technical levers, like working at the org/business/UX level to cut scope and simplify requirements.

The way you can make complexity work for your career is to make it clear why the problem is complex, and then what you did to simplify it. If you just present a simple solution, no one will get it. It's like "showing your work".

In some orgs, this is hopeless, as they really do reward complexity of implementation.


One of the odd things people do with tech is taking someone else's random projections at face value?

What does it mean to say "we were promised flying cars", or "every city would have micro-factories, that 3D printing would decentralize production"?

The people creating these narratives may a) have truly believed it and tried to make it a reality, but failed, b) never have believed it at all, and failed anyway, or c) be somewhere else on this quadrant of belief vs. actuality.

Why not just treat it as "a prediction that went wrong"? I suppose it's because a narrative of promise feels like a promise, and people don't like being lied to.

It's a strange narrative maneuver we keep doing with tech, which is more future-facing than most fields.


Well, there's also the almost never mentioned Rock's Law:

https://en.wikipedia.org/wiki/Moore%27s_second_law

We do have flying cars, and we do have printers that print other printers, but both have been some combination of really expensive and poor quality. Technically speaking, if you take it that most cities have 3D printers, then most cities do have micro-factories, but that says nothing about general feasibility...

Technology requires infrastructure and resources, and our infrastructure is strained and our resources are even more so... Until the costs become pocket change for the average person, technology will just remain generally unavailable.


> What does it mean to say "we were promised flying cars"...

I don't know about the other things you mentioned, but I think you have this in the wrong category. "We were promised flying cars" is one half of a construction contrasting utopian promises/hype with dystopian (or at least underwhelming) outcomes. I think the most common version is:

> They promised us flying cars, instead we got 140 characters.

Translation: tech promised awesome things that would make our lives better, but what we actually got was stuff like the toxicity of social media.

IMHO, this insight is one of the reasons there's so much negativity around AI. People have been around the block enough to have good reason to question tech hype, and they're expecting the next thing to turn out as badly as social media did.


> What does it mean to say "we were promised flying cars"

This promise did get fulfilled: helicopters do exist.


It's extremely painful that there are free, OSS dictation tools that can run on-device, that are so much better than Apple's dictation, and yet it's quite difficult to use them on the iPhone. I'm referring to Whispr. Microphone access is a pain for custom keyboards -- for good reason, but still.


> When I was taken to the Tate Modern as a child I’d point at Mark Rothko pieces and say to my mother “I could do that”, and she would say “yes, but you didn’t.”

Actually, no you couldn't. The subtlety of the choice of colors, their shading, their soft shaping, and the program of their creation over many years - you couldn't do that. They're lovely and sublime, wonderful and an abyss. If you want to throw all that away and reduce it to two boxes of paint, go ahead - but you'll be wasting a lifetime's engagement, and the joy of seeing with your intellect wide open.


> The value got extracted, but compensation isn't flowing back. That bothers me, and it deserves a broader policy conversation.

It bothers me, too. But, look at the history of the internet. There's no reason to expect we'll be able to fix this problem.

1. Search engines drove traffic to news/content sites, which monetized via ads. Humans barely tolerate these ad-filled websites. And yet local news went into steep decline, and the big national players got an ever-larger share of attention. The large national sites were able to keep a subscriber-based paywall model. These were largely legacy media sites (ie: NYT).

2. News sites lost the local classifieds market, as the cost of advertising online went to zero (ie: Craigslist). This dynamic was a form of creative destruction - a better solution ate the business of an older solution.

3. Blog monetization was always tough, beyond ads. Unless you were a big blog, you couldn't make a living. What about getting a small amount of money per view from random visitors? The internet never developed a micro-payment or subscription model for the set of open sites - the blogosphere, etc. The best we got were closed platforms like Substack and Medium, which could control access via paywalls.

All this led to the internet being largely funded through the "attention economy": ads mostly, paywalls & subscriptions some.

The attention economy can't sustain itself when there are fewer eyeballs:

1. Tailwind docs have to be added just once to the training set for the AI to be proficient in that framework forever. So, one HTTP request, more or less, to fetch the docs, and the docs are no longer required.

2. Tailwind does change, so an AI will want to access the docs for the version it's working with. This will require access at inference time. This is more analogous to visiting a site.


All this measurement is useful only if you change your behavior in response. How often is this the case?


A common pattern I see is data-plane nodes receiving versioned metadata updates from the control plane. As long as the version is higher than the node's previous one, it's correct to use. So, the metadata is a sort of monotonic counter with a bag of data attached to it. This pattern produces a monotonic counter, which I assume is a naive sort of CRDT - though the data itself doesn't benefit from CRDT merge semantics. In this world, as long as a node gets an update, it's going to incorporate it into its state. In the article's terms, the system has Strong Convergence.
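A minimal sketch of that pattern in Python (names are hypothetical, not from any real system): each node keeps only the highest-versioned update it has seen, so the merge rule is just "take the max version". Applying updates in any order, or more than once, converges to the same state.

```python
# Sketch of the versioned-metadata pattern: essentially a
# last-writer-wins register keyed on a monotonic version number.
from dataclasses import dataclass
from typing import Any, Optional


@dataclass(frozen=True)
class MetadataUpdate:
    version: int   # monotonic counter assigned by the control plane
    payload: Any   # opaque "bag of data" attached to that version


class DataPlaneNode:
    def __init__(self) -> None:
        self.current: Optional[MetadataUpdate] = None

    def apply(self, update: MetadataUpdate) -> bool:
        """Incorporate an update iff it's newer than what we hold."""
        if self.current is None or update.version > self.current.version:
            self.current = update
            return True
        return False  # stale or duplicate: safe to ignore


node = DataPlaneNode()
node.apply(MetadataUpdate(2, {"routes": ["a"]}))
node.apply(MetadataUpdate(1, {"routes": []}))          # stale, ignored
node.apply(MetadataUpdate(3, {"routes": ["a", "b"]}))  # newest wins
assert node.current.version == 3
```

Because `apply` is idempotent and order-insensitive, two nodes that eventually see the same set of updates end up in the same state, regardless of delivery order or duplication - which is the convergence property being discussed.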

I'm trying to figure out under what practical circumstances updates would result in Eventual Convergence, but not Strong Convergence. Wouldn't a node incorporate an update as soon as it receives it? What causes the "eventual" behavior even after a node gets an update?

It seems to me the trouble is actually getting the update, not the data model as such. Yes, I realize partial orders are possible, making it impossible to merge certain sequences of updates for certain models. CRDTs solve that, as they're designed to do. (Though I hear that, for some CRDTs, merges might result in bad "human" results even if the merge operation follows all the CRDT rules.)


