timr's comments

That's a bit like saying that the invention of the airplane proved that animals can fly, when birds are swooping around your head.

I mean, sure, prior to AlphaFold, the notion that the sequence/structure relationship was "sufficient to predict" protein structure was merely a very confident theory that was regularly used to make the most reliable kind of structure predictions via homology modeling (it was also core to Rosetta, of course).

Now it is a very confident theory that is used to make a slightly larger subset of predictions via a totally different method, but still fails at the ones we don't know about. Vive la change!


I think an important detail here is that Rosetta did something beyond traditional homology models -- it basically shrank the size of the alignments to small (n=7 or so?) sequences and used just tiny fragments from the PDB, assembled together with other fragments. That's sort of fundamentally distinct from homology modelling, which tends to focus on much larger sequences.

> and used just tiny fragments from the PDB

3-mers and 9-mers, if I recall correctly. The fragment-based approach helped immensely with cutting down the conformational search space. The secondary structure of those fragments was enough to make educated guesses of the protein backbone's conformation, at a time when ab initio force field predictions struggled with it.


Yes, Rosetta did Monte Carlo substitution of 9-mers, followed by a refinement phase with 3-mers. Plus a bunch of other stuff to generate more specific backbone "moves" in weird circumstances.
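
For the curious, the core loop is simple enough to sketch. Here's a toy version in JavaScript -- all the names are made up, the scoring function is a meaningless stand-in for Rosetta's coarse-grained energy terms, and the fragment library would really be mined from the PDB:

  // backbone: array of { phi, psi } torsion angles, one per residue.
  // library[pos]: candidate fragments (arrays of fragLen torsion pairs)
  //               for the window starting at residue pos.
  // fragLen: 9 for the coarse search phase, 3 for refinement.

  // Stand-in for a real coarse-grained energy function (lower is better).
  function score(backbone) {
    return backbone.reduce((e, t) => e + Math.cos(t.phi) + Math.cos(t.psi), 0);
  }

  function fragmentMonteCarlo(backbone, library, fragLen, steps, kT) {
    let current = backbone.map(t => ({ ...t }));
    let currentScore = score(current);

    for (let i = 0; i < steps; i++) {
      // Pick a random window and a random fragment for it.
      const pos = Math.floor(Math.random() * (current.length - fragLen + 1));
      const choices = library[pos] || [];
      if (choices.length === 0) continue;
      const frag = choices[Math.floor(Math.random() * choices.length)];

      // Propose a move: splice the fragment's torsions into the window.
      const proposal = current.map(t => ({ ...t }));
      for (let j = 0; j < fragLen; j++) proposal[pos + j] = { ...frag[j] };

      // Metropolis criterion: always accept downhill moves, accept
      // uphill moves with probability exp(-delta / kT).
      const delta = score(proposal) - currentScore;
      if (delta <= 0 || Math.random() < Math.exp(-delta / kT)) {
        current = proposal;
        currentScore += delta;
      }
    }
    return current;
  }

The real thing layers a lot more on top (move filters, score ramping, the centroid-to-full-atom transition), but the skeleton really is just Metropolis Monte Carlo over fragment insertions.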

In order to create those fragment libraries, there was a step involving generation of multiple-sequence alignments, pruning the alignments, etc. Rosetta used sequence homology to generate structure. This wasn't a wild, untested theory.


I don't know that I agree that fragment libraries use sequence homology. From my understanding, homology implies an actual evolutionary relationship, whereas fragment libraries are agnostic: they seem to be based on the idea that short fragments of unrelated proteins can match up in sequence and structure space. Nobody looks at 3-mers and 9-mers in homology modelling; the alignments are typically well over 25 amino acids long, and there is usually a plausible whole-domain match (in the SCOP terminology).

But, the protein field has always played loose with the term "homology".


> Rosetta used sequence homology

Rosetta used remote sequence homology to generate the MSAs and find template fragments, which at the time was innovative. A similar strategy is employed for AlphaFold’s MSAs containing the evolutionary couplings.


Yep. That's what I'm saying.

> If you're considering searching for a solution and cutting and pasting it as your answer, know that leetcode has basic tools to detect this. They track your time and keystrokes.

"Fun" story: years ago, I was interviewing for some position at a now-defunct startup that was then a darling of the SF tech scene. Several people worked there who had worked with me, as an engineer, at a prior company. It would have been trivial to verify my engineering ability by asking their existing employees.

Instead, as a screening interview I was given an at-home, timed, leetcode-style test in a UI with an abysmally bad web-based code editor. I quickly noped out of the terrible thing, opened emacs, coded the solution, and pasted it into the editor window. I submitted, the code passed whatever tests, and I was assured a response.

The next day I was summarily rejected, with only a vague comment about "lacking technical ability".


I'm not surprised. If you do this again, maybe put a disclaimer at the top. Simply mentioning emacs will likely get you points with many.

Sometimes you just have to take it on the chin and move on.

I noped out of an intelligence test when I was looking in 2020. It was a series of extremely difficult word problems that got increasingly harder, and there were more questions than you could possibly solve; you were judged on accuracy and speed. I took a 15-minute practice test and after a few minutes called the recruiter. You wanna talk architecture? You wanna judge critical thinking with programming challenges? Sure. But this? Hell no.


> Sometimes you just have to take it on the chin and move on.

...or you just stop.

I basically stopped being an engineer and moved over to PM because I couldn't take the stupidity of tech interviews anymore. It's just such a myopic way of doing things, and it only rewards youth, inexperience, and the "raw horsepower" dimension of the problem. I tried to imagine being 50+ years old with 30+ years of industry experience, and having somebody 2 weeks out of Foo Bootcamp judging me for not implementing some trending version of Towers of Hanoi quickly enough while tapdancing backwards [1]. It all started to feel like a sick game, designed to keep the rats fighting in the pit.

PM interviews certainly have their own problems, but they tend to be the sort that reflect the actual difficulties of hiring a human who has to work with other humans on hard intellectual problems. It's squishy and messy and hard to quantify, and you can't do it with an "objective" test. Bad people can fake their way through the process on occasion, but then again...I've seen plenty of terrible engineers who do well on leetcode questions, too.

I miss coding sometimes, but whenever I get the urge to go back, the thought of having to do the leetcode gauntlet annoys me enough to keep me away.

[1] Meanwhile, the CEO of the company is likely running the entire organization with minimal prior relevant experience; the investors/board of directors got their jobs because they were in the same fraternity; most everyone else on the executive team got there via glorified personality interviews (and a healthy dose of "who do you know?"). Objective tests of competence are only for the little people, you see!


> We've collectively complicated the hell out of things.

Yes.

> I loved Rails, it made me as a young developer able to finally realize way more ambitious projects than I'd ever done before. I also liked the promise (not implementation) of Meteor - it felt like the clear future, I guess just a bit too early (and a bit too scope-creeped).

Meteor and Rails were fundamentally incompatible views of the world, and the root of the "complicated the hell out of things" problem is that the Meteor approach wasn't practical for the vast majority of web development projects.

Once you've decided to build everything in the front end and turn the server channel into a glorified database connection, you've created a nightmare for yourself on multiple levels. Setting aside the fact that javascript lacked (lacks?) even the basics of a practical programming language -- real packages and dependency management and the like, all of which needed to be implemented via a panoply of non-standard libraries that have been subsequently re-written every 2.5 weeks -- you're re-inventing wheels (like routing, rendering, caching and a request/response paradigm) that the browser does for you, for free. Maybe that's worth it in a few niche cases (say, making a web-based code editor), but for the vast majority of CRUD websites, you're just swimming upstream.

Swapping out languages and libraries misses the point: the problem is fundamental to the approach. I guess what I'm saying is: lean into your first instinct. It's all too complex. Go back to the root of what makes Rails -- traditional web development, really -- good.


Given that we're not dumping JS anytime soon, I think that the shortcomings of JS are kind of moot.

Once you accept that 1) JS and its ecosystem have a ton of shortcomings, and 2) JS isn't going to be replaced for front-end work on the web for a while, then I think you can move on and think about things more objectively.

I like this project because, at this point in 2024, it feels like it should be easier to build and deploy cross-platform web/mobile apps (I blame Apple for PWAs not taking off). React is not my cup of tea, but if One's mission pans out, then I'd use it to streamline my workflow if I were building across web and mobile.

I'd personally prefer PWAs and Apple making it easier to install PWAs as apps since that would make it even easier and more streamlined from a DX perspective. Right now, we all have to jump through these hoops and build this extraneous tooling and infrastructure because of Apple's resistance on installable web apps.


Nobody said "dump Javascript", and that clearly isn't what I meant [1]. I put that part in only to illustrate that these problems were knowable at the time that Meteor was created.

What I am saying is that the paradigms of "modern" front-end development are all wrong. Don't re-invent basic functionalities of the browser. Don't try to turn the web server connection into a glorified database (either by implementing bastardized HTTP-SQL, or by treating the socket as a bidirectional JSON channel). Accept that server-side generation of HTML is fundamental to the medium. Embrace progressive enhancement. I could go on.

None of these things requires that you drop JS as your language. If you wanted to build Rails in JS, you totally could [2].

If I had to draw a crude metaphor, the state of web development today would be as if an entirely new generation of engineers decided that TCP was wrong because most stacks are implemented in C, and therefore re-created bad versions of handshakes, etc. by writing a new protocol on top of UDP in Javascript. It's not the language, it's the architecture.

[1] Though you absolutely should consider that JS perhaps isn't a great language for most forms of development, and be open to, you know...using better tools when they're available, instead of trying to do everything in the lowest common denominator, while patching over all of the stuff you're missing with gobs and gobs of bad libraries.

[2] Though again, why would you want to? I submit that the core cognitive dissonance here is that as soon as you accept that these "modern" paradigms are wrong, you quickly conclude that Javascript is a net burden on your development process. But sure, if you really like it for some reason, then great. You can use it.


I'm not a web dev, but I see a conflict between server-side HTML generation and a local-first application. ;)

I think there are two distinct use cases here: a) a classic "web site" that people visit, interact with, and leave; and b) an "application" that people use over a longer period of time (in the past, they would have installed it first), which needs to persist and sync state.

Case a) doesn't lend itself to local-first, because it's unpredictable which information the user wants to query (think of Wikipedia -- I don't want to download all of Wikipedia before reading an article). Yet I still find it important that the website renders quickly, which I guess means removing all the layers of JS, as you described, and returning HTML from the server. Or replacing the JS frameworks with something much more lightweight and asking what really needs to be done there. I'd aim for a 100 ms page load time (excluding the initial network delay).

Case b) is the prime case for local-first, which I would phrase as a grumpy old man's "just bring back programs like we had in the past (immediate response, data is local), and add multi-user sync on top". This means reaction times in the ballpark of 16 ms. I'm very happy that the local-first movement brings this experience into web apps.

And about people re-implementing TCP: I first thought you were referring to QUIC, because it's also a non-C implementation -- in userspace. It is better in some scenarios, but also much less efficient and slower on high-bandwidth links (cf. e.g. B. Jaeger, J. Zirngibl, M. Kempf, K. Ploch, G. Carle. 2023. QUIC on the Highway: Evaluating Performance on High-rate Links. In Proc. of IFIP Networking Conference (IFIP Networking)).


You're correct about the two use cases. The problem is that too many websites/developers think they are case b) when they really are case a).

> as if an entirely new generation of engineers decided that TCP was wrong because most stacks are implemented in C, and therefore re-created bad versions of handshakes, etc. by writing a new protocol on top of UDP in Javascript

WebSockets would like a word


100% honest, as someone who loved Rails: I really love building with React and Tamagui if you put together the right tooling.

95% of the apps people glue together are quite bad, so I get that some people see it as a lost cause, but I could never go back. It's clearly better to me -- but only if you do manage to have the time to put together a great stack.

Once Zero is released, I'd be interested to see your opinion after trying it out.


To play devil’s/react-dev’s advocate: the appeal of local-first is that you can easily configure which business logic you want done on the server, leaving anything that can be done on the client to the client for latency and cloud bill reasons.

  javascript lacked (lacks?) even the basics of a practical programming language -- real packages and the like
Interesting. As a typescript user with a million npm packages this surprises me. In the sense of “managed dependencies” JS/TS has tons of packages, but I’m getting the sense that you mean something else? Like, splitting up your code into packages gets you some performance benefit in other languages? I have python packages (read: __init__.py files) in my Flask app, but I’m not sure I’ve ever missed them in TS - am I missing some big picture? In general web bundling is so complex I tend to treat it as a black box, so that could easily be on me.

  you're re-inventing wheels (like routing, rendering, a request/response paradigm) that the browser does for you, for free. 
I’m having trouble seeing what you mean here. I know the typical “implementing an SPA is tough because you have to fake pathname manipulation/history” warnings, but I think you’re going a step beyond that. React does rendering on a way more performant level than browsers if you’re just changing subsets of the DOM, and I’m really not sure what you mean by the “request/response” paradigm at all, sorry. SPAs do have to handle async requests on the client and normal websites make the server do that when building the page, but ultimately the difference seems slight. What am I missing?

  for the vast majority of CRUD websites
I mean, do people making CRUD websites really need any frameworks, and do they browse Hacker News? Aren't static sites a pretty solved problem by people like WordPress? Seems like the hard part of that would just be writing content and filling templates, maybe some styling -- not any engineering work.

Agree with you pushing back on the TS hate. The ecosystem and tooling have gotten very mature and very productive IMO for building UIs.

However, static sites are by definition not CRUD sites, and yes, I would guess most HN web devs make CRUD sites. And I would submit that in 2024 there is still not a Rails-level happy path for doing that in TS (though Redwood is trying).


> Interesting. As a typescript user with a million npm packages this surprises me. In the sense of “managed dependencies” JS/TS has tons of packages, but I’m getting the sense that you mean something else?

The very fact that you cite Typescript (itself a hack built on top of Javascript to make up for its deficiencies), and npm (a third-party packaging system...since, you know...JS doesn't/didn't have one), is just emphasizing my point.

Setting aside the goodness or badness of either of these things, you're so immersed in the complexity of all of the hacks that you don't even see it anymore. It's the old joke about the fish and water.

Imagine a world where packages and types were a built-in, first-class feature of the language...oh wait.


What language has a built-in dependency system…? How would that even work? I'm confused. Every dependency system is a registry of dependencies vetted by some group, along with one or more compilers/runtimes that know how to speak that group's format. Who cares if the group is GitHub (TIL… ew, tbf) vs. the language nonprofit itself?

Also, more importantly by far: can you point me to what language(s) you’re talking about? C++ doesn’t seem to have a “built in” dependency manager, just vcpkg and Conan. Python has poetry, anaconda, and pip, the latter of which is run by the Python nonprofit — but something tells me that’s not the old school classic you’re referencing lol. Java has ant, gradle, and Maven, none of which seem official. I guess we’re talking about ruby, since their manager is published by a group that claims ownership of the language (https://rubycentral.org/)?

Basically I’m still not sure what im missing. I very much might be a fish in water, and I am a newbie to all of this, relatively! But I’ve never once noticed a real difference in the languages I’ve developed in professionally (those mentioned above, minus ruby) so I’m not sure what ruby’s being more “first-class” gets me. At a certain point, if your build system is setup using boilerplate, then the two steps are 1. Download package, 2. Import package into relevant file and start calling its members. What part of that can be improved?


> What language has a built in dependency system…?

Literally every language with support for modules that extend across files has some form of dependency management. Even C -- simple as it is -- has headers and source files which separate interface from implementation. Some languages go further (e.g. Ruby), and have far more developed, standard systems of version management built right in -- rubygems will do the graph analysis of your module includes, resolve dependency cycles, enforce versioning, and so on. Python is a total mess when it comes to high-level dependency management, but only because it offers 3-4 different "standard" ways of doing it!

Notably, Javascript didn't even have the export or import keywords until ES6 (2015). It had literally no support for even the primitive, C-like version of the idea, and there were multiple competing implementations of it, from Node, to CommonJS, to RequireJS, Webpack, Babel and so on. You still can't get rid of webpack, though -- JS has no built-in notion of minification or tree shaking, so that's all still implemented via a panoply of third-party tools.
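
To make the "didn't even have import/export" point concrete, here's the before and after in miniature (file names are illustrative; the first half assumes a Node-style CommonJS runtime, the second is the ES2015 syntax):

  // Pre-ES6: no module syntax in the language itself. CommonJS
  // (popularized by Node) bolted it on with an ordinary function
  // and an ordinary object:

  // math.js
  function add(a, b) { return a + b; }
  module.exports = { add };

  // main.js
  const { add } = require('./math');
  console.log(add(1, 2)); // 3

  // ES2015+: import/export are first-class syntax that engines and
  // bundlers can analyze statically -- which is what makes tree
  // shaking possible in the first place:

  // math.mjs
  export function add(a, b) { return a + b; }

  // main.mjs
  import { add } from './math.mjs';
  console.log(add(1, 2)); // 3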


This bomb was buried. Nobody knew it was there...which is sort of crazy, considering that they built a runway over it and it didn't detonate until now.

Some types of explosives (e.g., the primary explosives used in detonators) become more unstable with age. See for example: https://royalsocietypublishing.org/doi/10.1098/rsos.231344

The proposed law was so egregiously stupid that if you live in California, you should seriously consider voting for Anthony Weiner's opponent in the next election.

The man cannot be trusted with power -- this is far from the first ridiculous law he has championed. Notably, he was behind the (blatantly unconstitutional) AB2098, which was silently repealed by the CA state legislature before it could be struck down by the courts:

https://finance.yahoo.com/news/ncla-victory-gov-newsom-repea...

https://www.sfchronicle.com/opinion/openforum/article/COVID-...

(Folks, this isn't a partisan issue. Weiner has a long history of horrendously bad judgment and self-aggrandizement via legislation. I don't care which side of the political spectrum you are on, or what you think of "AI safety", you should want more thoughtful representation than this.)


Anthony Weiner is a disgraced New York Democratic politician who does not appear to have re-entered politics after his release from prison a few years ago. You mentioned his name twice in your post, so it doesn't seem to be an accident that you mentioned him, yet his name does not seem to appear anywhere in your links. I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.


Yes, it was a mistake. I obviously meant the Weiner responsible for the legislation I cited. But you clearly know that.

> I have no idea what message you're trying to convey, but whatever it is, I think you're failing to communicate it.

Really? The message is unchanged, so it seems like something you could deduce.


He meant Scott Wiener but had penis on the brain.


> you should want more thoughtful representation than this.

Your opinion on what "thoughtful representation" is is what makes this point partisan. Regardless, he's in until 2028 so it'll be some time before that vote can happen.

Also, important nitpick: it's Scott Wiener. Anthony Weiner (no relation AFAIK) was in New York and has a much more... Public controversy.


> Public controversy

I think you accidentally hit the letter "L". :P


** Anthony Weiner != Scott Wiener


you've got the wrong Weiner dude ;)


Lol, I thought "How TF did Anthony Weiner get elected for anything else again??" after reading that.


And you can dismiss any argument with your response.

"Your argument is just a reductive rhetorical strategy."


Sure, if you ignore context.

"a probabilistic syllable generator is not intelligence, it does not understand us, it cannot reason" is a strong statement and I highly doubt it's backed by any sort of substance other than "feelz".


I didn't ignore any more context than you did, but I just want to acknowledge the irony that "context" (specifically, here, any sort of memory that isn't in the text context window) is exactly what is lacking with these models.

For example, even the dumbest dog has a memory, a strikingly advanced concept model of the world [1], a persistent state beyond the last conversation history, and an ability to reason (one that doesn't require re-running the same conversation sixteen bajillion times in a row). Transformer models have none of these. It's really cool that they can ingest and barf out realistic-sounding text, but let's keep in mind the obvious truths about what they are doing.

[1] "I like food. Something that smells like food is in the square thing on the floor. Maybe if I tip it over food will come out, and I will find food. Oh no, the person looked at me strangely when I got close to the square thing! I am in trouble! I will have to do it when they're not looking."


> that doesn't require re-running the same conversation sixteen bajillion times in a row

Let's assume the dog's visual system runs at 60 frames per second. If it takes 1 second to flip a bowl of food over, then that's 60 data points of cause-effect data that the dog's brain learned from.

Assuming it's the same for humans, let's say I go on a trip to the grocery store for 1 hour. That's 216,000 data points from one trip. Not to mention auditory data, touch, smell, and even taste.
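
Spelled out, with the (big) assumption that a fixed 60 Hz "frame rate" is a sensible way to count biological visual samples at all:

  const fps = 60;              // assumed visual "frame rate"
  console.log(fps * 1);        // 60 samples while the bowl flips over
  console.log(fps * 60 * 60);  // 216000 samples in a one-hour trip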

> ability to reason [...] Transformer models do not

Can you tell me what reasoning is? Why can't transformers reason? Note I said transformers, not LLMs. You could make a reasonable (hah) case that current LLMs cannot reason (or at least not very well), but why are transformers as an architecture doomed?

What about chain of thought? Some have made the claim that chain of thought adds recurrence to transformer models. That's a pretty big shift, but you've already decided transformers are a dead end, so no chance of that making a difference, right?


For years, there was a Commodore 64 in COSI (the science museum) in Columbus, OH, connected to CompuServe. I was fascinated with that thing as a kid -- it was this window into a parallel world that I didn't really understand, but immediately grasped in a sort of Snow Crash way. Looking back, it's quaint, but such a harbinger of the future!


I feel like that was better than what we have now.


Because most people can't afford to buy the apartment they are renting. Some huge percentage of US citizens can't scrape together enough money to pay for a car repair. How are they going to afford an apartment?


Doesn't that indicate we should be building more and cheaper housing?

The other components of the market aren't fixed and immutable.

If state governments wanted to mandate denser rezoning and smaller minimum unit sizes once unaffordability reached a certain level... they could.

Pointing at expensive housing and saying "We therefore have to make it cheaper by forcing more people into bargains in which they receive none of the gains" seems like the tail wagging the dog.


Sure, but it wouldn't change anything. Unless your plan is to make housing so cheap that it literally costs less than a month of rent to buy the whole place, I don't think you're going to have a lot of luck.


The mentioned problem isn't that renters couldn't buy a property with a month's rent, but that they couldn't do so with reasonable savings.

Micro housing that's cheaper should certainly be able to meet that bar.


This is trivially false. The sum of the rents of the tenants of the property is enough to cover the landlord's purchase of the building, any interest on any loan used to secure that purchase, hire upkeep for the building, and still have money left over for profit in the landlord's pocket.

Therefore, the sum of the rents of the tenants of the property is enough to cover the purchase of the building, which is a subset of those costs.


> The sum of the rents of the tenants of the property is enough to cover the landlord's purchase of the building, any interest on any loan used to secure that purchase, hire upkeep for the building, and still have money left over for profit in the landlord's pocket.

The "sum of the rents of the tenants" isn't relevant to a single tenant, the loan isn't used to "secure the purchase", and you're confusing cash flow for total equity.

A building owner puts down a deposit for some percentage (say, 20%) of the building's cost, gets 80% in debt, and pays down that debt over time with the cash flow from the tenants' rent payments. They make a small margin net of expenses, plus whatever equity accumulates over time.
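
To put rough numbers on it (price, rate, and term below are all made up for illustration, and monthlyPayment is a hypothetical helper), the standard amortization formula shows why covering the monthly payment is the easy part and the down payment is the hard part:

  // payment = P * r * (1 + r)^n / ((1 + r)^n - 1),
  // where r is the monthly rate and n the number of monthly payments.
  function monthlyPayment(principal, annualRate, years) {
    const r = annualRate / 12;
    const n = years * 12;
    return principal * r * Math.pow(1 + r, n) / (Math.pow(1 + r, n) - 1);
  }

  const price = 1000000;     // assumed building price
  const down = 0.2 * price;  // the 20% deposit: $200K up front
  const loan = price - down; // $800K of debt
  console.log(Math.round(monthlyPayment(loan, 0.06, 30))); // ~4796 per month

The tenants' rents only need to cover that ~$4.8K/month (plus expenses and margin); the $200K lump sum is the part a typical tenant doesn't have.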

Setting aside the (significant) question of whether or not a given tenant would have the credit necessary to do such a thing, a tenant is not necessarily able to do the same thing, just because they can afford to pay the rent.


You're missing the downpayment.

Real scenario: Someone buys a 4-plex for $1M. They put in a $240K down payment. That's $60K per apartment. Will those tenants have that $60K they can put down to continue living there?

Sure, they'll own it and can get it back if they sell, but the majority of tenants don't have that kind of cash lying around.


> There's a wide variety of increasingly common health conditions that have an unknown etiology -- ADHD, autism, CFS, IBS, mental illness, obesity, etc.

And the number of people named Killian corresponds well with the rise in incidents related to airbags:

https://tylervigen.com/spurious/correlation/2599_popularity-...

You don't just get to pull random things that are going up (assuming that they're going up, which you haven't actually shown) and blame them on something else that is ostensibly going up. That doesn't suggest anything. Ever. [1]

That is pretty sus, after all. Oh look, I just made MSCI's stock go up:

https://tylervigen.com/spurious/correlation/1639_google-sear...

[1] But hey, wait a minute...violent crime is down since the 1970s! Did microplastics do that, or does it only work for bad stuff?


Taking a step back, my point was simply that there's a large number of common ailments that we don't understand. We don't know what causes them and we don't know how to cure them. Our understanding of human biology is grossly incomplete. We are nowhere close to first principles here.

We also have a long history of introducing new compounds into the environment and then, many decades later, saying oops, turns out those are toxic. BPA and PFAS recently, to name a few.

Where does that leave us? Who knows. I'm not saying microplastics are responsible for any of those diseases. I'm saying we don't really know what the hell is going on, so some alarm and caution seems appropriate. And, call me crazy, I'd say accidentally coating the planet with a new substance isn't a good thing. There is nothing reassuring about this situation.


No no no, you don't get to point at "common ailments" and assume you've explained anything. What common ailments do you think we don't understand, don't know the causes of, or can't cure?


Go to your doctor with eczema and ask what the problem is. They won't know, and sorry, there is no cure. Hell, go to your doctor with a migraine -- good luck. Have a food allergy? That should be simple to explain and fix, right? Well, actually, we can't.

Maybe we can cure your asthma? Nope. We can't cure your diabetes either, sorry. Type 1? We don't know how you got that.

Better hope you never have an auto-immune disorder or mental illness. Doctors won't be able to explain it or cure it. And hopefully you never get long Covid. Cross your fingers on cancer, Alzheimer's, and Parkinson's.

I could go on, but I wonder: isn't our lack of understanding an obvious truth?


Arguing that travel you don't like is unnecessary because it harms the environment?

Yes, that is useless and lazy nihilism.

OP is arguing that observed preferences are far more reliable indicators of what people actually value than what people say to surveyors/the press/reddit/HN. That's not "nihilism", it's just a fact. Presumably, academics travel to conferences because they're getting value out of it, and they deem that value worth whatever costs they believe it incurs.


I was in and among that set for the last 12 years. Plenty of people travel for fun because someone else is paying. Many conferences are nowhere near a central location, and are instead somewhere exotic because one of the organizers wanted it there. This never sat well with me, but it is completely standard and almost never questioned.


Sure, ok. Let's posit that the value they're getting is that they enjoy travel, and wish to do it. My response is the same.

