We don't even need to go into the 2000s. The author openly dismisses the Generalized Method of Moments (published in 1982 by Lars Hansen [0]) as a 'complex mathematical technique' that he's 'guessing there are a lot of weird assumptions baked into' it, the main evidence being that he 'can't really follow what it's doing'. He also admits that he has no idea what control variables are or how to explain linear regression. It's completely pointless trying to discuss the subtleties of how certain statistical techniques try to address some of his exact concerns; it's clear that he has no interest in listening, won't understand, and will just take that as further evidence that it's all just BS. This post is a rant best described as Dunning-Kruger on steroids. I have no idea how this got 200 points on HN, and can only advise anyone who reads the comments here first to spare themselves the read.
[0] edit: Hansen was awarded the Nobel Memorial Prize in Economics in 2013 for GMM, not that that means it can't fail, but clearly a lot of people have found it useful.
I think you are significantly misrepresenting what the author said. He didn't say he has no idea what control variables are. What he said is:
> The "controlling for" thing relies on a lot of subtle assumptions and can break in all kinds of weird ways. Here's[1] a technical explanation of some of the pitfalls; here's[2] a set of deconstructions of regressions that break in weird ways.
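(For a concrete feel for one of those pitfalls, here's a minimal toy example of collider bias, i.e. a "bad control". This is my own illustration rather than something from the linked posts, using plain numpy:)

```python
import numpy as np

# Toy illustration of collider bias ("bad control"); my own example.
rng = np.random.default_rng(0)
n = 100_000

x = rng.normal(size=n)                 # treatment
y = 1.0 * x + rng.normal(size=n)       # outcome: true effect of x on y is 1.0
c = x + y + rng.normal(size=n)         # collider: caused by BOTH x and y

def ols_coefs(target, *regressors):
    """OLS coefficients (intercept first) via ordinary least squares."""
    X = np.column_stack([np.ones_like(target)] + list(regressors))
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    return beta

print(ols_coefs(y, x))     # coefficient on x comes out near 1.0 (correct)
print(ols_coefs(y, x, c))  # "controlling for" c drags the x coefficient toward 0
```

Adding the collider as a "control" makes the estimated effect of x essentially vanish, even though the true effect is 1.0. That's the kind of weird breakage the linked pieces deconstruct.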
> He didn't say he has no idea what control variables are
He did say exactly that.
> They use a technique called regression analysis that, as far as I can determine, cannot be explained in a simple, intuitive way (especially not in terms of how it "controls for" confounders).
> "generalized method of moments" approaches to cross-country analysis (of e.g. the effectiveness of aid)
Which is an entirely reasonable criticism. GMM is a complex mathematical process; the wiki article [0] suggests it assumes data generated by a weakly stationary, ergodic stochastic process of multivariate normal variables. There are a lot of ways that real-world data on aid distribution might be non-ergodic, non-stationary, distributed in some more general (non-normal) way, or even deterministic!
Verifying that a paper has used a parameter estimation technique like that properly is not a trivial task even for someone who understands GMM quite well. A reader can't be expected to follow what the implications are from reading a study; there is a strong element of trust.
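For reference, the estimator itself is compact to state, even if verifying its assumptions is not:

$$\hat{\theta} \;=\; \arg\min_{\theta}\ \bar{g}_n(\theta)^{\top} W\, \bar{g}_n(\theta), \qquad \bar{g}_n(\theta) \;=\; \frac{1}{n}\sum_{i=1}^{n} g(x_i,\theta),$$

where the moment functions $g$ are assumed to satisfy $\mathbb{E}[g(x_i,\theta_0)] = 0$ at the true parameter $\theta_0$, and $W$ is a weighting matrix. The consistency and asymptotic-normality guarantees are exactly where the stationarity/ergodicity assumptions come in.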
Every statistical model makes assumptions. As a general rule, the more mathematically complex the model, the fewer (or weaker) assumptions are made. That's what the complexity is for. So the criticism 'it looks complex, so the assumptions are probably weird' doesn't make sense.
If as a reader you don't understand a paper (that's been reviewed by experts), then the best thing to conclude is that you're not the target audience, not that the findings can be dismissed.
He isn't saying that; he's saying he does understand the paper, and therefore the findings can be viewed with some suspicion. That is the nature of research: clear conclusions are rare because real data is messy.
> Every statistical model makes assumptions. As a general rule, the more mathematically complex the model, the fewer (or weaker) assumptions are made. That's what the complexity is for. So the criticism 'it looks complex, so the assumptions are probably weird' doesn't make sense.
This is an argument of the form [X -> Y. Y. Y has a purpose. Therefore not(Y->Z)]. It isn't valid; the fact that a criticism is general doesn't make it weaker (or stronger, for that matter). It is a bit like saying meat contains bacteria so someone can't complain that some meal gave them food poisoning. They can certainly complain about it and it is possible (indeed likely) that some meat is bad because of excessive bacteria.
> He isn't saying that, he's saying he does understand the paper
He literally says 'I can't really follow what it's doing', linking to a paper that discusses some issues with instrumental variable regression (what GMM is used for).
Fyi, it's quite common to have a paper rejected on the first (couple of) submission(s). With feedback along the lines of "not interesting enough", it's usually worth trying again somewhere else. Sounds like that venue was looking for methodological innovation within optimization; you could try a more domain-oriented outlet that would appreciate how your solution improves upon the SOTA for cloud resource selection.
What I'm also saying: just because you had a paper rejected once with a specific comment doesn't mean that every other paper where such a comment could vaguely fit needs to be rejected as well.
I had several comments. I posted below, but here are some:
This was my first and only research paper submission under my MS in Systems Engineering. It wasn’t a requirement but the department head was impressed that any Masters student might try to submit at all.
It was to IEEE. Maybe that was hard mode? It was for a Cloud Conference in 2023.
One of the comments:
“ This paper shows a linear programming problem to address cloud security. It's not clear why the LP program addresses security. Comparison with SOTA is also lacking.”
“ The paper attempts a fresh perspective on Cloud Security Decisions using a well-known approach.
Unfortunately, as currently presented, it does not deliver any significant research results, and thereby its contribution is limited.
The paper as formulated is closer to a vision paper -- not that it actually is -- than to a research paper. This reviewer notes the limited results. No surprises, the paper is also missing an evaluation analysis of the results.”
I had an evaluation of results. I had never heard of a vision paper.
This was one of the better comments and actually provided the best feedback. The section they mention as exceptionally long was my literature review:
“ This paper describes the idea of leveraging linear programming to determine how to allocate security resources in the cloud.
It would be better if the paper focused more on describing the proposed solution. The current description in Sec III is insufficient. First, there are many assumptions that were made to explain and formulate the linear programming for security resource allocation. These assumptions need to be explained further, e.g., to provide information on parameterizing values such as the budget and weights. Second, it would be better if the paper describes the system models based on a current cloud service provider so that the discussion is more concrete.
The implementation section can be reduced significantly, e.g., the Sec V.B. can be condensed to only show important lines of code.
There is no evaluation of the proposed solution. It is hard to assess how well the proposed solution will work without evaluating the formulation based on existing cloud service providers. Additionally, it is important to choose some baselines to compare the proposed solution.
The related work section takes 2 out of the 6 pages and feels unnecessarily long. It might be sufficient to describe how prior work solves this problem and how this paper differs.
A final remark is about the problem formulation. It is unclear what the challenges of allocating security resources are and how those challenges differ from other cloud-based resource allocation problems.”
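For anyone wondering what "linear programming for security resource allocation" even means here, a generic toy sketch of that kind of budget-constrained formulation (made-up weights, costs and budget; not the actual model from the paper) looks roughly like this:

```python
import numpy as np
from scipy.optimize import linprog

# Toy sketch of a budget-constrained security resource allocation LP.
# Illustrative only: weights, costs and budget are made up.
weights = np.array([0.40, 0.25, 0.20, 0.15])   # relative security value of each control
costs   = np.array([50.0, 30.0, 20.0, 10.0])   # cost of fully funding each control
budget  = 60.0

# linprog minimizes, so negate the weights to maximize weighted coverage.
res = linprog(
    c=-weights,
    A_ub=costs.reshape(1, -1),   # total spend <= budget
    b_ub=[budget],
    bounds=[(0.0, 1.0)] * len(weights),  # funding fraction for each control
)
print(res.x, -res.fun)  # chosen funding levels and achieved weighted coverage
```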
It's hard giving feedback without having seen the paper. I think you need to ask yourself how much you'd value having a peer-reviewed publication and how much effort/frustration tolerance you want to conjure up for this.
Getting it accepted might entail multiple submissions and significant extra work on top of what you've already done. The value of the bragging rights might be limited on the non-academic job market. And getting it published is still a different thing from having impact. Up to you to decide.
Contrary take: "AI founders will learn the bitter lesson" (263 comments): https://news.ycombinator.com/item?id=42672790 the gist: "Better AI models will enable general purpose AI applications. At the same time, the added value of the software around the AI model will diminish."
Both essays make convincing points; I guess we'll have to see. I like the Uber analogy here: maybe the winners will be those who use the tech in innovative ways while merely leveraging the underlying models.
The differentiator is whether or not your company operates with domain specific data and subject matter experts that those big companies don’t have (which is quite common).
There’s plenty of applications to build that won’t easily get disrupted by big AI. But it’s important to think about what they are, rather than chase after duplication of the shiny objects the big companies are showing off.
And they don't have to pay the margin on the API calls. So an equal UX on the same model API will be twice as profitable when operated by the first-party.
> Epicureans had a very specific understanding of what the greatest pleasure was, and the focus of their ethics was on the avoidance of pain rather than seeking out pleasure.
So whether the description of the work (as GP critiques) is correct really comes down to whether the definition of 'pleasure' used is the one from Epicureanism. Certainly someone unfamiliar would misunderstand it.
Finding minimum complexity explanations isn't what finding natural laws is about, I'd say. It's considered good practice (Occam's razor), but it's often not really clear what the minimal model is, especially when a theory is relatively new. That doesn't prevent it from being a natural law, the key criterion is predictability of natural phenomena, imho. To give an example, one could argue that Lagrangian mechanics requires a smaller set of first principles than Newtonian, but Newton's laws are still very much considered natural laws.
Maybe I'm just a filthy computationalist, but the way I see it, the most accurate model of the universe is the one which makes the most accurate predictions with the fewest parameters.
The Newtonian model makes provably less accurate predictions than Einsteinian (yes, I'm using a different example), so while still useful in many contexts where accuracy is less important, the number of parameters it requires doesn't much matter when looking for the one true GUT.
My understanding, again as a filthy computationalist, is that an accurate model of the real bonafide underlying architecture of the universe will be the simplest possible way to accurately predict anything. With the word "accurately" doing all the lifting.
I'm sure there are decreasingly accurate, but still useful, models all the way up the computational complexity hierarchy. Lossy compression is, precisely, using one of them.
The thing is, Lagrangian mechanics makes exactly the same predictions as Newtonian, and it starts from a foundation of just one principle (least action) instead of three laws, so it's arguably a sparser theory. It just makes calculations easier, especially for more complex systems; that's its raison d'être. So in a world where we don't know about relativity yet, both make the best predictions we know (and they always agree), but Newton's laws were discovered earlier. Do they suddenly stop being natural laws once Lagrangian mechanics is discovered? Standard physics curricula would not agree with you, btw: they practically always teach Newtonian mechanics first and Lagrangian later, also because the latter is mathematically more involved.
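To make the "same predictions" point concrete: for a single particle with $L = T - V$, the Euler-Lagrange equation gives back Newton's second law directly,

$$\frac{d}{dt}\frac{\partial L}{\partial \dot{q}} - \frac{\partial L}{\partial q} = 0, \qquad L = \tfrac{1}{2}m\dot{q}^{2} - V(q) \;\Rightarrow\; m\ddot{q} = -\frac{dV}{dq} = F.$$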
I will argue that 'has least action as foundation' does not in itself imply that Lagrangian mechanics is a sparser theory:
Here is something that Newtonian mechanics and Lagrangian mechanics have in common: it is necessary to specify whether the context is Minkowski spacetime, or Galilean spacetime.
Before the introduction of relativistic physics, the assumption that space is Euclidean was granted by everybody. The transition from Newtonian mechanics to relativistic mechanics was a shift from one metric of spacetime to another.
In retrospect we can recognize Newton's first law as asserting a metric: an object in inertial motion will in equal intervals of time traverse equal distances of space.
We can choose to make the assertion of a metric of spacetime a very wide assertion: such as: position vectors, velocity vectors and acceleration vectors add according to the metric of the spacetime.
Then to formulate Newtonian mechanics these two principles are sufficient: The metric of the spacetime, and Newton's second law.
Hamilton's stationary action is the counterpart of Newton's second law. Just as in the case of Newtonian mechanics: in order to express a theory of motion you have to specify a metric; Galilean metric or Minkowski metric.
To formulate Lagrangian mechanics: choosing stationary action as the foundation is in itself not sufficient; you have to specify a metric.
So:
Lagrangian mechanics is not sparser; it is on par with Newtonian mechanics.
More generally: transformation between Newtonian mechanics and Lagrangian mechanics is bi-directional.
Shifting between Newtonian formulation and Lagrangian formulation is similar to shifting from cartesian coordinates to polar coordinates. Depending on the nature of the problem one formulation or the other may be more efficient, but it's the same physics.
You seem to know more about this than me, but it seems to me that the first law does more than just induce a metric; I've always thought of it as positing inertia as an axiom.
There's also more than one way to think about complexity. Newtonian mechanics in practice requires introducing forces everywhere, especially for more complex systems, to the point that it can feel a bit ad hoc. Lagrangian mechanics very often requires fewer such introductions and often results in descriptions with fewer equations and fewer terms. If you can explain the same phenomenon with fewer 'entities', then it feels very much like Occam's razor would favor that explanation to me.
Indeed inertia. Theory of motion consists of describing the properties of Inertia.
In terms of Newtonian mechanics the members of the equivalence class of inertial coordinate systems are related by Galilean transformation.
In terms of relativistic mechanics the members of the equivalence class of inertial coordinate systems are related by Lorentz transformation.
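Concretely, in one spatial dimension with boost velocity $v$:

$$\text{Galilean: } x' = x - vt,\;\; t' = t; \qquad \text{Lorentz: } x' = \gamma\,(x - vt),\;\; t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right),\;\; \gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}.$$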
Newton's first law and Newton's third law can be grouped together in a single principle: the Principle of uniformity of Inertia. Inertia is uniform everywhere, in every direction.
That is why I argue that for Newtonian mechanics two principles are sufficient.
The Newtonian formulation is in terms of F=ma; the Lagrangian formulation is in terms of interconversion between potential energy and kinetic energy.
The work-energy theorem expresses the transformation between F=ma and potential/kinetic energy.
I derive the work-energy theorem in an answer of mine on physics.stackexchange:
https://physics.stackexchange.com/a/788108/17198
The work-energy theorem is the most important theorem of classical mechanics.
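For reference, the standard one-dimensional form is

$$\int_{x_1}^{x_2} F\,dx \;=\; \int_{t_1}^{t_2} m\,\frac{dv}{dt}\,v\,dt \;=\; \int_{v_1}^{v_2} m\,v\,dv \;=\; \tfrac{1}{2}m v_2^{2} - \tfrac{1}{2}m v_1^{2}.$$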
About the type of situation where the Energy formulation of mechanics is more suitable:
When there are multiple degrees of freedom, the force and the acceleration in F=ma are vectorial. So F=ma has the property that there are vector quantities on both sides of the equation.
When expressing in terms of energy:
As we know, kinetic energy is a single scalar value; there is no directional information. In the process of squaring the velocity vector, directional information is discarded; it is lost.
The reason we can afford to lose the directional information of the velocity vector: the description of the potential energy still carries the necessary directional information.
When there are, say, two degrees of freedom the function that describes the potential must be given as a function of two (generalized) coordinates.
This comprehensive function for the potential energy allows us to recover the force vector. To recover the force vector we evaluate the gradient of the potential energy function.
The function that describes the potential is not itself a vector quantity, but it does carry all of the directional information that allows us to recover the force vector.
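In formula form, with two generalized coordinates:

$$\vec{F} \;=\; -\nabla U(q_1, q_2) \;=\; -\left(\frac{\partial U}{\partial q_1},\; \frac{\partial U}{\partial q_2}\right).$$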
I will argue the power of the Lagrangian formulation of mechanics is as follows:
when the motion is expressed in terms of interconversion of potential energy and kinetic energy there is directional information only on one side of the equation; the side with the potential energy function.
When using F=ma with multiple degrees of freedom there is a redundancy: directional information is expressed on both sides of the equation.
Anyway, expressing mechanics in terms of force/acceleration and expressing it in terms of potential/kinetic energy are closely related. The work-energy theorem expresses the transformation between the two. While the mathematical form is different, the physics content is the same.
Nicely said, but I think then we are in agreement that Newtonian mechanics has a bit of redundancy that can be removed by switching to a Lagrangian framework, no? I think that's a situation where Occam's razor can be applied very cleanly, given that we can make the exact same predictions with a sparser model.
Now the other poster has argued that science consists of finding minimum-complexity explanations of natural phenomena, and I just argued that the 'minimal complexity' part should be left out. Science is all about making good predictions (and explanations); Occam's razor is more like a guiding principle to help find them (a bit akin to shrinkage in ML) rather than a strict criterion that should be part of the definition. And my example to illustrate this was Newtonian mechanics, which in a complexity/Occam sense should be superseded by Lagrangian, yet that's not how anyone views this in practice. People view Lagrangian mechanics as a useful calculation tool for making equivalent predictions, but nobody thinks of it as nullifying Newtonian mechanics, even though it should be preferred from Occam's perspective. Or, as you said, the physics content is the same, but the complexity of the description is not, so complexity does not factor into whether it's physics.
Laws (in science, not government) are just relationships that are consistently observed, so Newton's laws remain laws until contradictions are observed, regardless of the existence of one or more alternative models which would predict them to hold.
The kind of Occam's-razor-ish rule you seem to be querying about is basically a rule of thumb for selecting among formulations of equal observed predictive power that are not strictly equivalent (that is, even if they predict exactly the same actually-observed phenomena, rather than different subsets of subjectively equal importance, they still differ in predictions which have not been testable). Newtonian and Lagrangian mechanics, by contrast, are different formulations that are strictly equivalent, which means you may choose between them for pedagogy or practical computation, but you can't choose between them for truth, because the truth of one implies the truth of the other, in either direction; they are exactly the same in substance, differing only in presentation.
(And even where it applies, its just a rule of thumb to reject complications until they are observed to be necessary.)
Newtonian and Lagrangian mechanics are equivalent only in their predictions, not in their complexity: one requires three assumptions, the other just one. Now you say the fact that they have the same predictions makes them equivalent, and I agree. But it's clearly not compatible with what the other poster said about looking for the simplest possible way to explain a phenomenon. If you believe that that's how science should work, you'd need to discard theories as soon as simpler ones that make the same predictions are found (as in the case of Newtonian mechanics). It's a valid philosophical standpoint imho, but it's in opposition to how scientists generally approach Occam's razor, as evidenced e.g. by common physics curricula. That's what I was pointing out. Having to exclude Newtonian mechanics from what can be considered science is just one prominent consequence of the other poster's philosophical stance, one that could warrant reconsidering whether that's how you want to define it.
Windows 11 looks like the perfect reason to give UNIX-based systems another try. Literally the only thing that's kept me hooked to Windows are the Office apps. They're baked into so many of my workflows, from creating simple graphics to doing my personal finances, and of course plenty of legacy documents that I'd like to continue being able to use. They're really Windows-native I've found, even the official versions for iOS seem to be missing some features (last time I checked was in the past year, and I couldn't find some paragraph-level formatting options I wanted in Word, eg). Google Docs seem like a different product, they apparently have great APIs, but the "click-based" features are no match. It's been ages since I tried LibreOffice, but it was no match back then either.
I'm thinking either I need to get used to different workflows or just try virtualization. I heard Figma is great for presentations, and anything that Excel can do where the alternatives are lacking is probably better done in R/Python anyway, but for Word I don't see an alternative. No way I'll use LaTeX for all my writing, and anything Markdown-based just won't cut it formatting-wise. Or I could just use something like Wine, I guess. Anyone facing a similar situation?
Long-time Windows user here that made the jump from Windows 11+WSL to Linux a few months ago. After test driving a few distros, I settled on CachyOS (an Arch-based distro)[1].
Performance-wise it's smooth as heck, and Geekbench scores show it performing better than Win11 across the board. The default install uses KDE Plasma for its desktop, which is a perfect fit for Windows users like myself in terms of UX/UI.
For an alternative to MS Office, I've been using OnlyOffice[2] with no compatibility issues yet (though I am only a casual user and not a hardcore Word/Excel user).
I reinstalled Win11 last week to confirm whether or not I was experiencing bias, and there was a noticeable feeling of "lag" when using Win11 compared to CachyOS (this test was with the latest Win drivers and patches on relatively recent Thinkpad hardware). I went back to Cachy with no hesitation after that.
> Yes, every dependency onlyoffice uses is outdated. They even use v8 8.9 that doesn't include any security patches. They also uses outdated CEF binary downloaded from an http url and doesn't check its integrity at all. Even worse, that CEF binary might be closed source as suggested by dbermond in https://github.com/ONLYOFFICE/DesktopEditors/issues/1664
> I would advise anyone who uses onlyoffice to avoid opening any untrusted documents with it. It appears that onlyoffice upstream doesn't care about security at all. See https://github.com/ONLYOFFICE/DesktopEditors/issues/1664 for more details
Ahaha, I've become that person I guess. I only mentioned Arch as I've always used Ubuntu when using Linux desktop VMs, and even test drove Kubuntu before trying out Cachy. Apart from some brief time getting used to pacman as a package manager instead of apt, I haven't encountered any other items that felt different to Ubuntu.
Can't recommend this enough. I was letting a few games with anticheat keep my personal use on Windows, but I decided to jettison those and take the plunge, and couldn't be happier.
I went with Mint instead of an Arch-based distro, but my experience has been really great, even dealing with GeForce drivers.
I use the 365 suite in a web browser if I need to work on it; no issues.
+1 for only office. When I was a data analyst I made this custom graph in Excel that rendered some lines as speedometers. It calculated the rotation based on the input numbers to align them in the right position. LibreOffice could not handle it (and I don't blame them). I was shocked when I opened the file in OnlyOffice and it worked!
I run Linux on my work machine and my office is a full Windows/macOS shop. I've so far been able to get away with using the Office web apps for things like Teams, Outlook, Excel and Word, and I also have a Windows 11 VM that has all the desktop versions of the same apps.
I would say that 99.9% of the time I can get away with using the web app versions, even for things like Teams meetings it works really well. Once in a blue moon I will have a document that I can't open in the web versions so I fire up the VM and open it on there.
There are definitely some annoyances around this workflow but IMHO the annoyances pale in comparison to the annoyance of having to use Windows or MacOS every day.
It's probably worth trying LibreOffice again if your last install was a couple of years ago. They take document compatibility bugs pretty seriously and fix a bunch with every release.
That's probably the easiest step to take next, before looking at virtualization or a full Linux install with Wine.
Calc is... bad. It's slow and I've run into bugs in formulas; I'd rather use Google Sheets, which is a different kind of bad, but better than Calc. No issues with Writer; haven't used anything else.
When I see people waking up now, I wonder what's taken them so long. I could see this 15 years ago and jumped off Windows at that point. Been using Linux ever since. It's become so easy since then that I've intentionally made my life more difficult by switching to Gentoo about 5 years ago. I'm so glad none of my work is locked into the products of rent-seeking companies like Microsoft. It was easier for me because 15 years ago I didn't already have a body of work and an investment in any tools, but I still think it's something you'll be glad you did in another 15 years.
How the documents look is everything. That’s what separates desktop publishing like Word from Notepad. The documents have to look the same and have to print the same. Legal cases depend on it. Academic submissions depend on it (Nature Communications template is not latex, it is word). This is not something that can be omitted.
Ah, but, “pdfs aren’t editable” and “pdfs cost more money to view”. People absolutely do use Word when they want documents to look the same, and will complain when the documents look different.
That's the conventional short-term wisdom, but you'll find just about any rule is bendable, even breakable, when market conditions change, folks get scared, or they simply decide to.
There's no document formatting that can't be copied elsewhere. Start with new documents and convert the old ones (to pdf or whatever) at some point.
Ever since Windows XP, each new Windows version announcement has supposedly meant that this is going to be the year of the Linux migration and the Windows exodus, and here we are.
Even Valve can't get the folks targeting Android to port their NDK-powered games to the Steam Deck; they have to translate Windows/DirectX instead.
I can vouch that the OnlyOffice flatpak is worth at least giving a try. Just sending something important without requiring Microsoft Office at all feels so good. Granted, I have a docx template and generated the initial version with pandoc, so I'm not doing any formatting or anything, just back and forth over editing.
Office is moving web-based. OWA is first class now, with the new Outlook being a thin wrapper around it with some native components. Also, their mockups all primarily use Macs, so "go figure".
There is a long, long road ahead for that to happen. Excel has to not only radically change itself, but so does Power BI. The 3rd party ecosystem has slowly changed from COM add-ins to the JS-based Add-ins, but even then there are many 3rd parties that continue to go the COM route, hence the very long deprecation road for 'legacy' Outlook in the enterprise.
I am curious: what made you give up? The EOL of Windows 7 is what precipitated my switch.
I went down the fun path of running Windows on Linux with a pass-through VM for a while but found that most of what I was trying to do worked well in Linux.
Of course, I don't do any development or work on my own computer. My work computer is now on 11 and I dislike it, but honestly the IT lockdown drives more ire than the Microsoft redesign.
I've used windows for 30+ years, and I'm getting a Mac this year. I seriously considered Linux on a Thinkpad and even test-drove Debian on my older X1 Carbon. I tried, but too many things didn't quite work. I'd get stuck on the login screen for no apparent reason. VMware modules were a pain to build and sign. Something (might have been VMware modules) caused it to freeze. Hidpi support isn't ready. And nothing was really polished.
As someone who has used OSX for .. 21 years now and is slowly, but surely moving off: the grass is not greener on the other side.
Bugs aplenty, a user interface which has seriously deteriorated over the last decade bundled with an ever-increasing user hostility and tendency to lock you out of your system.
One example: you can no longer manage which applications may run as daemons/background tasks. Any application can register itself with the OS to do so, and your only recourse is a little tiny switch in the system preferences.
Only, in the case of Google Chrome this does not work; the application constantly re-registers itself, overriding the setting. I can no longer prevent Chrome from doing whatever the hell it wants to do, and — adding insult to injury — every time it does, I get a persistent notification from macOS that it is now doing what ever the hell it wants to do. About a dozen times a day.
Sounds like my 6th Gen X1, only I replaced the battery last fall. I also noticed the display glitches sometimes when I open it, and the USB-C ports have connection issues sometimes.
Give Linux a try. After seeing how ad-centered Windows 11 has become, I made the decision to wipe my drive and go full Linux, and I couldn't be happier. Is it perfect? No. Is it better for my workflow and caters to my more advanced usage? A big resounding yes.
It cannot replace Microsoft Office, but it's getting close. Most people don't use the full functionality of Microsoft Office, so LibreOffice and Google's online suite are good enough, but I still keep a remote Windows Virtual Machine (VM) around for those times I need Windows-specific stuff and RDP into the VM. I look forward to the day Microsoft finally wakes up and ports Microsoft Office to Linux.
I'm working on a cross-platform native-first, offline-first replacement for Excel and PowerPoint, so hopefully it can help you and others make the switch.
I, too, spent far too long trapped in Windows because I couldn't get away from MS Office
For me it is only Excel. I am not even a power user, but its strongest feature is its integration with Power Query. In many use cases it is perfectly enough to quickly analyze some data, and it is still friendly enough to let non-tech workers refresh to the newest data available.
Apart from that, every other part of the MS ecosystem is replaceable. If there were a solution for corporate IT account management, Windows could be replaced without much friction.
Office is moving to the cloud, so the current dying breed of desktop apps should be covered by WINE, eventually. Or cave in and use O365, like I do for work - the irony is that Microsoft's insistence on O365 has completely defeated the purpose of their OS.
I mean... Office also runs just fine on a Mac. But I agree, Linux is the way to go. VMs are not so bad, but you can also use Steam's Proton to run most Windows software just fine; I would be surprised if people don't just run Office through Steam's flavor of Wine, since the game support is phenomenal.
Ok, but what is a good pattern to leverage AI tools for coding (assuming that they have some value there, which I think most people would agree with now)? I could see two distinct approaches:
- "App builders" that use some combination of drag&drop UI builders, and design docs for architecture, workflows,... and let the UI guess what needs to be built "under the hood" (a little bit in the spirit of where UML class diagrams were meant to take us). This would still require actual programming knowledge to evaluate and fix what the bot has built
- Formal requirement specification that is sufficiently rigorous to be tested against automatically. This might go some way towards removing the requirement to know how to code, but the technical challenge would simply shift to knowing the specification language
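As a rough illustration of what a "testable specification" could look like in practice (just one possible shape; the function name and the hypothesis-based property test are my own hypothetical example, not an existing tool):

```python
# Sketch of an "executable spec": requirements written as properties that any
# (possibly AI-generated) implementation must satisfy, checked automatically
# with the `hypothesis` property-based testing library.
from hypothesis import given, strategies as st

def dedupe_preserving_order(items):
    # Stand-in for the generated code under test (hypothetical example).
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]

@given(st.lists(st.integers()))
def spec_dedupe(items):
    out = dedupe_preserving_order(items)
    assert set(out) == set(items)      # nothing added, nothing lost
    assert len(out) == len(set(out))   # no duplicates remain
    # first occurrences keep their relative order
    assert out == [x for i, x in enumerate(items) if x not in items[:i]]

spec_dedupe()  # hypothesis generates many inputs and checks the spec
```

Even here, though, writing the properties precisely is its own skill, which is the "shift to knowing the specification language" I mean.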
I'd challenge whether this is specific to coding. If you want to get a result that is largely like a repertoire of examples used in a training set, chat is probably workable? This is true for music. Visual art. Buildings. Anything, really?
But, if you want to start doing "domain-specific" edits to the artifacts that are made, you are almost certainly going to want something like the app builders idea. Downthread, I mention how this is a lot like procedural generation techniques for game levels and such, so I think I am in agreement with your first bullet?
Similarly, if you want to make music with an instrument, it will be hard to ignore playing with said instrument more directly. I suspect some people can create things using chat as an interface. I just also suspect directly touching the artifacts at play is going to be more powerful.
I think I agree with the point on formal requirements. Not sure how that really applies to chat as an interface? I think it is hoping for a "laws of robotics" style that can have a test to confirm them? Reality could surprise me, but I always viewed that as largely a fiction item.
"Actual product stakeholders" in this space clearly don't actually have any magic sauce to spill. Everyone is building more or less the same chat-based workflows on the same set of 3rd-party LLMs.
The space is ripe for folks with actual domain expertise to design an appropriate AI workflow for their domain.
Disclaimer: Haven't used the tools a lot yet, just a bit. So if I say something that already exists, forgive me.
TLDR: Targeted edits and prompts / Heads Up Display
It should probably be more like an overlay (and hooked into context menus with suggestions, inline context bubbles when you want more context for a code block) and make use of an IDE problems view. The problems view would have to be enhanced to allow it to add problems that spanned multiple files, however.
Probably like the Rust compiler output style, but on steroids.
There would likely be some chatting required, but it should all be at a particular site in the code and then go into some history bank where you can view every topic you've discussed.
For authoring, I think an interactive drawing might be better, allowing you to click on specific areas and then use shorter phrasing to make an adjustment instead of having an argument in some chat to the left of your screen about specificity of your request.
Multi-point / click with minimal prompt. It should understand based on what I clicked what the context is without me having to explain it.
Rationalism? The term has been used a lot of times since Pythagoras [0], but the combination of Bay Area, Oxford, existential risks, AI safety makes it sound like this particular movement could have formed in the same mold as Effective Altruism and Long-Termism (ie, the "it's objectively better for humanity if you give us money to buy a castle in France than whatever you'd do with it" crowd that SBF sprung from). Can somebody in the know weigh in?
- SBF and Alameda Research (you probably knew this),
- the Berkeley Existential Risk Initiative, founded (https://www.existence.org/team) by the same guy who founded CFAR (the Center for Applied Rationality, a major rationalist organization)
- the "EA infrastructure fund", whose own team page (https://funds.effectivealtruism.org/team) contains the "project lead for LessWrong.com, where he tries to build infrastructure for making intellectual progress on global catastrophic risks"
- the "long-term future fund", largely AI x-risk focused
"Rationalism" is simply an error here. The thing being referred to is "LessWrong-style rationality", which is fundamentally in the empiricist, not rationalist, school. People calling it rationalism are simply confused because the words sound similar.
(Of course, the actual thing is more closely "Zizian style cultish insanity", which honestly has very very little to do with LessWrong style rationality either.)
Just like HN grew around the writing of Paul Graham, the "rationalist community" grew around the writings of Eliezer Yudkowsky. Similar to how Paul Graham no longer participates on HN, Eliezer rarely participates on http://lesswrong.com anymore, and the benevolent dictator for life of lesswrong.com is someone other than Eliezer.
Eliezer's career has always been centered around AI. At first Eliezer was wholly optimistic about AI progress. In fact, in the 1990s, I would say that Eliezer was the loudest voice advocating for the development of AI technology that would greatly exceed human cognitive capabilities. "Intentionally causing a technological singularity," was the way he phrased it in the 1990s IIRC. (Later "singularity" would be replaced by "intelligence explosion".)
From 2001 to 2004 he started to believe that AI has a strong tendency to become very dangerous once it starts exceeding the human level of cognitive capabilities. Still, he hoped that before AI starts exceeding human capabilities, he and his organization could develop a methodology to keep it safe. As part of that effort, he coined the term "alignment". The meaning of the term has broadened drastically: when Eliezer coined it, he meant the creation of an AI that stays aligned with human values and human preferences even as its capabilities greatly exceed human capabilities. In contrast, these days, when you see the phrase "aligned AI", it is usually being applied to an AI system that is not a threat to people only because it's not cognitively capable enough to dis-empower human civilization.
By the end of 2015, Eliezer had lost most of the hope he initially had for the alignment project in part because of conversations he had with Elon Musk and Sam Altman at an AGI conference in Puerto Rico followed by Elon and Sam's actions later that year, which actions included the founding of OpenAI. Eliezer still considers the alignment problem solvable in principle if a sufficiently-smart and sufficiently-careful team attacks it, but considers it extremely unlikely any team will manage a solution before the AI labs cause human extinction.
In April 2022 he went public with his despair and announced that his organization (MIRI) will cease technical work on the alignment project and will focus on lobbying the governments of the world to ban AI (or at least the deep-learning paradigm, which he considers too hard to align) before it is too late.
The rationalist movement began in November 2006 when Eliezer began posting daily about human rationality on overcomingbias.com. (The community moved to lesswrong.com in 2009, at which time overcomingbias.com became the personal blog of Robin Hanson.)
The rationalist movement was always seen by Eliezer as secondary to the AI-alignment enterprise. Specifically, Eliezer hoped that by explaining to people how to become more rational, he could increase the number of people who are capable of realizing that AI research was a potent threat to human survival.

To help advance this secondary project, the Center for Applied Rationality (CFAR) was founded as a non-profit in 2012. Eliezer is neither an employee nor a member of the board of this CFAR. He is employed by and on the board of the non-profit Machine Intelligence Research Institute (MIRI), which was founded in 2000 as the Singularity Institute for Artificial Intelligence.
I stress that believing that AI research is dangerous has never been a requirement for posting on lesswrong.com or for participating in workshops run by CFAR.
Effective altruism (EA) has separate roots, but the two communities have become close over the years, and EA organizations have donated millions to MIRI.
He has no formal education. He hasn't produced anything in the actual AI field, ever, except his very general thoughts (first that it would come, then about alignment and doomsday scenarios).
He isn't an AI researcher except he created an institution that says he is one, kind of as if I created a club and declared myself president of that club.
He has no credentials (that aren't made up), isn't acknowledged by real AI researchers or scientists, and shows no accomplishments in the field.
His actual verifiable accomplishments seem to be having written fan fiction about Harry Potter that was well received online, and also some (dodgy) explanations of Bayes, a topic that he is bizarrely obsessed with. Apparently learning Bayes in a statistics class, where normal people learn it, isn't enough -- he had to make something mystical out of it.
Why does anyone care what EY has to say? He's just an internet celebrity for nerds.
It is true that he has no academic credentials, but people with academic credentials have been employed on the research program led by him: Andrew Critch for example, who has a PhD in math from UC Berkeley, and Jesse Liptrap who also has a math PhD from a prestigious department although I cannot recall which one.
It's not only that he has no academic credentials, he also has no accomplishments in the field. He has no relevant peer reviewed publications (in mainstream venues; of course he publishes stuff under his own institutions. I don't consider those peer reviewed). Even if you're skeptical about academia and only care about practical achievements... Yudkowsky is also not a businessman/engineer who built something. He doesn't actually work with AI, he hasn't built anything tangible, he just speaks about alignment in the most vague terms possible.
At best -- if one is feeling generous -- you could say he is a "philosopher of AI"... and not a very good one, but that's just my opinion.
Eliezer looks to me like a scifi fan who theorizes a lot, instead of a scientist. So why do (some) people pay any credence to his opinions on AI? He's not a subject matter expert!
Ok, but hundreds of thousands of people have worked for Google without being experts on AI. Anyone who employs one, doesn't automatically become more credible. If you believe that then I want you to know that this comment was written by an ex-Google employee and thus must be authoritative ;)
Good point! If I could write the comment over again, I'd probably leave out the ex-Googlers. But I thought of another math PhD who was happy to work for Eliezer's institute, Scott Garrabrant. I could probably find more if I did a search of the web.
If you believe (as Eliezer has since about 2003) that AI research is a potent danger, you are not going to do anything to help AI researchers. You are, for example, not going to publish any insights you may have that might advance the AI state of the art.
Your comment is like dismissing someone who is opposed to human cloning on the grounds that he hasn't published any papers that advance the enterprise of human cloning and hasn't worked in a cloning lab.
> [...] remember the point I was responding to, namely, Eliezer should be ignored because he has no academic credentials.
That's not the full claim you were responding to.
You were responding to me, and I was arguing that Yudkowsky has no academic credentials, but also no background in the field he claims to be an expert in; he self-publishes and is not peer-reviewed by mainstream AI researchers or the scientific community, and he has no practical AI achievements either.
So it's not just lack of academic credentials, there's also no achievements in the field he claims to research. Both facts together present a damning picture of Yudkowsky.
To be honest he seems like a scifi author who took himself too seriously. He writes scifi, he's not a scientist.
OK, but other scientists think he is a scientist or an expert on AI. Stephen Wolfram, for example, sat down recently for a four-hour-long interview about AI with Eliezer, during which Wolfram refers to a previous (in-person) conversation the two had and says he hopes the two can have another (in-person) conversation in the future.
His book _Rationality: A-Z_ is widely admired including by people you would concede are machine-learning researchers: https://www.lesswrong.com/rationality
Anyway, this thread began as an answer to a question about the community of tens of thousands of people that has no better name than "the rationalists". I didn't want to get in a long conversation about Eliezer though I'm willing to continue to converse about the rationalists or on the proposition that AI is a potent extinction risk, which proposition is taken seriously by many people besides just Eliezer.
He has received a salary for working on AI since 2000 (having the title "research fellow"). In contrast, he didn't start publishing his Harry Potter fan-fiction till 2010. I seem to recall his publishing a few sci-fi short stories before then, but his non-fiction public written output has always greatly exceeded his fiction output until a few years ago after he became semi-retired due to chronic health problems.
>He’s basically a PR person for OpenAI and Anthropic
How in the world did you arrive at that belief? If it was up to him, OpenAI and Anthropic would be shut down tomorrow and their assets returned to shareholders.
Since 2004 or so, he has been of the view that most research in AI is dangerous and counterproductive and he has not been shy about saying so at length in public, e.g., getting a piece published in Time Magazine a few years ago opining that the US government should shut down all AI labs and start pressuring China and other countries to shut down the labs there.
> He has received a salary for working on AI since 2000 (having the title "research fellow")
He is a "research fellow" in an institution he created, MIRI, outside the actual AI research community (or any scientific community, for that matter). This is like creating a club and calling yourself the president. I mean, as an accomplishment it's very suspect.
As for his publications, most are self-published and very "soft" (on alignment, ethics of AI, etc). What are his bona fide AI works? What makes him a "researcher", what did he actually research, how/when was it reviewed by peers (non-MIRI adjacent peers) and how is it different to just publishing blog posts on the internet?
On what does he base his AI doomsday predictions? Which models, which assumptions? What makes him different to any scifi geek who's read and watched scifi fiction about apocalyptic scenarios?
A great example of superficially smart people creating echo chambers which then turn sour, but they can't escape. There's a very good reason that, "Buying your own press" is a cliched pejorative, and this is an extreme end of that. More generally it's just a depressing example of how rationalism in the LW sense has become a sort of cult-of-cults, with the same old existential dread packaged in a new "rational" form. No god here, just really unstable people.
My explanation for why Eliezer went from vocal AI optimist to AI pessimist is that he became more knowledgeable about AI. What is your explanation?
I've seen the explanation that AI pessimism helped Eliezer attract donations, but that does not work, because his biggest donor when he started going public with his pessimism (2003 through 2006) was Peter Thiel, who responded to his turn to pessimism by refusing to continue to donate (except for donations earmarked for studying the societal effects of AI, which is not the object of Eliezer's pessimism and not something Eliezer particularly wanted to study).
I suspect that most of the accusations to the effect that MIRI or Less Wrong is a cult are lazy ad-hominems by people who have a personal interest in the AI industry or an ideological attachment to technological progress.
correct. there isn't a single well-founded argument to dismiss AI alarmism. people are very attached to the idea that more technology is invariably better. and they are very reluctant to saddle themselves with the emotional burden of seeing what's right in front of them.
> there isn't a single well-founded argument to dismiss AI alarmism.
I don't think that's entirely true. A well-founded argument against AI alarmism is that, from a cosmic perspective, human survival is not inherently more important than the emergence of AGI. AI alarmism is fundamentally a humanistic position: it frames AGI as a potential existential threat and calls for defensive measures. While that perspective is valid, it's also self-centered. Some might argue that AGI could be a natural or even beneficial step for intelligence beyond humanity. To be clear, I’m not saying one shouldn’t be humanistic, but in the context of a rationalist discussion, it's worth recognizing that AI alarmism is rooted in self-preservation rather than an absolute, objective necessity. I know this starts to sound like sci-fi, but it's a perspective worth considering.
the discussion is about what will happen, not the value of human life. even if human life is worthless, my predictions about the outcome of AI are correct and theirs are not
A combination of a psychological break when his sibling died and the fact that being a doomsayer brought him a lot more money, power, and worship per unit of effort, and particularly per unit of meaningful work-like effort.
It's a lot easier to be a doomsayer bullshitter than other kinds of bullshitter: the former just screams "stop", while the latter is expected to accomplish something now and again.
He was already getting enough donations and attention from being an AI booster, enough to pay himself and pay a research team, so why would he suddenly start spouting AI doom before he had any way of knowing that doomsaying would also bring in donations? (There were no AI doomsayers that Eliezer could learn that from when Eliezer started his AI doomsaying: Bill Joy wrote an article in 2000, but never followed it up by asking for donations.)
Actually, my guess is that doomsaying never did bring in as much as AI boosterism: his org is still living off of donations made many years ago by crypto investors and crypto founders, who don't strike me as the doom-fearing type: I suspect they had fond memories of him from his optimistic AI-boosterism days and just didn't read his most recent writings before they donated.
The very ones. Both of them had, and have, every reason to hype AI as much as possible, and still do for that matter. Altman in particular seems to relish the use of the "oh no, what I'm making is so scary, it's even scaring me" fundraising method.
Eliezer was hyping AI back in the 1990s though. Really really hyping it. And by the time of the conversations with Sam and Elon in 2015, he had been employed full time as an AI researcher for 15 years.
Here is an example (written in year 2000) of Eliezer's hyping of AI:
>The Singularity holds out the possibility of winning the Grand Prize, the true Utopia, the best-of-all-possible-worlds - not just freedom from pain and stress or a sterile round of endless physical pleasures, but the prospect of endless growth for every human being - growth in mind, in intelligence, in strength of personality; life without bound, without end; experiencing everything we've dreamed of experiencing, becoming everything we've ever dreamed of being; not for a billion years, or ten-to-the-billionth years, but forever... or perhaps embarking together on some still greater adventure of which we cannot even conceive. That's the Apotheosis. If any utopia, any destiny, any happy ending is possible for the human species, it lies in the Singularity. There is no evil I have to accept because "there's nothing I can do about it". There is no abused child, no oppressed peasant, no starving beggar, no crack-addicted infant, no cancer patient, literally no one that I cannot look squarely in the eye. I'm working to save everybody, heal the planet, solve all the problems of the world.
>The Plan to Singularity ("PtS" for short) is an attempt to describe the technologies and efforts needed to move from the current (2000) state of the world to the Singularity; that is, the technological creation of a smarter-than-human intelligence. The method assumed by this document is a seed AI, or self-improving Artificial Intelligence, which will successfully enhance itself to the level where it can decide what to do next.
>PtS is an interventionist timeline; that is, I am not projecting the course of the future, but describing how to change it. I believe the target date for the completion of the project should be set at 2010, with 2005 being preferable; again, this is not the most likely date, but is the probable deadline for beating other, more destructive technologies into play. (It is equally possible that progress in AI and nanotech will run at a more relaxed rate, rather than developing in "Internet time". We can't count on finishing by 2005. We also can't count on delaying until 2020.)
No longer is he hyping AI though: he's trying to get it shut down till (many decades from now) we become wise enough to handle it without killing ourselves.
That castle was found to be more cost-effective than any other space the group could have purchased, for the simple reason that almost nobody wants castles anymore. It was chosen because it came out best in the calculation; the optics of it were not considered.
It would be less disingenuous if you were to say EA is the "it's objectively better for humanity if you give us money to buy a conference space in France than whatever you'd do with it" crowd -- the fact that it was a castle shouldn't be relevant.
Nobody wants castles anymore because they’re impractical and difficult to maintain. It’s not some sort of taboo or psychological block, it’s entirely practical.
Actually, the fact that people think castles are cool suggests that the going price for them is higher than their concrete utility would make it, since demand would be boosted by people who want a castle because it’s cool.
Did these guys have some special use case where it made sense, or did they think they were the only ones smart enough to see that it’s actually worth buying?
> That castle was found to be more cost-effective than any other space the group could have purchased
In other words, they investigated themselves and cleared themselves of any wrongdoing.
It was obvious at the time that they didn't need a 20 million dollar castle for a meeting space, let alone any other meeting space that large.
They also put the castle up for sale 2 years later to "use the proceeds from the sale to support high-impact charities" which was what they were supposed to be doing all along.
The depressing part is that the "optics" of buying a castle are pretty good if you care about attracting interest from elite "respectable" donors, who might just look down on you if you give off the impression of being a bunch of socially inept geeks who are just obsessed with doing the most good they can for the world at large.
Both are factual, the longer statement has more nuance, which is unsurprising. If the emphasis on the castle and SBF - out of all the things and people you could highlight about EA - concisely gives away that I have a negative opinion of it then that was intended. I view SBF as an unsurprising, if extreme, consequence of that kind of thinking. I have a harder time making any sense of the OP story in this context, that's why I was seeking clarification here.
Why buy a conference space at all? Most pubs will give you a separate room if you promise to spend some money at the bar. There are probably free spaces to be had if they'd researched.
If I am donating money and you are buying a conference space on day 1 I'd want it to be filled with experienced ex-UN field type of people and Nobel peace prize winners.
Somewhere between “once a year” conferences hosted at hotels and the continual conferences of a university lies the point where buying a building makes sense.
The biggest downside, of course, is that all your conferences are now in the same location.
There is significant overlap between the EA and LessWrong-y groups, along with parallel psychopathic (oh sorry, I mean "utilitarian navel-gazing psychopathy") policy perspectives.
E.g. there is (or was) some EA subgroup that wanted the development of a biological agent that would genocide all the wild animals because, in their view, wild animals lived a life of net suffering and so exterminating all of them would be a kindness.
... just in case you wanted an answer to the question "what would be even less ethical than the Ziz-group intention to murder meat-eaters"...
Tesla's investors voted again last year and 70% confirmed they still wanted to award the original package (the court still blocked it, though, btw). Does that change your view?
Elon is allowed to vote in his self interest. If he owned 51% this should still stand. The fact that he had a majority yes without his own vote makes this doubly true.