mattbuilds's comments

Is it really the best approach, though, to sink all this capital into it if it can never achieve AGI? It’s wildly expensive, and if it doesn’t deliver on all the lofty promises, it will be a large waste of resources IMO. I do think LLMs have use cases, but when I look at the current AI hype, the spend doesn’t match up with the returns. I think AI could achieve this, just not with a brute-force approach like this.


There's an even more fundamental question before getting there: how are we defining AGI?

OpenAI defines it based on the economic value of output relative to humans. Historically it had a much less financially oriented definition and general expectation.


You really can't take anything OpenAI says about this kind of thing seriously at this point. It's all self-serving.


It's still important, though: they are the ones many are expecting to lead the industry (whether that's an accurate expectation is surely up for debate).


The market will sort that out, just like it did the dotcom bubble or tulip mania.

Another big pushback is copyrighted content. Without a proper revenue model, how do you pay for that?

That will also restrict what can be "learned". There are already lawsuits, allegations of using pirated books, etc.


I'll be surprised if anything meaningful comes of those issues in the end.

Copyright issues here feel very similar to claims against Microsoft in the 80s and 90s.


Just to further your point, we are in a thread about Kubrick, who did numerous book adaptations including Lolita, Dr. Strangelove, The Shining, and A Clockwork Orange, and that's just off the top of my head. Tons of directors adapt novels. Bringing the story to the screen is the skill.


As an aside, 2001 is an interesting case as it was produced concurrently with the novel. It's clearly not an adaptation, but I wouldn't say it's clearly original material either.


It’s funny because in my many years of development I don’t think I’ve ever encountered a “mess of shell scripts” that was difficult to maintain. They were clear, did their job, and if they needed to be replaced it was usually simple and straightforward.

Can’t say the same for whenever the new abstraction of the day comes along. What the OP is saying matches my experience exactly: the abstractions get picked not because they are best but because they reduce liability.


Hello. I have found the mess of shell scripts. Please don't do this.

I was able to deal with the weird Skaffold mess by getting rid of it and replacing it with Argo CD. I was able to get rid of Jenkins by migrating to GitHub Actions. I have yet to replace the magic servers with their magic bash scripts; they take just enough effort that I can't spend the time.

Use a tool I can google. If your bash script really is that straightforward, takes you from standard A to standard B, and is in version control, then bash is AMAZING. Please don't shove a random script that does a random thing onto a random server.
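To be concrete, this is roughly the kind of script I mean; just a sketch, with the image name and registry made up:

  #!/usr/bin/env bash
  # build the current commit and push it to the registry -- nothing else
  set -euo pipefail
  TAG="$(git rev-parse --short HEAD)"
  docker build -t registry.example.com/app:"$TAG" .
  docker push registry.example.com/app:"$TAG"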


Bash is good but can grow out of control. The problem is solo engineers and managers who push/approve 500+ line bash scripts that do way too much. A good engineer will say it's getting too complicated and reimplement it in Python.


Wasn't there a rule about that?

Something like "in software development the only solution that sticks is the bad one, because the good ones will keep getting replaced until it's so bad, nobody can replace it anymore"


i have encountered messes of shell scripts that were difficult to maintain; in my first sysadmin job in 01996 i inherited a version control system written as a bunch of csh scripts, built on top of rcs

but they were messy not because they lacked 'abstractions' but because they had far too many

i think shell scripts are significantly more bug-prone per line than programs in most other programming languages, but if the choice is hundreds of thousands of lines in an external dependency, or a ten-line or hundred-line shell script, it's easy for the shell script to be safer


If it was in RCS, then you could directly move the archives under a CVSROOT and use them natively.

CVS had been out since Brian Berliner's version of 1989.

I actually moved a PVCS archive into RCS->CVS this way, and I'm still using it.
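Roughly, the move is just copying the ,v files into a module directory under the new CVSROOT (paths here are made up):

  $ export CVSROOT=/var/lib/cvsroot
  $ cvs init
  $ mkdir $CVSROOT/myproject
  $ cp /old/project/RCS/*,v $CVSROOT/myproject/
  $ cvs checkout myproject   # history comes along for free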


that version control system provided a number of facilities cvs didn't (locking, and also a certain degree of integration with our build system permitting the various developers to only recompile the part of the system they were working on, which was important because recompiling the whole thing usually took me about a week, once a month), but it had never actually occurred to me that turning an rcs repository into a cvs repository like that was a possibility. also i never realized pvcs used rcs under the covers. thank you very much


PVCS did not use the RCS format, but the RPM distribution included a perl script to convert the archives.

  $ rpm -ql cvs | grep pvcs
  /usr/share/cvs/contrib/pvcs2rcs


ooh. that would have been very useful two jobs later when i got stuck with pvcs


Shell seems great until you're tens of lines in, googling every other line of obscure, error-prone syntax.


… or maybe you are not proficient at shell scripting? I never had this issue, including on large projects written in Tcl, bash or Perl in the 90s, when it was more normal to do so.

The modern answer seems to be some kind of DSL with YAML syntax mixed with Unix (and thus bash) snippets, which are often incredibly verbose and definitely not easier to read than a well-written bash script. The only thing I think of when I see those great solutions is: another Greenspun's tenth rule in action.


Bash and other sh-related approaches have a lot of "foot guns". Python, or PowerShell, or even C++ is often easier to read and follow.

> are often incredibly verbose and definitely not easier to read than a well written bash script

Define "well written" -- we're getting into no true Scotsman territory here.

Bash is fine for what it was and what it did, and I'm glad to know enough sed and awk to be dangerous, but it's a PITA unless we're forced to use it.
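A couple of the classics, just for illustration (file names made up):

  f="my file.txt"
  rm $f            # unquoted: tries to remove "my" and "file.txt"
  rm "$f"          # quoted: removes the one file you meant

  grep foo missing.log | sort   # exits 0 anyway, unless you set -o pipefail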


The way I tend to look at it is to solve the problem you have. Don't start with a complicated architecture because "well once we scale, we will need it". That never works and it just adds complexity and increases costs. When you have a large org and the current situation is "too simple", that's when you invest in updating the architecture to meet the current needs.

This doesn't mean you shouldn't be forward-thinking. You want the architecture to support growth that will more than likely happen; just keep the expectations in check.


> Don't start with a complicated architecture because "well once we scale, we will need it".

> You want the architecture to support growth that will more than likely happen

The problem is even very experienced people can disagree about what forms of complexity are worth it up-front and what forms are not.

One might imagine that Google had a first generation MVP of a platform that hit scaling limits and then a second generation scaled infinitely forever. What actually happens is that any platform that lives long enough needs a new architecture every ~5 years (give or take), so that might mean 3-5 architectures solving mostly the same problem over the years, with all of the multi-year migration windows in between each of them.

If you're very lucky, different teams maintain the different projects in parallel, but often your team has to maintain the different projects yourselves because you're the owners and experts of the problem space. Your leadership might even actively fend off encroachment from other teams "offering" to obsolete you, even if they have a point.

Even when you know exactly where your scaling problems are today, and you already have every relevant world expert on your team, you still can't be absolutely certain what architecture will keep scaling in another 5 years. That's not only due to kinds of growth you may not anticipate from current users; it's also due to entirely new requirements with their own cost model, and new users bringing their own workloads, whether against old or new requirements.

I've eagerly learned everything I can from projects like this and I am still mentally prepared to have to replace my beautifully scaling architectures in another few years. In fact I look forward to it because it's some of the most interesting and satisfying work I ever get to do -- it's just a huge pain if it's not a drop-in replacement so you have to maintain two systems for an extended duration.


I've had similar things in the past. What was the case for me, and maybe for you, is that I was using the idea of "organizing" and "planning" to procrastinate doing actual work. I was working on difficult problems and I felt like if I could just organize everything correctly, the results would fall into place. This isn't how it works, though; you need to just do the work.

Organization and tools like that can be helpful for communicating and coordinating across a company, but at the individual level it's usually a waste of time. You don't need them. Remember, the whole point is to get work done. If organizing isn't moving you closer to your goal, it's not doing its job.

My suggestion is to do what I do now.

One list: TODO.txt.

I put tasks in priority order and just work through them. Sometimes I write a note or an idea at the bottom, but that's it. No overcomplication. It only has what I need to get the next thing done and keep moving through my tasks.
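For what it's worth, on a given day it might look something like this (contents made up):

  $ cat TODO.txt
  1. fix the login timeout bug
  2. write the migration for the orders table
  3. update the deploy docs
  idea: cache the nightly report query?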


A more or less single-todo-file system got me through college, it’s a great approach.


The reason is growth. A sysadmin will make sure the lights stay on and the place doesn’t burn down, but they aren’t creating a new revenue stream. And when all these companies need to grow forever, they need to be creating new things, which developers do and sysadmins don’t.


And that is why platform/devops is safer than the sysadmin. Not only do they help developers create new things, but make it complicated enough that you can't disband their team ... {tongue in cheek, sort of...}


Which is funny, because they aren’t very good at it.


I mean he’s a mouthpiece for Thiel, not really that surprising. His livelihood depends on viewing the world in a very specific way.


I really disagree with the pros listed on over-engineering, specifically "future-proofing" and "reusability". I doubt you can accurately predict the future, and whatever assumptions you make will likely be wrong in some way. Then you are stuck having to solve the problem that you created by trying to predict. As for reusability, it's similar. Start with solving what you have to, then abstract as you see fit. Again, don't try to predict. Be thoughtful and really understand what is actually happening in your system. Don't try to follow some pattern you read online because it seems like a good fit.

Realistically you should engineer for the problem you have or can reasonably expect you are going to have pretty soon. You can solve future problems in the future. I'm also not saying to write horrible unmaintainable code, but don't try to abstract away complexity you don't actually have yet. Abstractions and where to separate things should become apparent as you build the system, but it's really hard to know them until you are actually using it and see it come together.


> I really disagree with the pros listed on over-engineering, specifically "future-proofing" and "reusability"

Yes, someone who argues that "over-engineering" leads to "future-proofing" has caught the bug.

When you future-proof something, that's called "engineering". Over-engineering is by definition failing to foresee future needs: imagining generic future needs ten steps ahead instead of the less ambitious future needs two steps ahead.

It is easier to modify early, simplistic assumptions than it is to walk back from premature generalisations over the wrong things.


Exactly. A thing so small and simple that you can rewrite it in an afternoon is more futureproof than any 8000 LOC monstrosity.


> Exactly. A thing so small and simple that you can rewrite it in an afternoon is more futureproof than any 8000 LOC monstrosity.

As I've said multiple times, here and elsewhere, it is easier to fix the problem of under-engineering than the problem of over-engineering.

I also disagree with the article's "pros" for over-engineering. There is no pro that I can think of that doesn't boil down to resume-driven development.

The pros of under-engineering are (the way you say it) obvious: very little time was spent to figure out that you did it wrong.


Perfectly said, that was the exact point I was trying to make. I've seen so many bad decisions made in the name of "future proofing". Then the future comes and you are fighting those decisions. I wonder if people switch jobs and projects so often that they don't get to see the results of all that future proofing.


> I doubt you can accurately predict the future and whatever assumptions you make will likely be wrong in some way. Then you are stuck having to solve the problem that you created by trying to predict. As for reusability, it's similar. Start with solving what you have to, then abstract as you see it fit.

Kinda sorta. It's not a binary: you can "predict the future," just not too far out and not with complete certainty. The art is figuring out what the practical limits are, and not going past them.

> Realistically you should engineer for the problem you have or can reasonably expect you are going to have pretty soon. You can solve future problems in the future.

Another factor is comprehensibility. Sometimes it makes sense to solve problems you don't technically have, because solving them makes the thing complete (or a better approximation thereof) and therefore easier to reason about later.


Agreed. These feel backwards.

When I think "Under engineer", I think "keep it simple, because you can't predict the future". Simplicity is a great enabler of flexibility and tends to go hand in hand with scalability.


It's usually comparatively easy to make something that's too simple a bit more complex.

It's often much harder to make something that's too complex more simple.


It's interesting to try to fit what is often talked about as "future-proofing" and "reusability" into the development of a general-purpose CPU, since CPUs are in a sense the ultimate reusable system.

In an overly simplified textbook example of designing/building a CPU, you have an ISA you're building the CPU to support. The ISA defines a finite set of operations and their inputs, outputs, and side effects (like storing a value in a particular register). Then you build the CPU to fulfill those criteria.

In my experience, designers that want reusability usually don't have enough precision in how they want to reuse a system so an ISA-like design can't be created.

And practically, it's the rare (I might even say non-existent) day-to-day business problem that needs CPU-like flexibility. Usually a system just needs to support a handful of use cases, like integrating with different payment providers. An interface will suffice.


> Usually a system just needs to support a handful of use cases, like integrating with different payment providers.

Building EV chargers is a good dose of electrical engineering combined with talking to dozens of car models and their own particular quirky interpretations of common protocols, which is like designing websites for a market with dozens of unique browser implementations.

In spite of that, it seems half of the complexity is making sure people pay.


> I doubt you can accurately predict the future and whatever assumptions you make will likely be wrong in some way.

I doubt this for most of us. Computers have been around for a long time. Most of us are not working on new problems. We by now have a pretty good idea of what will be needed and what won't be. There are a lot of things left that haven't been done yet, but if you understand the problem at all you should have a good idea of what those things will be. You won't be 100% correct of course, and exactly when any particular thing you design for will actually get implemented is unknown, but you should already have a good idea of what things your users will want at a high enough level.

Of course, if we ever get something new, you will be wrong. Ten years ago I had no idea that LLM-type AI would affect my program, but it is now foreseeable, even though I don't really know which of the things it can do will turn out useful vs. what will just be a passing fad. Science fiction has 3D displays, holographic interfaces, teleportation, and lots of other interesting ideas that may or may not work out.

Likewise, 20 years ago you could be forgiven for not foreseeing the effects that privacy legislation would have on your app, but you'd better assume now that it will exist and that the laws will change.


I think this is nicely captured by the concept of "cost of carrying".

Keeping code around is not free. Cost of carrying refers to the ongoing effort of maintaining code, not to mention side effects such as increased complexity and cognitive load.

If you over-engineer a system you aren't getting value out of the extra bits, but you are still paying the cost of keeping them around.


Obviously we can’t prove this, but my instincts are that we don’t do things with probabilistic decision paths. Not very scientific of me, but I just don’t buy that’s how we make decisions.


There really isn't room for much else.

Decision making in our universe is a one-dimensional slider between deterministic and random. That's it.

Write a program that makes non-deterministic, non-random (or any combination) decisions. You can't. It's like asking to create a new primary color.

