
This post from three months ago discusses the donut tutorial (a famous Blender course) and another one I'd never heard of.

https://old.reddit.com/r/blender/comments/1eekomd/beginner_d...

I think to get an expert answer on the topic you'll need to listen to what a beginner has to say, so I believe the link I provided is a good source to look into.


I believe that 0 will be a higher number next year. And an even higher one the following year.

Even in a year, I don't think random AI will be "cheap" enough for spamming CAPTCHA on random websites. Maybe for select, ripe targets (your bank, etc.). But for a random business with a form?

Nah.


The term "indexes" serves both as the third-person singular present tense of the verb "to index" and as a plural noun form of "index." In contrast, "indices" is the traditional plural form of "index," particularly prevalent in mathematical and scientific contexts. While "indexes" is commonly used in general English, "indices" is often preferred in technical fields to maintain linguistic precision. Employing "indices" in such contexts helps distinguish between the action of indexing and the plural form of index, thereby enhancing clarity.

FWIW, both are fine (https://www.nasdaq.com/articles/indexes-or-indices-whats-the...), and SQLite and PostgreSQL documentation (as two popular examples) use "indexes".

Try pluralizing "time series". You won't get far.

So what I've seen in Finland is people using "time series" for the plural and "time serie" for the singular.


I wonder if one could make a grammar-argument that it's like "Attorneys General." :p

Says who with what authority?

All major RDBMS use the term "indexes".


It depends on your audience. If you're catering to academics, use "indices." If you're catering to the general public, "indices" comes off as pompous.

Nope. Academics prefer “indexes” when discussing databases.

Yes! And the most interesting thing here, to me, is how the crowd splits into two groups: one group basically saying "That's how it goes; if you want to get paid (and you're not intelligent if you don't) you need to play this game, even if you don't like it" and the other saying "This is unacceptable. We need to first recognize that the state of things is not good, so that we can then act and change them. So let's start by stating that this should not be accepted. It's wrong to accept it".

I belong to the latter group.


What is the alternative, though? This is an honest question, I really want to know: How would the "this is horrible, how can you deign to work that way?" crowd coordinate thousands of people on a project to create something that is bigger than what 5–20 people can create?

Because most of the answers I see here gloss over that part, or strongly imply that engineers will always decide better than business people what should be released. And I can sympathize, especially if you are in an MBA-led org, but I am also certain that if you think you know perfectly what the enterprise or customer needs, and that anyone opposing you is a Pointy-Haired Boss, then you are most probably the idiot in that case: 90% of the time a single dev will NOT have better business intelligence than everyone else.


> What is the alternative, though? This is an honest question, I really want to know: How would the "this is horrible, how can you deign to work that way?" crowd coordinate thousands of people on a project to create something that is bigger than what 5–20 people can create?

Nobody needs to be co-ordinating thousands of people. 5–20 people can create Instagram. The entire problem in these companies is that leadership is so out of touch they cannot differentiate between a checklist and a product, and empire-building is their proxy for value. The solution is to change the leadership, but it is usually too late in large orgs (the new leadership has to be brought in somehow from somewhere, and that will be done the same way the current leadership happened).

So the real solution is for those who care to go elsewhere, out-compete, and out-succeed. Then quit after acquisition, if such a thing happens.


"Just never develop software in a medium or large company" is a take, I'll give you that. I'm pretty sure that the vast majority of software developed is vastly more productive than Instagram, and tied to real world processes that need coordination. Your 5 people team will do fuck all to program the control software for a crane arm activator, because they'll never get even close, and if they were actually given access, you would have to coordinate with hardware people and actual engineers (the ones with detailed plans and calculations before building anything, I mean, not us software "engineers"), you would have to figure out what the construction companies using it actually need from your software. A similar story could be told for most any area of software development - it generally is tied to other areas, and unicorn examples of how wonderfully easy the world is if you develop a time wasting app for phones is not applicable to the majority of us.

I totally agree with your last point. Not least for the egotistical reason of a higher chance of better products for me, that way :)


I think the first group is more of “this is how it is” rather than “you’re not intelligent if you don’t play the game”. I personally don’t find this status quo good or healthy either, but I (or anybody) have little control beyond formulating policies at a company at best, or, realistically, changing jobs. (Fortunately my current place seems quite decent in this regard so I’m good/lucky.)

Agreed. What I am a little surprised by is that some engineers seem to be trying to defend something that is knowingly bad just because it is the status quo.

BTW, I don’t think catering to upper management as described is the only way to get paid. I ignored many of upper management’s preferences because they were plainly stupid, and in some situations I got fired for it. But I still made good career growth, and so have many of my friends. In fact, being in a conflict and holding my position tends to give me a higher rate of return, even when not being able to “ship” as defined in the OP.


Given that the year is divisible by four, so it is on people's minds, I wonder if that divide of "acceptance of what is and working with that" vs "we can be better if we make it so" is also the divide that manifests between the two parties in the US political system.

I found its repository after searching Google. The license is MIT.

[1] https://github.com/playcanvas/supersplat

I remember a time when it was considered impolite to ask a question without googling first. Is that still the case?


> I remember a time when it was considered impolite to ask a question without googling first. Is that still the case?

Yes


As I recall it, the asshole move was to reply to a question with an LMGTFY link.


Those were the times when you could actually rely on Google to give you the right results for such a query near the top, and more importantly, give you the same results it would give to the person you're telling to Google it.


Kagi allows you to share a URL to a specific search results page. Maybe it's time to revive the idea...


It's not.

First, Kagi might give the same results today, but what about tomorrow or a year from now? Will Kagi still exist a year from now or will Kagi links all be broken?

A better idea is to follow this HN guideline here and everywhere:

> Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

The kind, non-snarky response to something that could be searched for is to simply answer the question.


> First, Kagi might give the same results today, but what about tomorrow or a year from now?

When you share from Kagi you share the actual results, not the search that led to them. I believe they don't change over time. If they ever disappear, the next search engine will be just as good for that same question.

> The kind, non-snarky response to something that could be searched for is to simply answer the question.

A few times, sure. There are things that cross the line in my mind though. You do occasionally run into questions which took longer to write than it takes to check. I think it's ok to discourage them slightly while still providing the obvious answer. LMGTFY worked great for those because you know what is and always will be the first answer for them.


The most annoying thing is when you google some question or problem, you find a forum where someone is asking your exact same question and people just tell them to google it. :D


Have you tried https://aider.chat ?


I tried it yesterday and wasn't successful. I spent like 30 minutes trying to get it to make a simple change. Every time, it made that change but also several others which I didn't ask for. I asked it to undo those other changes and it undid everything, or did other unrelated things.

It works well until it doesn't.

It's definitely a useful tool and I'll continue to learn to use it. However, it is absolutely stupid at times. I feel there's a very high bar to using it, much higher than with traditional IDEs.


Which LLM were you using? I’ve had a great experience with Aider and Claude Sonnet 3.5 (which is not coincidentally at the top of the Aider leaderboard).


I've been using the Claude Dev VSCode extension (which just got renamed but I forget the new name); I think it's similar to Aider except that it works via a GUI.

I do find it very useful, but I agree that one of the main issues is preventing it from making unnecessary changes. For example, this morning I asked it to help me fix a single specific type error, and it did so (on the third attempt, but to be fair it was a tricky error). However, it persistently deleted all of the comments, including the standard licensing info and explanation at the top of the file, even when I end my instructions with "DO NOT DELETE MY COMMENTS!!".


You may want to peek at the system prompts Aider uses. I think this is part of the secret sauce that makes it so good.

https://github.com/Aider-AI/aider/blob/main/aider/coders/edi...

excerpt:

    Act as an expert software developer.
    Always use best practices when coding.
    Respect and use existing conventions, libraries, etc that are already present in the code base.
    {lazy_prompt}
    Take requests for changes to the supplied code.
    If the request is ambiguous, ask questions.

    Always reply to the user in the same language they are using.

    Once you understand the request you MUST:
    ...


I am a big fan!


Can those kinds of things work in monorepos with 50 million files?


They use this thing called a repo map [1]. I’ve only used it for personal projects and it’s been great. You need to add the files you care about yourself; it’ll do its best and pull in additional files from the repo map if needed.

Since it’s git-based, it makes it very easy to keep track of the LLM’s output. The agents are really well done too. I like to skip auto-commit so I can "git reset --hard HEAD^1" if needed, but aider has a built-in "undo" command too.

[1] https://aider.chat/docs/repomap.html
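
If you're curious what the repo map amounts to conceptually: it's a condensed outline of the repository (file names plus the signatures defined in them) that fits in the prompt, so the model can ask for the files it actually needs. Aider's real implementation uses tree-sitter parsing and a ranking pass; the naive regex sketch below is only meant to show the shape of what the model sees, and every name in it is made up.

    // repoMapSketch.ts - rough idea only, NOT aider's implementation
    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // Recursively list TypeScript source files under a directory.
    function* walk(dir: string): Generator<string> {
      for (const entry of readdirSync(dir, { withFileTypes: true })) {
        const path = join(dir, entry.name);
        if (entry.isDirectory()) yield* walk(path);
        else if (path.endsWith(".ts")) yield path;
      }
    }

    // Build a compact "file -> exported signatures" outline to hand to the model.
    export function repoMap(root: string): string {
      const lines: string[] = [];
      for (const file of walk(root)) {
        const src = readFileSync(file, "utf8");
        // Crude signature extraction: exported functions and classes only.
        const sigs = src.match(/^export (async )?(function|class) \w+.*$/gm) ?? [];
        lines.push(file, ...sigs.map((s) => "  " + s));
      }
      return lines.join("\n");
    }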


That's a cool idea, kind of reminds me of ctags.


Aider had actually used ctags to implement that feature before they switched to tree-sitter.


Can you work in a repo with fifty million files? Can Git? I just checked on my Windows machine using Everything and there are 15,960,619 files total including every executable, image, datafile, &c.

Out of curiosity what does your IDE do when you do a global symbol rename in a repository with fifty million files?

I'm absolutely a real human, and I think this just might be too much context for me! Perhaps I am not general enough.


I thought this was common knowledge but I guess not: Google's monopoly famously has over a billion files. No, Git cannot handle it. Their whole software stack is developed around this from the ground up. But they are one of the largest software employers in the world, so quite a few engineers evidently do make do with 20x more than 50 million files.


Monopoly <> Monorepo. This is the funniest typo possible in the context of Google.


Having worked on a codebase like that, you need to use some extra plugins to get git to work. And even then, it’s very slow. Like 15-30 seconds for a git status to run even with caching. Global renames with an IDE are impossible but tools like sed and grep still work well. Usually there is a module system that maps to the org structure and you don’t venture outside of your modules or dependency modules very often.


No and neither can you. Like you, it works best with small, focused context.

These tools aren’t magic. But they do certain tasks remarkably well.


> No and neither can you.

People do work on monorepos with 50 million+ files, though…


I made a tool that allows you to use LLMs on large codebases. You can select which files are relevant and embed them into the prompt: https://prompt.16x.engineer/

Based on my personal experience it works well as long as each file is not too long.
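
For anyone wondering what "embed them into the prompt" means mechanically, a minimal sketch of the general approach looks something like this (not the actual tool's code; the size cap, file list and prompt wording are all made up):

    // buildPrompt.ts - naive sketch: concatenate hand-picked files into one LLM prompt
    import { readFileSync } from "node:fs";

    const MAX_CHARS_PER_FILE = 8_000; // arbitrary cap so no single file dominates the context

    export function buildPrompt(task: string, files: string[]): string {
      const sections = files.map((path) => {
        let body = readFileSync(path, "utf8");
        if (body.length > MAX_CHARS_PER_FILE) {
          body = body.slice(0, MAX_CHARS_PER_FILE) + "\n/* ...truncated... */";
        }
        return `### ${path}\n${body}`;
      });
      return `${task}\n\nRelevant files:\n\n${sections.join("\n\n")}`;
    }

    // e.g. buildPrompt("Fix the type error in parseConfig", ["src/config.ts", "src/types.ts"]);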


I believe they can as long as you're able to identify a contained task that touches no more than a handful of files. Still very useful to automate some tedious work or refactoring if you ask me.


That's effectively an answer of "no".


I used to work in a monorepo of that size.

All of the PRs I ever submitted touched a handful of files in my project’s subdirectory.


That's effectively an answer of "yes".

Or what does "yes" look like to you? That it can do all the work itself, for a 50m-file monorepo, without a human telling it which files to look at?

If that were true, human programmers would be considered obsolete today. There would be exactly zero human programmers making any money in 2025.


It doesn't take the whole repo as context, it tries to guess which files to look at. So if you prompt with that in mind, it works well. Haven't tried it on a very large codebase though. You can also explicitly add files, if you know where work should be done.


You can make it work. Just think through the many possible approaches and you'll see that there are actually quite a few viable ways to work around the lack of a pseudo-infinite context.


> I'm pretty sure effective altruists have the same question, crunched the numbers, and figured out it was something like malaria nets or similar interventions in developing countries.

I wonder how much real world experience goes into these calculations. For instance the theory of malaria nets can be quite different from reality. From Wikipedia [1]:

> Where mosquito nets are freely or cheaply distributed, local residents sometimes opportunistically use them inappropriately, for example as fishing nets. When used for fishing, mosquito nets have harmful ecological consequences because the fine mesh of a mosquito net retains almost all fish, including bycatch such as immature or small fish and fish species that are not suitable for consumption. In addition, insecticides with which the mesh has been treated, such as permethrin, may be harmful to the fish and other aquatic fauna.

[1] https://en.wikipedia.org/wiki/Mosquito_net#Usage


Do I understand correctly that you could do this with OBS on any platform, including Wayland? I'm reading many comments that make me think either many people don't know about OBS, or I'm overestimating its abilities.


You probably can. I never used OBS, but it's probably a bit more than a 20kb binary though ;-)


I don't understand, what is the significance of a 20kb binary? The only person using this would be someone who takes Zoom meetings on a company-issued computer and I can't imagine such machines are disk space-constrained.


I'm not aware of company-issued computers with X11. Is that really a thing?


Some companies let you run Linux on their company issued computer.


It can do that, yes, but it's a bit more work. There are several GUI hoops you'll have to jump through to get that to work, and if you have to adjust it each and every time before a meeting, then it would become burdensome. But yes, it can be done.


OBS lets you share a window or just the client area of an app.


With OBS, you can add an entire screen to your canvas and then add a filter to crop it down to a particular part of that screen. This nets you the same results as the small C++ tool being proposed here.

A lot more work involved, though.


We tried to use it alongside PouchDB to provide an offline-first experience. We ran into a lot of issues, mainly with PouchDB bugs. Then, after reading this list [1] of issues in the CouchDB architecture, we decided to ditch it and stop trying to make PouchDB work.

[1] https://news.ycombinator.com/item?id=17115649


Out of curiosity, if you would like to start a new, offline-first project that needs syncing, which database would be the best candidate today?


One interesting idea I’ve seen is none[1]. That’s definitely more on the exploration side than the shipping one, though.

[1] https://tonsky.me/blog/crdt-filesync/
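
To give a flavour of the CRDT idea behind that approach (this is not the algorithm from the linked article, just a minimal last-writer-wins map; the timestamp/replica-id scheme is deliberately simplified):

    // lwwMap.ts - toy last-writer-wins map to illustrate why merging plain state files can be safe
    type Entry = { value: string; timestamp: number; replica: string };
    type LwwMap = Record<string, Entry>;

    // merge is commutative, associative and idempotent, so two replicas that
    // exchange their state files in any order converge to the same map.
    export function merge(a: LwwMap, b: LwwMap): LwwMap {
      const out: LwwMap = { ...a };
      for (const [key, entry] of Object.entries(b)) {
        const current = out[key];
        if (
          !current ||
          entry.timestamp > current.timestamp ||
          (entry.timestamp === current.timestamp && entry.replica > current.replica)
        ) {
          out[key] = entry; // newer write wins, ties broken by replica id
        }
      }
      return out;
    }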


I can think of two complex pieces of software which intentionally shunned databases, thinking they could just do their thing in "flat files": git and DokuWiki.

What ended up happening is that both developed their own custom shi*ty database engines, because it turns out that things like indexes are just generally pretty useful, but doing them right is pretty damn difficult. I'm pretty sure that if Linus/git had chosen e.g. SQLite instead of "flat files", we'd have way fewer data corruption problems, and a more capable/extensible git.


I don’t know if Git’s storage is deserving of this level of scorn—I can’t lay claim to any in-depth knowledge, but if it were indeed a big problem I’d expect it to come up frequently in comparisons with Fossil, which stores things using SQLite. (As a counterpoint, I use git-annex quite a bit, and it almost certainly couldn’t integrate as neatly into Fossil’s storage approach as it does into Git’s.) So I’d appreciate any details here.

All that is beside the point, though: the article above is not about using or not using flat files as a storage primitive, it’s about using files of whatever nature as a replication and version reconciliation mechanism, in view of the fact that concurrent editing is inevitably application-specific, so we might as well lean into it instead of leaving it to a database. In that sentence, “a database” is not just any database, it’s one of a very short list of multimaster databases with relatively loose schemas, which includes CouchDB and—among legitimately FOSS projects—I’d struggle to name more.

This is not a decision about data storage at all, in other words. It is a decision about protocols. Experience shows that the alternative does not end up being an off-the-shelf database (even CouchDB, which does seem like a major road not taken looking back at Canonical’s efforts a decade ago), the alternative is usually a central synchronization server speaking a custom protocol. (CalDAV, CardDAV, Bitwarden, etc.)

And if you want to do your CRDT or OT or whatnot over per-client SQLite databases instead of per-client text files, all the more power to you.

Finally, I tried to phrase my comment above in a way that makes it clear that it’s a suggestion of a direction to have fun in, not of a principle to architect your production app around. So the sneering in your comment is... honestly disheartening to read. Like, do people even hack anymore? I know they do, but every time I read something like this I become a little bit less sure of it.


Hum, allow me to toot my own horn here: I wrote something to that effect in 2021. https://raphael.lullis.net/thinking-heads-are-not-in-the-clo...


I am (very) biased but Couchbase has a pretty solid Mobile offering for native apps. I have worked on the Sync Gateway component responsible for replication for the last six years.

Sync Gateway still maintains a CouchDB-compatible REST API, and PouchDB _mostly_ works thanks to that, but there are some corner cases and features that PouchDB does not support, so YMMV with it. Our native app libraries have used a more performant websocket-based replication protocol for many, many years now, and I'd really love to have the time to investigate a PouchDB adapter using this WS protocol instead.


I'm biased since I work on it, but https://ditto.live/ provides an SDK that allows P2P and cloud sync. You interact with it like a database: write your queries against your data, and it will move between devices automatically


I would still recommend you give CouchDB and PouchDB a fair shot. They are used successfully by many folks.
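
The core sync loop is also pleasantly small. A minimal sketch of continuous PouchDB replication (the database name and server URL are placeholders):

    // sync.ts - two-way live replication between a local PouchDB and a CouchDB-compatible server
    import PouchDB from "pouchdb";

    const local = new PouchDB("notes"); // stored on the device / in the browser
    const remote = new PouchDB("https://couch.example.com/notes"); // placeholder endpoint

    // Continuous bidirectional replication; works offline and catches up when back online.
    local
      .sync(remote, { live: true, retry: true })
      .on("change", (info) => console.log("replicated", info.direction))
      .on("error", (err) => console.error("sync error", err));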


I tried to subscribe to an online newspaper in my country.

Subscription was very easy. When I wanted to end it, they made it purposefully hard for me to do. I think it's unacceptable, and should be outlawed: the process to unsubscribe should be as hard (or easy) as the one to subscribe.

Since newspapers (at least that specific one) have no problem doing this as long as it's legal, why should the public be more observant of ethics?


If Revolut is available in your country, create a new virtual card for each subscription and when you can't/don't want to pay them anymore just cancel the card in Revolut.


Also works with Wise (used to be TransferWise).

