Hacker News | dsego's comments

Isn't Dvorak optimized for alternating hands, while not leveraging same-hand finger rolls? There are more modern layouts that take rolls into account.

The stagger is beneficial for the right hand but makes it harder for the left. An ortholinear layout is not more comfortable either; ideally there should be either a symmetrical or a columnar stagger.

> isSecure = user.role == 'admin'

I would rather name intermediate variables to match the statement than some possible intent; it's basically a form of semantic compression. For example, isAdminUser = user.role == 'admin' hides away the use of roles, which is not relevant to the conditional, but isSecure can mean anything. We don't want to hide the concept of being an admin user, just the detail of using roles to determine it. At least that's my take.
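A small Python sketch of that contrast (the User class and field names here are made up for illustration):

```python
from dataclasses import dataclass

@dataclass
class User:
    role: str

user = User(role='admin')

# Too abstract: "secure" could mean anything; the admin concept is hidden.
is_secure = user.role == 'admin'

# Matches the statement: only the role mechanism is compressed away,
# while the "admin user" concept stays visible at the call site.
is_admin_user = user.role == 'admin'

if is_admin_user:
    print("showing admin dashboard")
```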


Not the GP. I agree that you can always find a better name, as the old joke goes, naming things is hard.

That said, there is a significant difference in cognitive load between names like isSecure or isAdminUser and something like condition4.

I've had the pleasure of debugging a piece of code that was something like:

     if (temp2 && temp17) temp5 = 1;
In the end, I gave up and just reimplemented it, after studying in detail what its expected inputs and outputs were. (Note: this was before unit tests were the norm, so it was painful.)

Sometimes an established pattern is easier to understand than the "improved" version. In that case convention wins: for example, comparing HTTP codes directly instead of giving them names, since the raw codes are easy to read for anyone who has ever done web dev.
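A quick sketch of the trade-off, using Python's standard-library HTTPStatus for the named variant:

```python
from http import HTTPStatus

status = 404

# Conventional: anyone who has done web dev reads 404 at a glance.
if status == 404:
    outcome = "not found"

# The named version is arguably less direct here: the reader has to
# go confirm what HTTPStatus.NOT_FOUND maps to (it is 404).
if status == HTTPStatus.NOT_FOUND:
    outcome = "not found"
```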

That's my fear: it will become a sort of compiler. Prompts will be the code and code will be the assembly, and nobody will even try to understand the details of the generated code unless there is something to debug. This will make codebases less refined, with less abstraction and more duplication and bloat, but we will accept it as progress.

For me, it makes it more likely I will pick simple abstractions that have good software verification. Right now the idea of a web service that has been proven correct against a spec is ridiculous; no one has time to write that. But it seems likely that sort of thing will become ordinary. Yes, I won't be able to hold the web service in my head, but reviewing it and making correct and complete statements about how it functions will be easier.

Funny, I'd say that codebases nowadays usually have too many abstractions.

Some certainly do. I have also noticed that the format and structure of code depend more on the tools and hardware the developer uses than on some philosophical ideal. A programmer with a big monitor may prefer big blocks of uninterrupted code with long variable names: thanks to the screen area, they can see the whole outline and follow the flow of a long chunk of code. Someone on a small 13" laptop might split big pieces of code into smaller chunks so they don't have to scroll as much, because things would otherwise get hidden. The editor matters too: a coder who relies on the built-in go-to-symbol feature might not care as much about organizing folders and files, since they can jump straight to a method from the command palette, while a colleague who navigates by clicking through folders needs a well-organized file structure to reach the same method.

Those are all examples of why having a single source of code generation would most likely simplify things: we would have a universal code style and logic, instead of every developer reinventing the wheel.

And let's face it, 95% of software isn't exactly novel.


Not sure why Laravel decided on that approach to models. In Django I can update the model properties and generate a migration from them, and I can always open models.py to see which fields a model class has. In Laravel I need to either look at the migration files or inspect the database table definition, and adding fields means writing the migration by hand, I guess. There are other confusing things, like marking fields as fillable in the model: if you fail to add your field to the fillable property, it won't be inserted into the database, and it took me a while to figure out why my code wasn't working. Not sure what purpose it serves. I understand marking things as read-only in serializers, but this is at the data layer, which seems like a footgun.
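To show why this bites, here is a minimal plain-Python sketch of how a fillable-style whitelist behaves; the Model and Post classes are invented for illustration, not Laravel's actual implementation:

```python
class Model:
    fillable = []

    def __init__(self, **attrs):
        # Anything not whitelisted in `fillable` is silently dropped.
        # That is the footgun: no error is raised, the field just
        # never makes it into the saved attributes.
        self.attrs = {k: v for k, v in attrs.items() if k in self.fillable}

class Post(Model):
    fillable = ['title']  # forgot to list 'body'

post = Post(title='Hello', body='Lorem ipsum')
# post.attrs contains only 'title'; 'body' vanished without a warning
```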

I would also recommend A Philosophy of Software Design if you haven't read it, a very short and brilliant read with a similar approach.

There is also a discussion between the authors of Clean Code and A Philosophy of Software Design:

https://github.com/johnousterhout/aposd-vs-clean-code


> x = 4 // assign 4 to x

Ah, the ChatGPT style of comments.

> Instead do something like:

The only negative is that there is a chance the comment becomes stale and falls out of sync with the code. Other coders should be careful to update the comment when they touch the code.


If the what becomes stale, you can tell. If the why becomes stale (and it can become stale), you'll never know, unless the what is also included.

The why becoming stale is a feature: it's how you know there was a VALID reason the code looked weird and convoluted, instead of you completely missing the inherent complexity of the problem.

The reason you want people to document the "why" is because you can easily check if one reason has become stale, but you can never check if every single possible reason is still valid.
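A small example of a comment that carries both, with the why clearly separable from the what (the 8 KB limit and the function itself are made up):

```python
def chunk_payload(data: bytes, limit: int = 8_000):
    # Why: the (hypothetical) upstream API rejects payloads over 8 KB,
    # so we split rather than send everything in one request.
    # What: yield successive `limit`-sized slices of `data`.
    for i in range(0, len(data), limit):
        yield data[i:i + limit]
```

If the limit changes upstream, the stale "why" is detectable because the "what" next to it still describes what the code actually does.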

You usually don't want your code logic to stray from the mental or domain model of business stakeholders. Usually when my code makes assumptions to unify things or make elegant hierarchies, I find myself in a very bad place when stakeholders later make decisions that flip everything and make all my assumptions in the code base structure fall apart.

E2e tests are the hardest to maintain and take a lot of time for little benefit, in my experience. I'm talking about simulating a browser to open pages and click on buttons. They are flaky and brittle; the UI is easily the component that gets updated most often, and it's also easy to manually test while developing, during QA and UAT. It's hard to mock things out, so you either have to bootstrap or maintain a whole second working system with all the bells and whistles: authentication, users, real data in the database, third-party integrations, etc. It's just too overwhelming for little benefit. It's also hard to cover all error cases to see whether a thing works correctly or breaks subtly; most commonly in e2e we test only the happy path, just to see that the thing doesn't fall over.

The benefit is certainty that the system you are building and delivering to people works. If that benefit is little, then I don’t quite understand the point of testing.

> it's also easy to manually test while developing during QA and UAT.

As I said in the original comment, e2e tests can definitely be manual. Invoke your CLI, curl your API, click around in GUI. That said, comprehensively testing it that way quickly becomes infeasible as your software grows.


> The benefit is certainty that the system you are building and delivering to people works.

I'd say that "works", "works correctly", and "covers all edge cases" are different scenarios in my mind. Taking an exaggerated example: if I build a tax calculator or something that crunches numbers, I'd have more confidence in a few unit tests checking the output of the main method that does the calculation than in a whole end-to-end test suite. It seems wasteful to run an end-to-end flow (log in, click buttons, check that a UI element appears, etc.) to cover the logical output of the one part that does the serious business logic. A simple e2e suite can be useful as a smoke test to catch regressions, but it still needs to stay less specific, otherwise it will break on minor UX changes, which makes it a pain to maintain.
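A sketch of what I mean, with an invented toy tax schedule (the rates and bracket are illustrative, not real):

```python
def calculate_tax(income: int) -> int:
    # Toy progressive schedule, in whole currency units:
    # 10% on the first 10,000; 25% on anything above.
    base = min(income, 10_000) * 10 // 100
    surplus = max(income - 10_000, 0) * 25 // 100
    return base + surplus

# A few direct unit tests on the calculation give more confidence
# in the business logic than driving a browser through the UI would.
assert calculate_tax(0) == 0
assert calculate_tax(10_000) == 1_000
assert calculate_tax(20_000) == 3_500
```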


An e2e test shows that it works. If your tax calculator’s business logic perfectly calculates the tax, but the app fails with a blank screen and a TypeError in console because a function from some UI widget lib dependency changed its signature, your calculator is as good as useless for all intents and purposes. A good unit test will not catch this, because you are not testing third-party code. An integration test that catches it approaches the complexity of an e2e.

Sure, you wouldn’t have all possible datasets and scenarios, but you can easily have a few, so that e2e test fails if results don’t make sense.

Of course, unit tests for your business logic make sense in this case. Ideally, you would express tax calculation rules as a declarative dataset and take care of that one function that applies these rules to data; if the rules are wrong, that is now a concern for the legal subject matter experts, not a bug in the app that you would need to bother writing unit tests for.

However, your unit test passing is just not a signal you can use for "ship it". It is a development aid (hence the requirement for tests to be fast). Meanwhile, an e2e test is that signal. It is not meant to be fast, but when it comes to a release, things can wait a few minutes.


What's more likely to fail or cause issues? Dependency failures and parsing errors are usually caught by the build system (type checkers and linters). When they do trigger in production, they can easily be caught by monitoring services like Sentry. Ideally any changes are manually tested before releasing, and a bug in one part of the app that's being worked on is not likely to affect a different section, e.g. not necessary to retest the password reset flow if you're working on the home dashboard. Having a suite of usually flaky end-to-end tests seems like the sloppiest and most cumbersome way to ensure the application runs fine, especially for a small team.

That sounds suspiciously like “don’t need to test if I use static typing and monitoring”.

> Ideally any changes are manually tested before releasing, and a bug in one part of the app that's being worked on is not likely to affect a different section, e.g. not necessary to retest the password reset flow if you're working on the home dashboard

That is a can of worms. First, during normal development it is very common to modify something that affects multiple parts of the app; in fact, it is impossible for a human to know exactly what it affects in a big app (ergo, testing). Second, while manual testing is a kind of e2e testing, it is not feasible in a bigger application.

> usually flaky end-to-end tests

Then make them not flaky. It’s amazing what can happen if something stops being treated as an afterthought!

