
https://cran.r-project.org/web/packages/khroma/vignettes/tol...

The Tol palettes are the best-looking colorblind-friendly palettes to me. Most of the others get complaints from non-colorblind users about looking bad/desaturated.


It sounds like you're trying to use these LLMs as oracles, which is going to cause you a lot of frustration. I've found almost all of them now excel at imitating a junior dev or a drunk PhD student. For example, the other day I was looking at acoustic sensor data and ran it down the trail of "what are some ways to look for repeating patterns like xyz", and 10 minutes later I had a mostly working proof of concept for a 2nd-order spectrogram that reasonably dealt with spectral leakage, plus a half-working mel-spectrum fingerprint idea. Those are all things I was thinking about myself, so I was able to guide it to a mostly working prototype in very little time, but doing it myself from zero would've taken at least a couple of hours.
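If it helps make that concrete, here's a minimal sketch of the "2nd-order spectrogram" idea, assuming numpy/scipy; the sample rate, window sizes, and the random stand-in signal are placeholders, not what I actually used:

  import numpy as np
  from scipy.signal import spectrogram

  sr = 48_000                          # assumed sample rate
  samples = np.random.randn(sr * 60)   # stand-in for real acoustic sensor data

  # First-order spectrogram; a Hann window keeps spectral leakage manageable.
  freqs, times, sxx = spectrogram(samples, fs=sr, window="hann",
                                  nperseg=4096, noverlap=2048)

  # "Second-order" pass: FFT each frequency bin along the time axis to expose
  # how that bin's energy repeats over time (e.g. a machine cycling on/off).
  frame_rate = 1.0 / (times[1] - times[0])
  sxx_centered = sxx - sxx.mean(axis=1, keepdims=True)
  mod_spectrum = np.abs(np.fft.rfft(sxx_centered, axis=1))
  mod_freqs = np.fft.rfftfreq(sxx.shape[1], d=1.0 / frame_rate)

  # Peaks in mod_spectrum along mod_freqs give the repetition rate per band.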

But truthfully, 90% of work-related programming is not problem solving, it's implementing business logic. And dealing with poor, ever-changing customer specs. Which an LLM will not help with.


> But truthfully, 90% of work-related programming is not problem solving, it's implementing business logic. And dealing with poor, ever-changing customer specs. Which an LLM will not help with.

Au contraire, these are exactly the things LLMs are super helpful at - most of the business logic in any company is just doing the same thing every other company is doing; there aren't that many unique challenges in day-to-day programming (or business in general). And then, more than half of the work of "implementing business logic" is feeding data in and out, presenting it to the user, and a bunch of other things that boil down to gluing together preexisting components and frameworks - again, a kind of work that LLMs are quite a big time-saver for, if you use them right.


I'll just pitch in as someone who's worked with several 40+ year old codebases and 3+ year old Perl: the Perl code is significantly harder to maintain. The language lends itself to unreadable levels of terseness and abuse of regex. Unless you use Perl every day, the time spent deciphering the syntax vs. the actual code logic is skewed too heavily towards obscure syntax. Even the oldest Fortran/C is much easier to work with.

Except maybe arithmetic gotos designed to minimize the number of functions. Those are straight evil and I'm glad no modern language supports them.


Sure, Perl can be hard to maintain. (I haven't used it in over 25 years, and don't expect to ever use it again)

From your experience maintaining 40+ year old codebases, is it possible that improving the Perl codebase is the best choice in a certain situation?

Or should we always be "surprised" that Perl exists? (honest question)

In other words, I don't see the connection between "this code is hard to maintain" and "I am surprised that it exists" / "it should be rewritten".


In general, I've observed that languages which are minimally challenging for amateurs tend to build less sustainable, mutagenic ecosystems.

Examples:

* Prolog generated the world's most functional one-liner code, as it is bewildering for hope-and-poke programmers

* Python's ease of use combined with out-of-band library packages became a minefield of compatibility, security, and structural problems (i.e. it became the modern BASIC)

* NodeJS was based on a poorly designed clown JavaScript toy language, so naturally became a circus

* Modern C++ template libraries became unusable as complexity and feature creep spiraled out of control to meet everyone's pet use-case

* Rust had a massive inrush of users who don't understand why LLVM compilers are dangerous in some use-cases. But considering 99% of devs are in application space, they will likely never need to understand why C has a different use-case.

While "Goto" could just be considered a euphemism for tail-recursion in some languages. We have to remember the feature is there for a reason, but most amateurs are not careful enough to use it in the proper edge-case.

I really hope Julia becomes more popular, as it is the first fun language I've seen in years that kind of balances ease of use with efficient parallelism.

I wrote a lot of RISC assembly at one time... where stacks were finite, and thus one knew evil well... lol =3


Why are LLVM compilers "dangerous in some use-cases", and what are those use-cases?


Any real-time or MCU system that relies on repeatable code-motion behavior and on clock-domain-crossing mitigation (a named problem with finite solutions). Many seem to wrongly conflate this with guaranteed-latency schedulers... However, the danger occurs when a compiler fails to account for concurrent, event-order-dependent machine states that do not explicitly lock (i.e. less efficient, unoptimized gcc will reproduce consistent, register-op-ordered-in-time behavior for the same source code).

The LLVM abstraction behavior is usually fine in multitasking application spaces, but it can cause obscure intermittent failure modes if concurrent register states are externally dependent on the architecture explicitly avoiding contention. I would recommend having a look at Zynq DMA kernel-module examples where memory is mapped to hardware I/O.

It seems rather trivial. Best of luck =3


But if you depend on specific ordering of register operations, isn't this kind of code best hand-written in assembly anyway?


In general, assembly-code escape blocks are often used with macros for ease of understanding architecture-specific builds.

There are binary optimizers and linkers that can still thrash Assembly objects in unpredictable ways.

Best of luck =3


I also maintain older codebases and this is how I feel about Clojure. It’s far too terse and the syntax is nigh unreadable unless you are using it very often; add to that, there seems to be a culture that discourages comments. OTOH, so-called “C” languages seem much more readable even if I haven’t touched them in years or have never written a line of code in them.

On another note, I have done a lot of things with Perl and know it well, and I agree that it can be written in a very unreadable way, which is of course enabled by its many esoteric features and uncommon flexibility in both syntax and methodology. It is simultaneously a high-level scripting language and a low-level system tool. Indeed, I have caught myself writing “clever” code only to come back years later and regret it. Instead of “TMTOWTDI” it has turned into “TTMWTDI” (there’s too many ways to do it).


> Even the oldest Fortran/C is much easier to work with.

Could that be survivor bias? If that old Fortran was hard to work with, maybe it would have been rewritten, and left the set of “oldest code” in the process.


I think it's just procedural languages with a lack of magic.


In the real world, for web things, people use Django or FastAPI. I'd suggest picking a project with lots of Stack Overflow questions and poking around its docs to see which makes you the most comfortable. Personally, I tend to favor Litestar these days since it has good docs and issues don't sit around for years waiting on one dude to merge PRs (FastAPI), and it's a lot nicer than Django (and I hate the Django docs).
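For a feel of it, a Litestar hello-world is only a few lines (a rough sketch following its quickstart; the /ping route and payload here are just made up):

  from litestar import Litestar, get

  @get("/ping")
  async def ping() -> dict[str, str]:
      # Handlers can be sync or async; return values are serialized (JSON by default).
      return {"status": "ok"}

  app = Litestar(route_handlers=[ping])
  # Serve with any ASGI server, e.g.: uvicorn app_module:app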

Flask/quart are painful to work with due to horrible documentation in my experience, but they're popular too. Quart is just an async rewrite of flask by the same owners.

Litestar has a half-baked comparison chart here: https://docs.litestar.dev/latest/


For papers, you sometimes need to hit a page count to make your sponsor/advisor/conference happy. I've been told "this is a great paper, but can you pad it out to 12 pages?"; maybe that happened here as well.


Node.js/JS runtimes in general get a lot of development effort from Google et al. to make them fast; it's the default web language, so there's a ton of effort put into optimizing the runtime. Python, on the other hand, is mostly a hacker/data science language that interops well with C, so there's not much incentive to make the base runtime fast. The rare times a company has cared about Python interpreter speed, they've built their own runtime for Python instead.


I honestly don't think anyone can remember bash array syntax if they take a 2-week break. It's the kind of arcane nonsense that LLMs are perfect for. The only downside is that if the fancy autocomplete model messes it up, we're gonna be in bad shape when Steve retires, because half the internet will be an ouroboros of AI-generated garbage.


Is there a difference between global setattr(object, v) and object.__setattr__(v)? I've seen setattr() in the wild all over but I've never encountered the dunder one.


Note that `object` here is not a placeholder variable but actually refers to the global object type (the base class of pretty much every other type in Python). It allows you to bypass the class's __setattr__ and set the value regardless (the setattr() function can't do that):

  In [1]: from dataclasses import dataclass

  In [2]: @dataclass(frozen=True)
     ...: class Foo:
     ...:     a: int
     ...:

  In [3]: foo = Foo(5)

  In [4]: foo.a = 10
  FrozenInstanceError: cannot assign to field 'a'

  In [5]: setattr(foo, "a", 10)
  FrozenInstanceError: cannot assign to field 'a'

  In [6]: object.__setattr__(foo, "a", 10)

  In [7]: foo.a
  Out[7]: 10


Have you tried using ChatGPT/etc. as a starting point when you're unfamiliar with something? That's where it really excels for me: I can go crazy fast from 0 to ~30 (if we call 60 MVP). For example, the other day I was trying to stream some PCM audio using WebAudio, and it spit out a mostly functional prototype for me in a few minutes of trying. For me to read through msdn and get to that point would've taken an hour or two, and going from the crappy prototype as a starting point to read up on WebAudio let me get an MVP in ~15 mins. I rarely touch frontend web code, so for me these tools are super helpful.

On the other hand, I find it just wastes my time on more typical tasks like implementing business logic in a familiar language, because it makes up stdlib APIs too often.


This is about the only use case I found it helpful for - saving me time in research, not in coding.

I needed to compare compression ratios of a certain text in a language, and it actually came up with something nice and almost workable. It didn't compile, but I forget why now; I just remember it needing a small tweak. That saved me having to track down the libraries, their APIs, etc.
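Something in the spirit of what it generated, redone here as a stdlib-only Python sketch (the sample text and codec settings are placeholders, not my actual experiment):

  import bz2, lzma, zlib

  text = ("some sample text in the target language " * 200).encode("utf-8")

  codecs = {
      "zlib": lambda d: zlib.compress(d, level=9),
      "bz2":  lambda d: bz2.compress(d, compresslevel=9),
      "lzma": lambda d: lzma.compress(d, preset=9),
  }

  # Print compressed size and ratio for each codec.
  for name, compress in codecs.items():
      out = compress(text)
      print(f"{name:>4}: {len(out):6d} bytes, ratio {len(text) / len(out):.2f}")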

However, when it comes to actually doing data structures or logic, I find it quicker to just do it myself than to type out what I want to do, and double check its work.


I don’t really care about broken clocks even if to someone they are useful twice a day.


Coil whine (or capacitor whine) from the GPU rendering at too high a frame rate. The easiest thing would be to use the Nvidia control panel to add an fps cap for the browser (or globally) at something like 2x your monitor's max refresh rate. It's pretty common with any workload above ~600 fps.

