Hacker News | gopalv's comments

> We'll need to figure out the techniques and strategies that let us merge AI code sight unseen

Every strategy which worked with an off-shore team in India works well for AI.

Sometime in mid 2017, I found myself running out of hours in the day stopping code from being merged.

On one hand, I needed to stamp the PRs because I was an ASF PMC member and not a lot of the folks who were opening JIRAs were & this wasn't a tech debt friendly culture, because someone from LinkedIn or Netflix or EMR could say "Your PR is shit, why did you merge it?" and "Well, we had a release due in 6 days" is not an answer.

Claude has been a drop-in replacement for the same problem, where I have to exercise the exact same muscles, though a lot easier because I can tell the AI that "This is completely wrong, throw it away and start over" without involving Claude's manager in the conversation.

The manager conversations were warranted and I learned to be nicer two years into that experience [1], but it's a soft skill which I no longer use with AI.

Every single method which worked with a remote team in a different timezone works with AI for me, & perhaps better, because they're all clones of the best available - specs, pre-commit verifiers, mandatory reviews by someone uncommitted to the deadline, ease of reproducing bugs outside production and less clever code overall.

[1] - https://notmysock.org/blog/2018/Nov/17/


> Every strategy which worked with an off-shore team in India works well for AI.

Why hasn't SWE been completely outsourced, then, over the past 20 years? Corporations were certainly trying hard.


Cost. Claude Code is two orders of magnitude cheaper than an offshore dev.

We are talking 20-30 years back, when offshore was cheaper - and it still is.

> * No MagSafe

For my kid who uses a Chromebook right now, MagSafe would've been an improvement in how often the power cable pulls it off the desk.

But otherwise, this checks all the boxes, including AppleCare.


In case you didn't already know or haven't considered it, you can find right-angle usb-c MagSafe adaptors that basically allow the charging cable to disconnect from the device like MagSafe.

Most of these devices are a fire hazard. And an environment where kids need MagSafe is probably the one where fire safety matters most.

*edit

https://www.reddit.com/r/UsbCHardware/comments/motlhn/magnet...


So true. Regarding those magnetic USB connectors: not just a fire hazard, but also a tendency, IME, to eventually burn out whatever is on the other end of them.

Maybe OK for supplying power if you are careful, I think - I never had any fires, knock on wood.

But it's a bummer to zap/kill the data-functionality of USB ports on nice stuff just because a non-spec connector was used in between the two things being connected, for convenience.

So I don't trust them except for conveniently connecting power to low-cost devices. Whether Neo fits that... I doubt but YMMV.


oh, probably not enough contact pressure. resistance is inversely proportional to contact pressure after all.

I am now nerd sniped on the surface physics of connectors. Thanks!

Any you could recommend that are safe?

Mid 2015, I spent months optimizing Apache ORC's compression models over TPC-H.

It was easier to beat Parquet's defaults - ORC+zlib seemed to top out around the same as the default in this paper (~178 GB for the 1 TB dataset, from the Hadoop conf slides).

We got a lot of good results, but the hard lesson we learned was that scan rate is more important than size. A 16 KB read and a 48 KB read took about the same time, while the CPU was busy in other parts of the SQL engine - IO wasn't the bottleneck we thought it was.

And scan rate is not always "how fast can you decode", a lot of it was encouraging data skipping (see the Capacitor paper from the same era).

For example, when organized correctly, the entire l_shipdate column took ~90 bytes for millions of rows.

Similarly, the notes column was never read at all, so dictionaries etc. were useless.

Then I learned the ins & outs of another SQL engine, which kicked the ass of every other format I'd ever worked with, without too much magical tech.

Most of what I can repeat is that SQL engines don't care what order the rows in a file are in & neither should the format writer - also that DBAs don't know which filters are the most useful to organize around & are often wrong.

Re-ordering at the row-level beats any other trick with lossless columnar compression, because if you can skip a row (with say an FSST for LIKE or CONTAINS into index values[1] instead of bytes), that is nearly infinite improvement in the scan rate and IO.

[1] - https://github.com/amplab/succinct-cpp


> correspond to a binary format in accordance with the C ABI on your particular system.

We're so deep in this hole that people are fixing this on a CPU with silicon.

The Graviton team made a little-endian version of ARM just to allow lazy code like this to migrate away from Intel chips without having to rewrite struct unpacking (& also IBM with the ppc64le).

Early in my career, I spent a lot of my time reading Java bytecode in little endian to match all the bytecode interpreter enums I had & completely hating how 0xCAFEBABE would literally say BE BA FE CA (jokingly referred to as "be bull shit") in (gdb) x views.


ARM is usually bi-endian, and almost always run in little endian mode. All Apple ARM is LE. Not sure about Android but I’d guess it’s the same. I don’t think I’ve ever seen BE ARM in the wild.

Big endian is as far as I know extinct for larger mainstream CPUs. Power still exists but is on life support. MIPS and Sparc are dead. M68k is dead.

X86 has always been LE. RISC-V is LE.

It’s not an arbitrary choice. Little endian is superior because you can cast between integer types without pointer arithmetic, and because manually implemented math ops are faster on account of being linear in memory. It’s counterintuitive, but everything is faster and simpler.

Network data and most serialization formats are big endian by convention, a legacy from the early net growing on chips like Sparc and M68k. If it were redone now everything would be LE everywhere.


> Little endian is superior because you can cast between integer types without pointer arithmetic

I’ve heard this one several times and it never really made sense. Is the argument that you can do:

    short s;
    long *p = (long*)&s;
Or vice versa and it kind of works under some circumstances?

Yes. In little-endian, the difference between short and long at a specific address is how many bytes you read from that address. In big-endian, to cast a long to a short, you have to jump forward 6 bytes to get to the 2 least-significant bytes.

Wow, I've been living life assuming that little endian was just the VHS of byte orders with no redeeming qualities whatsoever until today. This actually makes sense, thank you!

Network data and most serialization formats are big endian because it's easiest to shift bits in and out of a shift register onto a serial comm channel in that order. If you used little endian, the shifter on output would have to operate in reverse direction relative to the shifter on input, which just causes stupid inconsistency headaches.

Isn't the issue with shift registers related to endianness at the bit level, while the discourse above is about endianness at the byte level? Both are pretty much entirely separate problems

GCC supports specifying endianness of structs and unions: https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gcc/Common-Type-At...

I'm not sure how useful it is, though it was only added with GCC 6.1 about 10 years ago (recent-ish in the world of arcane features like this, and only just about now something you could reasonably rely upon existing in all enterprise environments), so it seems some people thought it would still be useful.


I thought all iterations of ARM were little endian, even going back as far as the ARM7 - same as x86?

The only big-endian popular arch in recent memory is PPC


AFAIK ARM is generally bi-endian, though systems using BE (whether BE32 or BE8) are few and far between.

It started as LE and added bi-endian with v3.

ARM has always been little-endian. Some were configurable endian.

And it's not a hole. We're not about to spend 100 cycles parsing a decimal string that could have been a little-endian binary number, just because you feel a dependency on a certain endianness is architecturally impure. Know what else is architecturally impure? Making binary machines handle decimal.


> The Graviton team made a little-endian version of ARM just to allow lazy code like this to migrate away from Intel chips without having to rewrite struct unpacking

No? Most ARM is little endian.


I would question why it is big endian in the first place. Little endian is obviously more popular - why use big endian at all?

Fuck, the stupidity of humans really is infinite.

> The story isn’t “MCP was a fake marketing thing pushed on us”. It’s a story of how quickly the meta evolves.

The original take was that "We need to make tools which an AI can hold, because they don't have fingers" (like a quick-switch on a CNC mill).

My $job has been generating code for MCP calls, because we found that MCP is not a good way to take actions from a model - it is hard to make scriptable.

It definitely does a good job of progressively filling a context window, but changing things is often multiple operations + a transactional commit (or rename) on success.

We went from asking a model to "change this" to "write me a reusable script to change this" & running it with the right auth tokens.


> a clear explanation of a compression algorithm

The Huffman tree, LZ77 and LZMA explanation is truly excellent for how concise it is.

The earlier Veritasium video on Markov chains is itself linked, if you don't know what a Markov chain is.

I expected Veritasium to tank when it got sold to private equity & Derek went to Australia, but I've been surprised by the quality of the long-form stuff churned out by Casper, Petr, Henry & Greg.


I liked the presentation about paint mixing, however I think it is not impossible to find the missing key paint given the public paint and the message paint - but still, this is really close to what RSA is.

The paint mixing is actually much closer to the idea of the Diffie-Hellman key exchange than to RSA.

I couldn't help but notice it was quite similar to a Numberphile video 8 years earlier...

https://youtu.be/NmM9HA2MQGI


> You're encouraged to use AI to solve the problem. Whatever tools you would want to use as an employee, use them during the interview. We'll give you a Claude, Codex, Cursor, or Gemini license if you need one. I want to see you balance LLM-generated code against your own judgment.

I love this approach, because this is a good litmus test of both criteria I am looking for in engineers.

Part one is "can you use these tools fluently" and the second section is more like "what are you without that suit, Tony Stark?" (or Peter Parker, depending on your movie preference).

The AI part has been moved to a take-home test in my scenario, but the 40 minutes of the interview are "I want you to make a minor change to the tool you built" & seeing how a change in requirements bounces through the person's head.

Part 3 is the turn, where I pull the rug out from under the assumptions in the original business case.

My company has probably hit max engineering size already, but I've found there are people who build "technical savings" into their architectures instead of debt, which gives their next 3 steps velocity without compromising on PR quality.


> the future is of course BEV

That's probably the reason - we only need dressage horses and pure bloods now that the real draft horse is getting put to pasture.

These are no longer workhorses.

Six cylinders are the smoothest engines out there.

Honda used to have a 1L 6-cylinder engine for their bikes - and the Gold Wing still has a 6-cylinder.

The perimeter of the piston goes down relative to its area (the area being what gets multiplied by BMEP) as the radius goes up - looking at you, Africa Twin.

The perimeter is where the unburnt fuel lives and gets caught up in the emission rules. So fewer, larger cylinders are better according to the EPA - 500cc each, maybe.

If we're only going to have hobby vehicles with internal combustion, then a six cylinder or doubling up to a v-12 makes sense.

They're toys for the weekend, not something to put 100k miles on.


>Six cylinders are the smoothest engines out there.

I dunno man, have you ever driven a rotary? The design has a lot of problems, but smoothness isn't one of them...


My older brother had a mighty CBX. Smooth as silk, and very fast. Carburetor synchronization was troubling!

> Like, what do you mean by "taste"?

Imagine the scene from Ratatouille, where Remy explains "taste" and the brother finds it impossible to understand what it is ("Food is food").

The dad goes from being annoyed that Remy is a picky eater to putting him to work as a taster - giving him the job of approving forage that comes into the family & protecting the others from being poisoned.

The reason we say "taste" is because that's the closest parallel.

When it is even more vague, I call it a "code smell".


Okay, but you can define what good food is, right? Like, if you're the best chef in the world, you can clearly define what "taste" of a particular food is the best. It might be subjective, but it wouldn't be vague: the chef can clearly pinpoint what makes the food taste better, instead of just saying "it's what you feel" or other vague terms.

My point is that the article doesn't delve into what good taste is in the context of coding. I understand the metaphysical meaning of what taste means, but you need to define what it means in your particular context. If you leave it to be subjective, then everyone has good taste, which means taste cannot be the difference between good and bad software - which is the premise of the post.


> that's imposing these draconian rules on 3D printed guns

This is a bill with no votes - the first committee hearing is in March.

The purpose of the bill seems to be to stir some controversy & possibly raise the profile of its proposer.

The bill is written very similarly to how we enforce firmware requirements for regular printers, such as EURion constellation detection.

