spintin's comments

Whenever I watch someone coding C(++), all they do is compile, then add or remove * and &, or switch between -> and . when the compiler complains.

Multiply that by all the C(++) coders on the planet and we have lost a billion man-hours...


Interesting, I made different choices:

5-bit base-32 alphabet: oi23456789 abcdefghkl mnpqrstuvw y

o = 0, i = 1; j, x and z removed.

I like that you can fit 6 characters in a 32-bit integer and still have two bits to spare... makes for compact usernames and saves network bandwidth.
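
A minimal sketch of the packing (my own code, assuming the alphabet above with its string index as the 5-bit value, most significant character first; note the list above has 31 symbols, so one of the 32 values goes unused):

    /* Pack up to 6 characters from the 5-bit alphabet into one uint32_t.
       6 x 5 = 30 bits used, 2 bits to spare. Sketch only. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    static const char ALPHABET[] = "oi23456789abcdefghklmnpqrstuvwy";

    static uint32_t pack6(const char *name)
    {
        uint32_t v = 0;
        for (int i = 0; i < 6 && name[i]; i++) {
            const char *p = strchr(ALPHABET, name[i]);
            v = (v << 5) | (uint32_t)(p ? p - ALPHABET : 0);
        }
        return v;
    }

    int main(void)
    {
        printf("%08x\n", pack6("hacker"));  /* hypothetical username */
        return 0;
    }

Unknown characters just map to 0 here; a real encoder would reject them.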


This is the eternal browser bros' attempt to make us think native has zero value, now that we have a completely captured and bloated browser.

The browser is dead, and the only thing you can use it for is filling out HTML forms and maybe some light inventory management.

The final app is C+Java, where you put the right stuff where it is needed. Just like the browser used to be before Oracle did its magic on the applet.


> The browser is dead,

Yea. Nah!

That obit is a bit premature


So you're telling me you write Java professionally?


Funnily enough, in a world with WASM we might actually end up with Java in the backend and C in the frontend, rather than the other way around as would have been more likely in the 90s.


The irony of half the world, backed by VC money, trying to reinvent Erlang, Java and .NET application servers while pretending to be innovative.


WASM is adding GC... reinventing the wheel of the applet, but without escaping the problem of JavaScript glue.

Go is just Java without the VM.

Rust is just a native compiler that creates slow programs and complains a lot.


> Rust is just a native compiler that creates slow programs and complains a lot.

Good morning Troll

I'll give you "complains a lot."


Corrective upvote from me - the comment is too funny


You had me all the way up until the rust bit.


It's pretty much the only professional language you can write, if you value respect and responsibility.


You can do this very simply with JavaScript.

Browser developers are now grasping at straws to remain employable.

That said, the biggest tasks remain:

- Go back to HTTP/1.1 with "One time password" auth: https://datatracker.ietf.org/doc/html/rfc2289

- Simplify the browser so that it no longer takes many hours to compile on the latest CPU.

- Completely remove the tie-ins to any commercial/governmental entities.

Basically, go back to Netscape with some improvements to JavaScript performance and hardware-accelerated HTML rendering.

Everything else invented in the last 20 years is meaningless. That holds in most domains; the Raspberry Pi is the only real exception.


HBM is slower than DDR per pin; the speed gain comes from a hugely parallel bus.

Doesn't parallel mean latency if you have tasks that aren't "embarrassingly parallelizable"?


The smallest transfer done from memory is a single cache line, which on most desktop machines is 64 bytes, or 512 bits. You could imagine a memory bus that was 512 bits wide and transferred a cache line per clock, and this would improve latency compared to a serial bus with a higher clock speed. HBM doesn't do that, though; instead, every HBM3 module has 16 individual 64-bit channels with 8n prefetch (that is, when you send a single request to a single channel, it responds with 512 bits over 8 cycles).
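
Spelled out, the per-request arithmetic (my summary of the numbers above):

    64 bits/channel × 8 cycles (8n prefetch)  = 512 bits = 64 bytes, i.e. one cache line per request
    16 independent 64-bit channels per module → up to 16 such requests serviced in parallel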


DDR5 has 2 independent 32-bit lanes. Multiple transfers are required for 64 bytes.


DDR5 has a 16n prefetch, so a single burst from a 32-bit-wide channel moves 16 × 32 bits = 512 bits = 64 bytes.


I think Linux would be the obvious choice.

VR is being gatekept by Valve and Meta.

How likely is Valve to use Android?


> How likely is Valve to use Android?

Not likely, unless they make a headset which doesn't do much of anything by itself and is just meant for streaming from a PC. To make a standalone headset which can draw on the Steam catalog they would almost certainly want to use a variant of their SteamOS Linux distro.


Valve already has its own OS for a mobile device: Steam OS (based on Arch) on the Steam Deck. My bet is that they'd just modify that for a standalone headset.


6502 > Z80


I would say 6502 < Z80 (but cheaper in a good way, leaving more budget for graphics and sound support chips).

The first time I used dBase II on a Xerox CP/M machine was when I realized what a business computer could be like. It seemed more standardized than the PETs or TRS-80s.


Well, it's the last actual MPU from that era, so from the perspective of "you will still be able to buy them when existing stock runs out," sure.

Otherwise, there's no clear winner.


For what?


Video without RAM contention, for one: https://retrocomputing.stackexchange.com/q/22257/278

> The 6502 needs access to memory only half the time, during what is usually called the ϕ2 phase of the clock. The other half of the time, so long as you tri-state the CPU address pins,² the bus and memory are available to other systems.

> The Z80 does not have such a well-defined, synchronous system of accessing RAM. Many cycles don't need RAM access, but many do, and which particular ones do and don't depends on the instruction mix.

> There is no "do nothing" solution to this constraint, and Z80-based systems that wanted consistent video output would have to solve this one way (hardware) or the other (software).


So, any feature of a processor is a reason to say one is better than the other?

Ok... The 6502 does not have port I/O. Everything is memory-mapped. Therefore, the Z80 is better.

Or, the Z80 has more registers than the 6502, therefore the Z80 is better.

Or, the Z80 can do limited 16-bit operations that the 6502 can't, therefore the Z80 is better.


At the same clock frequency the 6502 is quite a bit faster.


Sure, but how many 1 MHz Z80 machines do you remember?

I'm not saying the Z80 is better than the 6502, but it's a pretty big stretch to say the 6502 is better, especially if you're not coding in assembler.


For some workloads, the 6502 was faster than a more typical 4 MHz Z80.


As with all tools, the goal is to be able to use the tool without fear.

Remember your first weeks of coding C? It took me 4 years to stop breaking into a cold sweat just thinking about using it: the compiler wouldn't inform you about a typo, you had to roll back the code all the time to fix problems, and you had to learn assembly along the way (x86, ARM and now RISC-V).

And that was without deadlines or any delivery pressure.

Today I realize C is for mad people, but I learned to respect the thing while no longer being afraid to use it in a real scenario.


Unless you can hot-reload it?


That's not the point, I think (it's not about having to recompile). The point is not having to maintain a fork of the config file.


The real way to do this is to get user-space networking.

Also event-based protocols with deterministic physics.

Last but not least, you need a language that can atomically share memory between threads: C (with arrays of 64-byte structs) or Java.
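
A minimal sketch of what I mean by "arrays of 64-byte structs", assuming C11 atomics and a 64-byte cache line (the entity layout and names are mine):

    /* One entity per cache line: threads updating neighbouring entities
       never false-share, and the hot field is published in a single
       atomic store. Sketch only. */
    #include <stdatomic.h>
    #include <stdint.h>
    #include <stddef.h>

    struct entity {
        _Atomic uint64_t state;   /* packed position/velocity */
        uint64_t         pad[7];  /* pad the struct out to 64 bytes */
    };

    _Static_assert(sizeof(struct entity) == 64, "one cache line per entity");

    static _Alignas(64) struct entity world[4096];

    /* Simulation thread: publish a new packed state for entity i. */
    static void publish(size_t i, uint64_t packed)
    {
        atomic_store_explicit(&world[i].state, packed, memory_order_release);
    }

    /* Network thread: grab a consistent snapshot of entity i. */
    static uint64_t snapshot(size_t i)
    {
        return atomic_load_explicit(&world[i].state, memory_order_acquire);
    }

The 64-byte size matters because it keeps each entity on its own cache line, so cores don't bounce lines they don't own.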

