> I therefore find myself stamping out boilerplate for e.g. JSON deserialisation more than I would like.

This was my first experience getting started in the .NET ecosystem. I wanted something similar: to build small console tools/services. F# looked cool, but I hit the reflection issues too soon.

I mostly use Crystal and Go, but both have their own issues. Hopefully F# improves over time to become fully usable with AOT!


Luckily, C# is nowadays just as terse for writing CLI apps, or even terser, and it doesn't have the aforementioned issues with AOT.


I'm willing to try it; I'm just waiting for a few more benchmarks regarding memory usage. Last time I checked, I could only find the Benchmark Game [1] for AOT measurements, but those are from .NET 7.

Memory usage is important to me, and Go is very good at staying light, but I really hate writing it for various personal reasons.

If I had more free time I'd just go with Rust, but you know, free time is limited :P.

[1] https://programming-language-benchmarks.vercel.app/csharp-vs...


BenchmarksGame is actually an interesting if unfortunate case for C#.

Many tests there are at odds with the areas of optimization that get the most engineering investment in .NET: high-level and/or application logic, and high-performance primitives like Span, Vector, etc. (with perhaps the exception of Regex-Redux, but even there I'm not sure why the results are so low, because C# has one of the fastest[0] regex engines, which only loses to Intel Hyperscan, Rust's, and PCRE2).

Probably the best way to go about this is to just give it a try with a sample project and see if the RAM usage is to your liking - you can get far with plain 'dotnet new console --aot' and the standard library (once you're done, .NET's equivalent of go/cargo build is 'dotnet publish -o {output folder}').
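
If it helps, a minimal sketch of that flow (assuming a .NET 8+ SDK; the project name is just a placeholder):

    // dotnet new console --aot -o MemoryProbe
    // cd MemoryProbe && dotnet publish -o out   (produces a native binary in ./out)
    //
    // Program.cs - enough to eyeball the resident memory of an idle AOT app:
    Console.WriteLine($"{Environment.WorkingSet / (1024 * 1024)} MB working set");
    Console.ReadLine(); // keep the process alive so you can also check RSS with an external tool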

Overall, .NET tends to have a higher memory footprint than Go because its GC is designed to sustain much higher allocation rates and heaps spiking to much higher usage without severely throttling performance (which Go does). But there are interesting developments with the new GC mode called DATAS. Some time ago I made a very simple example project to showcase .NET's concurrency and publishing experience to someone else, and it shows that .NET now rivals Go where Go is strongest[1].
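
Not reproducing the linked project here, but as a rough sketch of the kind of Go-style channel code .NET gives you out of the box with System.Threading.Channels:

    using System;
    using System.Threading.Channels;
    using System.Threading.Tasks;

    // producer/consumer over a channel - roughly what you'd write with a goroutine and a chan in Go
    var channel = Channel.CreateUnbounded<int>();

    var producer = Task.Run(async () =>
    {
        for (var i = 0; i < 10; i++)
            await channel.Writer.WriteAsync(i);
        channel.Writer.Complete();
    });

    await foreach (var item in channel.Reader.ReadAllAsync())
        Console.WriteLine(item);

    await producer;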

[0] https://github.com/BurntSushi/rebar#summary-of-search-time-b...

[1] https://github.com/neon-sunset/http-bench


Note that I also imported regex-redux into rebar: https://github.com/BurntSushi/rebar/tree/master/record/all/2...

It isn't exactly equivalent to the benchmark game's version. The biggest difference is that it doesn't permit parallelism.

For more details on the benchmark model: https://github.com/BurntSushi/rebar/blob/master/MODELS.md#re...

.NET does well overall in rebar's benchmarks, but less so in regex-redux (in both the benchmark game and rebar's version of it). Not sure why; I haven't investigated. Of course, this is assuming you aren't using .NET's default interpreter engine. You've got to switch to the non-backtracking or compiled modes to get decent perf.
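
For anyone unfamiliar, switching engines is just a constructor flag; a minimal sketch, with a placeholder pattern rather than the benchmark's:

    using System;
    using System.Text.RegularExpressions;

    var pattern = @"agg[act]taaa|ttta[agt]cct"; // placeholder, not the regex-redux patterns

    var interpreted = new Regex(pattern);                                    // default backtracking interpreter
    var compiled = new Regex(pattern, RegexOptions.Compiled);                // compiled to IL at construction
    var nonBacktracking = new Regex(pattern, RegexOptions.NonBacktracking);  // linear-time engine, .NET 7+

    Console.WriteLine(nonBacktracking.IsMatch("aggctaaa")); // True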



[1] That isn't the benchmarks game.

This is the benchmarks game:

https://benchmarksgame-team.pages.debian.net/benchmarksgame/...


Do you have any resources?

I was looking for similar benchmarks but couldn't find anything conclusive. I found the benchmark game [1], but it only had .NET 7 AOT (not .NET 8) benchmarks, and the memory consumption seems too high.

I'd ditch Go in a heartbeat as soon as I could use something with a decent type system, and C# seems like a great alternative.

Crystal would be my ideal but its compiler is still too slow, sadly.

[1] https://programming-language-benchmarks.vercel.app/csharp-vs...


Same, and even worse, the playground requires you to be logged in. That's basically the first step of a demo, and requiring an account for it is simply nuts.


The same is true of GPT. Just sayin.

Actually GPT is worse because they demand both a phone number and an email. Still haven't signed up because of that nonsense.


LLM inference takes up some scarce GPU time and many people are trying to use free entrypoints to build services instead of the intended paid APIs, so I understand why those services want to put limits on usage.

Programming playgrounds however are freely available for pretty much every mildly popular language, and these days many toolchains can even be compiled and run with JS or WASM so one could just serve some static files to host it. This is definitely more suspicious than what OpenAI and other ML companies are doing.


There is no parallel. If Meta required you to be logged in to run local inference with Llama, that might be similar. You're using OpenAI's cloud, so you're logged in.


I can understand why OpenAI requires a phone number. With how many resources they spend on each prompt, they want to limit abuse of their systems. Also, ChatGPT is not available in many countries, and a phone number is a reliable enough test of location. I'm in one of these countries myself, but I also care about my privacy, so I use services that access ChatGPT indirectly instead: https://you.com with a disposable email, and https://phind.com


Given the iris-cataloguing efforts of Altman's Worldcoin, and Microsoft's intrusive telemetry-surveillance, I would doubt that's the only reason.

I appreciate you offering an alternative that preserves a small measure of privacy.


That's why I don't use GPT.


I can totally relate. I bought my ErgoDox EZ a while ago and I still love it.

I swapped the switches for some lubed linears and added some foam, and now it even sounds better. It also reduced my wrist pain.

I'm debating with myself whether to get the Moonlander or not. I don't actually need it, but it looks so cool (and the USB port of the ErgoDox is quite outdated haha).


I love my ErgoDox EZ. I got the Moonlander, and it looks cool, but I do not like it. Neither does my friend. The tenting tilts the thumb cluster away, which makes it uncomfortable. I wish I had just gotten another ErgoDox EZ (eventually I did).


The latest Ergodox model now has USB-C.


The YouTube channel HowNOT2 [1] has a great collection of videos testing various ropes (mostly for climbing). Among these videos, there are a couple that test specific knots (like [2]). It's overall a very fun channel, even though I don't even do any sort of climbing.

[1] https://www.youtube.com/@HowNOT2

[2] https://www.youtube.com/watch?v=dagg2-If4h8


I can relate to this. I blog sporadically, using different computers, and every time I had to pull the repo and get it to run, several things would break, resulting in a very painful dependency management process (I must admit that I lacked experience with Ruby projects in general).

After porting my blog to Hugo, having a single binary that handles everything on all three operating systems I use (macOS, Windows, and Linux) is a blessing, and now the blog builds incredibly quickly.


Corridor Crew made a sort of anime using this technique [1] and then did two videos [2, 3] explaining the technology behind it. Quite interesting if you ask me!

There are still some issues with the eyes and a bit of flickering, but at the speed everything is moving, I wouldn't be surprised if this improves in a year or two.

Needless to say, there's still a lot of artistry involved in such a process, so nothing here is completely automated yet.

[1] https://www.youtube.com/watch?v=tWZOEFvczzA

[2] https://www.youtube.com/watch?v=FQ6z90MuURM

[3] https://www.youtube.com/watch?v=mUFlOynaUyk


It's a DRM-free game. I can't say whether the game phones home or not because I don't own it, but this statistic is collected by Steam, not Larian.


They outsourced "home".


I really wanted to like Nim, but its async story is not that great (no multithreaded runtime AFAIK, no channels), and, justifiable as it may be, I don't like the way it treats imports. I think Crystal and Go are better in the async space.

I thought I'd have more trouble with the case-insensitivity thing, but it turns out it wasn't a big issue at all, and UFCS is great.


Ah, good to know. I hadn't checked out the async story in Nim at all.


Now they're building the ELT: the Extremely Large Telescope.

They're great with names haha.


Don't forget OWL: Overwhelmingly Large Telescope ( https://en.wikipedia.org/wiki/Overwhelmingly_Large_Telescope ).


Or, "Once Was Larger"

"The ESO specialists expect resolutions from OWL that are up to 40 times higher than those of the Hubble Space Telescope. If the 100-meter mirror cannot be financed, a 60-meter variant is being planned. The name OWL would remain the same. Because then the project is jokingly called »Once was larger«."

https://www.itespresso.de/2006/04/17/groesstes-observatorium... --> Google Translate


Still waiting for the Stupendously Gigantic Telescope.


TEAT Telescope to End All Telescopes


Big Ass Telescope?


What, no BFT?


They should build one ten times the size of the next largest one, and call it the Quite Sizeable Telescope

