Here's a static check Rust's Box<T> offers over C++'s std::unique_ptr<T>.
The following program is obviously incorrect to someone familiar with smart pointers. The code compiles without error, and the program crashes as expected.
% cat demo.cpp
#include <iostream>
#include <memory>
int main() {
    std::unique_ptr<std::string> foo = std::make_unique<std::string>("bar");
    std::unique_ptr<std::string> bar = std::move(foo);
    std::cout << *foo << *bar << std::endl;
}
% clang -std=c++2b -lstdc++ -Weverything demo.cpp
warning: include location '/usr/local/include' is unsafe for cross-compilation [-Wpoison-system-directories]
1 warning generated.
% ./a.out
zsh: segmentation fault ./a.out
The equivalent Rust code fails to compile.
% cat demo.rs
fn main() {
    let foo = Box::new("bar");
    let bar = foo;
    println!("{foo} {bar}")
}
% rustc demo.rs
error[E0382]: borrow of moved value: `foo`
--> demo.rs:5:13
|
2 | let foo = Box::new("bar");
| --- move occurs because `foo` has type `Box<&str>`, which does not implement the `Copy` trait
3 | let bar = foo;
| --- value moved here
4 |
5 | println!("{foo} {bar}")
| ^^^^^ value borrowed here after move
help: consider cloning the value if the performance cost is acceptable
|
3 | let bar = foo.clone();
| ++++++++
Not only does Rust emit an error, but it even suggests a fix for the error.
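For reference, applying the compiler's suggested fix makes the program compile and run; cloning gives each binding its own Box:

```rust
fn main() {
    let foo = Box::new("bar");
    // rustc's suggested fix: clone instead of moving out of `foo`.
    let bar = foo.clone();
    println!("{foo} {bar}"); // prints "bar bar"
}
```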
An approximate 20% reduction in bandwidth looks significant to me. I think the problem here is that the chart uses a linear scale instead of a logarithmic scale.
Looking at the data, I'm inclined to agree that not much CPU is saved, but the point of MessagePack is to save bandwidth, and it seems to be doing a good job at that.
> An approximate 20% reduction in bandwidth looks significant to me.
To me it doesn't. There's compression for much bigger gains. Or, you know, just send less data?
I've worked at a place where our backend regularly sent humongous JSONs to all the connected clients. We were all pretty sure the payload could be reduced by 95%. But who would try to do that? There wasn't a business case. If someone tried and succeeded, no one would notice. If someone tried and broke something, it'd look bad. So, status quo...
In a system that requires the absolute speediest throughput, compression is usually the worst thing in a parse chain, so being able to parse without decompressing first is valuable.
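To illustrate that point: MessagePack values can be decoded straight off the wire bytes, with no decompression pass. A toy sketch (not a real decoder) covering only the fixstr type, whose tag byte is 0xa0-0xbf with the low five bits encoding the string length:

```rust
/// Decode a MessagePack fixstr (tag 0xa0..=0xbf) from raw bytes.
/// Toy sketch: a real decoder handles many more types and lengths.
fn decode_fixstr(buf: &[u8]) -> Option<&str> {
    let (&tag, rest) = buf.split_first()?;
    if !(0xa0..=0xbf).contains(&tag) {
        return None; // not a fixstr
    }
    let len = (tag & 0x1f) as usize; // low 5 bits = length (0..=31)
    std::str::from_utf8(rest.get(..len)?).ok()
}

fn main() {
    // "bar" as a fixstr: tag 0xa3 (length 3) followed by the bytes.
    let wire = [0xa3, b'b', b'a', b'r'];
    assert_eq!(decode_fixstr(&wire), Some("bar"));
}
```

No intermediate buffer or inflate step is needed; the decoded `&str` borrows directly from the input slice.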
I've tried MessagePack a few times, but to be honest the hassle of debugging was never really worth it.
Until recently (2023), the type inference was very weak and did not work with higher-order functions (map, filter, reduce, etc.).
As a result, Typed Clojure was practically unusable for most applications. That has changed as of last year. For instance, the type checker can now handle the following kinds of expressions.
(let [f (comp (fn [y] y)
              (fn [x] x))]
  (f 1))
This expression was a type error before early 2023, but now it is inferred as a value of type (Val 1).
Unfortunately, many Clojure users think types are somehow a bad thing and will usually repeat something from Rich Hickey's "Maybe Not" talk.
I've worked with Clojure professionally. The codebases I've seen work around dynamic types by aggressively spec'ing functions and enabling spec instrumentation in development builds. Of course, this instrumentation had to be disabled in production because spec validation has measurable overhead.
Although Typed Clojure has made remarkable progress, the only editor tooling I recall for Typed Clojure is a CIDER extension that hasn't been maintained for several years. (The common excuse given in the Clojure community is that some software is "complete" and thus doesn't need updates, but I have regularly found bugs in "complete" Clojure libraries, so I don't have much confidence here.)
Overall, if one wants static typing, Clojure will disappoint. I still use Clojure for small, personal-use tools. Having maintained large Clojure codebases, however, I no longer think giving up the DX (and fearless refactoring) of languages like Rust and TypeScript is worth it.
I have slowly been getting into Rust for some personal projects. I already ported one of my Clojure applications to Rust and really enjoy the tooling (and resource efficiency!) compared to Clojure.
Would like to try Axum, but couldn't find reliable code generation tools. Has the tooling improved on that front? I would love to hear if anyone has tried the rust-axum[1] OpenAPI generator and whether it generates decent Axum-based code.
OpenAPI isn't a hard requirement. I'm open to using Protobuf or Smithy as an IDL if the Rust ecosystem offers better server code generation with them.
With tools like ogen[1], one can take a single OpenAPI document and generate server code with a static router, request/response validation, Prometheus metrics, OpenTelemetry tracing, and more out of the box.
It can also generate clients and webhooks. Authentication is just declaring a SecurityScheme in the OpenAPI document then implementing a single function. The rest of the backend is just implementing a single interface. Unlike oapi-codegen, there is no need to tinker with routing libraries or middleware for authentication and logging.
Pair this with sqlc[2] and SQLite's `pragma user_version`, and you get type-safe database code and database migrations for free. I will concede that adding SQLite is a manual process, but it's just two imports added to main.go.
Frontend is entirely your choice. Go's standard library provides good enough text templating that I don't miss ERB or Django-style templates. Using the standard library's `embed` package, one can easily embed static assets into a single binary, so deployment can be as simple as `go build` and moving the binary.
I have a hard time using languages besides Go for developing backends, because the code generation tools make Go as convenient as frameworks like Quarkus while staying lightweight and fast.
Exactly. I don’t want to waste time on trivial decisions like which frontend tool to use or what templating engine to pick, which is exactly why Go isn’t great for building full-stack apps. Every app in Go looks different because you have to assemble all the parts yourself.
In contrast, RoR or Django provides a nice rubric to get you up and running quickly. That said, I still like Go when I need to spin up a microservice with a well-defined scope.
Personally use Porkbun since Namecheap's API is poorly-documented and they attempted a KYC audit for purchasing a $100 domain.
I am fine with the identity verification, but their ticketing system seems to have sent all of my e-mails to their spam folder, because they never responded. I attempted opening tickets explaining the e-mail situation, but they wouldn't listen. In the end, I gave up and let them deactivate the account.
Moved to Porkbun, purchased the exact same domain (no KYC required!), and have been a happy user of their API for about two years now. They also have much more lax requirements for API usage compared to Namecheap. Porkbun also supports WebAuthn and logging in with a security key. It's overall a much nicer service than Namecheap.
That KYC thing is an ICANN requirement; it's how domain registration works. ICANN requires every accredited registrar to verify registrant details, so a registrar may randomly ask for an ID, passport, etc. That includes Porkbun: they're bound by their contract with ICANN as an accredited registrar too. They probably won't ask today, but maybe tomorrow, or next week, or next month, or next year, or never.
They already got your details from your card and decided that was enough. Things like using a VPN, using a niche browser, or card details that don't match your registration details can throw off their threat-mitigation system. Different businesses also operate differently, their payment gateways behave differently, and so on. There are too many random factors to avoid a specific registrar just because they asked for KYC, when KYC itself is a requirement.
The requirement in the contract is nowhere near that specific. Contact-info validation is sufficient for almost all registrars. It's possible a given registry has higher standards, or maybe one registrar got an order to be more thorough, but that's a great reason to avoid them, given that this is a commodity and there are actually good alternatives. (I broadly like Tucows and Cloudflare.)
Namecheap is on my NO NO NO list, along with GoDaddy (and a bunch of others). Google Domains was also on this OH GOD NO list, but thankfully Google did the Google thing and killed the product.
Most of my domains have been on Namecheap since the days when Wikipedia's domains were there. Hopefully my low-key personal domains are of no interest to anyone...
JSX syntax doesn't change every month either. React itself, as a library, has a great deal of backwards compatibility.
What does churn is the tooling (e.g. abandoning Webpack for Vite). But the actual API and syntax has remained virtually unchanged. You can still write and use class components today. Hooks themselves are nearly 7 years old at this point, with no sign of changing. You are not required to use "server components" or any of the new features in React.
Meanwhile people in this thread recommend "simple" alternatives like Vue that produce breaking changes so massive it splits their ecosystem in half.
I stopped writing React for probably 2 years, and now the whole framework is different and I have to relearn the entire thing. Defining a component is completely changed, changing state is completely changed, and the component lifecycle is completely replaced. It feels like the complexity of the framework has gone through the roof. At least for me, this is the type of churn that makes me not want to do it anymore.
Practically all Japanese BBSes in use today are written in Perl. They are either proprietary CGI scripts or proprietary forks of open-source software like WebPatio[0] and Zero-channel Plus[1].
> they are just bad customers who they will always have a hard time making money from
The iPhone 12 mini still ended as one of the top 10 smartphone models sold in January 2021.[0]
There was also a new iPhone SE model that was half the price and roughly the same size. I wouldn't be surprised that most decided the iPhone SE was a better deal at the time. In fact, the iPhone 11 and iPhone SE were the top-selling iPhone models of 2020.[1]
Both of those sources paint the picture GP was describing. For 2020, the iPhone SE sold in 2nd place by unit count... nearly tied with a model that was out for six fewer months that year at twice the ASP, and with nearly a third the sales of the previous-generation model that year. Same for the top 10 best-selling models in January 2021: combine the iPhone SE 2020 and the iPhone 12 mini and you get the monthly sales volume of the previous-generation iPhone 11, at half the sale price and two models' worth of R&D. And this is data for the year after a slump in which smaller models weren't available.
If this were OnePlus or someone, those would be decent numbers. The problem is that they are Apple: they already held the top 4 spots by a significantly larger margin with premium devices. Building low-cost devices only to take 8th and 10th place with them isn't necessarily a win. They can just cut margin on older versions of the popular models if they want to capture that price point without stifling the focus on their new products each year.
The way people talk about smaller phones around here, you'd expect the 2020 model to be outselling new models to this day, with the masses waiting with bated breath for updates. The truth is that only a smaller portion of the market actually wants such phones when it comes time to upgrade, but you hear from them more often because they are one of the least common and least valuable segments of the market to try to serve.