If you ever used the MPW Shell, a lot of those characters were part of the syntax of its commands and regular expression parser, so it was common to learn to compose ∫, ®, ∂, etc. The debugger TMON also used them, so they just became second nature, like !@#.
Neat, did not know that. At the time MPW shell was used, it was a little bit too advanced for me — I was only as far as working my way through C For Dummies (or something like that) with a limited student edition of CodeWarrior* around the time REALbasic came out.
* Possibly bronze edition? Whatever it was, it was 68k only.
I love using X macros with C++ to create static types and hooks to disambiguate from basic types. This is more applicable to final executables than libraries - I would never provide anyone with an API based on the mess it creates, but it allows application code to be strongly checked and makes it really easy to add whole classes of assertions to debug builds.
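Something like this rough sketch (the ID types are invented for illustration): one list macro drives both the strong-type definitions and the debug assertion hooks, so adding an entry to the list updates everything at once.

    #include <cassert>
    #include <cstdint>

    // The single source of truth: each entry becomes a distinct type.
    #define ID_TYPE_LIST(X) \
        X(UserId)           \
        X(OrderId)          \
        X(ProductId)

    // Expand the list into strong wrapper types. Each holds a raw
    // uint64_t but is a distinct type, so mix-ups are compile errors.
    #define DEFINE_ID_TYPE(Name)                        \
        struct Name {                                   \
            uint64_t value;                             \
            explicit Name(uint64_t v) : value(v) {}     \
        };
    ID_TYPE_LIST(DEFINE_ID_TYPE)
    #undef DEFINE_ID_TYPE

    // Reuse the same list to stamp out a debug-build check per type.
    #define DEFINE_VALIDATE(Name)                       \
        inline void validate(const Name& id) {          \
            assert(id.value != 0 && "zero " #Name);     \
        }
    ID_TYPE_LIST(DEFINE_VALIDATE)
    #undef DEFINE_VALIDATE

    void lookup(UserId id);  // lookup(OrderId{42}) will not compile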
In 1988 I was working on a Mac software package and I remember the thrill of his returned “Customer Registration” card arriving. We had a small display cabinet and it went in there (along with Stanley Kubrick’s and a few others).
The Radio Series was the best, followed by the scripts to the radio series (more random access!)
The TV series was actually pretty good too (like a Tom Baker Doctor Who (Adams was also a Who editor/writer during that time and had access to all manner of polyurethane monster kit!))
I also loved the Dirk Gently books but I always felt like they needed more of a denouement. Every passage before the end was like a hand-carved chocolate frog, and the endings were said frog hitting a publisher at speed.
In all likelihood, anyone still producing commercial (or free!) software today, or in the last 40 years, has users that they would love to see eaten by crows.
The big loss for me was when many apps adopted the Windows 3.0 MDI interface style. (Specifically thinking of code here.) I was doing some hacking in CodeWarrior on Mac OS 9 a few weeks ago and it was such a joy to have 5 source code windows open at once - and a separate window with build or find results that weren’t wrapped into a miserable column. There’s actually a lot about Xcode I do like - but treating content panels like the old Puzzle desk accessory was never an improvement.
I always thought that a neat learning project would be training an ML model on “real” cards and then detecting fakes. I don’t play the games, but I was always thrown by how much effort went into counterfeits - I guess there’s enough profit in it for someone. There’s usually something wrong with the registration or colors.
What is missing in the context here is that the cards mentioned in this article are not actually real. They never existed, and therefore they are not "counterfeits" of a real one, they are just made up. Someone just claimed to know someone that had playtest cards from back in the day. They are not a commercial product.
The average player doesn't buy a few thousand cards at a time. If you buy a high-value card from a random seller, you should always check it unless you trust them from references.
People only pull out the slower tools for valuable, forgery-worthy cards.
If someone is buying 1,000 $1,000 cards, it’s still worth it lol.
Even cheap forgeries cost money to produce, so I wouldn’t expect a lot of low value cards to be forged. If you sort out the valuable cards and do random sampling, you can probably catch the most problematic cases.
I built one of these several years ago for MtG cards. Trained a neural network as a binary classifier on images from a cheap $20 USB microscope, looking at examples of the backs of real cards vs. fake cards.
I never got around to shipping it, which is a shame because it worked really well. Ported it to the web, but never figured out the billing issue, and so it died during the delivery phase. From time to time, I still wonder if I should resurrect this project, because I think it could help a lot of people.
Why not put out the fires first? Imagine being a first responder hearing that this political nonsense takes precedence over your safety. The fires are here. They need to be dealt with.
I’m with parent - what if you don’t have the tool? What if there’s a syntax error in some implementation or dependency such that the tool chokes early?
Human-readable headers are accessible out of context of the implementation. They also help provide a clear abstraction - this is the contract; this is what I support as of this version (and hopefully with appropriate annotations across versions).
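As a trivial sketch of what I mean (the names and version notes are invented), even a header this small communicates the contract without the reader ever opening the implementation:

    /* widget.h -- the public contract, readable on its own. */
    #ifndef WIDGET_H
    #define WIDGET_H

    #include <stddef.h>

    /* Since v1.0: opens a widget by name; returns a handle, or -1 on error. */
    int widget_open(const char *name);

    /* Since v1.2: configures an open widget; pass NULL opts for defaults. */
    int widget_configure(int handle, const void *opts, size_t opts_len);

    #endif /* WIDGET_H */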
> I’m with parent - what if you don’t have the tool?
The "what if you don't have the tool" situation never happens in case of Rust. If you have the compiler you have the tool, because it's always included with the compiler. This isn't some third party tool that you install manually; it's arguably part of the language.
> What if there’s a syntax error in some implementation or dependency such that the tool chokes early?
In C I can see how this can happen with its mess of build systems; in Rust this doesn't happen (in my 10+ years of Rust I've never seen it), because people don't publish libraries with syntax errors (duh!).
In this specific case, your tool requires a web browser (though I'm assuming that there is a non-web browser form of what is being sold here). Maybe you are in a situation where you only have terminal access to the machine.
Maybe you are on your phone, just browsing GitHub looking for a library to use.
I'm sure people can continue to imagine more examples. It is entirely possible that we have different experiences of projects and teams.
> I’m sure people can continue to imagine more examples
Hopefully they’ll imagine more compelling examples.
If the hypothetical person’s phone is capable of browsing GitHub, I don’t see why they can’t also browse docs.rs. It renders well on small screens. That’s not a hypothetical, I’ve actually read the docs for libraries on my phone.
> The "what if you don't have the tool" situation never happens in case of Rust.
So it’s built into GitLab and GitHub? Bitbucket? How easy is it to use on Windows (i.e. is it as easy as opening a .h in Notepad and reading it)? How easy is it to use from a command-line environment with vim or emacs bindings?
I could go on. “Never” is doing a lot of heavy lifting in your assertion. I shouldn’t have to install a toolchain (let alone rely on a web browser) to read API documentation.
If you can open a browser, open docs.rs. The GitHub repo usually contains a link to docs.rs because that’s how people prefer to read the documentation.
If you prefer working without the internet, that’s fine too. Use `cargo doc --open`, which builds the docs locally and opens the rendered pages in your browser.
If you prefer being in a text editor exclusively, no problem! Grep for `pub` and read the doc comments right above (these start with ///). No toolchain necessary.
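For example, a library source file might look something like this (the function is invented for illustration); the same text that rustdoc renders into HTML is sitting right there in the file:

    /// Returns the largest element of the slice, or `None` if it is empty.
    ///
    /// Everything in this comment block shows up verbatim in the rendered docs.
    pub fn largest(items: &[i32]) -> Option<i32> {
        items.iter().copied().max()
    }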
Look, most normal people don’t have some intense phobia of web browsers, so they’d prefer docs.rs. For the people who prefer a text editor, it’s still a great experience - `git clone` and look for the doc comments.
The point is, the existence of docs.rs encourages Rust library developers to write more and better documentation, which everyone, including the text-editor-exclusive people, benefits from. That’s why your comment sounds so strange.
> So it’s built into GitLab and GitHub? BitBucket?
No. It's built into the toolchain which every Rust developer has installed.
> How easy is it to use on Windows (i.e. is it as easy as opening a .h in Notepad and reading it)?
As easy as on Linux or macOS, in my experience.
> How easy is it to use from a command line environment with vim or emacs bindings?
Not sure I understand the question; use it how, exactly? You either have a binding which runs `cargo doc` and opens the docs for you, or you use an LSP server and a plugin for your editor, in which case the docs are integrated into your editor.
> I shouldn’t have to install a toolchain (let alone rely on a web browser) to read API documentation.
If you want you can just read the source code, just as you do for any other language, because the docs are right there in the sources.
For publicly available libraries you can also type `https://docs.rs/$name_of_library` into your web browser to open the docs. Any library available through crates.io (so 99.9% of what people use) has its docs available there, so even if you don't have the toolchain installed or are on your phone, you can still browse through the docs.
I know what you're going to say - what if you don't have the toolchain installed and the library is not public? Or, worse, you're using a 30-year-old machine that doesn't have a web browser available?! Well, sure, tough luck, then you need to do it the old-school way and browse the sources.
You can always find a corner case of "what if...?", but I find that totally unconvincing. Making the 99.9% case harder (when you have a web browser and a toolchain installed, etc.) to make the 0.1% case (when you don't) easier is a bad tradeoff.
I don't understand how you don't understand the order-of-magnitude difference in flexibility, utility, availability, etc. between needing to run a specific executable vs. merely opening a text file in any way.
"you always have the exe" is just not even remotely a valid argument.
> "you always have the exe" is just not even remotely a valid argument.
Why? Can you explain it to me?
I'm a Rust developer. I use my workstation every day for 8 hours to write code. I also use `cargo doc` (the tool for which "I always have the exe") every day to look up API docs, and in total this saves me a ton of time every month (probably multiple hours at least, if I'm working with unfamiliar libraries), and I save even more time because I don't have to maintain separate header files (because Rust doesn't have them).
Can you explain the superior flexibility and utility of "merely opening a text file" over this approach, and how that would make me (and my colleagues at work) more productive and save me time?
I'm not being sarcastic here; genuinely, please convince me that I'm wrong. I've been a C developer for over 20 years and I did it the "opening a text file" way and never want to go back, but maybe you're seeing something here that I never saw myself, in which case please enlighten me.
I don’t understand how you don’t understand that that’s always an option. Rust source files are written in plaintext too.
There are a few people in this thread, including you, who claim that they vastly prefer the output of documentation to be plain text in a single file rather than linked HTML files OR reading the source in multiple plaintext files.
That’s a preference, so y’all can’t be wrong. But consider that if this preference was even slightly popular, cargo doc would probably get a `--text` option that outputs everything in a single text file. The fact that it doesn’t have one tells me that this preference is very niche.
Yes, it works with GitHub, GitLab, Bitbucket, and everything else. It's built into the compiler toolchain.
It works with every syntax that you can compile, because it uses the compiler itself to extract the documentation.
Yes, it works on Windows too. Rust supports Windows as a first-class platform. It works with dependencies too (the docs even link across packages). The fragmentation of C tooling and unreliability/complexity of integrating with C builds is not a universal problem.
Rust's built-in documentation generator creates HTML, so anything with a browser can show it. It also has a JSON output format for third-party tooling.
The same documentation syntax is understood by Rust's LSP server, so vim, emacs, and other editors with LSP plugins can show the documentation inline too.
I've been using this for years, and it works great. I don't miss maintaining C headers at all. I write function definitions once, document them in the same place as the code, and get high-fidelity, always up-to-date API docs automatically.
The "what if you don't have the software" argument doesn't hold water for me. What if you don't have git? What if you don't have a text editor? What if you don't have a filesystem?
Most programming language communities are okay with expecting a certain amount of (modern) tooling, and C can't rely on legacy to remain relevant forever...
From the blurb at the top: “The app is 95% complete, […] I intend to clean up the rest of it, and go GA within a few weeks.”
Assuming the last 5% is going to take just a few weeks is naive from a development point of view. Everyone learns this the hard way, so I don’t mean it as a dig.
> people probably already having a lot of tools that they use and prefer. Another tool adds to their work load.
Further, a lot of providers are very strict about what tools their organization is allowed to use. In the past I’ve tried to get providers to look at a personal web page where I’d put my medical history and links to imaging data, and they weren’t allowed to access it per policy.
(I then brought 10 discs’ worth of imaging on a thumb drive - but they wouldn’t take that either. So I re-burned the images onto physical media, and they were OK with importing that.)
I do understand why those policies are necessary, and in the end I learned their systems and limitations. It’s actually been an ok experience.