Hacker News — wwfn's comments

I was just mulling this over today. DRY = easier-to-decode is probably true if you're working on grokking the system at large. If you just want to peek in at something specific quickly, DRY code can be painful.

I wanted to see what compile flags were used by guix when compiling emacs. `guix edit emacs-next` brings up a file with nested definitions on top of the base package. I had to trust my working memory to unnest the definitions and track which compile flags are being added or removed. https://git.savannah.gnu.org/cgit/guix.git/tree/gnu/packages...

It'd be more error prone to have each package carry redundant base information, but I would have decoded what I was after a lot faster.

Separately, there was a bug in some software aggregating cifti file values into tab separated values. But because any cifti->tsv conversion was generalized, it was too opaque for me to identify and patch myself as a drive-by contributor. https://github.com/PennLINC/xcp_d/issues/1170 to https://github.com/PennLINC/xcp_d/pull/1175/files#diff-76920...


Bazel solves this exact problem (coming from its macro system) by letting you ask for what I'd call the "macroexpanded" BUILD definition using `bazel query --output=build //some/pkg/or:target`. When Bazel does this, it also annotates each block with a comment giving the file, macro, and line number the expanded content came from.

This gives us reuse without obscuring the real definition.

I automated this in my emacs config so I can "macroexpand" the current BUILD file into a new buffer. It saves me a lot of time.


I'm also clueless. First search brings up https://www.tomshardware.com/news/intel-details-powervia-bac... with a nice diagram

It looks like the backside topology moves the transistors to the middle so "signal wires and power wires are decoupled and optimized separately" instead of "compet[ing] for the same resources at every metal layer"


I'm thinking "unpopular" could mean the tech is polarizing or frequently dismissed/overlooked.

  * APL -- I haven't dedicated the time to learning it, in part because there's little support where I normally work. I'd love for APL to be adapted as a domain-specific language, a la Perl-compatible regular expressions, for various languages (april in Common Lisp, APL.jl in Julia).
  * regular expressions. https://xkcd.com/1171/
  * bugs/issue tracking embedded in git https://github.com/MichaelMure/git-bug/
But I'm more excited for things that fall into the niche/lesser-known side of unpopular. I love finding the little gems that change how I organize or work with the system.

  * "type efficiently by saying syllables and literal words" https://sr.ht/~geb/numen/
  * I use fasd[0] 'z' alias for jumping to previous directories in shell every day.
  * Alt+. in a shell (readline, bash) to recall the previous command's last argument is another ergonomic time saver that I think is relatively obscure. I have a bash wrapper that combines it with fzf for quick fuzzy search and insert of any previous command argument [1]
  * zimwiki [2] (and/or a less capable emacs mode[3]) for note taking has served me well for a decade+
  * DokuWiki's XML-RPC [4] enables local editor edits to a web wiki. I wish it were picked up by more editor plugin developers. (cf. emacs-dokuwiki [5])
  * xterm isn't unpopular per se, but I don't see its sixel support and title-setting escape codes talked about often. I depend on a bash debug trap to update the prompt with escape codes that set the terminal title [6]
  * are clipboard managers popular? I get a lot out of using https://github.com/erebe/greenclip

[0] https://github.com/clvv/fasd [1] https://github.com/WillForan/fuzzy_arg [2] https://zim-wiki.org/ [3] https://github.com/WillForan/zim-wiki-mode [4] https://www.dokuwiki.org/xmlrpc [5] https://github.com/flexibeast/emacs-dokuwiki [6] https://github.com/WillForan/dotconf/blob/master/bash/PS1.ba... -- bash debug trap to update prompt with escape codes that set the title to previous run command -- to eg. search windows for the terminal playing music from 'mpv'


Greenclip is exactly what I've been looking for! Thanks!

Also how do you use zimwiki? I've been trying it for a month and I don't find it that great compared to something like Obsidian or QOwnNotes or even TiddlyWiki. Do you have a specific workflow?


Yeah! On the actual notetaking side: I think I stumbled into a less deliberate "interstitial journaling" paradigm (a la Roam Research?). I set up the journal plugin to create a file per week, and from there keep a list of links to project-specific files (hierarchies like :tools:autossh, :studies:R01grant:datashare). I also backlink from the project file to the journal file. So each page looks like a log. I try to aggressively interlink related topics/files.

I have an ugly and now likely outdated plugin for Zim to help with this. There's a small chance the demo screenshots for it help tie together what I'm trying to say. https://github.com/WillForan/zim-plugin-datelinker

On the tech side: My work notes (and email) have shifted into emacs, but I'm still editing zimwiki-formatted files with the many years of notes accumulated in them. Though I've lost it by moving to emacs, the Zim GUI has a nice backlink sidebar that's amazing for rediscovery. Zim also facilitates hierarchy (file and folder) renames, which helps take the pressure off creating new files. I didn't make good use of the map plugin, but it's occasionally useful to see the graph of connected pages.

I'm (possibly unreasonably) frustrated with using the browser for editing text. Page loads and latency are noticeable, editor customization is limited, and the shortcuts aren't what I have muscle memory for -- an accidental ctrl-w (vim: swap focus; emacs/readline: delete word) is devastating.

Zim and/or emacs is super speedy, especially with local files. I use syncthing to keep computers and phone synced. But, if starting fresh, I might look at things that use markdown or org-mode formatting instead. logseq (https://logseq.com/) looks pretty interesting there.

Sorry! Long answer.


Thank you for the long answer! You've made some really great points, and regarding markdown and org-mode I've been thinking about switching to something like djot instead (from the author of pandoc) but I can't deny the power of emacs and org-mode when combined.

Also your "interstitial journaling" paradigm seems great, I'll try to apply it because I enjoy grounding what I do into some loose chronology kinda.

Thanks again for taking the time to expound on your approach!


http://davmail.sourceforge.net/ works well as an O365->IMAP bridge. davmail can be configured so its clientId matches Outlook's.


emacs as a terminal has surprisingly good latency [0] but I'm pretty careless with my pipes and frequently overwhelm the buffer with verbose output. Waiting seconds for a ctrl-c to take so I can fix my mistake is painful. Any magical incantations (long line tweaks?) that help with this?

apropos of emacs as WM: with the config kludge I've made and frequent REPL spamming, I've found `killall -SIGUSR2 emacs` essential to get back the editor.

[0] https://danluu.com/term-latency/


which term do you use? eshell?


M-x shell, but I'm in comint-mode more often for ESS/R, where I'm likely to accidentally print out a dataframe with too many rows.


I can imagine media products "lie" (by omission, by implicit bias, or even blatantly), but I'll need some references to reorient to the idea that the scale is the same.

My prior is still heavily tilted by the lead up to the Iraq war -- and think the time since has only seen a further embrace of "tell the audience what they want to hear over evidence" (see "top talent" texts re. dominion suit)

[0] https://www.politico.com/story/2015/01/poll-republicans-wmds...

> 52 percent [fox viewers] say that they believe it to be “definitely true” or “probably true” that American forces found an active weapons of mass destruction program in Iraq.

> Overall, 42 percent still believe that troops discovered WMDs, a misleading factor in the decision to invade Iraq in 2003.

FWIW the Iraq war is on my mind after reading https://www.theatlantic.com/ideas/archive/2023/03/iraq-war-u.... Bad war, bad reasoning, terrible consequences, but maybe Iraq could be worse today.


Have you tried setting the clientId to match Outlook's in ~/.davmail.properties?

    davmail.mode=O365Manual
    davmail.oauth.clientId=d3590ed6-52b3-4102-aeff-aad2292ab01c
    davmail.oauth.redirectUri=urn:ietf:wg:oauth:2.0:oob

https://sourceforge.net/p/davmail/discussion/644057/thread/a...

FWIW, user for login looks like: user@domain.tld


https://www.discovermagazine.com/planet-earth/archaeologists...

> Back to the media-hyped “Stonehenge” Holley found in Lake Michigan: It might be a small version of a prehistoric hunting structure, similar to the one found in Lake Huron. As for why it was falsely labeled in headlines, VanSumeren says that a hunting blind underwater “doesn’t have the same ring to it” as an internationally recognized prehistoric structure like Stonehenge.

maybe this paper? https://www.jstor.org/stable/10.3998/mpub.11395945


Open science depends on open tools! Octave is such a good resource for otherwise walled-off code (that doesn't use newer Matlab features). But I'm curious where Octave is popular. Does anyone pick it over Julia or Python when starting a new project/research?

I also wish the state of science/engineering software had shaken out differently. There's plenty of money going to MathWorks. Is there some kind of license like: pay us if you're doing commercial work or publishing research on grants worth over $XXX; otherwise consider it open source?


Another take: there's value in using a compiled language here. I'm an amateur dabbling in computational chemistry, and my weapon of choice is Rust (after starting with Python). Why:

  - Fast, in an area where speed is important
  - Can be made faster for repetitive tasks by communicating with a GPU using Vulkan, CUDA etc
  - Can incorporate a GUI, and 3D graphics, which helps for exploring things visually, tweaking parameters, doing time simulations etc.
  - Can share. I.e., I showed my brother yesterday by sharing a 10 MB executable. Matlab has license complications and you need it installed. Sharing Python programs is a disaster. (Docker? Pip? Pyenv? Poetry? Virtualenv? System Python? System dependencies for the C and Fortran dependencies?)


Interesting. You might be the first computational chemist I know who actually uses Rust. I know a lot of computational chemists!

Python is the big one, all of the aforementioned chemists are either intermediate or advanced in that. The runner-up seems to be Julia, which I personally have no experience with. The big guys are Fortran and C++. I prefer C for tasks of this nature, but I also shill Scheme so don't listen to my opinions on programming languages.

Best of luck on your computational chemistry endeavours!


C++ and Fortran are what come to mind for computational chemistry due to the legacy libraries but it's been a long long time since I've used Gaussian.


I think there are only a couple exceptions to the Fortran/C++ rule. DFTK.jl is the only one I know written in Julia, and there's GPAW and PySCF in Python.


What's discrediting about scheme?


I don't know, but whenever I plead with people in computational science to read Structure and Interpretation of Classical Mechanics I'm usually met with an eye-roll.


Interesting, I'd never heard of SICM, only SICP.


[flagged]


Don't listen to my opinions, sure. But pay heed to the large number of computational chemists who do use these languages. There is a reason that professionals employ the tools that they do.


One reason we have lots of people in numerate disciplines using Python is that we taught them Python. Python is very teachable. Given a class of average 19 year old students from a numerate discipline, my colleagues will teach most of them Python to a reasonable standard in a single module (e.g. 2 hours of teaching per week over 16 weeks).

The same would not be true if we were supposed to teach them C++, for example. It's a huge, sprawling language, full of difficult concepts that are unfriendly to teach, and equally full of footguns that, if you don't teach them, will definitely maim your students, but if you do teach them, take up yet more precious time on minutiae.

Safe Rust wouldn't be as hard to teach as C++ but it's no picnic. So even if the Chemists decided that ideally they'd like their undergraduates to learn Rust instead of Python, I think the argument would be that it can't be done on the existing timeline.


Now we teach them Python, because the ecosystem and mindshare is there. But why didn't Perl or Ruby (or Tcl) win? It seems like Ruby didn't get the right ecosystem around it, but Perl has PDL (which does seem to predate much of the Python scientific ecosystem), and had SWIG support before Python. Maybe there's a lesson there for the Rust scientific ecosystem?


> But why didn't Perl or Ruby (or Tcl) win?

That is a very good question.

IIRC at the time Perl had a bad reputation because of the weird syntax "$%@" (not weird to shell programmers, so much, but they were weird...)

Python was "object oriented" and from a computer scientist.

Perl is very imperative and informal. Python felt to people like a real language, Perl did not.

I am reaching for reasons as I was very disappointed that Python was the winner. I have never liked Python (personal taste - meaningful white space? Really?) and knew Perl very well.

(Perl Vs. PHP is a different, much more interesting, and tragic story)


> Python is very teachable. Given a class of average 19 year old students from a numerate discipline, my colleagues will teach most of them Python to a reasonable standard in a single module

Compared to Perl, sure, Python is teachable. But I'm willing to speculate that Safe Rust as a semi-pure functional language (basically: when in doubt, .clone() all the things and don't even think about the minor hit to performance) can also be taught as a first-time programming language. Rust may be at a disadvantage to Python wrt. bindings to the existing ecosystem of C/C++/Fortran scientific codes, but that's a temporary state of things.
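
A minimal Rust sketch of that "clone all the things" teaching style (illustrative only; `normalize` is a made-up example function, not anyone's curriculum):

```rust
// Beginner-friendly "semi-pure functional" Rust: no borrows, no
// lifetimes -- functions take owned values and return fresh ones,
// cloning freely instead of reasoning about references.
fn normalize(v: Vec<f64>) -> Vec<f64> {
    let sum: f64 = v.iter().sum();
    v.iter().map(|x| x / sum).collect()
}

fn main() {
    let data = vec![1.0, 2.0, 1.0];
    let scaled = normalize(data.clone()); // clone rather than borrow
    println!("{:?} -> {:?}", data, scaled);
}
```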


To science majors? No. It lacks so many of the things you'd need to get them started (e.g. a REPL/notebook server, interactive plotting, a "here's everything you need" install (a.k.a. conda)), and most (but not all) students have no real interest in learning to program. You could get the interested-in-programming ones to learn Rust, but then they're going to hit the large legacy of existing code and wonder why you taught them something that's not related to what they're going to use.


There's an interesting experimental project for a Rust REPL and notebook interface at https://github.com/google/evcxr . Other things, e.g. semi-officially endorsed collections of community crates with no strong backward compatibility guarantees, have seen some development already.

> but then they're going to hit the large legacy of existing code and wonder why you taught them something that not related to what they're going to use.

OTOH other languages should be easier to learn by reference to a comparatively elegant language like Rust. Python, Fortran, C++ etc. have almost nothing in common but Rust could arguably be a good introductory baseline that shares features with all of these.


Historical reasons, more often than not.


You take the comment far too literally.


Given your criteria, you might want to consider (modern) C++.

* Fast - in many cases faster than Rust, although the difference is inconsequential relative to the Python-to-Rust improvement, I guess.

* _Really_ utilize CUDA, OpenCL, Vulkan etc. Specifically, Rust GPU is limited in its supported features, see: https://github.com/Rust-GPU/Rust-CUDA/blob/master/guide/src/... ...

* Host-side use of CUDA is at least as nice, and probably nicer, than what you'll get with Rust. That is, provided you use my own Modern C++ wrappers for the CUDA APIs: https://github.com/eyalroz/cuda-api-wrappers/ :-) ... sorry for the shameless self-plug.

* ... which brings me to another point: Richer offering of libraries for various needs than Rust, for you to possibly utilize.

* Easier to share than Rust. A target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.

There are downsides, of course, but I was just applying your criteria.


> Fast - in many cases faster than Rust, although the difference is inconsequential relative to Python-to-Rust improvement I guess.

Do you have any examples? AFAIK properly tuned Rust and C++ will perform largely the same. Actually, Rust should have a bit of an edge due to its prohibition of mutable aliasing. In practice it can vary if a standard library implementation is suboptimal or the compiler has suboptimal codegen into bitcode, but that's generally going to be rare these days, I think.

> Richer offering of libraries for various needs than Rust, for you to possibly utilize.

Are you talking about computational chemistry specifically, or general libraries? Don't know about the former. For the latter, I've found not only are more interesting libraries available, there seem to be generally high-quality versions. Additionally, “cargo add xxx” is infinitely faster than integrating some random third-party C++ dependency. Not to mention that C++ falls on the floor if one dependency requires transitive dependency X and another (or you) requires X at an incompatible version. Rust handles that elegantly: two different modules can depend on the same package at different versions without conflict, without you needing to worry about it beyond the size of the executable.
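
As an aside, Cargo can even hold two major versions of the same crate side by side in one manifest; a hypothetical fragment (the crate choice here is just an example):

```toml
[dependencies]
# Semver-incompatible versions coexist in one build; the `package`
# key renames the second so both are addressable from this crate.
rand = "0.8"
rand07 = { package = "rand", version = "0.7" }
```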

> Easier to share than Rust. A target system is less likely to have an appropriate version of Rust and the surrounding ecosystem.

Again, I haven't found that to be my experience. Rustup lets you obtain any version of the Rust toolchain, no muss no fuss. Additionally, it does a phenomenal job of not worrying about which compiler vendor you're using (there's only one), and more importantly there are no versioning issues with libraries written against an older edition of the language (2018 vs. 2021). Oh, and if you're on Windows, you also don't have to worry about whether the target person has the right C++ runtime library version installed. With Rust they just run the exe.


> Do you have any examples? AFAIK properly tuned rust and c++ will perform largely the same.

In principle, Rust has some overhead for being "safe", which C++ does not. Also, C++ benefits from a longer period of time and more people working on optimizing its libraries and compilers.

> Are you talking for computational chemistry specifically or general libraries?

Ah, indeed, no. I have no idea which chemistry libraries are available for Rust and for C++. I was referring to scientific computing, algebra and other heavy-lifting work.

> Additionally, “cargo add xxx” is infinitely faster than integrating some random third party c++ dependency.

That's not one of OP's criteria. But - seeing how you like package managers, try `conan install /path/to/srcdir` from the build directory. For more details, read:

https://docs.conan.io/en/latest/getting_started.html

> Not to mention that C++ falls on the floor if one dependency requires transitive dependency X and another (or you) requires X at an incompatible version?

C++ is a language, it doesn't have dependency management. As for libraries, that situation is so rare - in my experience - that a chemistry person should not even be bothered to consider it.

> I haven't found that to be my experience.

Well, I learned something new today... I didn't know about rustup. Was it introduced recently?


> In principle, Rust has some overhead for being "safe", which C++ does not. Also, C++ benefits for longer period of time and more people working on optimizing its libraries and compilers.

Do you know of any specific safety features that make Rust slower? About the only one I can think of at runtime is that everything is bounds-checked, but that's usually fixable in many ways (and technically C++ has that too, except it's applied randomly based on subtle API differences: `[]` isn't checked but `.at()` is). As for quality of the standard library, it's hard to say. I haven't found it to be bad. The HashMap implementation is better out of the box; a Vec/vector doesn't really have anything special. Filesystem APIs, error handling, modules, and async all feel more mature and polished (not performance per se, but just overall completeness of the language). What is on your mind when you say that the C++ standard library is higher quality? Oh, and which of the three major implementations are you referring to?
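
To make the `[]` vs `.at()` contrast concrete, a minimal Rust sketch: slice indexing is always checked (a panic, never UB), and `Option`-returning `get` is the explicit alternative:

```rust
// Rust slice access is always checked: `v[i]` panics (defined behavior)
// on out-of-bounds, and `v.get(i)` returns an Option instead.
// C++ splits this between unchecked `[]` and checked `.at()`.
fn main() {
    let v = vec![10, 20, 30];
    assert_eq!(v.get(1), Some(&20));
    assert_eq!(v.get(9), None); // out of bounds: None, never UB
    // let x = v[9]; // would panic at runtime with a clear message
    println!("ok");
}
```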

The compiler story is actually even better, because very little optimization happens in the frontend, and thus a lot of the optimizations done for C++ apply to Rust too. One specific way the language itself is better is that it disallows mutable aliasing, so it's closer to Fortran for numerical stuff, and there are various optimizations that C++ just can't do. If I recall correctly, that hasn't been fully hooked up to the compiler due to bugs in LLVM, but that will get resolved eventually. Overall, I'm not familiar with the claim that Rust code compiles worse than C++. About the only thing I've run into so far is bounds checking showing up in hotspot code that I wrote non-idiomatically because I'm still new to the language.
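
A small sketch of the aliasing point (hypothetical function, not from any library): because `&mut` references are exclusive, the two parameters below provably don't overlap, much like `restrict` in C or Fortran's argument rules:

```rust
// `a: &mut [f64]` and `s: &f64` cannot alias in safe Rust, so the
// compiler may keep `*s` in a register across the whole loop instead
// of reloading it after every store through `a`.
fn scale_all(a: &mut [f64], s: &f64) {
    for x in a.iter_mut() {
        *x *= *s; // *s provably unchanged by writes through `a`
    }
}

fn main() {
    let mut a = vec![1.0, 2.0, 4.0];
    scale_all(&mut a, &2.0);
    assert_eq!(a, vec![2.0, 4.0, 8.0]);
}
```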

> Well, I learned something new today... I didn't know about rustup. Was it introduced recently?

1.0 is 6 years ago so not recent at all (not sure if there were pre 1.0 releases too): https://www.reddit.com/r/rust/comments/5iqsmg/rustup_100_is_...


> not only are more interesting libraries available, there seem to be generally high quality versions

I wonder what's an equivalent of Eigen? https://eigen.tuxfamily.org/index.php?title=Main_Page

> if you’re on windows, you also don’t have to worry about whether the target person has the right c++ runtime library version installed

It's usually easy to build such binaries for Windows. That's a single combobox in project settings. Here's an example, just running the .exe works: https://github.com/Const-me/Whisper


> I wonder what's an equivalent of Eigen?

2 seconds on Google yields:

https://www.nalgebra.org/

https://rust-lang-nursery.github.io/rust-cookbook/science/ma...

https://docs.rs/GSL/latest/rgsl/

I have no idea the quality but you’ve also got bindings to more mature libraries:

https://docs.rs/lapack/0.14.1/lapack/

And https://www.erikpartridge.com/2019-03/rust-ml-simd-blas-lapa... is an overview from a few years ago and does say it's maybe not as good, although not sure how much has changed in that time (e.g. has nalgebra matured sufficiently?)

It’s possible that Rust is lacking in good linear algebra libraries. I’m not as familiar with that space.


I want to get into that for CUDA. I've been using Vulkan via WGPU. But there, everything has to pass between CPU and GPU as byte arrays, whereas I've heard CUDA lets you maintain the same data structures. On the to-do list.

Would also def benefit from the larger selection of C++ libs, especially for GUI and graphics.

Given the main computation issue is doing the same operation on many values in a 3D array, using the GPU more would be great. And my current setup for that is clumsy.

I disagree on easier to share. Binaries work the same either way. Compiling from source in Rust is generally: install rustc, then `cargo b --release`. Compiling someone else's code in C++ is... complicated.


> when I've heard CUDA lets you maintain the same data structure

Yes, that's possible with a combination of:

* Unified address space for pointers (and C++ references are basically pointers)

* Paged virtual memory on GPUs

... however, remember that many/most data structures which make sense on the CPU may not make sense to use on a GPU. And - NVIDIA very often implements and promotes certain features because they sound good in marketing, not because they're actually that useful. Or because they let you make a super-slow program into a meh-speed program, not because you would use them in a really-fast program.

> Compiling from source on rust is generally install Rustc, and do `cargo b --release`.

Well, I guess you may have a point, but let me still make a couple more arguments.

First, installing Rust (and related libraries/tools/whatever) is not trivial for people who don't know Rust, and some OS distributions only offer an older version of Rust through their package managers. It's more likely that a C++ environment is already set up on someone's machine... TBH, though, if the distro is older, that might not be a perfect fit for what you're building.

Second, C++ package managers are becoming more popular, e.g. Conan: https://conan.io/ ; and that helps when your OS distro doesn't cover you.

> Compiling someone else's code in C++ is... complicated.

This has actually improved a whole lot over the past... say, decade or so!

* Unzip/untar

* Configure the build with CMake (cross-platform build system generator - almost ubiquitous these days in C++ projects): `cmake -B build_dir/`

* Assuming you have the dependencies set up - it should "just build": `cmake --build build_dir`

* Run the thing from the build directory or install it with `cmake --install build_dir`.

... and if you want to tweak package options more easily than via the command-line, you can use a TUI (ccmake) or GUI (cmake-gui).

The problems start when you're missing dependencies, or if the author didn't make their code platform-independent / multi-platform.


Maybe it is not part of computational chemistry, but how do you do explorative analysis? More than half my time working with data is spent grouping and summarising data, typically in Pandas, and drawing interactive graphs in Plotly.


Plots and visualizations in 3D (Vulkan / WGPU). For example, to compare wave function solutions, I use surface plots of a 2D slice. I have a UI that can change the slice of the non-mapped axis. Plots of psi'', the potential, etc. too.

Or for simulating molecular motion, draw it out in 3D, with 6DOF FPS-style camera controls etc.

I imagine Pandas would be orders of magnitude too slow for this. Compared to Numpy, Pandas is, in my experience, far too slow. Another speed limitation is Matplotlib's 3D plots, which make my computer crawl when rotating them etc.; and they're not very interactive, at least on their own.

When dealing with these systems, a particular computational challenge is that they're 3D, so computation scales with precision^3. (This also adds complications to visualization, as I alluded to above)


Did you write your own visualizations using wgpu? I don't think its goals include providing visualization out of the box.


Yep! So, took some work upfront. Basic rendering/navigation engine. `egui` for the GUI.


If sharing is a problem I’m unsure why you didn’t think docker was the solution. I can’t think of an easier way for both the provider and the consumer than docker.


They did mention licenses as a complicating factor.


Python can be very fast. Library ecosystem and network effects by far outweigh anything you listed. Mindless Rust evangelism is tiresome.


Python sometimes feels like being stuck in a local maximum: the network effect is real, but there are so many downsides to the platform itself, and the "two-language" paradigm renders the computational code obscure and undebuggable.


One thing is for sure, rust has excellent marketing.


Or it's actually good and you're bemoaning it due to bias. It's worth exploring the areas outside of our comfort zone.


I've tried it, and it seems interesting. But I distrust anything that appears to have a lot of propaganda behind it, and rust definitely fits that bill. The hype will need to die down and the dust settle before I trust anything important to it.


I think you are being unfair here. Literally every community has the 1% of toxic, loud and insufferable people in it. Point me at your hobbies and I'll find these kinds of people there, I can bet my neck on it.

Many people conflate their own irritation from hearing about the thing X often and against their will, with the quality of the thing X.

IMHO critical thinking and maturity demands that we make that distinction.

I have worked several times in Rust companies and their devs are extremely hardcore and very quiet. You couldn't tell these people are responsible for the technical operations and code of worldwide financial services.

So again, be a bit more charitable. Every community has its rotten apples. That says nothing about the quality of the thing that the community stands for.


I know you were replying to someone else, but thanks for your comment; it's good to get perspective. I'll try to be less of a curmudgeon.


After being a curmudgeon from 30 to 40 years old, I've figured it's not good for me so I started changing things around and I like my life and career more now.

Don't get me wrong, insufferable people still exist! But I no longer judge an entire community by them.


I used to use it a lot, but the Python libraries are much further ahead at this point. For my uses (essentially image processing and statistics), Octave is always playing catch-up with Matlab, but Python is mostly at par or ahead of Matlab. With the exception of the parallel stuff... but that wasn't anything Octave had either. Eventually, when I need performance again, I'll see about figuring out how to migrate parts to Julia. One thing I like about the Python tools is there are some hints of IDL in there (I'm a masochist and liked IDL... Matlab frustrates me at times). Matlab was great for slapping quick UIs together and making little tools, but Jupyter works well enough for that.

The 0 indexing in python does really and truly suck sometimes though.


> The 0 indexing in python does really and truly suck sometimes though.

Now that's something you usually don't hear. At my university I heard a bit of an "urban legend" about how back in the day (think early 1990s) a couple of professors got a peer-reviewed article published because they found out that Matlab's 1-indexed implementation of some algorithm resulted in numerical errors, which they measured and corrected. Don't really remember the details, and most of those involved have already retired.


What you've said isn't surprising.

The problem isn't whether 0 or 1 are "right" or not, it's the inconsistency. It makes transcribing something from a textbook harder, because the indexing logic in a textbook algorithm can get quite intricate. It's even worse if they use slices like M[i:j,m:n].

Indexing from 1 is the standard in many areas, going back many decades. SWEs have adopted a different convention.


(Obligatory) There are only 2 hard problems in computer science: cache invalidation, naming things, and off-by-one errors


may You concurrency forgotten have.


It just printed out in your comment a few hours after I thought I invoked it.


But this depends a lot on the particular formula. For example, if you use discrete Fourier transforms, all formulas are natural with 0-indexing but weird with 1-indexing. In general, when the indices of an array are to be interpreted modulo N, you want them to be 0, 1, ..., N-1.

It is inevitable to have to use both conventions if you do varied math stuff. Thus, you will never be happy.
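
The modulo-N point can be sketched in Rust (a toy `wrap` helper, not from any library):

```rust
// With 0-indexing, indices modulo N are just `i % n` (plus a shift
// to handle negative i); with 1-indexing, every wrap-around needs an
// extra +1/-1 correction.
fn wrap(i: isize, n: isize) -> usize {
    (((i % n) + n) % n) as usize
}

fn main() {
    let n = 8;
    assert_eq!(wrap(7 + 1, n), 0); // successor of the last index
    assert_eq!(wrap(-1, n), 7); // predecessor of the first index
}
```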


The pragmatic solution for transcribing is probably to make some functions `array_index` and `array_slice` that automatically convert between the two conventions. I think Julia offers another way: It lets you choose between them.
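
A minimal Rust sketch of those helpers (`idx1`/`slice1` are made-up names for the `array_index`/`array_slice` idea):

```rust
use std::ops::Range;

// Hypothetical transcription helpers: `idx1` maps a 1-based textbook
// index to 0-based, and `slice1` maps an inclusive 1-based range
// a_m..a_n to Rust's half-open 0-based range.
fn idx1(i: usize) -> usize {
    i - 1
}

fn slice1(m: usize, n: usize) -> Range<usize> {
    (m - 1)..n
}

fn main() {
    let a = [10, 20, 30, 40, 50];
    assert_eq!(a[idx1(1)], 10); // textbook a_1
    assert_eq!(&a[slice1(2, 4)], &[20, 30, 40]); // textbook a_2..a_4
}
```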


I prefer OPTION BASE {0, 1} :)


I just never find myself working out formulae/series/proofs on paper using zero-indexing. When you look up derivations/solutions in books and references, they're one-indexed. A recent example was a formula that had an (n-1)!*n! term that made sense from understanding the derivation. But I forgot to code it as (n-2)!*(n-1)! at first.

So you end up having to translate from one-indexed derivations to zero-indexed code and it leads to different bugs. Ultimately zero-indexing just trades one layer of bugs for another.
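The (n-1)!*n! mix-up above comes down to applying one index substitution consistently. A minimal Python sketch (function names are mine, and the shift direction is one plausible reading of the bug):

```python
from math import factorial

def textbook_term(n):
    # As derived on paper with 1-based n = 1, 2, ...
    return factorial(n - 1) * factorial(n)

def shifted_term(n):
    # The same term after substituting n -> n - 1, which is what the
    # code needs when its loop variable runs one ahead of the paper's.
    return factorial(n - 2) * factorial(n - 1)

# The two agree once the shift is applied consistently:
ok = all(textbook_term(n) == shifted_term(n + 1) for n in range(1, 10))
```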

IMHO zero-indexing makes sense to people thinking about the machine, not necessarily the work people are trying to use the machine to do.

To be fair sometimes the opposite is true and zero-indexing gives less complex expressions.


There are good reasons why indexing from zero with exclusive upper bounds should be preferred. I'll defer to Dijkstra for the explanation: https://www.cs.utexas.edu/users/EWD/transcriptions/EWD08xx/E...


Because it's intuitive that an array contains an element before the first element?


I have a ruler. What number does the first centimeter start at?


It's a bit like all those people claiming that the new millennium shouldn't have been celebrated until 2001. They didn't stop to think how many years pass between the epoch and the time of interest. You celebrate a baby's first birthday after one year of time has passed. Until that point they are zero (and a bit) years old.


Do you typically measure sets with a ruler?

Furthermore it depends on what you're doing with that first centimeter. In many instances the first centimeter is correctly considered to be 0.5


Is this a pithy response? The first element is indexed by zero.


Isn't dropping Dijkstra links as an appeal to authority about 0-indexing considered a pithy response?

How about I drop a link about Einstein Notation with a curt observation that there are good reasons it uses both zero and one indexing simultaneously?

As I said it depends on how you are thinking about things. Indexing makes sense to people who are focusing only on the machine.


We're discussing programming language design. Referencing a prominent computer scientist to explain something on that subject better than I can seems like a reasonable thing to do. If you want to point me to somewhere where it can be argued that indexing from one is useful (not just a matter of aesthetics), please do.


Again, machines are not the purpose of computing.

"The purpose of computing is insight, not numbers." (Hamming)

Machines and notation are both tools. Aesthetics do matter particularly for tools used for thought.

There is no correct answer and imposing one tool's limitations on everything is not useful. Particularly not "just because Dijkstra said so".

Sometimes 0-indexing works well. Sometimes it sucks. Sometimes 1-indexing works well, sometimes it sucks.

I did in fact give you a reference. The fact that it's irrelevant to you and dismissed as merely "aesthetics" and that you think a computer scientist is the domain expert of all science tells me everything that needs to be said--you never deal with questions of correctness beyond mere transcription.

What I said is that moving from Octave/Matlab to python's zero indexing sometimes does truly suck. And I mean it. That doesn't mean 1-indexing never sucks or that I am unaware of Dijkstra's opinions. Having to arange(1, N+1) or 1:N+1 or whatever everywhere isn't eliminating off-by-one errors.

Matlab and Octave and FORTRAN etc are domain languages designed for linear algebra. They are of course going to be more convenient for their domains.


In language design, a choice is necessary. You can support all possible indexing strategies, but that's problematic for a whole different set of reasons. So you need to choose, index from 1 or from 0. I've never once thought my algorithmic implementation is mucky because of the index from zero, inclusive lower bound, exclusive upper bound policy languages. I used to regularly lament how ugly the MATLAB code was for that reason. Unless MATLAB had bolted on another language feature to solve that problem, I'll let my extensive (albeit 15 years old) use of MATLAB in signal processing algorithm implementation be my aesthetic guide.


I'm not here to defend Matlab as some aesthetic language (in fact why on earth they haven't poached more of Octave's syntax improvements remains an eternal mystery) so if you just want to bitch about Matlab go right ahead ;)


The scars run deep.


That's not a scientific paper.

I find it easier not to make mistakes when I index from 1.


It's a numbered manuscript by Edsger Wybe Dijkstra, one of the most influential members of computing science’s founding generation [1].

The issue stems from the word "index" in the sense of "to count": that the first cell is numbered 1 makes sense to some.

What is actually happening is measurement, or the offset from the origin of memory.

The first inch starts at zero on the tape measure and ends at 1, that first inch has an offset of zero.

For all manner of reasons to do with measuring memory as one might measure wood a great many people think in terms of memory offsets.

[1] https://www.cs.utexas.edu/users/EWD/
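A tiny Python sketch of the offset view, using the stdlib `array` module (the values are illustrative; `itemsize` is platform-dependent):

```python
from array import array

a = array('i', [10, 20, 30, 40])   # a contiguous buffer of C ints

# Element i lives at byte offset i * a.itemsize from the start of the
# buffer: the first element sits at offset 0, just as the first
# centimetre of a ruler starts at the 0 mark and ends at 1.
offsets = [i * a.itemsize for i in range(len(a))]
# offsets[0] == 0: the "first" element is zero units from the origin.
```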


[redacted]


Maybe work on your communication skills.


The 1-based indexing in Matlab does truly suck, all the time.


The 0 indexing is something one can get used to. On the other hand the fact that range(k,n) goes from k,...,n-1 is the most annoying part.


That's so that range(10) has 10 items in it.


Yes. But when I invoke range(k,n) or list[k:n] I intend to do operations on some specific items, not repeat some operations 10 times.

So every time I see `for i in range(10)`, I have to do the mental computation: I am repeating this operation on numbers from 0,...,9.

I would rather write `for i in range(9)` and know that I am operating on 0,..,9.


For anyone that spends significant time using these languages, those mental mappings are as natural as the one you seem to think is obvious - then the additional advantages of the approach become apparent.

Exclusive upper bounds mean slices work neatly: a[0:10], a[10:20], a[20:30] etc. That's very neatly represented in a loop from zero in steps of ten. That pattern turns up all over the place when accessing arrays.

I've spent enough time in MATLAB to have been scarred by that kind of array slicing and off-by-one errors.


This has the nice advantage that range(k,n) + range(n,m) == range(k,m).
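That composition property, and the clean slicing mentioned above, are easy to check directly; a quick Python sketch:

```python
# Half-open ranges compose without overlap or gaps:
k, n, m = 0, 10, 25
assert list(range(k, n)) + list(range(n, m)) == list(range(k, m))

# ...and len(range(k, n)) is simply n - k, with no +1 correction:
assert len(range(3, 10)) == 7

# Adjacent exclusive-upper-bound slices partition an array cleanly:
a = list(range(30))
assert a[0:10] + a[10:20] + a[20:30] == a
```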


At my former engineering university, Matlab is popular. Students and staff get access to a network license, and courses (non-CS) are built around it. It is what older professors are used to, and I don't see them shifting to a new language given how busy university professors are, and that Matlab just works. So outside the university, Octave is the path of least resistance for those not planning to purchase Matlab or get it by "other means".


> But I'm curious where octave is popular. Does anyone pick it over julia or python when starting a new project/research?

In many non-CS engineering disciplines Octave (and Matlab) is quite common for doing everyday numerical calculations. Python and Julia do not compete with the simplicity of using GNU Octave.


>> Python and Julia do not compete with the simplicity of using GNU Octave.

Python I understand. The numpy notation sucks big time for translating straightforward notation of engineering disciplines and is a huge step backwards in readability; but Julia? That's a surprise. In fact the opening code in the Octave page can be copied almost exactly into Julia and it works.


The big simplicity is that you install octave and you have a gui development environment ready to go. Julia and vscode are great, but setting up a project with all dependencies is a little more work for university students with no programming experience.


I actively despise how MATLAB code looks, despite spending years using it. I spent 6 months translating a library to python and it was so difficult to understand what it was trying to do because of how it was represented. I find numpy notation much easier to grok. For sure, you can do interesting indexing that makes it hard to understand, but that isn't even possible in MATLAB.


I guess Mathworks is pushing very very hard to get Matlab into universities at the teaching level, with campus licenses, which honestly for students are great (until they stop being students and realize they have been sucked into an abyss of cost).

It is really costly for a dept. to switch to Julia, say, as you would need all your colleagues (I teach applied maths and use matlab/octave) to make the switch lest you annoy the students (because some learn one thing, others another).


It's also difficult to learn the new paradigm of a language, especially since MATLAB is so tightly coupled with the IDE and every other programming language is usually run on the command line. You can use an IDE for those languages, but it usually requires some setup and sometimes that breaks.

I recently found Spyder IDE for Python which feels a lot like MATLAB. Despite my hate for MATLAB, I did at times appreciate the layout of the IDE and the way variables are still available after running a script. Spyder fortunately has a similar interface, except even better since you can restart the interpreter without restarting the IDE. I highly recommend it as a MATLAB replacement!


Autodesk did the same with AutoCAD with student licenses, and they unofficially allowed the pirating and use of AutoCAD knowing it would then win market share when people went to work for a company that needed to secure licenses.


I agree with you, but times are changing. In my experience quite a few departments are moving to python now. This is largely driven by new students who choose or push for python based subjects, because of the much broader applicability. Typically that's a slowish process, starting at the first year level and transitioning to later levels. It's not like it hasn't been done before, when I was a student we would be taught in c or java.


Indeed, despite the tale in university that Matlab is what "the industry uses", I never found an established business willing to pay the license fee. The only shop using Matlab I encountered was venture-capital funded.

But truth be said, nothing so far quite managed to replicate Matlab's syntax succinctness and ease, even if it's an (unstated?) design goal of Julia.


I use it when a scientist gives me a model they wrote in Matlab and I need to port it somewhere else to integrate with an actual system. Octave is great for testing to make sure the port matches, etc. I also use Octave to learn how Matlab algos work, because Octave is open source and reimplements most of Matlab's standard library.


I'm an engineer who does a lot of "applied research" projects. I've used (still do some) octave quite a bit.

There's nothing like octave (or MATLAB) for scripting to "get shit done." There is a plethora of packages and tools to do whatever you need.

Need to import excel data and graph it? Octave is great. Need to connect to an sql database, pull data and export it? Octave. Linear algebra. Nuff said.

It's a great tool for engineers / scientists who don't want to "program" (although you can!) and want readable syntax to just git r dun. Also, there's a shit ton of m code floating out there on the internet and in engineering schools / university research groups. Octave isn't 100% m-compatible, but if the code doesn't use special toolboxes & functions you can get it to run in octave with a little debugging.

I love octave. It's gotten me out of a lot of binds.


Its use is mostly limited to STEM undergrads that for some reason have trouble installing/running MATLAB.

Especially in Engineering (Mechanical/Electrical/Civil) Octave is not a substitute to MATLAB for the simple fact the former does not feature the various toolboxes useful to practicing engineers.


> for some reason have trouble installing/running MATLAB

The reason is perhaps the biggest argument against its use in an academic setting, so I'm quite happy that they are the biggest customer.


MATLAB isn’t difficult to install or run, you just have a license or you don’t.


You'd be surprised at the computer illiteracy of some of the non-CS STEM majors.


I found it really useful when I got started in numerical computing and machine learning. Now I use the Jax/Numpy stack and Pytorch, but octave still has a special place in my heart due to Andrew Ng's canonical intro to ML Coursera course.


I used it in grad school a lot, especially when I was running/editing someone else's Matlab code. But if I were to build something on my own, Python would be the way to go.


I write all my code in C++, but when debugging, I format my printed data so that I can examine it in octave.


I use Octave when I teach linear algebra, but it is missing some things even for that, e.g., lasso was missing the last time I tried it. So back to Matlab it was. When I have time (not in the middle of an academic year) I might try to contribute some code.


I used it a bunch when studying physics at uni, it was perfect for making plots for lab exercises...


"Perfect" to describe a plotting syntax inherited from Matlab (and unfortunately also "passed-down" to matplotlib) is stretching it...


I'm also very curious to hear from expert lispers. I've tried to find the sweet spot where lisp would fit better than what I already know: shell for glue and file ops, R for data munging and vis, python to not reinvent things, perl/core-utils for one-liners. But before I can find the niche, I get turned off by the amount of ceremony -- or maybe just how different the state and edit/evaluate loop is.

I'm holding onto some things that make common lisp look exciting and useful (an APL DSL[0], static typing[1], speed [2,3,4]) and really want to get familiar with structural editing [5]

[0] https://github.com/phantomics/april [1] https://github.com/coalton-lang/coalton/ [2] https://renato.athaydes.com/posts/revisiting-prechelt-paper-... [3] https://github.com/fukamachi/woo/blob/master/benchmark.md [4] https://tapoueh.org/blog/2014/05/why-is-pgloader-so-much-fas... [5] https://github.com/drym-org/symex.el

