Yes, this is the method I have always been using. But I have had many cases where it seems to reset or bug out in some other way, and the Logitech MX Master 3S still does not work well. There's definitely something weird about the movement; it doesn't feel linear and accurate even after running this command.
The mouse feels perfect on Windows/Linux, but it's off on my MacBook. It sometimes also loses Bluetooth connectivity, where it starts to skip and/or stops working completely, and I have to unplug the Bluetooth USB receiver and plug it back in for it to work normally again.
Maybe I should try another mouse, if you have any recommendations for a similar ergonomic wireless one. I definitely don't want the Magic Mouse.
Ergonomically I think the Logitech one is perfect for me.
I do own a device I'm very fond of, Ploopy Mouse, but it's not likely to match your criteria. It's quite big, 3D-printed, open source, wired, and runs QMK. Loads of character :)
Well, the sales video on the page starts immediately with a person ditching the MX Master mouse, so I do feel targeted. I'll continue with the video.
Yes, there are some specific routes set in the gym intended for the ACG group where legs/feet aren't necessary. They'll also set similar routes near each other, and another volunteer from the group will climb alongside and help the other climber as needed. But a lot of routes are accidentally set in a way that works for wheelchair users as well, especially if they are allowed to cheat a little bit and grab a nearby hold from another route when needed.
Related to the article, people with impaired vision may have a volunteer use that person's cane to point out where the next hold is, or they just need to be led to the wall to start and they can handle it from there.
For people with more significant mobility issues, they also have an ascender seat/chair (not sure what the proper name is). You sit down, get strapped in, and repeatedly pull a handle down from overhead to "climb". It's not climbing a wall with holds, but you get the same workout and still end up 60 ft in the air.
They can't do all the same climbing routes, but keep in mind that having your strength concentrated in 2 or 3 limbs instead of 4 doesn't necessarily mean you have a worse power-to-weight ratio. An athletic wheelchair user will have incredibly strong arms IME.
My solution is to have this snippet in my vimrc. Don't ask me why this works; it's been years, and I've had no issues with it.
" System-agnostic setting making the unnamed clipboard register act like
" clipboard in any other editor. Copy with y commands, and paste with p or P.
if has('unnamedplus')
set clipboard=unnamedplus,unnamed
else
set clipboard+=unnamed
endif
Besides this, there is also the issue of setting 'paste' when pasting in insert mode.
Research is a bit too strong a word for it but I'll summarize my conclusions.
Most people I talked with seemed used to and very happy with the Python model: modules are distinct namespaces for symbols and their referenced objects. Pretty much everyone I talked to had no difficulty understanding modules; it's always the package management that makes things complex, no matter which language it is.
Something I spent quite a bit of time thinking about was whether it was worth reifying that model into the language as first-class objects. In other words, should import be a special keyword that makes the interpreter magically bind symbols, or a function which returns a regular everyday normal object? At the end of the day, Python's modules are just dictionaries, just like JavaScript modules. Should I make that fact apparent or hide it behind language keywords? Interestingly, the two languages moved in opposite directions: Python went from special import syntax to allowing you to access modules as dictionaries, while JavaScript went from a require function that returns an object to an import statement. I ultimately chose a somewhat weird mix of both, powered by lisp's flexibility.
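For example, the dictionary nature is directly visible from Python itself (a minimal illustration, nothing specific to my language):

import math
import importlib

# A Python module is an ordinary object whose namespace is a plain dict.
print(type(math))                       # <class 'module'>
print(math.__dict__["pi"] is math.pi)   # True: two ways to reach the same binding

# Programmatic import returns the module object, much like a require() call.
json_mod = importlib.import_module("json")
print(json_mod.dumps({"a": 1}))         # {"a": 1}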
I also tried to figure out how compiled languages approached modules, so I dug up literature on Modula, Modula-2, and Oberon and tried to figure out how they represented modules. This ended up having a significant influence on my design in the form of isolated modules with export control. Each module contains its own table of symbols and their references. Importing is just setting a local symbol to the value of the other module's symbol, and only symbols in the exports list can be imported. I also liked the qualified and unqualified names, so I added the option to prefix the module name to the local symbol.
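Roughly, that model looks something like this (a Python sketch purely for illustration; none of these names come from my actual implementation):

class Module:
    def __init__(self, name, symbols, exports):
        self.name = name
        self.symbols = dict(symbols)   # symbol -> referenced object
        self.exports = set(exports)    # only these may be imported elsewhere

def import_from(target, source, names, qualified=False):
    for name in names:
        if name not in source.exports:
            raise ImportError(f"{source.name} does not export {name}")
        # Importing is just binding a local symbol to the same object,
        # optionally prefixed with the source module's name.
        local = f"{source.name}.{name}" if qualified else name
        target.symbols[local] = source.symbols[name]

mathlib = Module("mathlib", {"add": lambda a, b: a + b, "_hidden": 42}, ["add"])
main = Module("main", {}, [])
import_from(main, mathlib, ["add"], qualified=True)
print(main.symbols["mathlib.add"](2, 3))   # 5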
I also thought a lot about how to map modules to the file system. The idea of program folders from Windows has been an inspiration for a long time now: if the module itself is reachable, then all of its submodules are also reachable by the loader. I also think it's important that no file escapes the package directory. Python and Ruby frequently have thing.{rb,py} scripts and thing/ directories side by side; I sought to eliminate that for the module's root directory only, which results in a main file like thing/thing.{rb,py}.
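A loader following that rule would resolve names roughly like this (Python sketch; the paths and helper name are made up for illustration):

from pathlib import Path

# Illustrative resolver: a module named "thing" lives entirely inside thing/,
# with thing/thing.py as its root file and submodules resolved below it.
def resolve(load_path, dotted_name):
    parts = dotted_name.split(".")
    root = Path(load_path) / parts[0]
    if len(parts) == 1:
        return root / (parts[0] + ".py")                   # thing -> thing/thing.py
    return root.joinpath(*parts[1:]).with_suffix(".py")    # thing.sub -> thing/sub.py

print(resolve("/libs", "thing"))       # /libs/thing/thing.py
print(resolve("/libs", "thing.sub"))   # /libs/thing/sub.py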
To enable module-oriented development, I've found the most important feature of a modules and packaging system is editable libraries, like pip's editable installs and npm link. When I develop a project, I often end up with several supporting libraries. Languages should support linking these local versions into the main project so they can be developed simultaneously.
Thanks for the response. Some of these thoughts floated around in my head as well, though less ordered and less well articulated.
One thought regarding the quantification element. I’ve grown to appreciate the uniformity of Go’s solution, where you always import an entire namespace, optionally aliasing it to avoid conflicts.
This small restriction makes some naming decisions simpler (for example sticking to config.Parse, and not the stuttering config.ParseConfig). Additionally it sprinkles the namespaces over the code so you get at least _some_ feel for them.
On another note, on some occasions I have thought to myself that it's easy for the import section to get too little scrutiny in reviews, and wondered if there's something that could make us ask "should x depend on y?" more often.
> I’ve grown to appreciate the uniformity of Go’s solution, where you always import an entire namespace, optionally aliasing it to avoid conflicts.
That's a very good solution in general which makes everything uniform and consistent. It's only due to my personal tastes that I didn't implement it that way.
I'm obsessed with symbol management. I'm so obsessed with this I wrote my language in freestanding C just so there would be no libc and compiler cruft in the resulting ELF binary. I'd rather deal with complexity than see weird doubly underscored stuff in readelf output.
Names are everything in computer science. I have some kind of psychological need to have clean names. For that I need clean namespaces that I can shape to my will. So I absolutely wanted the ability to import only the symbols I needed in order to minimize the pollution of the namespaces. I support just importing everything as a convenience but I personally never use that feature. I also made sure I had the ability to rename imported symbols to anything I wanted just in case other programmers aren't as obsessed with names as I am.
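For comparison, the Python spellings of those two features would be:

# Selective import with a rename, and a whole namespace under an alias.
from json import dumps as to_json
import xml.etree.ElementTree as ET

print(to_json({"ok": True}))      # {"ok": true}
print(ET.Element("root").tag)     # root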
I went so far with this I implemented basic control flow as a library of lisp macros. There are no reserved keywords or special cases, they're just normal functions that get imported like all the others. Thus they can be renamed or avoided entirely. In fact I made it so only two symbols are present in every namespace: import and export. And those can be overridden too after the programmer is done with them.
I probably have more than a few screws loose or something. Go's approach is totally reasonable. Instead of importing N symbols from a module, it imports 1 symbol and nests all N symbols under it, and if there's a clash you only need to rename the module prefix. It's nice and creates single points of truth.
... Now that I think about it, the only reason I didn't implement it the Go way is I didn't want to add special syntax for nested symbols to my language. In other words, in my language "config.Parse" is a single symbol instead of a "config" + "Parse" pair. I feel like it just wouldn't be lisp anymore if I added syntax to decompose the former into the latter.
> sticking to config.Parse, and not the stuttering config.ParseConfig
Yes. I find the stuttering repetition in your latter example to be profoundly irritating and a symptom of a bad modules system. In my language I tried to prevent that by making it easy to add or remove module prefixes to the imported symbols. I also made it easy to rename symbols for good measure just in case people did it anyway.
Have you looked into the "first class modules" system used by OCaml? AFAIK it's the only language of its maturity to use something like it. It's very powerful without adding mandatory complexity. But because basically no one comes to it with prior experience of a similar approach, it's hard to gauge how powerful it is, or when the added complexity of using it fully is actually worth it.
I have! I didn't reflect much on it though because I'm focused on creating a dynamic language. Perhaps I did not understand it?
Another interesting concept I ran into while researching modules is parameterized modules. Essentially, modules that take their dependencies as arguments. The modular equivalent to dependency injection I suppose. Instead of a module importing by symbol a specific library as a dependency and some package manager resolving it to actual files in the load path later on, the programmer explicitly loads the library and passes it to the module as an argument.
Instead of the symbolic imports we're all used to:
(import lib); lib imports lib2 internally
Module importing becomes analogous to function calls which construct an instance of the module given its dependencies:
(import (lib2))
(import (lib lib2))
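In a mainstream language the same idea reads like constructor-style dependency injection (a Python sketch with made-up names):

# "lib" never names lib2 itself; the caller loads lib2 and passes it in,
# so any compatible implementation can be substituted.
def make_lib2():
    return {"double": lambda x: x * 2}

def make_lib(lib2):                 # dependencies arrive as explicit arguments
    return {"quadruple": lambda x: lib2["double"](lib2["double"](x))}

lib2 = make_lib2()                  # roughly (import (lib2))
lib = make_lib(lib2)                # roughly (import (lib lib2))
print(lib["quadruple"](3))          # 12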
This is really elegant and more or less reifies package management into the language. However, it presents serious ergonomics issues because it forces the programmer to deal with all these package management and library loading details. The truth is we want to sweep all that ugly stuff under the rug, not deal with it every single time we import a module.
The main benefit, the loose coupling that stems from the ability to substitute dependencies without having to change the importing module, can be accomplished in a declarative manner via package managers. Arch Linux packages, for example, may have a "provides" variable which allows multiple packages to implement an interface of sorts and be used interchangeably to satisfy dependencies. So I think parameterized modules impose significant costs for little benefit.
Here it is. There is no visualization of the stack, which apparently Stackline in the other comment supports, but I don't tend to need that. Just being able to move between the windows is good enough for me.
I moved from Linux to an M1 MacBook recently. I know my greps and vims, but I was tired of audio glitches during high CPU usage, the system not waking up from sleep, total OS freezes, super loud fans, and so on.
Now I get none of that. I don't think I've ever heard the fans. Audio just works, everything is super snappy. It always wakes up. I'm no longer afraid of Bluetooth.
And on top of that, setting up my $DAYJOB VPN took three minutes and it just works, whereas on Linux I had constant problems with DNS breaking, and setting it up was always an hour of work, praying I got the config files right this time.
It really seems to be "unixy desktop with working sound", the best of both worlds.
Exactly my experience. After 15 years, I became an Apple fanboy in 15 days. I still hate losing my muscle memory for some bash shortcuts, but I'd say it was very much worth it.