lewantmontreal's comments

Set ‘media.default_volume’ to 0.3 or so in about:config.

On Firefox mobile, where Mozilla in their endless wisdom disabled about:config, the same setting can be reached via the URL chrome://geckoview/content/config.xhtml.

Don’t fall for this. It’s too good, and I can’t get out now since nothing other than VS Code supports it.

Dependencies, env vars, dev databases, ports: everything is written up in code and ready to use once you open the workspace, all without messing up your host PC. It also works over SSH.
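
For anyone who hasn't seen one, a minimal .devcontainer/devcontainer.json looks roughly like this (the image name, ports, env var, and extension below are just illustrative values, not anything from a real project):

  {
    "name": "my-api",
    "image": "mcr.microsoft.com/devcontainers/rust:1",
    "forwardPorts": [8080, 5432],
    "containerEnv": {
      "DATABASE_URL": "postgres://dev:dev@localhost:5432/dev"
    },
    "postCreateCommand": "cargo fetch",
    "customizations": {
      "vscode": {
        "extensions": ["rust-lang.rust-analyzer"]
      }
    }
  }

Open the folder in VS Code and it offers to reopen it inside that container.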


JetBrains has had an official DevContainer extension for a while now [0], though I can't vouch for how well it works.

I've seen devcontainers used in various projects, and how they further widen the divide between dev and ops. Your team should standardize on a single OS and software stack; life will be easier, and developers will also learn something about ops and the systems they deploy to. I know it's an unpopular opinion, given the pervasiveness of Kubernetes and of backend systems developed on Macs.

[0] https://plugins.jetbrains.com/plugin/21962-dev-containers


JetBrains' implementation is far worse, though I hope Fleet will be their saving grace.


I tried the JetBrains version first. Sadly, it doesn't really work well.


You can SSH in and use vim or emacs the way god intended.

You can also connect via JetBrains Gateway.


> nothing else than vscode supports it

Isn’t it just like remote development? I’d expect you to be able to e.g. run Vim inside the container, use Emacs TRAMP to connect to it, or `rmate` to edit the files inside in e.g. Sublime.


IntelliJ supports it (a bit) via remote development. It starts up a container and runs the IDE backend inside of it.

It's not ergonomic at all though, because now you need to have two IDEs running: the one that manages the container and the one inside the container.


If you're using vim, you don't really need any of this; kubectl exec is enough. Too bad I can never seem to get good enough at vim for this workflow (even though I really want to).


Huh? Are you doing a bit, or did you just not look into it at all?

Even Emacs supports it: https://happihacking.com/blog/posts/2023/dev-containers-emac...


Thanks for the link; this definitely didn't seem to be supported a few years ago when I last checked.


Some iOS 15 phones like the 5S/SE have no comparable newer models, which makes upgrading difficult. Oh dear, I suppose not browsing the web is another option.


What’s missing from the more recent SE models?


A 4-inch screen that allows the phone to be used one-handed with ease, reaching anywhere on the screen; I’m using one to write this comment right now. Its battery longevity is also impressive considering its size. The first-gen SE is the best phone Apple ever made. The newer SEs are terrible.


As someone who went from the 5S to the 2016 SE to the 2020 SE, which is my current phone, I've been very satisfied with the 2020 SE.

With each phone I have taken steps to reduce battery usage: disabling background refresh and manually turning on low power mode after every single charge.


Not sure if you're doing it manually from the settings, but you can create a Shortcut that enables low power mode. I have a little icon that I press, but there might be an even better way (automatically trigger it on unplug? IDK).


Is your hand large enough to handle the phone one-handed and touch anywhere on the screen with ease without risking a drop? If not, do you use some kind of magnetic finger-ring case or similar?


I have my iPhones set to bring the top of the screen to the middle when I double tap on the Apple logo on the back.

It’s a feature called Reachability.

https://support.apple.com/guide/iphone/use-reachability-iph1...


I cannot reach the upper-left corner one-handed. I haven't tried finger rings or PopSockets yet.


Small physical size, also a headphone jack.

The more recent SE models are iPhone 8 bodies with upgraded internals (being an iPhone 11 and 13 inside, respectively), and Apple no longer offers a Mini version of their newer phones (not that the A16 and A17 are significant improvements on the A15, but still).

And the Mini is still larger than the first SE by about a half inch in either direction; the 8 (and the newer phones) add another half inch on top of that.

Of course, the problem with using a first-gen SE today is that, because information density on mobile is abysmal, the added vertical space has been very welcome to app developers (who also might not test on the smallest screens). So while the smaller phones were ergonomically far superior to the larger ones, and this is to a point true for the Minis as well, that's since been "balanced out" by inherently worse UI/UX on those phones.


Eh, iPhone SE 2nd generation/3rd generation?


I’ve always wondered why the backtrace just kind of stops partway through, at the point where you finally handle the error, instead of reaching all the way up to the `?` where the error happened. I’ve been using anyhow’s `with_context` to add helpful notes to each use of `?` to get around it.

I’ll have to check if this macro is an easier solution!
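
For anyone following along, the `with_context` pattern described above looks roughly like this (the file name and function here are made up for illustration):

  use anyhow::{Context, Result};

  fn load_config(path: &str) -> Result<String> {
      // The closure attaches a note at this particular `?`, so the final error
      // report points back to this call site even without a useful backtrace.
      let raw = std::fs::read_to_string(path)
          .with_context(|| format!("failed to read config file {path}"))?;
      Ok(raw)
  }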


Author of `wherr` here. I can totally relate to the situation you describe. That's exactly what happened to me, and why I created wherr. At first, I couldn't understand why I couldn't find the `?` line in the backtrace.

`anyhow`'s `with_context` is great, but it seems to require you to modify each line of code where errors are created in order to find the error.

With `wherr`, you only need to add one line of code, `#[wherr]`, per function.
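
For example, usage looks something like this; note that the exact return and error type requirements are my assumption in this sketch, so check the crate docs rather than copying it verbatim:

  use wherr::wherr;

  #[wherr]
  fn parse_port(s: &str) -> Result<u16, Box<dyn std::error::Error>> {
      // With the attribute in place, the error propagated by this `?` should
      // carry the file and line of this call site (per the crate's description).
      let port: u16 = s.trim().parse()?;
      Ok(port)
  }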


That doesn’t seem like a meaningful DX improvement and seems more stylistic, no?

Really we’d need rustc to support some kind of AOP technique that lets you inject code at every call site somehow. That way you write the extra context info in a single macro that gets applied transparently for you.

That being said, SNAFU’s approach seems better in the interim. Just add a Location (and presumably there’s a backtrace option too) to your error type and it’s captured automatically without any macros.


> and presumably there’s a backtrace option too

There is [0].

However, I'm not personally a fan of backtraces (or the lightweight location information, really). I'd much rather provide what I call a "semantic backtrace", which is where almost every Result-returning function is annotated by the programmer with a unique `.context(...)` call. This often results in a better error to the user ("couldn't connect to the server <because> couldn't get the URL <because> couldn't open the config file <because> couldn't open file ..."). The programmer can identify the location of the error due to the uniqueness of each context.

Backtraces are an easier way of getting similar information, but much less useful to the end user. I'd rather people prioritize the end user over the developer, so that's what I try to encourage in SNAFU.

That being said, plenty of people want location / backtrace information, so SNAFU tries to support those as well.

[0]: https://docs.rs/snafu/latest/snafu/struct.Backtrace.html
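
Roughly, the combination of a unique context plus an implicitly captured Location looks like this (the variant name, fields, and messages are invented for illustration; treat the details as approximate):

  use snafu::{prelude::*, Location};

  #[derive(Debug, Snafu)]
  enum Error {
      #[snafu(display("couldn't open the config file {path} (at {location})"))]
      OpenConfig {
          source: std::io::Error,
          path: String,
          // Captured automatically at the .context() call site below; no
          // backtrace needed to know where the error was wrapped.
          #[snafu(implicit)]
          location: Location,
      },
  }

  fn load_config(path: &str) -> Result<String, Error> {
      // The unique context acts as the "semantic backtrace" entry for this step.
      std::fs::read_to_string(path).context(OpenConfigSnafu { path })
  }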


Semantic backtraces should be visible to the system administrator. In a DevOps shop, that's your developer, not your end user. That mentality works for application software the end user is running on their own hardware. Showing backtraces to users outside of your system administrators is generally not desirable: you don't want to leak implementation details, because they may reveal proprietary intellectual property or internal details that can be used to compromise your security. User-facing errors, when you're running software in a managed environment, are typically handled as explicit carveouts in terms of what information they will reveal.

Also, most users are non-technical, so user-visible semantic backtraces only end up helping a small, more technical subset of your customer base (even if your customer base is developers, in my experience). You do want a way to link a semantic backtrace to the specific error instance a customer observed, though.


Yes, this is how I prefer to do it as well. It takes some work to consistently remember to add context information, but you can include things like the filename for those NotFound errors, and so on.


Correction: it seems the Error trait is gaining native support for reporting backtraces automatically [1]. Extending that to add line number support seems reasonable. That way you'd get an upgrade across all codebases without anyone needing to make any changes.

[1] https://github.com/rust-lang/rust/issues/53487


I like the idea of not even having to write a line of code per function.

Maybe `cargo fmt` could be extended to add such annotations automatically? That way, it wouldn't be "magic", since the macros would be spelled out in the code, but the manual work of adding them would be eliminated.


Silently keeping a stack trace behind `?` would get hideously complex in no-std environments.

That said, this crate is an answer to my prayers if it works as advertised.


Any way to get Firefox ESR without untrusted third-party flatpaks? On normal Firefox you can’t run your own extensions…


It's not third-party; it's a flatpak built by Mozilla and pushed by them to Flathub.


It's actually easier than what the document shows you: once you get to the Debian shell, do apt-get install firefox-esr instead of installing the flatpak. I assume they don't tell you that because then you end up with the Debian build of Firefox instead of Mozilla's.


The Debian package of Firefox is maintained by a Mozilla employee :)


I am said maintainer. Note that's not part of my work at Mozilla, and that doesn't make the Debian package endorsed by Mozilla as the way to get Firefox on Debian.


Thank you for maintaining the deb!


Sorry for assuming both of those; it wasn't clear that they aren't true.



Buying a movie from Google for that certainty sounds pretty good, huh. I wonder if the purchase might also dissuade any accidental deletions of a Google account.


> Buying a movie from Google for that certainty sounds pretty good, huh. I wonder if the purchase might also dissuade any accidental deletions of a Google account.

I feel like I must mention here: IANAL, I am not a lawyer. That being said, Google accounts are complex. Here is what it says near the top of the page at the same link I posted:

> Google also reserves the right to delete data in a product if you are inactive in that product for at least two years. This is determined based on each product's inactivity policies.

My personal opinion is that Google isn't committing to deleting your Google mail data at exactly two years whether or not you bought a YouTube movie. However, Google is also not committing to preserving, say, your location history in perpetuity just because you bought a YouTube movie.

Once again, this is just what I read. I am not a lawyer, and definitely not a Google lawyer.


We weren’t supposed to talk about Fight Club.


I mean, Nitter has pages indexed on Google, so it's not exactly a secret.


With higher latency the experience suffers. As an example, LiveView page navigations work over a WebSocket, which is not cacheable, so navigating back/forward always needs to make a request. You really need some edge setup like Fly, or to only serve a geographically local audience.


Unfortunately, no matter how local you push the CDN, somebody with a crappy connection to their ISP, a crappy router in their home, or a tenuous wifi link in their back room or at a coffee shop will drown in the input lag all the same. And inexperienced users (those to whom this is more likely to happen) will be the ones to fall into unstable interaction orbits with the UI, such as spam-clicking a button that doesn't appear to have acknowledged their press.

The best solution I'm aware of for this one major drawback would be for LiveView to be able to generate some JS handling in its front-end widgets, more like Lamdera.

That, and/or run the BEAM on the user's machine via WASM so that they don't have to dirty their hands with even the slightest amount of JavaScript lol! xD


Wasn’t Manifest V3 supposed to prevent dynamically loaded code? As the article says, these extensions are featured, but (I think) the latest update to the V3 timeline says: “In January 2023, use of Manifest V3 will become a prerequisite for the Featured badge in the Chrome Web Store.”

https://chromeos.dev/en/posts/manifest-v-3-migration-timelin...


No. Manifest V3's main role was to cripple ad blockers... hence you're now seeing YouTube experiment with "anti-ad-blocker" popups warning users they won't be able to see the site.

They know they've got people by the balls after they rolled out V3 earlier this year.


Note: I am the author of this article.

Migration to Manifest V3 has been postponed; all these extensions (like most extensions in the Chrome Web Store) are still using Manifest V2.

Note that the changes in Manifest V3 are meant to prevent security vulnerabilities. Outright malicious extensions will always find a way.


As I said, outright malicious extensions will always find a way. I've now discovered a newer variant of these extensions, this time using Manifest V3. And they still run arbitrary code: https://palant.info/2023/06/02/how-malicious-extensions-hide...


Google announced on December 9 that this timeline was paused: https://groups.google.com/a/chromium.org/g/chromium-extensio...


Yes, dynamic code is all outlawed.

Disclaimer: I filed this issue: https://github.com/w3c/webextensions/issues/139


Thing is, you can't load JavaScript code... but you can easily write a mini virtual machine to run any code you download from the web. And due to JavaScript's introspection abilities, that VM can (if the developer wishes) do anything.

The simplest JavaScript bytecode interpreter is probably only a few hundred bytes, which is easy to hide in a big extension.
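
To make that concrete, here's a sketch of how little a stack-machine interpreter takes. It's in Rust rather than JavaScript, purely to illustrate the size of the technique, not how an actual extension would ship it:

  // A toy bytecode interpreter: a handful of opcodes and a value stack.
  enum Op { Push(i64), Add, Mul, Print }

  fn run(program: &[Op]) {
      let mut stack: Vec<i64> = Vec::new();
      for op in program {
          match op {
              Op::Push(n) => stack.push(*n),
              Op::Add => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a + b); }
              Op::Mul => { let b = stack.pop().unwrap(); let a = stack.pop().unwrap(); stack.push(a * b); }
              Op::Print => println!("{}", stack.last().unwrap()),
          }
      }
  }

  fn main() {
      // In the malicious-extension scenario, the program would be decoded from
      // data fetched at runtime rather than hard-coded like this.
      run(&[Op::Push(6), Op::Push(7), Op::Mul, Op::Print]);
  }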


Those sidesteppings are outlawed by the Chrome Web Store policies.

There are JS-in-JS interpreters out there. They're just not allowed. https://github.com/jterrace/js.js/ https://github.com/marten-de-vries/evaljs


From what I understand, in order to use CJK languages on Linux you currently have to install not a keyboard layout but a separate input device, which makes getting started quite difficult. I wonder if the Pop!_OS devs have anything up their sleeve for that.


Not sure what the experience is like when just using the GUI on a normal distro. But on my Arch setup I just installed fcitx5 and fcitx5-mozc, set up the environment variables to actually use fcitx, made it autostart, and I had Japanese input working just fine.

I am guessing that most mainstream desktop distros like Pop!_OS already have something like fcitx set up, in which case you'd just have to install the language plugin; I wouldn't be surprised if it comes preinstalled on some of them.
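
For reference, the environment variables in question are usually these three (where to set them, e.g. /etc/environment or your session startup files, varies by distro and display server):

  GTK_IM_MODULE=fcitx
  QT_IM_MODULE=fcitx
  XMODIFIERS=@im=fcitx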


Yes, sounds right. When trying out Fedora I installed the fcitx5 tools, added some env vars and virtual keyboard settings and rebooted. Then I had CJK working on a virtual keyboard device.

It’s just very different from macOS and Windows, where you just add a keyboard layout and you’re done; that’s why I was curious. Admittedly, I love the amount of customisation options in fcitx.


What about input method editors? Don't you need to install those on Windows and macOS as well (unless you are happy with the one provided by the OS)? Or are third-party input method editors less popular now? (I don't use any of the languages requiring them, so I don't know how widely they are used at the moment.)


Or am I not interpreting your comment correctly here?


IDK about Chinese or Japanese, but at least for Korean, there is no need for a separate input device at all.


Why would you need a separate input device?

