Hacker News | jonpalmisc's comments

Settings > Notifications > Notification Content > Show: "Name Only" or "No Name or Content"

I've had this enabled to prevent sensitive messages from appearing in full whilst showing someone something on my phone, but I guess this is an added benefit as well.


Just to clarify, this is within the Signal app settings—not the OS (iOS or Android) system settings.

This is a critical distinction: merely changing the OS notification settings will only prevent notification content from being displayed on-screen.


Wait so if I do iOS setting notifications > never show previews it’s still caching them in the background? Unencrypted?

Yes. And technically, from a privacy perspective, it's even worse than that. What's additionally happening is they're still 'syncing' back to Apple servers via APNS (and to Alphabet servers via Firebase on Android)—even with notifications completely disabled, that's correct.

If the app generates them, the OS receives them. That's why the Signal app offers this setting.


>it's even worse than that. What's additionally happening is they're still 'syncing' back to Apple servers via APNS (and to Alphabet servers via Firebase on Android)—even with notifications completely disabled, that's correct.

Source? I don't think either OS implements notification syncing between devices; it's only one way, and as others have mentioned, the actual push notification doesn't contain any message content, only an instruction for Signal to fetch and decrypt the message.


> I don't think either OS implements notification syncing between devices

iOS does. This is how you can receive Signal notifications on your Apple Watch and other Apple devices that don’t have the app installed.


> I don't think either OS implements notification syncing between devices

Can't speak for iOS and no idea if this relates to the above functionality, but Pixel lets you deduplicate notifications across Pixel devices.


This sounds correct. When I implemented push notifications for an iPhone application, I remember needing to obtain and store a separate token for each device a user has, and to subscribe to a feed of revoked delivery tokens. Seemed like an interesting design intended to facilitate E2E encryption for push notifications.

I do wonder how notifications that are synced/mirrored to the Apple Watch and newer versions of Mac are handled.

Wait... why does Signal need to send notification content to Firebase to trigger a push notification on device? I would instead expect that Signal would send a push to my Android saying nothing more than "wake up, you've got a message in convo XYZ", then the app would take over and handle the rest of it locally.

I also didn't realize that Android stores message history even after I've replied or swiped them away. That's nuts - why!?


Signal does NOT send notification content through APNS/Firebase; their push notification is literally just a ping, as you expected.

Source: https://mastodon.world@Mer__edith/111563866152334347 (Meredith Whittaker is the current CEO of Signal)

I can't link you right now to the actual code on their repo but it is verifiable.


Btw I clicked your Mastodon link and it didn't work


If your app needs to send a notification while it's not currently a running process, it must go through Firebase on Google's side and APNS on Apple's side. There is no way for a non-running app to send a notification entirely locally; this is by design of both companies.

Signal developer here. Not entirely sure what you're saying. I'm only an Android guy, but FCM messages are certainly one trigger that can allow an app process to run, but it's not the only trigger. You can schedule system alarms, jobs, etc. And the notification does not need to be provided by the FCM message. In our case, the server just sends empty FCM messages to wake up the app, we fetch the messages ourselves from the server, decrypt them, and build the notification ourselves. No data, encrypted or otherwise, is ever put into the FCM payloads.
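For illustration, the empty-ping pattern the Signal developer describes can be sketched as a plain simulation. All names here (`fetch_pending`, `decrypt`, `on_push_received`) are hypothetical stand-ins, not Signal's actual code; a real app would use the FCM SDK and Signal-protocol decryption instead.

```python
# Illustrative simulation of the empty-push pattern described above.

SERVER_QUEUE = {"alice": [b"encrypted-blob-1", b"encrypted-blob-2"]}

def fetch_pending(user):
    # The app pulls ciphertexts from its own server, not from FCM.
    return SERVER_QUEUE.pop(user, [])

def decrypt(blob):
    # Stand-in for on-device Signal-protocol decryption.
    return blob.decode().replace("encrypted-blob", "message")

def on_push_received(user, payload):
    # The FCM payload is empty: only a wake-up ping, never message data.
    assert payload == {}
    plaintexts = [decrypt(b) for b in fetch_pending(user)]
    # The visible notification is assembled locally, after decryption.
    return [f"New message: {text}" for text in plaintexts]

notifications = on_push_received("alice", {})
```

The key property: nothing readable ever sits in the push payload, so neither Google nor Apple can log message content from it.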

Sure but it needs to go through Firebase regardless of the content of the notification message, I do not believe there is a way to use a third party notification service which does not depend on Firebase.

It doesn't. The API for displaying a notification is purely local.

Receiving a ping from Firebase Cloud Messaging triggers the app to do whatever it does in order to display its notification. In the case of Signal, that probably means something like fetching the user's latest messages from the server, then deciding what to show in the notification based on the user's settings, metadata, and message content.

Here's example code for using FCM to show a notification. In this case, the notification content also passes through FCM, but Signal does not do that. https://www.geeksforgeeks.org/android/how-to-push-notificati...


Sorry I should clarify, by "it" I meant any sort of ping must go through Firebase Cloud Messaging, not that the message content itself goes through Firebase.

Looks like there is a way to bypass Firebase by using something like UnifiedPush which runs a perpetual background process that acts similar to Google Play Services to pick up notifications from the server and calls the local notification API.


It's theoretically possible to just keep an app running in the background all the time and periodically poll a server.

That's unreliable though since some OEM Android builds will kill it for that even if the user disables battery optimizations. Those OEMs sort of have a point; if lots of apps did that it would drain the battery fast.
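A minimal sketch of that keep-alive polling approach, with hypothetical stand-ins (`check_server`, `show_notification`) for the server check and the notification API. Each round models one radio wake-up, which is exactly the battery cost the OEMs object to.

```python
# Toy model of the "app polls its own server in the background" alternative.
# Names are illustrative; a real app would hold a persistent TCP connection
# and sleep between rounds.

def poll_loop(check_server, show_notification, rounds=3):
    shown = []
    for _ in range(rounds):                       # each round = one wake-up
        shown.extend(show_notification(m) for m in check_server())
    return shown

# Usage with fake stand-ins for the server and the notification call:
batches = iter([["hi"], [], ["bye"]])
result = poll_loop(lambda: next(batches, []), lambda m: f"notify:{m}")
```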


Then that's basically what I said right? That there is in practice no way to opt out of using Firebase if you want consistent notifications.

When running Signal without Google Play Services, it reliably received push notifications with minimal battery drain.

Any application can send notifications without going through a server.

> this is by design of both companies.

I’ll note that whatever other reasons it’s also the only way to make this battery efficient. Having a bunch of different TCP connections signaling events at random times is not what you want.

Ideally the app is also responsible for rendering, rather than having to disclose the message, but that can be challenging to accomplish for all sorts of reasons.


> […] this is by design of both companies.

This is more of a fundamental technical limitation of operating systems and networks; I don't think it is possible to design distributed communication between arbitrary service provider infrastructure and end-user devices without an always-online intermediary reachable from anywhere (a bouncer, in IRC terms) that accepts messages for non-present consumers.
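The always-online intermediary idea can be sketched as a tiny store-and-forward queue (a toy model of the concept, not any real push service):

```python
from collections import defaultdict

class Bouncer:
    """Always-online intermediary: accepts messages for offline consumers
    and hands them over when the device reconnects (IRC-bouncer style)."""
    def __init__(self):
        self.mailbox = defaultdict(list)
        self.online = set()

    def deliver(self, user, msg):
        if user in self.online:
            return ("pushed", msg)        # device reachable: push immediately
        self.mailbox[user].append(msg)    # otherwise store for later
        return ("queued", msg)

    def connect(self, user):
        self.online.add(user)
        return self.mailbox.pop(user, []) # flush everything queued meanwhile

b = Bouncer()
b.deliver("bob", "ping 1")
b.deliver("bob", "ping 2")
backlog = b.connect("bob")
```

Whoever operates this box necessarily sees delivery timing and recipients even when payloads are opaque, which is why the OS vendors end up in the middle.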


Yes; however, the annoying part is that it is not customizable. You are forced to rely solely on the OS makers' implementations, which I guess should be expected in this day and age.

It sounds like you’re hinting at being unhappy with the lock-in forced by the ecosystem.

The flip side of the coin: any possible avenue to exfiltrate data and do (advertising) tracking by app developers will be used. The restrictions also protect my privacy.

And my phone battery.


Clearly they don't protect your privacy as evidenced by the post we're commenting on.

But there is a way to do this encrypted, so that when the notification is received on your iPhone, the process itself needs to decrypt it.

Except you need an entitlement for that, because it requires that your app has the ability to receive a notification without actually showing it (Apple checks this).

Your app gets woken up, decrypts the message, and then shows a local notification.
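As a toy model of that encrypted-push flow (XOR standing in for real authenticated encryption; every name here is hypothetical, not Apple's or Signal's API):

```python
# The server puts only ciphertext into the push payload; the woken app
# decrypts it on-device and then builds the visible notification locally.

KEY = b"\x42"  # toy shared key; a real app derives per-device keys

def xor(data: bytes) -> bytes:
    return bytes(b ^ KEY[0] for b in data)

def server_build_push(plaintext: str) -> dict:
    # APNS/FCM only ever see this opaque blob.
    return {"ciphertext": xor(plaintext.encode())}

def app_handle_push(payload: dict) -> str:
    # Runs on-device (e.g. in a notification extension): decrypt,
    # then post a local notification with the recovered text.
    return xor(payload["ciphertext"]).decode()

push = server_build_push("hello")
shown = app_handle_push(push)
```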


Android doesn't store message history unless you explicitly enable that feature and neither does Signal send message content to Firebase.

You're angry about a huge amount of outright misinformation here.


Sad to think there is a PostIt note somewhere in Virginia and written on it is a box labelled Signal, with an arrow pointing to a box labelled Apple servers, followed by a smirking smiley face pointing between the boxes with the message “encryption added and removed here”

Any idea if this works the same or differently for Hidden apps specifically?

Normally no notifications are shown for hidden apps, and even if you unhide the apps, prior notifications which were sent do not reappear IIRC. I'm curious if notifications like that are still hitting the phone and landing in the notifications database, or get silently dropped, or something else.


With notifications disabled APNS push notifications fail for the sending app backend. The device id is rendered invalid if push notifications are disabled at any point. Backends are supposed to handle this and quit sending messages.

Signal has this setting to tell the backend how much information to put into the push message. It can tell the backend to send a simple notification saying “new message” and not send information through APNS or enable it.

I am willing to bet Signal has a notification extension to handle edge cases where there is lag in settings to scrub the message metadata before it dings a screen alert.


Yes; since Apple doesn't control the content of the pushes, which is sent by application backends, that can only be controlled within each app.

Signal should switch the default to being less verbose.

No it shouldn't. That makes the UX much worse, just to guard against the 0.00001% case where the FBI seizes your iPhone.

They should also signal your counterparty's security posture.

Basically, give you a heads up that the other side has settings that make the system less secure.


I'd prefer the receiving end looks at sender's metadata on the message, and uses that to determine where the line is between recipient-convenience and betrayal.

I suppose you could do both, but "Hey I've got something extra important to send you, but it says you need to change your settings first, please hurry" seems worse than "sometimes I don't get full notifications on my watch, weird."


The default should be "No name or content".

Name only strikes me as a fairer compromise between security and usability.

I thought name-only was the default.

> I thought name-only was the default

At least for me, it was name and content.


I may be misremembering, or it may have changed; I've been using Signal from the early days.

Not really, that would discourage use by normies.

Users should switch to SimpleX.

When you put it up against each other it makes perfect sense, but I would never have thought about it in that way!

Thank you for adding this to the conversation.


Fwiw, in my Signal app on Android this setting is in

Settings > Notifications > Messages > Show


My Samsung also keeps a history of notification content. Under Settings->Notifications ->Advanced -> Notification History

However, if this is important to you then you want Signal to stop telling Android to make the notifications. If it doesn't exist nobody will accidentally make it available.

Deleting that history is good to know about after the fact, but preferably lets just not create the problem.


I need the notifications though.

But you can set them without content. That actually works with Signal because all it sends through Google Firebase is a notification to wake up the app. If you have content turned on, the app fills in the content of the notification locally. But you can turn that off.


I always say it: it's the defaults, stupid (paraphrasing).

The defaults have to be the most privacy-protective ones.

If you are supposedly a super-secure app, this should be the default.


Disable Apple Intelligence summaries for sensitive app notifications too.

Given the quality of the summaries, you might want to keep them just for plausible deniability </s>

I guess enabling Lockdown mode might avoid this particular issue too, together with a bunch of other stuff?

Why would lockdown mode prevent this? I have lockdown mode on but that doesn't automatically make my notifications private.

Lockdown mode would prevent access to the data in theory.

But most likely (pure speculation, mind you), this was a case of someone handing over the phone for review while cooperating.

It might have been that they deleted signal some time ago, or even deleted signal and then handed over the phone.

It's notable that the data wasn't recovered from Signal's storage (was the data securely erased, or was that kind of recovery not attempted?).


It's a mode of the phone that is supposed to prevent cyber attacks, more so than "normal mode" I suppose, since it's supposed to limit features in the name of security. This seems like a variant of such an attack, so it seems like it should protect against it.

There is a documented list of things that Lockdown Mode affects [1], and this is not one of the advertised ones. There are a bunch of other (undocumented) things it affects (some of which are bugs :/), but I don't believe it has any effect on notification storage.

[1] https://support.apple.com/en-us/105120


Mostly it seems the documentation is vague. Is there anything clearer than this?

> Web browsing: Certain complex web technologies are blocked, which might cause some websites to load more slowly or not operate correctly. In addition, web fonts might not be displayed, and images might be replaced with a missing image icon.


Maybe it should.

Originally enabled it just to avoid awkward moments

WhatsApp supports this too.

Settings > Notifications > Show preview


This seems to be the default for me, at least on Android.

Android also supports custom encrypted payloads so Signal doesn't have to give them to Google.

Going to take a guess the author is not a Spanish speaker :p

After Nvidia's cuLitho now we get Anos...

Ahem, well, that's embarrassing! :D

Ross Bamford doesn't sound Spanish to me

José is not an English name and here I am writing in English. People can learn other languages you know?

The licenses (from major foundries/vendors) are indeed usually quite restrictive; however, the hard part has always been enforcing them. It's not surprising to me that Google hasn't built any guardrails around this.

After all, gating by IP address? What happens if someone from the marketing team logs on from an airport? All of the slides revert to Arial?


The access would presumably need to be done through a VPN to have the fonts.


Ehh.. a lot of these docs go out to customers and end users. Playboy for instance sends out tons of their updates and plans to clients with their own custom fonts in it.


That's what PDFs are for --- the font files can be embedded in a fashion which precludes downloading/usage.

Various print shops have systems in place for previewing/approving print jobs as well.


Tangential, but I really wish there would be a performance renaissance with Emacs.

Native-comp was a good step forward, but Emacs is still so much slower than Neovim, even in the case of launching and immediately quitting, with no config:

    $ time emacs -Q -e kill-emacs
    /Applications/Emacs.app/Contents/MacOS/Emacs -nw -Q -e kill-emacs  0.18s user 0.03s system 98% cpu 0.213 total
    
    $ time nvim -es --cmd 'vim.cmd("q")'
    nvim -es --cmd 'vim.cmd("q")'  0.02s user 0.01s system 82% cpu 0.034 total
Even with a very minimal set of packages, text insertion, etc. is slower, and opening Magit (when it hasn't been loaded yet) takes about a second due to slow package loading.

Emacs is my favorite editor, full stop.

But every time I open Neovim or Sublime for quick tasks, it's always painfully apparent how much faster they are when I CMD+Tab back to Emacs.


Emacs' hard to solve issue is its use of global mutable state all across the board, which makes concurrency and parallelism very hard to add properly. It will take a lot of effort to slowly carefully reduce the error/bug surface and add proper parallelism constructs, that are easy to use for any package author.


Emacs is my editor/IDE of choice and I consider myself a power user. However, I'm no expert in its internals or elisp. I understand that things were built with single-threaded execution in mind over decades. Still, I think things can be more async, where you offload heavy stuff to a separate thread and stream results. E.g. Magit status doesn't need to block my entire editor; it can run what it needs to do in a separate thread and send the results back to the main thread just for rendering when they're ready. Same with, say, consult-ripgrep / consult-find-file / find-file-in-project etc.: they don't need to block the render/event handling in the main thread until the entire result set is ready (in this case things can be streamed). Maybe there is a way to make this much better by message passing/streaming instead of sharing state itself?

I love Emacs, but it really just fails to be effective for me when I work on monorepos and even more so, when I'm on tramp.


Probably all true, what you say about Magit and so on. Message passing values would be an idea, but with the current situation, when one concurrent execution unit, say a process, finishes its job, how does its "private", potentially modified state get merged back into the main Emacs global state? Let's say the concurrently running process creates some buffers to show, but in the meantime the user has rearranged their windows or split their view; the concurrent process doesn't know about that, since it happened after its creation time. Or maybe the user has meanwhile changed an important Emacs setting.

I think the current solutions for running things in separate threads are only for external tools. I guess to do more, a kind of protocol would need to be invented, that tells a process exactly what parts of the copied global state it may change and when it finishes, only those parts will be merged back into the main process' global state.

Maybe I understood things wrong and things are different than I understood them to be. I am not an Emacs core developer. Just a user, who watched a few videos.

Tramp can be sped up a bit. I remember seeing some blog posts about it. I guess if you need to go via more than 1 hop, it can get slow though.

What is the problem with mono repos?


Yes, totally agree that its not always applicable. But I think there is still lot of scope to offload some operation (e.g. magit operations like status, commit, streaming search result into minibuffer in ivy-mode). Having a dedicated protocol would of course be best (VSCode Remote works flawlessly for me).

>> What is the problem with mono repos?

If you use things that depend on something like ivy/vertico/... (find-file-in-project, projectile-find-file, ripgrep), they get super slow (I think the reason is that they usually wait for the entire result to be ready). LSP/Eglot gets slower. Similarly, you will have to disable most VC-related stuff like highlighting diffs in the fringe. Git will be inherently slower, so Magit will hang your UI more often. Of course you can disable all these plugins and use vanilla Emacs, but if you remove enough of them you're likely going to be more productive with VSCode at that point.

Just to clarify, this is my experience with a monorepo + TRAMP. Also not sure how much of it is just the plugins' fault. It's somewhat better if you use Emacs locally where the monorepo is, but that often means using Emacs in the terminal, which usually means losing some of your keybindings.


While faster Emacs would always be nice, I think the idea is you just keep it running. Hence emacsclient program. So startup time is not such a big deal.


Personally, I don't buy into this argument. I think having a globally shared buffer state, etc. is an antifeature. Plus, there's no reason that starting a TUI program should be that slow.

Either way, this only addresses startup time too. The rest of the issues: text insertion lag, `project-find-file` being slow in large repos, etc. all remain.


> I think having a globally shared buffer state, etc. is an antifeature.

As someone who mostly lives in Emacs, I like it. If I'm away from a machine, I can SSH into it and carry on with whatever I was in the middle of.

It's also nice to set emacsclient as EDITOR, so that e.g. running `git commit` will open up a buffer in the existing Emacs session. This is especially useful since I use shell-mode, and it would be confusing/weird to have new Emacs instances popping up when I'm already in an editor! (They open in a "window" (i.e. pane) in the existing "frame" (i.e. window) instead)


Emacs has globally shared buffer state amongst the frames that share the same "base frame" (no idea what this is called) or the same socket (could be wrong here).

Anyway, you can start N emacs instances and they can all have individual buffer states.

Emacs is not primarily a TUI program (although it does have a TUI with the -nw). The TUI version of emacs lacks visual customizability and introduces unnecessary overhead (terminal!). Use the GUI.

Text insertion lag is something I haven't experienced since 2019. Config issue?

project-find-file might be slow because of low gc-cons-threshold. I know consult gets around this by temporarily raising the threshold. These days, you can use the feature/igc branch to make these operations faster (although they are pretty fast anyway).

If you think emacs lacks <fundamental feature X>, think again!


> Emacs is not primarily a TUI program (although it does have a TUI with the -nw). The TUI version of emacs lacks visual customizability and introduces unnecessary overhead (terminal!). Use the GUI.

Can you elaborate on this? I tend to use emacs exclusively in the terminal, since I'm often using them on remote workstations. For remote workstations, I can (a) open files using TRAMP, (b) open a remote GUI with X11 forwarding over SSH, or (c) open a remote TUI. TRAMP doesn't always play nicely with LSP servers, and remote TUIs are much, much more responsive than X11 forwarding.

Locally, the performance of emacs depends far more on the packages I load than on the GUI vs TUI, so I'm interested in hearing what overhead there would be.


Yes, emacs is equally performant in GUI and TUI. And frames can be opened in both GUI and TUI on the same socket.

For me, TUI is a dealbreaker because:

- No mixed-pitch support: I use mixed-pitch fonts in org-mode buffers and in outline faces in prog-mode buffers. And fonts are just plain nicer on the GUI, and it's much better to look at.

- No SVG support: (I might be wrong about this) I have a custom modeline with SVG artifacts and the artifacts fail silently on the TUI

- Keybind conflicts: I am not used to accounting for the terminal's keybinds. Also, I use xfce4-terminal, which does not support the Hyper modifier (which I use extensively).


The slowness on startup in my emacs mainly comes from my customizations - over the last almost 3 decades I've accumulated roughly 30k loc of custom lisp, plus a lot of 3rd party stuff.

But I typically start emacs at boot, and then it runs until I reboot. I usually have one GUI frame, and one tui frame running in tmux so I can easily attach to my emacs session from a different computer. I have an emacsclient wrapper that opens stuff from the command line in my running emacs (and also mail wrappers, so clicking on a mail link in a browser opens a mail compositor in emacs).

I'm using eyebrowse with a bunch of my own convenience features for workspaces in emacs - stuff like "when I switch to a buffer it'll switch to the workspace where that buffer is open unless I tell it I want it here". Combine that with some custom SSH entry points, and especially on the notebook where I only have one screen it's way more comfortable to use than the OS window management for a terminal/SSH-session mess like me.


> Plus, there's no reason that starting a TUI program should be that slow.

There's no reason why it shouldn't. You seem to think that the interface obliges a program into a certain performance pattern. No such obligation exists. And Emacs isn't a TUI program, it only happens to have a terminal interface among many others.


> You seem to think that the interface obliges a program into a certain performance pattern.

I think all software (or at least, any text editor) regardless of interface type should launch instantly. But it's more unjustifiable with TUI programs.


Nah. Here's a counter example: the TUIs that IBM wrote for many old store chains like Home Depot. They're at least an order of magnitude faster to operate for cashiers compared to web UIs but they're somewhat slow to start due to the caching and self-checks they do. This obsession with quick boot is more of a personal preference you have than a necessity.


An inane point. Obviously it's a "preference" rather than a "requirement" that my text editor boot in less than 30 seconds. But it's also not a functional requirement that Home Depot's POS terminals take a long time to start. If you could do the same checks and caching in a few hundred milliseconds it would only improve the usability for the cashier. You haven't made a case for why some user interfaces shouldn't start instantly, only that their slow start-up _might_ be justified


> If you could do the same checks and caching in a few hundred milliseconds it would only improve the usability for the cashier.

No it wouldn't. Those interfaces are permanent and only get restarted once a day or if the hardware has to be rebooted. Same for Emacs: there's absolutely no need to start the editor every single time.

> You haven't made a case for why some user interfaces shouldn't start instantly

I'm not making any case, we're not in court. Startup time is irrelevant and your fixation with it is really funny (up to a point).


> I think having a globally shared buffer state, etc. is an antifeature.

Maybe, but I'd like to hear why you think this is such an antifeature for a single-threaded application.

Given the extra resources available these days, for example, why not just bring up a stand-alone ERC instance for chatting, if shared state is a concern?


> having a globally shared buffer state, etc. is an anti-feature

Yeah, it feels a bit weird to not have some isolation.

Spacemacs offers layouts[^1] that give you some buffer-isolation. Each window has a "layout", and layouts have sets of buffers. It works well, but you can run into extra prompts if you open the same buffer from two layouts and try to kill it from one of them (kill the buffer (for all layouts)? just remove from this layout? In my mind the latter should just be the default).

[^1]: https://www.spacemacs.org/doc/DOCUMENTATION.html#layouts-and...


Emacs is functionally a shell not an editor. Starting Emacs for each file is akin to starting and stopping Wayland for every web page you open.

So the minuscule increase in start time is a non-issue


On my M1 Mac Pro I get 0.13s wall, so not much faster than your Mac. On my i9-9900K Linux box I get 0.04s. I would think my M1 single core performance would be on par, if not faster. Perhaps it has something to do with macOS and gatekeeper, as I notice I'm not getting as high of a CPU utilization.

    $ gtime /opt/homebrew/bin/emacs --batch --eval '(princ (format "%s\n" emacs-version))'
    30.2
    0.07user 0.03system 0:00.13elapsed 78%CPU (0avgtext+0avgdata 46064maxresident)k

    $ /usr/bin/time ~/bin/emacs --batch -eval '(princ (format "%s\n" emacs-version))'
    30.2
    0.02user 0.01system 0:00.04elapsed 95%CPU (0avgtext+0avgdata 57728maxresident)k


GUI Emacs on a 12 year old processor (i5-4590) feels faster than on a M4 Pro Macbook. I think it's just something to do with the window manager on each of the systems (my experience is mostly with Wayland KDE) rather than the speed of the CPU.


I also run GUI Emacs on both Linux and macOS. I build it on Linux with --with-x-toolkit=lucid and for $REASONS I'm still on X11. I run it in a full-screen frame on its own monitor, and it does indeed feel faster.


Emacs can certainly be sluggish, but I'm not sure how much that's e.g. inherent to ELisp, or due to synchronous/single-threaded code, or choosing slow algorithms for certain tasks, etc.

For me, the best performance improvement has been handling long lines; e.g. Emacs used to become unusable if it was given a line of around 1MB. Since I run lots of shell-mode buffers, that would happen frustratingly-often. My workaround was to make my default shell a script that pipes `bash` through a pared-down, zero-allocation copy of GNU `fold`, to force a newline after hitting a certain length (crossing a lower threshold would break at the next whitespace; hitting an upper threshold would force a break immediately). That piping caused Bash to think it wasn't interactive, which required another work-around using Expect.

Thankfully the last few versions of Emacs have fixed long-line handling enough for me to get rid of my awful Rube-Goldberg shell!


> Emacs is still so much slower than Neovim, even in the case of launching and immediately quitting

I agree, but there's ways around it. On my machine the Emacs daemon is ready before I even log-in (lingering [^0]).

I think I only restart the daemon when I update emacs and its packages, and yeah, Emacs and Spacemacs are slow, but do not slow me down.

[^0]: https://wiki.archlinux.org/title/Systemd/User#Automatic_star...


Startup time does not matter, use the daemon. Opening a new frame is ~instantaneous.

I practically live in Emacs and it's not slow at all. It's very zippy, and my setup isn't the lightest!

There's a new branch (feature/igc) with incremental garbage collection (via MPS) that makes routine actions faster. I've been using it and it has been incredibly stable and has completely eliminated stutters (which used to happen very infrequently, but were present). Also, to me, it seems like it improves latency. The cursor feels more responsive.


> I practically live in Emacs and it's not slow at all. It's very zippy, and my setup isn't the lightest!

yeah, that's been my experience as well, particularly since upgrading to releases 29 and 30 where native compilation was enabled by default.

honestly the only place where it's slow it's when i'm editing terraform files, but that's because it needs to boot the terraform language server, and only on the first file of the project.


You never close emacs. You never open a new emacs.

You tell emacsclient which file to load and which line to jump to. For me that's `e file.ext:200`, expanding to an emacsclient call.


What hardware are you on?

On my old Ryzen 3600X running Arch it's a lot faster. Does the UI eat so much performance on OSX?

  $ time emacs -Q -e kill-emacs
  real    0m0.076s
  user    0m0.058s
  sys     0m0.018s

  $ time nvim -es --cmd 'vim.cmd("q")'
  real    0m0.028s
  user    0m0.005s
  sys     0m0.003s
vim still is a lot faster though.


> On my old Ryzen 3600X running Arch

> vim still is a lot faster though.

you might want to make sure you're comparing apples to apples though. the "emacs" command most likely is going to load the GUI emacs so a lot of gui libraries (if you're running a recent emacs then even GTK libraries) whereas the nvim command isn't going to load gui libraries at all.

maybe try with a non-gui version of emacs (or maybe calling emacs -nw)


no, this is the TUI version. X11 emacs with all the composited effects needs about 200-250ms to open (about the duration of the animation for opening and closing it). That's more like OP's timings.


No, you need to use -nw with emacs to make it apples to apples. Then it's emacs 0m0.095s vs nvim 0m0.057s:

    $ time nvim -es --cmd 'vim.cmd("q")'

    real 0m0.057s
    user 0m0.016s
    sys 0m0.017s

    $ time emacs -Q -e kill-emacs

    real 0m0.230s
    user 0m0.165s
    sys 0m0.064s

    $ time emacs -nw -Q -e kill-emacs

    real 0m0.095s
    user 0m0.057s
    sys 0m0.017s


Shouldn't matter when I am not on a GUI seat. In my SSH session with X11 forwarding there is no DISPLAY emacs could use.

Tried it anyways, looks the same:

  $ time emacs -nw -Q -e kill-emacs
  real    0m0.075s
  user    0m0.062s
  sys     0m0.013s


s/with/without/


  $ /usr/bin/time emacs -Q -e kill-emacs
          0.03 real         0.02 user         0.00 sys

Although I'm not using Emacs.app.


Not using Emacs.app because you aren't on macOS, or using some other build/setup? If the latter, I'm curious.


my non-command-line GUI version of Doom Emacs, with a bunch of packages enabled, loads up fully for me in 0.45s, which is hardly slow. sure, it's slower than neovim, but it's not slow in the absolute sense, and i don't even have the emacs daemon running, which would make it faster still.


I share your wish. Emacs, as wonderful as it is, has accumulated a lot of cruft over the decades and would benefit immensely from a rewrite. A "Neo-Emacs" could be multithreaded from the ground up and drop support for archaic platforms. The rewrite could even be in Rust to attract younger developers.


There would be no point to writing emacs in a language that can’t be developed interactively in a repl. Emacs being written in lisp is an essential quality.


> Emacs being written in lisp is an essential quality

Not for the parts of it I use.


>a lot of cruft

Like what? Emacs is written in C and there are ports of it out there (all half-abandoned). Emacs, the way it exists, works very well.


The vast majority of emacs is written in lisp, not C.


I'm not sure I'm capable of noticing or caring about the difference between 0.18 and 0.02 seconds for something that doesn't happen on a rapid cadence.


I think it's purely a pricing & supply chain thing. Certain iPads have M-series chips in them, now certain MacBooks have A-series chips in them.

Also, the chip used has no impact on the viability of merging macOS and iOS anyway.


The Developer Transition Kit (DTK) ran on the A12Z chip. I don't think this should be interpreted as a signal of iOS/macOS unification.


For a native macOS app, there is also Monodraw [1], which is great.

[1] https://monodraw.helftone.com


Monodraw is in maintenance mode and non-free. Based on the name, pretty sure that Monosketch is an explicit replacement.


Monodraw got an update the other week. It isn't being changed much, but it doesn't need to be.

Great little app. And it's $10, once. Hardly breaking the bank.


But it's not open, and can't be edited by those who want to. We should always support FOSS.


Absolutely we should. But this one isn't FOSS.



Why have you sent me the licence page for Monosketch? I'm commenting on a comment about Monodraw...


Which of these two exchanges makes more sense?

1.

> But [monodraw]'s not open, and can't be edited by those who want to. We should always support FOSS.

> Absolutely we should. But [monodraw] isn't FOSS.

2.

> But [monodraw]'s not open, and can't be edited by those who want to. We should always support FOSS.

> Absolutely we should. But [monosketch] isn't FOSS.

The first interpretation makes no sense to me, because you've agreed completely with the parent comment but worded the comment in a way that sounds like you're disagreeing.


I would assume they sent that because they were suggesting to support FOSS over closed-source software.


> Based on the name

I think in this case the name alone is not enough to suspect a replacement; perhaps it’s just a similar product in the same domain (_mono_space visual editors).


Maybe it's just more or less feature-complete? Was curious, as someone who hadn't heard of it before, so I checked the blog. Last post is from April last year and concerns public testing of a new release. That's not particularly old, if you ask me?


There is a setting as of iOS 26 under "Privacy & Security > Wired Accessories" in which you can make data connections always prompt for access. Not that there haven't been bypasses for this before, but perhaps still of interest to you.


For me: pro & creative apps. GIMP/Inkscape will never replace Photoshop/Illustrator/Affinity. Ableton, Logic, Pro Tools, etc. are not available on Linux, and with the exception of REAPER, the alternatives are awful. And even with a Linux-compatible DAW, very few plugins are available on Linux.

On macOS, I can work on hobby software & graphics/music.


As far as I know, current Photoshop works fine under Linux with Wine [0].

[0]: https://www.phoronix.com/news/Adobe-Photoshop-2025-Wine-Patc...


This is a bit like claiming that a flathead screwdriver can sort of work with a Phillips-head screw… until it strips the head, and then you can't see it and don't know how to fix it.


How's Bitwig these days? I've not checked it for years.

https://www.bitwig.com/


Not bad, but different DAWs cater to different workflows. To me (and most), Bitwig feels much more optimized for creating electronic music than for recording guitar or drums. It wouldn't be my first choice for the latter workflow, where I'd prefer REAPER or Logic. You also still have the issue of plugin compatibility: 99% of commercial plugin vendors don't support Linux.


> 99% of commercial plugin vendors don't support Linux.

It's softened a bit by the fact that many of them can be replaced/recreated with stock Bitwig devices (if you're into that). There's also yabridge, though for me personally it has been a bit hit-and-miss.


The days of browsers having wildly different rendering behavior are mostly gone, in my experience. I'm sure ~98% of Electron apps that expect Chromium would render just fine under WebKit as well.

