
I would strongly recommend adding the ability to play back through a MIDI device. For me, I can get quite attached to the particular piano sounds on different devices I have, and being able to hear my playing exactly as I heard it when I played it is great. This is also, in some cases, the only way to make use of a keyboard/piano's built-in speakers.

This would also possibly widen the market for the product a little. Playing non-piano instrument sounds, even very technical synthy stuff, on a keyboard should “just work” assuming that all outgoing MIDI data like Program Changes and Control Changes are recorded and replayed. The further you get away from pure piano, the less likely the result will sound right on anything other than the original keyboard/piano.

Incidentally, I'm curious whether SysEx messages are recorded. Sometimes those are also used; for example, some of the effect controls on certain Yamaha products use SysEx for historical reasons.

If you do add playback over MIDI you may want to familiarise yourself with some of the “reset” mechanisms to avoid hanging notes and other such problems, and make the app send them at appropriate moments. All Notes Off, All Sounds Off and Reset All Controllers are particularly useful “special” Control Changes.
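
For reference, those "reset" messages are channel mode Control Changes: CC 120 (All Sound Off), CC 121 (Reset All Controllers) and CC 123 (All Notes Off). Purely as an illustration (your app will use whatever MIDI stack it already has; this sketch happens to use the Web MIDI API in TypeScript), a "panic" across all channels looks roughly like this:

  // Rough sketch, not from the app being discussed: send the standard
  // "panic" Control Changes on all 16 channels of every available MIDI output.
  async function sendMidiPanic(): Promise<void> {
    const access = await navigator.requestMIDIAccess();
    for (const output of access.outputs.values()) {
      for (let channel = 0; channel < 16; channel++) {
        const status = 0xb0 | channel; // Control Change status byte for this channel
        output.send([status, 120, 0]); // CC 120: All Sound Off
        output.send([status, 121, 0]); // CC 121: Reset All Controllers
        output.send([status, 123, 0]); // CC 123: All Notes Off
      }
    }
  }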


yes sysex is recorded too!

thanks for your thoughts, I'll add midi out playback :)

and yes, head deep in midi knowledge by now but that is a good point about hanging notes.


Is “source separation” better known as “stem separation” or is that something else? I think the latter term is the one I usually hear from musicians who are interested in taking a single audio file and recovering (something approximating) the original tracks prior to mixing (i.e. the “stems”).


Audio Source Separation is, I think, the general term used in research. It's often applied to musical audio though, where you want to do stem separation: source separation where you want to isolate audio stems, a term referring to audio from related groups of signals, e.g. drums (which can contain multiple individual signals, like one for each drum/cymbal).


Stem separation refers to doing it with audio playback fidelity (or an attempt at that). So it should pull the bass part out at high enough fidelity to be reused as a bass part.

This is a partly solved problem right now. Some tracks and signal types can be unmixed more easily than others; it depends on what the sources are and how much post-processing (reverb, side-chaining, heavy brick-wall limiting and so on) has been applied.


> This is a partly solved problem right now.

I'd agree with the "partly". I have yet to find one that either isolates an instrument as a separate file or removes one from the rest of the mix without negatively impacting the sound. The common issues I hear are similar to the artifacts of early-internet low-bit-rate compression. The new "AI" versions are really bad at this, but even the ones available before the AI craze were still susceptible to it.


I'm far (far) from an expert in this field, but when you think about how audio is quantized into digital form, I'm really not sure how one solves this with the current approaches.

That is: frequencies from one instrument will virtually always overlap with another one (including vocals), especially considering harmonics.

Any kind of separation will require some pretty sophisticated "reconstruction" it seems to me, because the operation is inherently destructive. And then the problem becomes one of how faithful the "reproduction" is.

This feels pretty similar to the inpainting/outpainting stuff being done in generative image editing (a la Photoshop) nowadays, but I don't think anywhere near the investment is being made in this field.

Very interested to hear anyone with expertise weigh in!


I won't say expertise, but what I've done recently:

1) used PixBim AI to extract "stems" (drums, bass, piano, all guitars, vocals). Obviously a lossless source like FLAC works better than MP3 here

2) imported the stems to ProTools.

3) from there, I will usually re-record the bass, guitars, pianos and vocals myself. Occasionally the drums as well.

This is a pretty good way I found to record covers of tracks at home, re-using the original drums if I want to, keeping the tempo of the original track intact etc. I can embellish/replace/modify/simplify parts that I re-record obviously.

It's a bit like drawing with tracing paper: you're creating a copy to the best of your ability, but you have a guide underneath to help you with placement.


It's not really digital quantisation that's the problem, but everything else that happens during mixing - which is a much more complicated process, especially for pop/rock/electronic etc., than just "sum all the signals together".

There's a bunch of other stuff that happens during and after summing which makes it much harder to reliably 100% reverse that process.


I didn't mean to say that quantization was the problem, just that you're basically trying to pick apart a "pixel" (to continue my image-based analogy) that is a composite of multiple sounds (or partially-transparent image layers).

I was sincere when I said:

> I'm really not sure how one solves this with the current approaches.

I was hoping someone would come along and say it is, in fact, possible. :)


Source separation is a general term, stem separation is a specific instance of source separation.


If you use a web browser or play multiplayer video games then there will be code running on your system that interacts with GPU drivers that you haven't explicitly chosen to download and which could potentially exploit certain vulnerabilities.


This highlights why we shouldn't let browsers (google) keep expanding their reach outside of the historical sandbox. It's almost like the in-browser Java and Flash problems being repeated all over again. They're creating security problems more than they're helping legitimate developers. WebGL was fine. WebSockets were fine. WebGPU and the recently proposed arbitrary socket API are jumping the shark. Raw GPU access and TCP/UDP access are simply bad ideas from inexperienced people and need to be shut down. If you truly need that stuff, I think the solution is to step up your game and make native applications.


I'm not sure why WebGPU is a step too far but WebGL isn't? Every other API for using a GPU went the same direction; why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan? The security properties of both are very similar, Vulkan/Metal/DX12 just lets you skip varying levels of compatibility nonsense inherent in old graphics APIs.
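
For what it's worth, here's a minimal sketch of what WebGPU access looks like from a page (TypeScript, assuming the WebGPU type definitions are available; error handling mostly omitted). The page only ever deals with browser-mediated objects like adapters and devices, never a raw driver handle:

  // Minimal WebGPU setup sketch: adapter -> device -> canvas context.
  // Every call below goes through browser-owned objects, not the driver directly.
  async function initWebGpu(canvas: HTMLCanvasElement): Promise<GPUDevice> {
    const adapter = await navigator.gpu.requestAdapter();
    if (!adapter) throw new Error("WebGPU not available");
    const device = await adapter.requestDevice();
    const context = canvas.getContext("webgpu") as GPUCanvasContext;
    context.configure({ device, format: navigator.gpu.getPreferredCanvasFormat() });
    return device;
  }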


> why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan?

Because web browsers are supposed to be locked down and able to run untrusted code, not an operating system that reinvents all the same failings of actual operating systems. They should be functionally impaired in favor of safety as much as possible. For the same reason you don't get access to high-precision timing in the browser (a lesson that took a while to learn!), you shouldn't have arbitrary capabilities piled onto it.


Those are all historical remnants. Modern web browsers serve a radically different purpose than they did in the 90s. It doesn't make sense to even keep calling them "web browsers" since most people don't know what a "web" is, let alone "browse" it.

Modern browsers are application runtimes with a very flexible delivery mechanism. It's really up to web developers to decide what features this system should have to enable rich experiences for their users. Declaring that they should be functionally impaired or what they "should be" without taking into account the user experience we want to deliver is the wrong way of approaching this.

To be clear: I do think we should take security very seriously, especially in the one program people use the most. I also think reinventing operating systems to run within other operating systems is silly. But the web browser has become the primary application runtime and is how most people experience computing, so enabling it to deliver rich user experiences is inevitable. Doing this without compromising security or privacy is a very difficult problem, which should be addressed. It's not like the web is not a security and privacy nightmare without this already. So the solution is not to restrict functionality in order to safeguard security, but to find a way to implement these features securely and safely.


> Modern browsers are application runtimes with a very flexible delivery mechanism.

Clearly this is true. But as someone with an old-school preference for native applications over webapps (mostly for performance/ux/privacy reasons) it irritates me that I need to use an everything app just to browse HN or Wikipedia. I don't want to go all hairshirt and start using Lynx, I just want something with decent ux and a smaller vulnerability surface.


> it irritates me that I need to use an everything app just to browse HN or Wikipedia

But why?

That feels like saying it irritates someone they need to run Windows in order to run Notepad, when they don't need the capabilities of Photoshop at the moment.

An everything app is for everything. Including the simple things.

The last thing I'd want is to have to use one browser for simpler sites and another for more complex sites and webapps and constantly have to remember which one was for which.


Some of us don't use the web for anything other than websites. I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...? And chat was on the road to being an open protocol until Google/Facebook saw the potential for lockin and both dropped XMPP.

I already have an operating system. It's like how I don't need Notepad to be able to execute arbitrary programs with 3D capabilities and listening sockets, because it's a text editor.

You also wouldn't need to remember what your generic sandbox app runtime is. Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that mime type.


> I'm honestly not even sure what people are talking about with some proliferation of "apps". There's discord/slack, and...?

Are you not familiar with Gmail or Google Maps or YouTube?

> I already have an operating system.

But Gmail and Google Maps and YouTube don't run on the OS. And this is a feature -- I can log into my Gmail on any browser without having to install anything. Life is so much easier when you don't have to install software, but just open a link.

> Use your browser, and if you click on a link to an app, you'll be prompted to open the link using your default handler for that mime type.

But I like having news links in Gmail open in a new tab in the same window. The last thing I want is to be juggling windows between different applications when tabs in the same app are such a superior UX.

Imagine how annoying it would be if my "app" browser had tabs for Gmail and Maps and YouTube and my "docs" browser had tabs for the NYT and WaPo and CNN, and I couldn't mix them?

Or if the NYT only worked in my "docs" browser, but opening a link to its crossword puzzle opened in my "apps" browser instead?

That's a terrible user experience for zero benefit at all.

(And I still would have to remember which is which, even if there's a MIME type, for when I want to go back to a tab I already opened!)


Calling gmail or youtube apps is already kind of a stretch. Gmail splits everything into separate web pages with the associated loading times and need to navigate back and forth. Exacerbating this is that it paginates things, which is something you only ever see in web pages. It lacks basic features you'd expect out of an application like ability to resize UI panes. Youtube has a custom, worse version of a <video> tag to prevent you from saving the videos (even CC licensed ones, which is probably a license violation), but is otherwise a bunch of minimally interactive web pages.

Maps is legitimately an interactive application, though I'd be surprised if most people don't use a dedicated app for it.

The point is you wouldn't have an "apps browser" with tabs. If something is nontrivial, launch it as an actual application, and let the browser be about browsing websites with minimal scripting like the crossword puzzle. Honestly there probably should be friction with launching apps because it's a horrible idea to randomly run code from every page you browse to, and expanding the scope of what that code is allowed to do is just piling on more bad ideas.


> it irritates me that I need to use an everything app just to browse HN or Wikipedia.

...this is possibly missing the point, but it occurs to me that you don't have to. Hacker News and Wikipedia are two websites I'd expect to work perfectly well in e.g. Links.

It's a bigger problem if you want to read the New York Times. I don't know whether the raw html is compatible, but if nothing else you have to log in to get past their paywall.


> Modern web browsers serve a radically different purpose than they did in the 90s

And it is a bad thing that it was pushed this far! That is exactly the argument here!


I don't necessarily disagree. But there's no going back now. There's a demand for rich user experiences that are not as easy to implement or deliver via legacy operating systems. So there's no point in arguing to keep functionality out of web browsers, since there is no practical alternative for it.


If rich ux can be delivered in a web browser then it can be delivered in a native app. I'd assert that the reason this is uncommon now (with the exception of games) is economic not technological.


It is partly economic, but I would say that it's more of a matter of convenience. Developing a web application is more approachable than a native app, and the pool of web developers is larger. Users also don't want the burden of installing and upgrading apps, they just want them available. Even traditional app stores that mobile devices popularized are antiquated now. Requesting a specific app by its unique identifier, which is what web URLs are, is much more user friendly than navigating an app store, let alone downloading an app on a traditional operating system and dealing with a hundred different "package managers", and all the associated issues that come with that.

Some app stores and package managers automate a lot of this complexity to simplify the UX, and all of them use the web in the background anyway, but the experience is far from just loading a web URL in a browser.

And native apps on most platforms are also a security nightmare, which is why there is a lot of momentum to replicate the mobile and web sandboxing model on traditional OSs, which is something that web browsers have had for a long time.

The answer is somewhere in the middle. We need better and more secure operating systems that replicate some of the web model, and we need more capable and featureful "web browsers" that deliver the same experience as native apps. There have been numerous attempts at both approaches over the past decade+ with varying degrees of success, but there is still a lot of work to be done.


Every package manager I know of lets you install a package directly without any kind of Internet connection (I haven't tried much, but I've run into CORS errors with file URIs that suggest browser authors don't want those to work). They also--critically--allow you to not update your software.

The web today is mostly a media consumption platform. Applications for people who want to use their computer as a tool rather than a toy don't fit the model of "connect to some URL and hope your tools are still there".


> And native apps on most platforms are also a security nightmare

You make it sound like a web browser is not a native app.


The difference is in the learning curve. On Windows, making a native app usually requires you to install a bunch of things - a compiler, a specific code editor, etc - in order to even be able to start learning.

Meanwhile, while that's also true for web apps, you can get started with learning HTML and basic JavaScript in Notepad, with no extra software needed. (Of course, you might then progress to actually using compilers like TypeScript, frameworks like React, and so on, but you don't need them to start learning.)

There's always been a much higher perceived barrier to be able to make native apps in Windows, whereas it's easier to get started with web development.


Not to mention it was (is) a constantly moving target. WinUI, WPF, Silverlight, UWP, RT, Forms, MFC, maybe more!


Browsers should be aggressively pro-user, and developers can innovate within the limitations they're given.


That settles it then. Let's remove all the innovations of the past 30 years that have allowed the web to deliver rich user experiences, and leave developers to innovate with static HTML alone. Who needs JavaScript and CSS anyway?

Seriously, don't you see the incongruity of your statement?


Exactly.

Putting everything, and I mean everything, into the browser, and arguing for it, is stupid. It stops being a browser at that point and becomes a native system, with all the problems of native systems accessing the open wild all over again. And then? Will there be a sandbox inside the browser/new-OS for the sake of security? A sandbox inside something that isn't really a sandbox anymore?


Modern operating systems are bad and they are not going to be fixed. So the browser is another attempt at creating a better operating system.

Why modern operating systems are bad:

1. Desktop OSes allow installation of unrestricted applications, and most applications actually are unrestricted. While there are attempts at creating containerised applications, those attempts are weak and not popular. When I install World of Warcraft, its installer silently adds a trusted root certificate to my computer.

2. Mobile OSes are walled gardens. You can't just run anything: you need to jump through many hoops at best, or live in certain countries at worst.

3. There's no common ground between operating systems. Every operating system is different and has completely different APIs. While there are frameworks which try to abstract those things away, those frameworks add their own pile of issues.

The browser just fixes everything. It provides a secure sandbox which is trusted by billions of users. It does not restrict the user in any way: there's no "Website Store" or anything like that, you can open anything, and you can bring your app online within a few minutes. It provides a uniform API which is enough to create many kinds of applications, and it'll run everywhere: iPhone, Pixel, Macbook, Surface, Thinkpad.


Unrestricted app installation is not bad. It's a trade-off. It's freedom to use your own hardware how you want versus 'safety' and restriction imposed by some central authority which aims to profit. Fuck app stores, generally speaking. I prefer to determine what source to trust myself and not be charged (directly or indirectly) to put software on my own system.


An overwhelming majority of apps do not need full device access. All they need is to draw to a window and talk to the network.

Yes, there are apps which might need full filesystem access, for example to measure directory sizes or to search for things on the filesystem. There are apps that check neighbouring WiFi networks for security, which need very full access to the WiFi adapter, and that's fine. But those apps could use another way of installation, like entering a password 3 times and dancing for 1 minute, to ensure that the user understands the full implications of granting such access.

My point is that on a typical desktop operating system today, a typical application has too much access, and many applications actually use that access for bad things, like spying on the user, installing their own startup launchers, updaters and whatnot. The web does this better. You can't make your web app open when the browser starts unless you ask the user to perform a complicated sequence of actions. You can't make your web app access my SSH key unless you ask me to drag it into a webpage.


That's exactly what ChromeOS is/was. Users hated it.


This guy gets it 100%.


I agree. I'm not knowledgable enough to say for sure, but my intuition is that the total complexity of WebGPU (browser implementation + driver) is usually less than the total complexity of WebGL.


WebGL is like letting your browser use your GPU with a condom on, and WebGPU is doing the same without one. The indirection is useful for safety assuming people maintain the standard and think about it. Opening up capability in the browser needs to be a careful process. It has not been recently.


It's my understanding that the browsers use a translation layer (such as ANGLE) between both WebGL and WebGPU and a preferred lower level native API (Vulkan or Metal). In this regard I don't believe WebGL has any more or less protection than WebGPU. It's not right to confuse abstraction with a layer of security.


The translation layer is the safety layer. In principle, it's like running Java bytecode instead of machine code.


My analogy was bad and, as you (and your sibling post) say, I was probably wrong to expect WebGPU to have lurking dangers that WebGL doesn't. I was mainly trying to express concern with new APIs and capabilities being regularly added, and the danger inherent in growing these surfaces.


It's clear that you know nothing about how WebGL or WebGPU are implemented. WebGPU is not more "raw" than WebGL. You should stop speaking confidently on these topics and misleading people who don't realize that you are not an expert.


I'd dispute that I know nothing. I'm not an expert but have worked with both, mostly WebGL. Anyways, sorry, it was a bad analogy and you're right, I don't know enough, particularly to say that WebGPU has any unique flaws or exposes any problems not in WebGL. I'm merely suspicious that it could, and maybe that is just from ignorance in this case.


That's incorrect, WebGPU has the exact same security guarantees as WebGL, if anything the specification is even stricter to completely eliminate UB (which native 3D APIs are surprisingly full of). But no data or shader code makes it to the GPU driver without thorough validation both in WebGL and WebGPU (WebGPU *may* suffer from implementation bugs just the same as WebGL of course).

> Opening up capability in the browser needs to be a careful process. It has not been recently.

That's what about 95% of the WebGPU design process is about and why it takes so long (the design process started in 2017). Creating a cross-platform 3D API is trivial, doing this with web security requirements is not.


Both WebGL and WebGPU should be locked behind a permission, because they allow fingerprinting the user's hardware (they even provide the name of the user's graphics card), and because they expose your GPU drivers to the whole world.
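
As a rough illustration of the fingerprinting surface (assuming a browser that exposes the WEBGL_debug_renderer_info extension; some browsers sanitize or restrict these strings), a page can read something like this without any prompt:

  // Sketch: read GPU vendor/renderer strings through WebGL.
  // Availability and the exact strings returned vary by browser.
  function gpuFingerprintHint(): string | null {
    const gl = document.createElement("canvas").getContext("webgl");
    if (!gl) return null;
    const ext = gl.getExtension("WEBGL_debug_renderer_info");
    if (!ext) return null;
    const vendor = gl.getParameter(ext.UNMASKED_VENDOR_WEBGL);
    const renderer = gl.getParameter(ext.UNMASKED_RENDERER_WEBGL);
    return vendor + " / " + renderer;
  }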


> why should HTML be stuck with a JS projection of OpenGL ES while native developers get Vulkan?

Same reason kids should be stuck with Nerf guns while grownups have firearms.


Agree wholeheartedly (and I used to work on Safari/WebKit).

Cross-platform app frameworks have never been a panacea, but I think there may be a middle ground to be found between the web and truly native apps. Something with a shallower learning curve, batteries-included updating and distribution, etc. that isn’t the web or Electron.

That said, I worry that it’s too late. Even if such a framework were to magically appear, the momentum of the complex beast that is the web platform will probably not slow.


> I think the solution is to step up your game and make native applications

Say goodbye to anyone supporting Linux at all in that case. These rare security issues are a small price to pay for having software that works everywhere.


Rephrasing that: malware that works everywhere is a small price for software that works everywhere.

It isn't.

And there is no basis for your assertion these security issues are rare.


> malware that works everywhere is a small price for software that works everywhere

Yes.

Although the malware we're talking about doesn't actually work everywhere but only on one brand of GPU. But I would take it working everywhere over my computer not being useful.


Isn't WebGPU supposed to be containerized, so that it only accesses its own processes, i.e. the computations it is running for rendering? I honestly don't know much, but I had heard it was sandboxed.


It's not uncommon that I go to Shadertoy and see strange visual artifacts in some shaders including window contents of other applications running on my system, potentially including sensitive information.

It's difficult to make GPU access secure because GPU vendors never really prioritized security, so there's countless ways to do something that's wonky and accidentally leaks memory from something the app isn't supposed to have access to. You can containerize CPU and have strict guarantees that there's no way host memory will map into the container, but AFAIK this isn't a thing on GPUs except in some enterprise cards.


> ... including window contents of other applications running on my system, potentially including sensitive information.

If this is actually the case (which I doubt very much - no offense) then please definitely write a ticket to your browser vendor, because that would be a massive security problem and would be more news-worthy than this NVIDIA CVE (leaking image data into WebGL textures was actually a bug I remember right around the time when WebGL was in development, but that was fixed quickly).


Yeah, that sounds like a basic garbage collection issue, and isn't that the very basics of sandboxing? Isn't the rule to never hand memory to a sandbox unless it has already been overwritten with 0s or random data? This sounds analogous to the old C lack of bounds checking, where you could steal passwords and stuff just by accessing out-of-bounds memory. Is this not low-hanging fruit?


TCP/UDP access is behind an explicit prompt, and it's basically the same as executing a downloaded application, so I don't think it's anything bad. Basically, you either install software to your local system, which has no restrictions at all, or you use a web application, which is still pretty restricted and contained.


> TCP/UDP access is behind explicit prompt

... that is satisfied by a single click from malware or social engineering. Insane.


Would you be happy with two clicks? Three clicks? What's the principal difference? As I said, you can download and run an arbitrary application with one click today, and maybe a second click to confirm to the operating system (not sure if that's always necessary).

The insane thing is that an arbitrary application instantly has full access to your computer, while a web application is still heavily constrained and has to ask for almost every permission.


I would accept zero clicks on a browser that I've installed without this dangerous feature and /with a promise no autoupdate will sneak it in/.

The reason your web page has to be imprisoned in permissions is that it is a web page from just about anyone using access that the browser has given it without telling the user.


> step up your game and make native applications.

Each to their own but I consider native applications a step down from web apps.

"Screw it, I'm giving up my web app and will now pay Apple/Google the protection money and margin they demand to shelter within their ad-ridden ecosystem lock-in." ... yeh that's definitely a step down.


You're talking like android and ios are the only platforms. The downsides of those platforms don't justify a web browser (which should be safe to use) granting excessive capability to untrusted code.


If you’re targeting a mass market audience they often are the only platforms. For many people their phone is their only computing device.


Neither Apple nor Google is in scope for this issue. This is about NVIDIA GPUs.


Are you saying that WebGPU should only be supported on Android and iOS, because Android and iOS have more secure GPUs? Desktop browsers shouldn't support WebGPU (but should continue supporting WebGL)?


Close your eyes before you see WebUSB. Reckless and irresponsible in the extreme.


WebMIDI is cool, though! I updated my Novation Launchpad firmware with that.
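
The notable part is the sysex flag: requesting it triggers an explicit permission prompt before a page can send SysEx at all. A minimal sketch (the message here is the standard MIDI Identity Request, not Novation's device-specific firmware-update SysEx):

  // Sketch: request SysEx-capable MIDI access (the user is prompted), then
  // send the universal Identity Request (F0 7E 7F 06 01 F7) to every output.
  async function sendIdentityRequest(): Promise<void> {
    const access = await navigator.requestMIDIAccess({ sysex: true });
    for (const output of access.outputs.values()) {
      output.send([0xf0, 0x7e, 0x7f, 0x06, 0x01, 0xf7]);
    }
  }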


They just keep promising more access to my machine but, don't worry, it's all totally secure! We promise! Yeah, sure, when has that ever worked out?


This to me is the big risk here. A worm hidden in a game mod or something.

I can see it staying in the wild for a long time too. How many of the people that are playing on these cards, or crypto mining, or doing LLM work, are really going to even find out about these vulnerabilities and update the drivers?


>This to me is the big risk here. A worm hidden in a game mod or something.

Game mods are already barely sandboxed to begin with. Unless proven otherwise (ie. by manually inspecting the mod package), you should treat game mods the same as random exes you got off the internet, not harmless apps you install on a whim.


The attack surface from a browser is tiny. All you can do is call into ANGLE or Dawn through documented, well-defined and well-tested APIs. Or use things like canvas and CSS animations, I suppose. Browser vendors care a lot about security, so they try to make sure these libraries are solid and you can't interact with the GPU in any other way.

Native applications talk directly to the GPU's kernel-mode driver. The intended flow is that you call into the vendor's user-mode drivers - which are dynamic libraries in your application's address space - and they generate commands and execute ioctls on your application's behalf. The interface between these libraries and the KMD is usually undocumented, ill-defined and poorly tested. GPU vendors don't tend to care about security, so if the KMD doesn't properly validate some inputs, well, that issue can persist a long time. And if there's any bit of control stream that lets you tell the GPU to copy stuff between memory you own and memory you don't... I guess you get a very long security bulletin.

The point is, webpages have access to a much smaller attack surface than native applications. It's unlikely anything in this bulletin is exploitable through a browser.


This is why Qubes OS, which runs everything in isolated VMs, doesn't allow them to use the GPU. My daily driver, can't recommend it enough if you care about security.


What is your threat model that you chose to daily something so restrictive?


Numerous vulnerabilities are found in all browsers regularly, as well as in the root isolation in Linux. Similar with other OSes. The discussed article is one example.

In addition, Qubes is not so restrictive, if you don't play games or run LLMs.

See also: https://forum.qubes-os.org/t/how-to-pitch-qubes-os/4499/15


I asked about your threat model, I'm aware that there are numerous vulnerabilities found in all browsers regularly. I just personally don't have a reason to care about that. It's like driving on the highway, every time you do it you create a period of vastly increased mortality in your life but that's often still very worthwhile, imo using Qubes is like going on back roads only because your odds of dying at highway speeds are so much higher.


If you consider specific listed threats as not a real threat model, then what else would you like to know? The threats are real and I value my data and privacy a lot. Also, I want to support a great OS by using it and spreading the word. Personally, using Qubes for me is not as hard and limiting as people think. It's the opposite: It improves my data workflow by separating different things I do on my computer.


Data being stolen (or getting ransomwared or whatever) from my personal machine is something I expect to happen maybe once or twice a lifetime as a baseline if I have like a bare veneer of security (a decent firewall on the edge, not clicking phishing links). I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic. In general I don't find this to be worth caring about basically at all. The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.

That is roughly equivalent to dealing with a security-related roadblock to my workflow for 1 minute every day (or 10 security-related popups that I have to click that cost me 6 seconds each, or one 30-minute inconvenience a month). I think that even having the UAC popups enabled on Windows is too steep a price to pay.

I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example) because your threat model has to consider you being specifically targeted for exploitation. As an individual worried about internet background hacking radiation it doesn't make sense for me to waste my time.


Thank you for the interesting arguments.

> I silo financial information (and banks also have security) so such a breach is extremely unlikely to be catastrophic

So you are doing manually what Qubes OS does automatically: security through compartmentalization.

> The expectation is that it will cost me a couple weeks of my life as like an absolute worst case.

This sounds quite reasonable but ignores privacy issues and issues with computer ownership with Windows; I guess you also don't care about that.

I do agree that using Qubes wastes more of my time than your estimates; however it also, e.g., encourages 100% safe tinkering for those who like it, prevents potential upgrade downtime, enables easy backup and restore process and more.

> I think security like this matters in places where the amount of financial gain for a breach is much much higher (concentrated stores of PII at a company with thousands of users for example)

How about owning crypto?


If I owned crypto I would store the keys on a medium that people don't expect to find keys on and it would definitely not be live. (example, laser etched barcode into a rock)


GPU support for Qubes is coming.


Opt-in for chosen, trusted VMs is coming.


Google Play has also recently started requiring this, and I know someone affected by it. It is very problematic for small developers because they may have to publish their home address, which opens them up to the possibility of harassment and so on. I don't think it's a good change.


In the US, if you run your own business your business address is public information already in most states. You can use a registered agent address in the US which can keep your personal address off the public record for your business.

Most places do not accept a PO Box which is why most use a Registered Agent service.


That is also a thing in Europe.

I learned this from a friend who was under witness protection, so it was a necessity for him.

I'm using it mostly to keep the usual spam away from my mailbox.


I'm assuming you have to pay for such an agent?

Google Play also requires it if you're just publishing as an individual for free. I made my account back in 2010 and the last update was in 2011, and let it get restricted earlier this year because I didn't want to publicize that information.


Similar in the EU. There are all sorts of registries of self-employed people. Phone, address, date of birth, and in some cases even the local equivalent of a Social Security number are public. Some countries even publish income.


Do you have a citation? This is not true in my state.


I'm not going to cite 50 Secretaries of State. Delaware has an option to be more private than other states. Though if you sign up for the App Store, you need a DUNS number, which then makes the address mostly public.


You don't need a DUNS number in the App Store. I don't have one.


You do if you register as a Business. Individuals and Government are the only non-required entities.

https://developer.apple.com/support/D-U-N-S/


I'm an individual, and individuals are the ones most affected negatively by this Digital Services Act requirement.


I don't have a lot of apps on Google Play, and none of them are particularly popular - but some of them were/are useful.

I'm letting my account lapse in November when this falls due for me - no way in heck I'd be putting these details out there.

No great loss for google or the world, but still smarts a bit given the hours I've put in to them over the years.


I have a coworking space I use that accepts mail. It's cheapish ($200 a year) and allows me to keep my home address off of marketing materials.


> they may have to publish their home address

Wouldn’t a PO Box be a better option? They run as little as $10/mo for one at the USPS.


Nope, you can't use that as your legal registered address in EU.


An added expense, for an app that may not be paid.


If your app is not paid, then you're not a "trader" according to the EU.


You are according to the EU's rules of digital markets. Any app provider is.

If you don't process any personal information, don't have ads or purchases and don't have online backend services... Then you might have a chance of arguing you're not a business. But that's probably so few apps it's worth it to just require everyone to follow the same rules.


> You are according to the EU's rules of digital markets. Any app provider is.

This is simply not true. I've read the regulations, because I had to declare trader status myself.


Read the rest of the comment.


I did. You're wrong. And you don't even have to "argue". It's a self-declaration.


Not allowed.


It is allowed. I did exactly that for my apps.


It depends on how you're publishing your apps. If you have a registered business which publishes the app then you can use the business address, so a PO box might be viable. You'll need to make sure your business is registered on that address in your national business registry.

Otherwise you are required to give your personal address.

See e.g. https://dev.to/svprdga/privacy-nightmare-as-google-plans-to-... but there are plenty of other sources that match my experience.


I don't have a registered business. Nonetheless, I used a P.O. Box for my EU trader address in the App Store—a P.O. Box that I opened for this specific purpose—as well as a Google Voice phone number.

These were both accepted by Apple and published in the EU.


MediaWiki is actually pretty easy to set up on a web server, speaking as someone who's now done it twice. You plop the files into htdocs, make sure PHP is set up, set up vanity URLs if you want to, and then… well, that's it. The final step is to go to the site, fill in the setup form, download the settings file it gives you and upload it. It doesn't even need an external database, it can use SQLite; if email setup is annoying, it doesn't even need that. And it's the most powerful and flexible wiki software out there: if there's something you want a wiki to do, MediaWiki can do it, but it also isn't too bloated out of the box, so you can just install plugins as and when you need them. Thoroughly recommend it.


Making MediaWiki survive non-trivial amounts of traffic is much harder than simply setting it up. It's not an impossible task for sure but there's no one click performance setting.


Specifically, managing edge and object caches (and caching for anonymous viewers vs. logged-in editors with separate frontend and backend caches) while mitigating the effects of cache misses, minimizing the impacts of the job queue when many pages are changed at once, optimizing image storage, thumbnailing, and caching, figuring out when to use a wikitext template vs. a Scribunto/Lua module vs. a MediaWiki extension in PHP (and if Scribunto, which Lua runtime to use), figuring out which structured data backend to use and how to tune it, figuring out whether to rely on API bots (expensive on the backend) vs. cache scrapers (expensive on the frontend) vs. database dump bots (no cost to the live site but already outdated before they're finished dumping) for automated content maintenance jobs, tuning rate limiting, and loadbalancing it all.

At especially large scales, spinning the API and job queues off altogether into microservices and insulating the live site from the performance impact of logging this whole rat's nest.


Everything is hard at scale. You have to be pretty big scale before some of that stuff starts to matter (some of course matters at smaller scales)


While that's not wrong, the wiki loop of frequent or constant, unpredictably cascading content updates, with or without auth and sometimes with a parallel structured data component + cache and job queue maintenance + image storage + database updates and maintenance becomes a significant burden relatively fast compared to a typical CMS.


Also don't forget that any large gaming wiki will want significant amounts of either Cargo, Semantic MediaWiki, or (god forbid) DPL


> people have this pathological need to be angry at something or someone

Yes. I think this is corrosive to social media sites. It's also something that kinda defines them of course. If they never got anyone animated for any reason, they'd be boring, but there's some threshold where it starts making things a bad time.


> Sounds like somebody who might be happy finding the smallest and queerest Mastodon server they can find.

Hi, I am actually the author of the post, but I tend to keep a low-ish profile here. I think you may have misread me a bit, or I didn't make myself clear. I want to explain why as I think it may be insightful.

The thing that's bothered me for a long time now about the fediverse is that it has this culture of inter-instance suspicion. Fediverse users seem to expect instances to correspond to roughly what I call “subcultures” in the post, and then form the moderation/federation policy based on that, cutting you off from other subcultures that aren't aligned with “us”. Even if this doesn't happen at a technical level, people seem to act that way at the social level. A lot of weight seems to be placed on what instance you're on.

To me I can't help but find this deeply toxic. I actually have a single-user fediverse instance, and I suppose in some sense that means I have the “smallest and queerest” Mastodon server, but it's really very different. I did this because I refuse to belong to just one subculture. I think every person is part of many different little subcultures at the same time in their different spheres of life, and I really didn't want some instance admin deciding for me which cultures I am and am not allowed to be part of. I also didn't want people to immediately dismiss me as being from the “wrong” culture by my handle. I guess I'm one of those people who think friendships are somewhat sacred and social pressure to make all your friends be culturally aligned is not good.

As much as I said in the post that the lack of legibility in the audiences on Twitter with the current algorithm is bad, I think excessive legibility can also be bad. Humans are too tribal. The fediverse puts one particular tribal marker front-and-centre and it seems to break too many people's brains at some level.

Having to keep tabs on both Twitter and the fediverse already takes up a lot of my attention, so I haven't really felt able to maintain a serious presence on other places, but it seems like Bluesky might be closer to the Twitter philosophy here, so I'm more optimistic about that site if I had to pick a single successor to Twitter.

> That post is about as articulate as I’ve seen that explains why some people don’t want visibility, or who go “death con 3” (after Kanye West) when they see a reply they don’t like. There’s a real contradiction between the author’s desires and having a big pool to swim in, algorithmic feed or not.

I want to push back on this too. I don't think that no visibility between these groups is good, and I also don't think hyper-visibility is. I think what I'm arguing is that there's a happy medium that's being lost, or that the particular compromise Twitter had made Twitter specifically work.

[Added in an edit:] There's definitely some small and super insular subcultures that lose out from this constant bubble-bursting effect, but it's not just those that do. There's also for instance a pretty huge subculture I'm part of on Twitter that's defined by people being, more than anything else, open-minded and willing to assume good faith. That subculture doesn't hate contact with others, it really appreciates diversity, but for its own survival it needs to keep at least some distance from the people on Twitter who are so terminally tribal that they will attack them on sight.


It is actually quite remarkable that Twitter has kept functioning at a technical level. A lot of people expected the massive loss of engineering talent to doom them, but it seemingly hasn't.

However, the site is going through a serious cultural (maybe you could say spiritual) death, and that might have something to do with the lost institutional knowledge of what made Twitter tick at a level deeper than just the code.


On a technical level, I tend to agree. Never underestimate the skill of the employees to save their boss from horrible decisions.

That said, Twitter is no longer "open" like it used to be - I bet it just won't handle the public traffic anymore. That is directly tied to cratering revenue but it's hard to untangle from the owner's self-destructive drug-induced behavior.

It's a testament to the quality of the codebase in general. Good code is hard to kill, but it WILL be eroded to the point where it can no longer grow and be better.


That might be true about the traffic, but twitter wasn’t open before the musk purchase either, it would stop you from seeing posts if you weren’t logged in if you tried to scroll more than a few times. Its current behavior of showing just a few posts sorta, uh, randomly selected might be easier on the server, idk.


For most of its life, Twitter was completely open. You could see any tweet or user page with the URL, whether or not you were logged in. The only exception was if someone had “protected” their account.

Twitter as it runs now is far more locked down. And that happened after it experienced significant, noticeable outages.

Performance is not binary… Twitter is still “up” as a service but with a much smaller public footprint and handling much smaller amounts of traffic.


Do you know when they stopped being frictionlessly open? I'm curious when they started doing the "to continue, you have to log in" popovers. It preceded Musk, as I said. I think every link still worked, but scrolling and navigating Twitter like a typical website, instead of typing in a link, had login gates.


That is NOT true. As a Twitter addict, I know that was not the case.

Also, Twitter had two notable Spaces problems: launching the DeSantis campaign (into the ground), and a second fiasco just recently, which I actually forget the details of. The system just cannot handle truly planet-scale traffic like before.


As someone who wasn’t a twitter addict, I know it was true because people would link me to twitter, I would see the tweet but attempting to scroll too much would result in a log in popover years before the musk buyout.


I'm skeptical, honestly. Within the first few months of the acquisition, it seemed pretty clear to my tech-friend group that Twitter was on its way out and was going to fail. But it's been 2 years, and many of those same people still use Twitter quite a lot. Maybe it'll still fail, of course... it'll just take longer than anyone expected.

I was never a big fan of it, and never used it much, so I can't judge any loss of quality over the past 2 years. And since they now require a login most of the time, and I don't feel like logging in, I don't bother clicking through links to Twitter that people post.


My friends who were passing around twitter links still seem to do it. And the content of those links hasn't changed (random nature/technical info that geeks like) so the twitter posters haven't left either.

The only thing that has changed is that I used to click on those that looked interesting and now I don't, because I know I won't see the thread without a login.


I don't expect it to "fail" in a financial way because Elon Musk can just bankroll his political addiction.


I don't get why whole developer circles haven't left the platform yet. You can't ask people something via DMs without paying for the blue checkmark, which is totally unhelpful, plus staying sends the message that you don't care enough about the right-wing messaging the platform now sends. Or maybe these people are just OK with that, I don't know.


They have. Lots and lots of developers are now on the Fediverse. But there's a cultural schism between the types of developers that frequent Silicon Valley cafes and the ones that frequent the Chaos Computer Club.


Network effects are a hell of a drug.


It's not a network effect. There is nothing valuable that happens on twitter that doesn't make its way out of twitter to the rest of the internet and reality. Avoiding twitter is actually a great way to filter out literal nation-state-produced misinfo and propaganda that doesn't pass the smell test but seems rampant on twitter.

People are addicted to twitter because of FOMO, because god forbid they learn about breaking news an hour later than anyone else.


Software can work for a very long time with minimal maintenance. A different question is whether it can keep making revenue without investment.

Quality content has disappeared on Twitter and there is a proliferation of OnlyFans tweets. However, if you love Musk and right-wing conspiracies, it is still a fine platform to use.


> A lot of people expected the massive loss of engineering talent to doom them, but it seemingly hasn't.

Not doomed, but lots of failures did follow. Timeline not loading or repeating forever, SMS auth issue, API performance, going down in Australia, etc. - they experienced quite a few problems early post-change, but managed to recover.


When SWEs in tech want to flex, they code Twitter on a whiteboard as an interview question. Major respect for the work they're doing at xAI, but the hardest challenges in building Twitter itself are social.


Either Joel or Jeff wrote about this in the context of Stack Overflow. A developer who thinks of making Stack Overflow imagines writing SQL:

  create table users (id integer primary key autoincrement, username varchar(32), password_hash varbinary(32));
  create table questions (id integer primary key autoincrement, user integer references users(id), title varchar(256), body text);
  create table answers (id integer primary key autoincrement, question integer references questions(id), user integer references users(id), body text);

but actually that's just a tiny fraction of actually making Stack Overflow.


Uh, it has doomed them. Have you not heard about their massive drop in revenue as advertisers leave?

Sure, the advertisers say it's because of the platform being a cesspool. But anything on Twitter is also on Facebook or YouTube; you might just have to look faster or harder. However, that stuff isn't where Facebook/YouTube send your ads. The targeting on Twitter is so bad compared to the alternatives that advertisers just don't care to use it. This is an engineering problem caused by having nobody to fix it.


It should be noted that Objective-C code is presumably a lot less prone to memory safety issues than C code on average, especially since Apple introduced Automatic Reference Counting (ARC). For example:

• Use-after-frees are avoided by ARC

• Null pointer dereferences are usually safe (sending a message to nil returns nil)

• Objective-C has a great standard library (Foundation) with safe collections among many other things; most of C's dangerous parts are easily avoided in idiomatic Objective-C code that isn't performance-critical

But a good part of Apple's Objective-C code is probably there for implementing the underlying runtime, and that's difficult to get right.


Most of Apple's Objective-C code is in the application layer, just like yours is.


That's not true: the kernel driver knows what page mappings belong to particular processes, including GPU page mappings. Moreover, you have no choice but to talk to that kernel driver; you can't go behind its back and talk truly directly to the GPU, even if you bypass the userspace GPU driver, because this would allow circumventing memory protection. It is true, however, that modern GPU kernel drivers are relatively thin.

