dbdoskey's comments

Thank you for sharing, this was very insightful.

Do you have another example of something like this that your team had to deal with that was not as easy, but "looked easier" for the users?


There's loads of this in the UX space. To oversimplify: people's brains use expectations about what things are like in order to interact with the world. We build mental models of how things work, and when something looks like what we expect, we over-weight our confidence that we understand it.

So when people are presented with something visually appealing, they think it's easy to use, even when it isn't. And they then default to blaming themselves, not the pretty, elegant thing, because clearly the pretty, elegant thing isn't the issue.

We call this the aesthetic-usability effect. Perception of the expected experience, and attribution of the actual experience, matter more than the actual experience itself.

It's one of the many ways in which engineers, economists and analysts (in my experience) tend to run into issues. They want people to behave rationally, based on their KPIs, not as people actually experience and interact with the world.

There's all sorts of research that follows from this. One quick example: people enjoy wine they've been told is expensive more than wine they've been told is cheap, and the physiological response measured with an MRI confirms the reported divergence in experience, even though the wines are identical.

Low-contextuality evaluations (my term for when you ask someone to state things about something where they lack experience of enough breadth and depth to answer reliably) are always wonky. People can't comment on wine, because they don't know enough about wine, so they seek other cues to tell them what they're experiencing. Similarly, people don't know about things that are new to them (by default) or that look different from what they expect, so their experience is always reported as being worse than it probably actually is, because their brain doesn't like expending energy learning about something new. They'd rather have something they already understand. This is where contextualisation and mimicry come in really useful from an experience-design standpoint.


Replacing the Android home buttons with the swipe up gesture. It was demonstrably a very clear usability and efficiency loss, but most people strongly preferred it.

Before we had that latter data I actually argued against attempting it: I figured having a clear usability win vs the iPhone would be an area we could capitalize on, and I didn't believe we'd be able to execute the swipe system well in the time we had (I'd rather be behind and robust than leading-edge and flaky). But doing it was definitely the right call; I felt pretty sheepish about that one for a few years. The eng and UX teams that pulled it off were next level.


People's actual measured experience and people's experience of the experience are rarely the same thing when they have prior knowledge of one option and little knowledge of the alternative. They prefer the thing they know, even when it's worse.

And although you can still choose to have the back/home/menu buttons, more and more apps will misbehave and draw under them, sometimes rendering controls unusable or content nigh-invisible. One year ago no app I had did that. Now it’s up to three.

None. The US money Israel receives is used purely for buying from US defense contractors. This was developed purely by Israeli defense contractors. The US gets significant discounts on these Israeli-developed systems compared to other countries.

Also, the amount Israel gets is in the same ballpark as Egypt and Lebanon, but interesting that that is never mentioned?



So it goes from US citizens to the wealthy who sell the weapon systems. That's what "none" is. Cool.


This article is about an Israeli-developed system, so no US taxpayer money was used. Discussing your hatred for Israel is off topic here; maybe submit a different article about that, but it doesn't belong in this one.


Response to "How much U.S. taxpayer money was spent on this?" (now flagged): $1.2B.
Source: https://defensescoop.com/2024/04/25/iron-beam-procurement-us...


In theory, that is the benefit of having one agent that is limited to only writing the tests and another that only does the coding, and running them separately; that way, to make a failing test pass, you don't just change the test, etc.
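
A minimal sketch of what that separation might look like (hypothetical, not any particular framework's API; run_agent is a stand-in for whatever LLM call you use): each agent can only write inside its own directory, so the coding agent can't make a failing test pass by editing the test.

    # Sketch only: run_agent is a placeholder for an LLM-backed agent.
    from pathlib import Path

    def run_agent(prompt: str) -> list[tuple[str, str]]:
        """Placeholder; returns (path, content) pairs proposed by the agent."""
        return []

    def guarded_write(path: str, content: str, allowed_root: str) -> None:
        # Refuse any write outside this agent's area of responsibility.
        target = Path(path).resolve()
        root = Path(allowed_root).resolve()
        if target != root and root not in target.parents:
            raise PermissionError(f"{path} is outside {allowed_root}/")
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(content)

    def test_agent(spec: str) -> None:
        # May only touch tests/ -- it writes tests, never the implementation.
        for path, content in run_agent(f"Write tests for: {spec}"):
            guarded_write(path, content, "tests")

    def coding_agent(failures: str) -> None:
        # May only touch src/ -- it must make the tests pass, not change them.
        for path, content in run_agent(f"Make these tests pass:\n{failures}"):
            guarded_write(path, content, "src")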


OpenAI is not profitable because it is spending its resources on moving forward: training new models and creating new tools.


There is nothing to defend there. They could have easily:

* Made the donations go directly to funding browser development. Right now I don't know if it is even possible to donate purely to browser development.

* Opened a services/consultancy arm, similar to Igalia: a great and easy way to fund browser development (it's how Igalia has funded Servo development in the past).

* Created for-pay enterprise support. In the past, a lot of government organizations wouldn't use Chrome because of how Chrome updates worked; they could have made a killing in government contracts around that alone.

And these are just a few simple income directions that are pretty common in other OSS projects. Instead they pursued braindead ideas like being a VPN reseller, giving away Pocket, and other things no one wanted or asked for.


> * Made the donations go directly to funding browser development. Right now I don't know if it is even possible to donate purely to browser development.

The problem goes beyond them making it impossible to donate purely to browser development: they have arranged their structure such that you cannot donate to browser development at all. The Mozilla Corporation develops Firefox; it is a for-profit subsidiary of the Mozilla Foundation, and donations to the foundation can't be used for the for-profit browser development subsidiary at all.

They've built their entire legal structure around reliance on Google's payments.


Yeah it’s super frustrating. I donate $50/mo to Ladybird development and would do the same for Firefox tomorrow if they gave me any way to actually do it. I have no interest in funding their foundation initiatives.


Few for-profit companies accept donations. The compliance costs exceed the expected revenue.

You can fund the Mozilla Corporation by paying for Mozilla VPN, Mozilla Monitor, Firefox Relay, or MDN Plus.


If WSL 1 had ended up working out, it would have been one of the best historical coincidences in MS's history. A long-forgotten feature of the NT kernel, found in pretty much no other OS out there, used to push its dominance in the 90s, revived almost 30 years later to fight, once again, for relevance against Unix-based OSes. To quote George Lucas: it's like poetry, it rhymes.


I can tell you that if the POSIX subsystem in Windows NT had actually been a good enough UNIX experience, I would never have bothered with those Slackware 2.0 install disks.

And the subsystems concept was quite common in the microcomputer and mainframe space; Microsoft did not come up with the idea for Windows.


The original POSIX subsystem was just there so MS could say that it exists (and pass DoD requirements).

It actually got somewhat usable with the 2k/XP version, slightly better in Vista (notably, the utilities installer had an option to use bash as the default shell), and IIRC with 7 MS even mentioned the thing's existence in marketing again (under some cool new name).


Indeed, and that is why, if I wanted to do university work at home instead of fighting for a place at a DG/UX terminal on campus, I had to find something else.

I am aware it got much better later on, but given the way it was introduced and the mess with third-party integrations, as Microsoft always outsourced the development effort (MKS, Interix, ...), people never came to care about it afterwards.

First impressions matter most.


Realistically, anyone who cared would be using something like Cygwin (and the original UNIX server market segment evaporated due to Linux and had zero interest in migrating to NT in that form; some did migrate due to application-layer benefits like .NET, but not for the same workloads).


There is an alternative universe where Windows NT's POSIX support is really what it should have been in the first place, and Linux never takes off because there is no need for it.

Just as there is another one where Microsoft doesn't sell off Xenix and keeps pushing it, as Bill Gates was actually a big fan of it.


Obviously we'll never know, but I seriously doubt that parallel universe would've had a chance to materialize, not least due to the "free as in beer" aspect of Linux while the web/Apache was growing at the pace it did. All proprietary Unices are basically dead. Sun was likely the sole company with the best attitude toward living alongside open source, but they also proved it wasn't a good enough business after the bubble burst. NT and Darwin remain alive due to their desktop use, not their server use.


IBM z/OS is officially a Unix - a very weird Unix which uses EBCDIC - because it passed the test suite (an old but still valid version, which makes it somewhat outdated) and IBM paid the fee to The Open Group. (Although the certification is somewhat outdated, they recently added a partial emulation of the Linux namespace syscalls - clone/unshare/etc. - in order to port K8s to z/OS; but that's not part of the Unix standard.)

If Microsoft had wanted, Windows could have officially been Unix too: they could have licensed the test suite, run it under their POSIX/SFU/SUA subsystem, fixed the failures, and paid the fee, and then Windows would be a Unix. They never did; not (as far as I'm aware) for any technical reason, but simply because, as a matter of business strategy, they decided not to invest in this.


Had Microsoft shipped either Windows NT with proper UNIX support or a real UNIX with Xenix, there would have been no need for Linux, regardless of it being free as in beer.

Whatever computer people would be getting at the local shopping mall computer store already had UNIX support.

Let's also not forget that UNIX and C won over the competing timesharing OSes exactly because AT&T wasn't allowed to sell it in the first place. There was no Linux in those days, and had AT&T not sued BSD, hardly anyone would have paid attention to Linux; yet another what-if.


NT underlies the majority of M365 and many of the major Azure services. Most F500s in the US will have at the very least an Active Directory deployment, if not other ancillary services.

IIS and SQL Server (Win) boxes are fairly typical, still.


I am not suggesting NT is dead on servers at all. I am positing it would be dead had it not been for owning the majority of desktops. Those use cases are primarily driven as an ancillary service to the Windows desktop[1], and where they have wider applicability, like .NET and SQL Server, they have been progressively unleashed from Windows. The realm of standalone server products was bulldozed by Linux; NT wouldn't have stood a chance either.

[1]: In fact, Active Directory was specifically targeted by the EU antitrust lawsuit against Microsoft.


For all large corps, users sit at 1990s-style desktop computers that run Win10/11 and use Microsoft Office, including Outlook that connects to an Exchange server running on Windows Server. I'm not here to defend Microsoft operating systems (I much prefer Linux), but they are so deeply embedded. It might be decades before that changes at large corps.


That was true once, but not true now. On-prem Exchange is rapidly being squashed by Microsoft in favor of 365. The direction of travel for the Outlook client is clearly towards the web (I note anecdotally that the Mac client, always a poor relation to Windows, is so laughably clunky that the Mac users I know forgo it in favor of the web client). If the service is in the 365 cloud and the client is a web browser, who needs Windows for this discussion? We might end up in a future of terminals again for the worker bees, and 'real' computers only for the people who need Excel and Word and for whom the web versions don't cut it.


WSL 1 works fine. I much prefer it over 2 because I only run Windows in a VM and nested virtualization support isn't all there.

Also feels a lot less intrusive for light terminal work.


That would not be unique, as it is what BSD has done for Linux compatibility basically forever.


BSD and Linux are in the same bucket, so that doesn't count, not any more than MacOS compatibility with Linux. Windows is the odd one out.


I don't think it is fair to brush it off with "same bucket; doesn't count." The syscalls are still different and there's quite a bit of nuance. I mean, the lines you're drawing are out of superficial convenience and quite arbitrary. In fact, I'd argue macOS/Darwin/XNU are really Mach at their core (virtual memory subsystem, process management and IPC), and the BSD syscalls are simply an emulated service on Mach, which is quite different from traditional UNIX. The fact that as a user you think of macOS as much more similar to Linux is not really reflective of what happens under the hood. Likewise, NT has very little to do with the Win32 API in its fundamentals, yet Win2k feels the same to the user as WinME; under your framing, you'd same-bucket those.


> Likewise, NT has very little to do with the Win32 API in its fundamentals, yet Win2k feels the same to the user as WinME; under your framing, you'd same-bucket those.

I probably would, in this context. Well, maybe not WinME, because that was a dumpster fire. But any Windows coming down from the NT line, which is what's relevant in the past 20 years, sure. Same bucket.


Solaris did as well.


Looks amazing. Would love something like this in Firefox or Zen. Mozilla released Orbit, but it was never something that ended up really being useful.


Firefox already has something similar natively, but it's not enabled by default. If you turn on the new sidebar they have an AI panel, which basically looks like an iframe to the Claude/OAI/Gemini/etc chat interface. Different from Orbit.


That sidebar doesn't have the ability to do any actions on the browser tab, or have the data from the browser as context in any way. It is just a simple iframe.


If you click the three-dots menu above the iframe, you can select "Show shortcut when selecting text". That allows you to select text and then provide that as context to an AI prompt.

(At least, that's how I understand it - I have the feature turned off myself.)


Thank you! :)

Would love to explore a FF port. Right now, there are a couple of tight Chrome dependencies:

- CDP - mostly abstracted away by Playwright so perhaps not a big lift

- IndexedDB for storing memories and potentially other user data - not sure if there's a FF equivalent


FF supports IndexedDB directly; it has supported it fully since version 16 [0].

[0] https://caniuse.com/indexeddb


Thanks! Will track your project for the future. Looks very promising


A similar process is happening with zellij and tmux. Since I switched over I feel that tmux is obsolete.


I hadn't used Zellij before, but I tried it out. Visually it works better than tmux and it shares enough key bindings with tmux to make it a pretty seamless transition.

With that being said, the binary is huge. I get that zellij is statically linked, but tmux is about 900KiB and has minimal dependencies. I'm flabbergasted that a terminal multiplexer, stripped, is 38MiB.


Looking at the source code, I assume it's just the number of Cargo deps, some of which, at first glance, I'm not sure what place they have in a tmux-like tool.


Hopefully some effort is eventually put into slimming things down.


True, but zellij also does more. I'd also give it more of a stink eye if it were something I were running many times inside the inner loop of a script, but as something you generally launch once and leave running forever, eh.

I occasionally have to recalibrate my units. I just launched Emacs on my Mac and it's using 350MB of RAM. That's astonishing when I think about Amiga programs I wrote, but it's also just 0.53% of the RAM in this particular machine. It's probably larger than it could be if someone ruthlessly trimmed it back, but I'd rather spend that time using the other 99.4% of my machine to do more fun stuff.


I have a few embedded devices which have just 128MiB of flash, and they can run tmux just fine. I wouldn't even consider zellij for this purpose, of course, and having tmux down there is more of a "this is a nice thing for development purposes" thing.

Regarding memory usage, Zellij appears to take up 63 MiB versus tmux's 3.8MiB. It's nice and all, but quite a pig. This is on Linux, maybe Mac is different.


Embedded is a lot different, to be sure. I'm surprised there's room for tmux on something that tiny.

But on desktop systems, on my Mac, Zellij takes 28MB of disk and 40MB of RAM. That's 1/37,000th of my available disk and 1/1,600th of my RAM. I'm all for optimized, tiny apps, but those are below my attention threshold.


> I'm surprised there's room for tmux on something that tiny.

A question that comes to mind is, under what circumstances would you expect a TUI based program that processes streaming text not to fit on a system that is otherwise capable of user interaction? It seems vaguely in the vicinity of the simplest possible interactive task you could come up with.

Certainly it generally isn't worth hyper-optimizing mainstream desktop applications to wring out the last few MB, let alone KB, of RAM in this day and age. However that doesn't answer the question - why would more than 1 MB of program binary be required for multiplexing text in a terminal? At least at first glance it honestly seems a bit outlandish.


Note that "embedded" like this includes e.g. many modern routers.

Also note that computers with much less disk space than 128 MiB could and did run full-fledged GUI apps in the past. For example, the entirety of Windows 95 is ~100 MB when installed.


The product uses libevent and libc already, so adding tmux only consumes a few hundred KiB in the image. The root filesystem is squashfs, so it's even less in practice.


What does it do better than tmux?

Or is it more of a fish vs. zsh type of situation, where neither is obsolete, but the target audience is just very different?


Definitely more of a fish vs zsh situation, in my opinion.

tmux, to me, feels like "modern screen". It has some cool features, but at the end of the day, it just wants to be a terminal multiplexer. Great!

Zellij on the other hand seems to offer terminal multiplexing as an obvious first-class use case but "not the whole point". On the surface, Zellij is an opinionated terminal multiplexer that uses a nice TUI to provide discoverability, which you can turn off when you're ready to gain screen real estate. It's easy to make Zellij behave exactly like tmux/screen, and it's easy to configure via a single config file.

Where Zellij takes a turn in a different direction, however, is that the workspaces you can configure with it can do all sorts of interesting things. For instance, I once built[0] a Python CLI app which had a command that would launch a Zellij workspace with various tabs plugged into other entrypoints of that same Python CLI, basically allowing me to develop a multi-pane TUI as a single Python Typer app. In one pane I had the main UI, and then in another stacked pane I had some diagnostic info as well as a chat session with an LLM that could do tool-calling back out to the Python CLI again to update the session's state.
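
Roughly like this (a sketch only: "mycli" and its subcommands are made up, not the real zonein.py, and it assumes Zellij's KDL layout format and its --layout flag):

    # Sketch: write a Zellij layout whose tabs each run a different
    # entrypoint of the same CLI, then hand the terminal over to Zellij.
    import subprocess
    import tempfile

    LAYOUT = """
    layout {
        tab name="main" {
            pane command="mycli" {
                args "run"
            }
        }
        tab name="diagnostics" {
            pane command="mycli" {
                args "status" "--follow"
            }
        }
    }
    """

    def launch_workspace() -> None:
        # Persist the layout to a temp file and let Zellij take over.
        with tempfile.NamedTemporaryFile("w", suffix=".kdl", delete=False) as f:
            f.write(LAYOUT)
            layout_path = f.name
        subprocess.run(["zellij", "--layout", layout_path], check=True)

    if __name__ == "__main__":
        launch_workspace()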

I think wrapping up a project's dev environment as a combination of mise (mise.jdx.dev) and zellij or nix+zellij to quickly onboard devs to, say, a containerized development environment, seems like a really neat idea.

0: https://github.com/eblume/mole/blob/main/src/mole/zonein.py -- but this is mostly derelict code now, I've moved on and don't use zellij much currently.


> Where Zellij takes a turn in a different direction, however, is that the workspaces you can configure with it can do all sorts of interesting things.

That’s been a pretty standard feature of tmux since forever.

In fact the reason I first discovered tmux was because some Irssi (terminal IRC client) plugins used tmux to create additional panes for Irssi.

tmux is one of those tools that does a lot more than most people realise but the learning curve is steep and features aren’t easy to discover.

Zellij looks interesting but these days I mostly use tmux as a control plane rather than a terminal UI. So the enhancements of Zellij are wasted on me.


A quick example is that mouse scrolling works by default. I see it more like ripgrep vs grep. Either can do almost anything the other can, but one has much more modern, ergonomic defaults.


I used to use zsh; I still have karma trickling in on Stack Overflow from answering my own questions about some obscure configuration fine-tuning. But currently I'm more in a "give me the thing that works off the shelf" mood, so I take fish and don't plan to look back.

Byobu with tmux as the backend is my go-to solution if I want a multiplexer, for what it's worth.


Among a certain subset of Linux users, new is always better.



From a quick read, all I can see is a manifesto for emacs.


How though? Genuine question; X11 didn't obsolete terminals. Does Arcan do something X11 couldn't?


Not exactly true; big tech did exist, it was just different players. Sun was a strong player, as they were pushing Java, which was very popular in the enterprise world. Intel was considered a place that did a lot of interesting innovation. IBM/Oracle-style players.

I think the big difference was that big tech was mostly focused on the enterprise. The shift to consumer-focused big tech made these companies much more interesting places to work.


This is IMHO where the interesting direction will be. How do we architect code so that it is optimized around chatbot development? In the past, areas of separation were determined by API stability, deployment concerns, or even just internal team politics. In the future, a repo might be split off from a monolith to be an area of responsibility that a chatbot can reason about without getting lost in the complexity.


IMHO we should always architect code to take advantage of human skills.

1°) When there is an issue to debug and fix in a not-so-big codebase, LLMs can give ideas to diagnose, but are pretty bad at fixing. Where will your god be when you have a critical bug in production?

2°) Code is meant for humans in the first place, not machines. Bytecode and binary formats are meant for machines; those are not human-readable.

As a SWE, I spend more time reading code than writing it, and I want to navigate the codebase in the easiest possible way. I don't want my life to be miserable or more complicated because the code is architected to take advantage of chatbot skills.

And still, IMHO, if you need to architect your code for non-humans, there is a defect in the design. Why force yourself to write code that is not meant to be maintained by a human when you will, in any case, maintain that code yourself?


This human quite likes having everything on one page, to be honest. And not having a leaky ORM layer between me and the SQL.

