
Another way to think of it is like a racket: if you don't join, you suffer. Once everybody joins, the customer base is the same. In this way, it makes for a good, although perhaps distasteful, investment.


I like to watch movies with friends. How do we do that in VR?

I like to watch movies while eating dinner. How do I do that in VR?


Easy: match up light field cameras with light field displays. Think polarized 3D screens, except a billion times better: they can project the light as if the objects were literally just behind the screen.

Add eye tracking and put the screens across every surface and you can make the views follow your field of vision.

Or you use AR glasses that essentially project images in front of you.


How do I shovel food into my maw when I'm wearing VR goggles? What happens when my spoon misses the bowl?


AR = augmented reality. You still see your own world and your own spoon and bowl. You can definitely have the projected view anchored in space above the physical things you interact with.


I can hear the complaints already: "Ouch! Dropbox Sync just kicked in and lagged my AR while I was eating dinner. Now I've got hot ramen all over me... fml."


There's transparent screen AR, like with HoloLens and Meta glasses from http://www.spaceglasses.com


Wow, 8 months to write an OS. These guys must be good!


>>I can find, so it's profoundly stupid to disproportionately filter out entire demographics based on bogus criteria such as prior familiarity with incantations like “nohup tar -jxvf giant.tar.bz2 2> cmd.errs &”

Is this the best you can do?


Well, it's also a poorly motivated dream unless you have a sexual fetish for the Java style of abstraction where everything is completely independent. Who the heck wants multi-threaded GUIs?

User interaction proceeds sequentially, so most objects don't require locks. The rare exceptions in my software are rendering or IO on a separate thread, and these don't fit abstraction models nicely; as mentioned in other posts, they involve C-style state machines like OGL.

A multi-threaded GUI seems like a great way to kill performance, with little advantage.


Anyone who has seen the responsiveness of e.g. AmigaOS under heavy load next to many modern systems might be inclined to want (more) multi-threaded GUIs.

Heavy use of multi-threading to disconnect GUI updates from the actual work was essential to making that happen.

AmigaOS sacrificed throughput for responsiveness all over the place (e.g. something as trivial as cut and paste from a terminal would easily involve half a dozen threads with message passing).

You don't need separate threads for every little component, though.


Was the UI really heavily multithreaded compared to today's systems? I thought there were a lot of events and message-passing going on in AmigaDOS, much like in current GUI systems.


Depends on what you mean by "heavily multithreaded". And yes, you're right, there were lots of events and message passing, but that message passing went between different threads.

I mean to write this up for a blog post and do some proper diagrams, but here's a rough overview of the state transitions when handling terminal IO for AmigaOS and reimplementations of the API, like AROS (this is where I got hands-on experience with it; I extended the AROS terminal handling):

Low level interrupt sources will be handled by "devices" such as "keyboard.device" and "gameport.device" (the latter handles the mouse/joystick ports). These will feed input events into "input.device".

The input.device is opened by any component that wants to handle input events. This includes the "console.device", which is responsible for providing a "raw" terminal in a specified rectangle in a window. It handles low level input processing, turning keyboard and mouse input relevant to the console/terminal into higher level events which it passes on to clients, and it also takes commands (such as "move cursor to position (x,y)" or "write text xyz") and renders the terminal.

Above the console.device sits the console-handler (applications can, and often do, open console.device directly if they want a low level interface). This is responsible for opening a window, creating a console.device that covers the window, and "cooking" low level input into higher level input and vice versa for output.

The "gadgets" (widgets; buttons etc. in the windows) will be handled directly by intuition (the GUI system) in a separate high priority thread.

If you then do cut-and-paste, there are additional complications: "conclip" needs to be running. This receives requests to cut or paste via messages, and mediates access to the clipboard.device. The clipboard.device again manages reading/writing files in the relevant clipboard volume. That will involve talking to the appropriate filesystem handler, which again may write to a device (such as trackdisk.device for the floppy drives).

Pretty much all of these components will run as their own separate threads. And most of their interaction is via messages put on a queue.

So if you choose to "cut" a section by pressing a key combination, an interrupt will be fired to keyboard.device, which will add an event via the input.device which the input handler thread ("task" in AmigaOS) will pass to the console.device thread via a message, which will pass it on to the console-handler, which will pass a message to conclip, which will pass the data on to the clipboard.device which will send a message to the relevant filesystem, which may send a message to a low level device. After sending a message to conclip, the console-handler will send a message back to the console.device if there's any rendering required.

The reason for all of this is that coupled with careful priorities (UI rendering and input is running in high priority threads), the system appears very responsive, while a lot of this happens behind the scenes.

E.g. the clipboard system on the Amiga has to deal with a system where the clipboard could have been reassigned from the ramdisk, where it'd usually be, to floppy, so it really couldn't reasonably be "inline" without making the system unresponsive.

In that respect AmigaOS was more multithreaded: There's all kinds of things we consider fast enough to do "inline" now that was put behind a thread-boundary because it was either unpredictable or too slow to be done inline back then.
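To make the priority point concrete, here's a minimal C++ sketch of the pattern (not AmigaOS code; every name here is invented): a high-priority "UI" task hands slow clipboard work to a background task through a message port and goes straight back to input/rendering.

    // Sketch only: a "UI" task posts slow clipboard work to a background
    // task via a message port and immediately returns to handling input.
    #include <condition_variable>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <string>
    #include <thread>

    struct CutRequest { std::string text; };

    class MessagePort {                       // very loose analogue of an Exec message port
    public:
        void put(CutRequest msg) {
            { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(msg)); }
            cv_.notify_one();
        }
        CutRequest wait() {                   // block until a message arrives
            std::unique_lock<std::mutex> lk(m_);
            cv_.wait(lk, [this] { return !q_.empty(); });
            CutRequest msg = std::move(q_.front());
            q_.pop();
            return msg;
        }
    private:
        std::mutex m_;
        std::condition_variable cv_;
        std::queue<CutRequest> q_;
    };

    int main() {
        MessagePort clipboardPort;            // stands in for conclip/clipboard.device

        std::thread clipboard([&] {           // background task: slow, unpredictable IO
            CutRequest req = clipboardPort.wait();
            std::cout << "clipboard stored: " << req.text << "\n";
        });

        // "UI" task: post the cut request and get straight back to rendering.
        clipboardPort.put({"some selected text"});
        std::cout << "UI task is already back to handling input\n";

        clipboard.join();
    }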


But I think the thing missing is that each of these threads ran on the same CPU (right?), and thus things like torn reads and writes weren't an issue, so they didn't need to use expensive std::mutex or std::atomic everywhere.

Today, we have single concurrent execution by enforcing a single GUI thread, at the time of the Amiga they had single concurrent execution because that was the only execution they had.


They ran on the same CPU, but you still had to use mutexes and atomic operations because it had fully pre-emptive multi-tasking, and so your application had to be ready to lose execution from one instruction to the next. Torn reads/writes definitely were an issue for higher level code (unless you could be guaranteed that your construct would translate to a single m68k instruction), and needed to be kept in mind even in assembler in some situations (see below).

In fact, you'll find lots of Amiga software being more brutal and enforcing serial execution for critical sections by using Forbid()/Permit() pairs, which outright disable the scheduler, or even using Disable()/Enable() (which disables interrupts too). Of course this is/was very much frowned upon for anything but implementing atomic operations, though even this is not guaranteed to be totally atomic in an Amiga system without taking care.
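As a rough modern-C++ illustration of the kind of critical section involved (hypothetical names; on the Amiga the blunt tool described above was a Forbid()/Permit() pair rather than a std::mutex):

    // A two-field update is not atomic, so a task preempted mid-update
    // can expose a torn state to another task.
    #include <mutex>

    struct CursorPos { int x = 0; int y = 0; };

    CursorPos g_cursor;          // shared between tasks
    std::mutex g_cursorLock;     // Amiga-style alternative: Forbid()/Permit()

    void moveCursor(int x, int y) {
        std::lock_guard<std::mutex> lk(g_cursorLock);
        // Without the lock, losing the CPU between these two stores could
        // let another task observe (new x, old y).
        g_cursor.x = x;
        g_cursor.y = y;
    }

    int main() { moveCursor(3, 4); }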

The need to protect against other threads/tasks is/was one of the first things hammered into the heads of Amiga-developers exactly because it was so new to most, who would usually come at it from 8-bit home computers where the standard procedure was that you fully controlled the computer except perhaps for some very trivial interrupt handlers (which most software would take over control of anyway).

And while each of the normal threads would be running on a single CPU in a basic Amiga, any number of devices could DMA (the Amiga depended heavily on this), and additionally both the Copper (a very basic "GPU" of sorts, used to set up "display lists" that manipulate various registers etc., though not limited entirely to graphics) and the Blitter could access memory at any time too, so you very much had to, at least in theory, be prepared to deal with memory changing during execution of an individual instruction if working in "chip memory" (the Amiga roughly works with two types of memory: "chip memory" is memory where auxiliary hardware can steal bus cycles from the CPU; "fast memory" is memory that only the CPU can access).

Also note that while unusual, there were true multi-processor Amiga-setups: There were "bridge boards" for the A2000 which effectively were an x86 PC on a card, where the "graphics card" was a buffer in chip memory that would get displayed in a window, and which would receive input from the Amiga keyboard and mouse. There were also PPC accelerator boards (a release of AmigaOS4 for "classic" Amiga hardware with PPC accelerator boards exists; it basically runs everything it can on the PPC, just like for "new" Amiga hardware), though usually these would disable the M68k while the PPC was executing stuff (but I'm not sure if this was enforced by hardware or if it was done by the OS patches for simplicity).

I used to love to tell people of all the different CPUs in my A2000: a 68020 with the 68000 as fallback (if you soft-disabled the 68020 for compatibility) on the motherboard. A 6502-compatible core on the keyboard (the A500 and A2000 keyboards had an embedded SOC chip with a 6502 core + RAM + PROM as the keyboard controller). A Z-80 on my harddisk controller. An 80286 accelerator board + 8086 fallback on my bridge-board... Of course, of the 68020/68000 and 80286/8086 pairs only one of each architecture could ever be running at once.


I really enjoyed reading this post, and it looks like the Amiga was ahead of its time.


It was a fantastic machine. Unfortunately Commodore all the way through (from long before the Amiga) was an absolutely dysfunctional disaster of a company, and it was a miracle they lasted as long as they did (and a testament to the calibre of the people that kept saving the company from self-inflicted wounds).

The biggest problems were perpetual under-investment in R&D and management meddling that systematically whittled away at the lead they once had. The archetypal example is the Amiga 4000. On one hand it is the "flagship": the biggest, fastest classic m68k Amiga produced.

On the other hand, it arrived late, was ridiculously expensive, and was slow for what was there. The problem? New management wanted to start all projects over from scratch and put their stamp on them.

IDE, for example, was suddenly pushed onto engineering, without understanding that the Amiga used SCSI for a reason: IDE of the time loaded the CPU too much. That's fine on a single-tasking OS, or on machines with more CPU, but the Amiga was built around offloading everything. Offloading was the only thing that kept it competitive in the face of Motorola's mounting problems with upping the speed of the M68k range (work was underway to evaluate alternative CPUs; PA-RISC was the lead contender at the time; in the end Commodore went bankrupt before making a decision, and third parties chose PPC).

The A4000 was the result: IDE dragging down IO performance; a broken memory sub-system due to rushed redesigns; a butt-ugly case compared to the sleek A3000; and an attempt to compensate for the other problems by going for a 68040, but going "cheap" and picking one of the slower versions, yet still ending up too expensive.

The truly crazy thing, though, is that as they were doing this, the "A3000+" was pretty much done. It didn't have quite as fast a CPU, but it was a step up from the A3000. It had AGA (the last custom chips, which the A4000 also got) and a range of other improvements, such as a DSP providing high-end sound (8 CD-quality channels) that could also double as a built-in modem. And it kept SCSI...

The best part? It was far cheaper than the A4000, and would've been ready much faster. Of course Commodore had to axe it...

Being a fan of the Amiga at the time was painful...


> half a dozen threads with message passing

On a single-CPU no-MMU machine, message passing looks very similar to a function call.


It still introduces context switches and insertion into a queue, and I can tell from first-hand experience tuning the terminal code for AROS that even running in a single address space, using a single CPU and no MMU on modern hardware, being careless about how you do the message passing will still kill your performance.
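For illustration only (this is not the actual AROS change, and all names are made up), the classic way that carelessness shows up is sending one message per item instead of batching:

    // One message per character means one queue insert (and potentially a
    // context switch) per character; batching sends one message per write.
    #include <string>
    #include <vector>

    struct Msg { std::string payload; };

    void post(std::vector<Msg>& port, Msg m) {   // stand-in for queue insert + wakeup
        port.push_back(std::move(m));
    }

    void writeCareless(std::vector<Msg>& port, const std::string& text) {
        for (char c : text)
            post(port, Msg{std::string(1, c)});  // one message per char
    }

    void writeBatched(std::vector<Msg>& port, const std::string& text) {
        post(port, Msg{text});                   // one message for the whole write
    }

    int main() {
        std::vector<Msg> port;
        writeCareless(port, "hello");            // 5 messages
        writeBatched(port, "hello");             // 1 message
    }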


> Who the heck wants multi-threaded GUIs?

Widgets aren't static; presumably you want them to keep updating (spinners, size changes, status updates) when the user is interacting with other elements.


Modern single-threaded UI frameworks don't go into a loop while you're interacting with something. Say you click on a button and drag. This enters the UI framework as a "mouse down" event followed by several "mouse dragged" events. After handling each event (or between events), the framework can decide to do other work, like updating a spinner.
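Roughly, the shape is the following (no particular framework; everything here is invented for illustration): handle one event, then do idle work such as advancing a spinner before waiting for the next one.

    // Single-threaded event loop sketch: dispatch events one at a time and
    // interleave other work (e.g. spinner animation) between them.
    #include <deque>
    #include <iostream>

    enum class EventType { MouseDown, MouseDragged, Quit };
    struct Event { EventType type; int x; int y; };

    int main() {
        std::deque<Event> queue = {
            {EventType::MouseDown, 10, 10},
            {EventType::MouseDragged, 12, 11},
            {EventType::MouseDragged, 15, 13},
            {EventType::Quit, 0, 0},
        };
        int spinnerFrame = 0;

        while (!queue.empty()) {
            Event e = queue.front();
            queue.pop_front();
            if (e.type == EventType::Quit) break;
            std::cout << "handled event at (" << e.x << ", " << e.y << ")\n";

            // Between events the framework is free to do other work,
            // e.g. advance a spinner animation.
            ++spinnerFrame;
        }
        std::cout << "spinner advanced " << spinnerFrame << " frames\n";
    }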


That's not a "modern" GUI framework approach; event loops have been a feature of GUI systems going back to the Apple Lisa and Windows 1.


Yes, event loops are quite an old idea. I just put "modern" there to exclude old systems that encouraged polling. For example, I'm pretty sure classic Mac OS entered a loop while tracking the mouse in menus.


Multithreading the GUI adds a lot of responsiveness. Sure, even your phone's CPU has 2+ cores at 1+ GHz, but if it still stutters then your OS programming model is wrong.

PS: I have yet to see a single-threaded GUI I can't make stutter while playing video.


You want a multi-threaded GUI when it's running a long task. Otherwise it won't update until the task finishes, there's no way to cancel it, and so on.


Uh-huh.

Typically GUI work is instantiated with the GUI toolkit on the call stack. It calls foo.onClick(), etc. Now, if one particular onClick starts a long-running task, then there are three possible designs:

Either that particular onClick() starts a worker thread and returns before the worker is done.

Or the GUI toolkit delivers the onClick() in a thread of its own, e.g. from a pool of workers.

Or everything is done in one thread, and the UI blocks.

The last one seems sucky, but the insidiously sucky one is the one in the middle. That's where every user's implementation of onFocusOut() must take care to lock because all of bar.onFocusOut(), foo.onFocusIn(), foo.onMouseUp() and foo.onClick() are called concurrently in four different worker threads. The tail wags the dog.


You don't need a multi-threaded GUI for that. You can have the GUI running on one thread while the task runs on another.


Sure. You put a message queue in there and create command objects for all the updates that tasks might want to make to the UI. It's not hard; in fact it's thoroughly mechanical, so it's exceedingly tedious to program.

So why isn't the computer doing it for me? I'd happily sacrifice some performance if I could just write the change I wanted to make in the thread where I wanted to make it, and have the computer take care of the bookkeeping.
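For concreteness, the "mechanical" version looks something like this in C++ (all names invented): one little class per kind of update, constructed on the worker and applied later on the UI thread.

    // Command-object boilerplate: every UI update a worker wants becomes
    // its own object, queued for the UI thread to apply.
    #include <iostream>
    #include <memory>
    #include <queue>
    #include <string>
    #include <utility>

    struct UiCommand {
        virtual ~UiCommand() = default;
        virtual void apply() = 0;             // runs on the UI thread
    };

    struct SetProgress : UiCommand {          // one of these per kind of update...
        explicit SetProgress(int p) : pct(p) {}
        void apply() override { std::cout << "progress bar: " << pct << "%\n"; }
        int pct;
    };

    struct SetStatusText : UiCommand {        // ...and another, and another
        explicit SetStatusText(std::string s) : text(std::move(s)) {}
        void apply() override { std::cout << "status: " << text << "\n"; }
        std::string text;
    };

    int main() {
        // In a real program this queue would be mutex-protected and drained
        // by the UI thread; here we just show the bookkeeping involved.
        std::queue<std::unique_ptr<UiCommand>> uiQueue;
        uiQueue.push(std::make_unique<SetProgress>(40));
        uiQueue.push(std::make_unique<SetStatusText>("still working..."));

        while (!uiQueue.empty()) {
            uiQueue.front()->apply();
            uiQueue.pop();
        }
    }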


You can just send a closure to the UI thread and have it executed there, provided you are using a good enough programming language.

Or start the long-running computation from the UI code in a way that returns a promise, and chain the UI update on it, in a way that causes that continuation to be scheduled on the UI thread.

Or have a UI that can be updated from any thread (but not simultaneously) and take the big UI lock.


In modern frameworks like Qt the message loop is mostly hidden.

An astute observer will also note that at the bottom of every Win32 program is a message loop monitoring events, such as the window being closed.
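That loop has a canonical shape (real Win32 API; window setup omitted here):

    // The familiar Win32 message loop: it pulls queued events off the
    // thread's message queue and dispatches them to the window procedure.
    // GetMessage returns 0 on WM_QUIT and a negative value on error.
    #include <windows.h>

    int run() {
        MSG msg;
        while (GetMessage(&msg, nullptr, 0, 0) > 0) {
            TranslateMessage(&msg);   // e.g. turn key events into WM_CHAR
            DispatchMessage(&msg);    // calls the window procedure
        }
        return static_cast<int>(msg.wParam);
    }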


> it's exceedingly tedious to program.

C++ programmer detected? ;)


Just have the ability to send closures (std::function) to other threads' message queues. Then you can use inline lambda syntax.
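Something like this sketch (invented names): the worker posts a lambda, and the UI thread just runs whatever lands in its queue.

    // Closure-based version: workers describe UI updates inline as lambdas
    // and post them to a queue drained on the UI thread.
    #include <functional>
    #include <iostream>
    #include <mutex>
    #include <queue>
    #include <thread>

    class UiThreadQueue {
    public:
        void post(std::function<void()> fn) {
            std::lock_guard<std::mutex> lk(m_);
            q_.push(std::move(fn));
        }
        void drain() {                            // runs on the UI thread
            std::lock_guard<std::mutex> lk(m_);
            while (!q_.empty()) { q_.front()(); q_.pop(); }
        }
    private:
        std::mutex m_;
        std::queue<std::function<void()>> q_;
    };

    int main() {
        UiThreadQueue ui;

        std::thread worker([&] {
            int result = 42;                      // pretend long-running work
            ui.post([result] {                    // the UI update, written inline
                std::cout << "label text = " << result << "\n";
            });
        });
        worker.join();

        ui.drain();                               // UI thread applies queued updates
    }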


I don't see how this has anything to do with anything. For example, the "flux" pattern is similar to model/view, which can be done by a single thread (i.e., Qt). In this scheme the main thread services events sequentially, calling the necessary object's methods on the same thread.

The discussion is about the 'dream' of pushing a button from a worker thread.


I thought that in flux the worker thread could send a "push button" message to the dispatcher, which would deal with it asynchronously. Likewise, the updates to the views are asynchronous (and potentially multi-threaded) with respect to the actions coming into the dispatcher queue.


The thing is that the entity that pushes a button is a human, not a worker thread.


It is very common to have async HTTP requests that "push a button", so to speak, when they resolve.


Maybe it only works for apples and strawberries.


We should make a 90s version where each pixel is a <td>; I bet it would render faster than this!


Nonsense. Why did France and the UK enter the war? How about Poland?


Well, France tried to avoid confrontation, and the U.K. was trying to hold on to its empire and contain the German threat, but also had many business interests in Germany. The Allies thought they could diplomatically outmaneuver Hitler.

France and the U.K. then caved to Hitler's demands repeatedly, essentially selling Czechoslovakia and Poland down the river to Hitler at Munich in 1938 and during the "phoney war". France was then surprised by the swiftness of the German attack but capitulated very quickly. Her armies were still mostly intact when France fell.

It must also be remembered that the extent of collaboration in France under occupation was very great and the French resistance was always very small.


When they realized that they were going to be right fucked. And they didn't really "enter the war" when they declared war.


The wolf, dingo, dog, coyote, and golden jackal diverged relatively recently, around three to four million years ago, and all have 78 chromosomes arranged in 39 pairs.[6] This allows them to hybridize freely (barring size or behavioral constraints) and produce fertile offspring.


From Wikipedia [1]:

> Coyotes are closely related to eastern and red wolves, having diverged 150,000–300,000 years ago and evolved side by side in North America, thus facilitating hybridization.

[1] https://en.wikipedia.org/wiki/Coywolf


Four million years is recent? We split from chimpanzees three to six million years ago.


Yes, it is recent.

We are evolutionary newcomers.


I don't think chimps can breed with humans, however.


