> and, once again, I don’t think he’s capable of pushing any meaningful changes to the site, one, because he’s devoid of any imagination or creative thought
A bit heavy on the anti-Musk tirades for my taste.
> overblown, misleading, and potentially outright lies
I agree with you that her claims could be wrong, but please watch the original source material (https://youtu.be/3TKJN61aflI) - I think she is being much more nuanced than you're giving her credit for. She is presenting her findings, showing exactly how she did it, and asking for feedback ("I hope I hear more from people who have done more than me" - 43:10).
You can follow her workflow and disagree - and probably will! But that's ok - that's how science works. I look forward to many well-written posts going through her analysis and showing alternate interpretations. I'm guessing she looks forward to this too.
I hadn't actually found the original source material! I've amended my post to considerably tone down my accusations, since it appears I was working with an incomplete picture of the original presentation. After an initial flick through the presentation, it definitely appears to be more nuanced than what I was working from and what has been reported on!
This article was retracted (referenced in another thread), but it was essentially a summary of the source webinar that is still posted (but oddly unlisted) here: https://youtu.be/3TKJN61aflI
I encourage you to watch that video and judge the analysis yourself. In this video I feel like she is very balanced and is taking a scientific approach ("Here's how I did the analysis and here's what I'm saying - please let me know what you think") vs the media/twitter trying to summarize/polarize.
Yet it is still incredibly misleading and dishonest.
That should really teach you something about relying on the style of presentation for judging whether a claim is reasonable. In this case, it's garbage.
I think you're being overly sensationalist. In today's world I think it's healthy to assume someone is trying to reach for the truth instead of assuming malice. She's trying to present her findings and asking for feedback. I don't think that should be demonized - we've all been wrong before. Please watch the video.
Pretty much all the private investment in hyperloops is to get to the point of selling it for public infrastructure projects (it was originally proposed as an alternative specifically to California High Speed Rail), so, while the answer is mostly “no” in the strict present sense of “is”, it's also “yes” in the sense of the intended goal of the projects.
That just sounds like risk-free R&D, where the people's money only goes there once the idea has been proven by someone else, with no loss for the state if it goes nowhere near our interests.
Except in the real world, the “getting ready” from day one isn't just R&D but also propaganda. You counter the propaganda so it doesn't soften the ground for waste of public funds.
Personally, I care because it diverts attention and thrust from developing proven alternatives to air travel, such as HSR. When something that big hinges on a narrow swing in public opinion, and someone starts dangling a pie in the sky idea, the result can be abandonment of the main idea, at the service of an unlikely moonshot by a big ego.
I don't see HSR as a good alternative to air travel in a country as spread out as the US. a proper HSR route for the northeast corridor would be great; it's kind of ridiculous that a plane is the best way to get from boston to DC. when you start looking at traversing entire coasts, it's really hard for a train to compete with air travel. a non-stop flight from seattle to LA takes about two and a half hours, plus all the security nonsense. at 200mph, a train that makes zero stops would take about six hours. once you factor in travel to/from the airport and security, that's actually not too bad, but nonstop train routes are uncommon. the situation is even worse for cross-country flights. NY to LA is about six hours by plane; it would be about fourteen hours on a (highly unlikely) nonstop train.
frankly (as an east-coaster) I would be happy just to see the existing rail lines be price competitive with airlines. I greatly prefer traveling by train, and I would be willing to accept a longer transit time if it were not both slower and more expensive.
Both Europe and China are of comparable size to the US, and they have enjoyed high-speed trains for decades. It won't replace a NYC-LA trip anytime soon, but the coastal corridors and various regional hubs (Texas triangle, Lake Michigan, etc) are perfectly sized.
The problem is some people believe bureaucrats will spend the money better than a private group would. So they want to hand over the money, give those bureaucrats the power to spend it, and believe that someday they will produce a revolutionary product for free.
yes there already is public money going towards this, see the contract the boring company has with the Las Vegas convention authority, a public body, and whoops of course it's already a mess.[1]
This is infrastructure; if this ever is going to be a business it's going to be public money. It's not like someone builds a hyperloop on their cow farm.
In the extreme long term, humans will have died out and everything that was ever done will turn out to have been "waste". That said, I don't feel bad about doing things that make me feel good and/or are interesting. People can do hyperloops if they want.
X11 is a window system. An X client is an application that uses the X protocol (e.g., Firefox, GIMP, LibreOffice) while an X server is an implementation of the X protocol that handles the rendering. Note that an X server and an X client don't have to run on the same machine; if I log into a remote server and execute LibreOffice, LibreOffice is still the X client and my machine's X implementation is still the X server, despite the fact that LibreOffice is running remotely.
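A minimal sketch of that remote case, with made-up hostnames ("devbox" is the remote machine, "mydesktop" is the machine in front of you running the X server):

```shell
# On devbox: the DISPLAY variable tells every X client which X server
# to draw on. The format is host:display.screen.
export DISPLAY=mydesktop:0.0
echo "$DISPLAY"       # prints: mydesktop:0.0
# libreoffice &       # would run on devbox but render on mydesktop's screen
```

In practice you'd usually let `ssh -X` set DISPLAY for you (to a proxy display like `localhost:10.0`) rather than exporting it by hand.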
XFree86 is a specific X server implementation, though it was superseded by X.org around 2004 after a license change from the MIT license to one with a 4-clause-BSD-style credit clause, which is incompatible with the GPL. X.org is by far the dominant X server implementation in free and open source software, but it is not the only implementation; off the top of my head, there were proprietary X server implementations for SunOS and NeXT.
> though it has been superseded by X.org sometime around 2004 due to a license change from the MIT license to the 4-clause BSD license,
My understanding is that the scene for the fork had already been set by the time of the relatively late license change. I seem to recall reading at the time that the XFree86 core team was unhappy that a sole contributor was attempting to modernize the system by introducing extensions, without building consensus around them to their satisfaction.
So the license change was meant to prevent forks from merging in further work on XFree86.
Of course the fork was adding functionality everybody wanted. So the fork survived and upstream languished.
Based on my understanding, the license change was "the straw that broke the camel's back." There were already murmurs of a fork due to disagreements among XFree86 developers, but the license change was the final push that led to distributions choosing not to adopt XFree86 4.4, the first version with the new license.
> I seem to recall reading at the time that the XFree86 core team was unhappy that a sole contributor was attempting to modernize the system by introducing extensions, without building consensus around them to their satisfaction.
This is a kind of odd retelling/interpretation of events, considering the “rogue” contributor version is the only one that lived and XFree86 is history.
You may find it odd, but that is what the other, now long forgotten side said at the time, and I'm describing that without taking their side. I noted as neutrally as I could that they were "unhappy" and consensus was not built "to their satisfaction", but then that their project soon died because the interesting work was going on in the fork.
They envisioned a world in which they needed license changes to block the fork from stealing all their work... And worthwhile contributions from their side did not materialize.
I mean I think the “sole” contributor you are referring to is Keith Packard? Don’t see any reason to not name names here. In any case I guess what I find odd is that while he did a shit ton of work in spearheading a fork, it’s not like it was just him. There were a lot of people, greatly outnumbering the XFree86 steering committee, who were unhappy with the direction of the project. It’s not like it was just Keith. That’s what I find odd about your assessment. Obviously there was enough consensus to fork.
You will note that Keith is directly blamed for the fork and undermining XFree86. It sounds like there is also resentment for how XRENDER was done, also driven by Keith, but it would seem with more cooperation within the project. We know that the fork was the victor in history and XFree86 looks unreasonable to most observers but I was merely trying to accurately convey the other side in the dispute, without taking their side.
It does seem like there was a bit of tension between "Linux on the desktop" types (driven by end-user-visible features and represented by the fork) and more conservative "old hand at X" types present in the thread, resisting such changes, maybe sometimes for good reasons and sometimes for bad. Though when I google around, it seems Keith and others were active in X before the rise of Linux.
Suffice it to say that Packard was involved in X before XFree86. But the point was that although he was an actual productive contributor he was far from working outside of a consensus. There’s really no sides here. The market could have continued with XFree86, it did not.
While it's true that X was originally network transparent, this hasn't truly been the case for a long time now. All modern applications (including the ones you've listed) render the whole window locally for performance reasons. This does not work over the network. Instead, xorg will transmit a picture of the window, giving you what is, in many ways, a worse version of VNC. The only difference is the level of integration with other programs like SSH.
It has a long and storied history. The concept of running an app remotely and using a local display server is probably the thing that sets X11 apart from the rest of the pack. This is still really useful in my humble opinion -- I can run X11 apps (e.g. xv) on a headless EC2 server running Linux in the cloud and have the UI/display on my Mac (using XQuartz).
This story runs in parallel with the history of Unix itself.
The fact that there is an implementation of X on Mac speaks to the power of this system, IMO.
However, regarding remote setups, is X forwarding significantly superior to using something like VNC? I've never done a side-by-side myself, but most of what I've read indicates that VNC is usually faster.
As a semi-experiment, a few months ago I decided to try running Firefox in a Linux VM, and render it in XQuartz on my Mac host. The experience was not pleasant, to say the least. And that's what I'd think to be the best-case-scenario for latency. Hard for me to imagine using it over WAN.
There was a lot of work done to make X11 work better over low bandwidth/high latency links. [0] But in the end you could get most of the benefits just by forwarding your X session over SSH with compression enabled. Although X11 is old enough that compression would have used too much CPU back when it was developed.
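For what it's worth, the SSH side of that can be set once in your client config instead of passing flags every time. A sketch, with a made-up hostname:

```
# ~/.ssh/config -- equivalent of `ssh -X -C devbox.example.com`
Host devbox
    HostName devbox.example.com   # made-up host
    ForwardX11 yes                # same as the -X flag
    Compression yes               # same as the -C flag
```

If some apps misbehave under the X11 SECURITY extension restrictions, `ForwardX11Trusted yes` (the `-Y` flag) disables them, at some security cost.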
The more interesting comparison between X forwarding and VNC is that X is really only displaying a remote application locally, not an entire screen or desktop. (Unless you use xnest to run X inside X which is pretty cool.) I used to have a FreeBSD workstation with a browser window running on a Linux server and a bunch of terminal windows spun up on different Sun servers, and you couldn't tell the difference between your local and remote windows. Remote windows are first class citizens in X11.
A better comparison with X is the RDP protocol, which is also an API-level protocol (it's basically forwarding large parts of the Windows GDI interface).
OTOH, the big strength of RDP is the async stream requests. This gives it a much lower apparent latency than X, but it can also do the VNC thing and render/compress parts of the UI on the remote machine.
Depends on the era of apps. If you're doing mostly bitmaps (that is, Qt4+, GTK2+, anything that draws its own UI, anything with HTML incl all Electron, anything using GL, anything using composition) then VNC will work better, as it is better at transferring bitmaps. Working with Xlib, GTK1, or Qt3-era or earlier apps, X will perform astoundingly better. Test with xemacs next time, and you should see your results inverted.
It started a long time ago, back in the days of mainframes and thin clients. Mainframe boxes would run the server-side part of "X Server" while clients (ambiguous term in this context, I'll admit) would send commands (e.g. keyboard strokes) and receive graphics back.
XFree86 is a free software implementation of the X Window System.
The whole server/client thing is a bit outdated now, since almost nobody uses actual thin clients anymore. X forwarding is useful, but it's not a mainstream way to do your computing.
Not quite. The X server ran on the client only. The "mainframe" (usually a Unix server) had client programs that would connect to your X server, send it display commands, and receive pointer and keyboard events. The client-server relationship was reversed from the usual way, because the server was sitting in front of the user. The end-user computer was providing a service -- access to the user -- to the big machine in the server room.
I like this analogy... It's very true, and modern setups do a lot of local rendering, which at the time of X terminals was a bit of a pipe dream; they could only render a few things locally, like fonts.
Sun also tried to get into this game early with the JavaStations. But they were ridiculously slow, so painful to use. In fact they kind of ruined Java for me. I was learning it at the time, and we got a JavaStation on loan from somewhere. Working on it just established this "Java = slow" feeling in my head and really put me off so much I went to do other things.
Of course this wasn't really deserved, and Java has gone on to become powerful (though I still consider its poor intra-version compatibility an issue). But I've never been able to quite shake that feeling.
In the linux world I work in, a very large percentage of developers are frequently, often unknowingly, running X via ssh tunnels. It's just too convenient to be able to run the graphical version of tools on the remote dev/test machine rather than to live all day long restricted only to the GUI tools on your local machine and a terminal session.
The one place that X could use some updating is all the synchronous calls, which, unlike RDP's, become quite slow if the network connection has any real latency.
> The one place that X could use some updating is all the sync method calls, that unlike RDP become quite slow if the network connection has any real latency.
I wouldn't say the one place, but it is a pain point.
(And actually, many of the supposedly synchronous things are not synchronous at the protocol level, but at the C library binding level -- xlib. Nowadays there's a binding much closer to the protocol: libxcb.)