By default they just work like a 1080p display. It’s okay, but the image bounces with head movement, etc.
They work fairly well on Mac with their Nebula software, where you can have an extended display, control the distance of the screen, and have it account for head movements. The same software is not so great on PC, unfortunately.
On Linux, Wayne Heaney created Breezy Desktop, which is almost as good as Nebula, except for some artifacts around the edges during movement [0]. He’s also created a very nice driver for the Steam Deck.
With a little bit of work you can also do 3D games and movies with them. Not quite easy enough for an elementary-school kid to set up, but not much harder than that.
What I’ve found is that I don’t use them around the house, but do while traveling. Not sure if that justifies the price. I’ve also got both versions of the Beam. Overall, I’d say it’s a push. When I’m traveling, I’m glad I have them. When I’m not traveling, I wonder why I have them.
I have one. I find it pretty bad for work: the resolution is low, text is generally blurry, and the corners of the screen are almost unusable due to blurriness, chromatic aberration, and other distortions.
Private money doesn't want to finance this, and State money should NOT finance this, as it's an individual "luxury" solution to a societal issue. Hence, there's no money or interest and the companies fail. That's a very acceptable and normal cycle.
Could be, but he does not strike me as someone who is looking for fame.
Plus the whole discussion about why he would like to move to SF but can't seems pretty authentic.
I really don’t understand why we give credit to this pile of wishful thinking about the AI corporation with just one visionary at the top.
First: actual visionary CEOs are a niche of a niche.
Second: that is not how most companies work. The existence of the workforce is as important as what the company produces.
Third: who will buy or rent those services or products in a society where the most common economic driver (salaried work) is suddenly wiped out?
I am really bothered by these systematic thinkers whose main assumption is that the system can just be changed and morphed willy-nilly, as if you could completely disregard all of the societal implications.
We are surrounded by “thinkers” who are actually just glorified siloed-thinking engineers high on their own supply.
Gwern's (paraphrased) argument is that an AI is unlikely to be able to construct an extended bold vision where the effects won't be seen for several years, because that requires a significant amount of forecasting and heuristics that are difficult to optimise for.
I haven't decided whether I agree with it, but I can see the thought behind it: the more mechanical work will be automated, but long-term direction setting will require more of a thoughtful hand.
That being said, in a full-automation economy like this, I imagine "AI companies" will behave very differently to human companies: they can react instantly to events, so a change in direction can be effected in hours or days, not months or years.
> Someone probably said the exact same thing when the first cars appeared.
Without saying anything about the arguments for or against AI, I will address this one sentence. It is an example of an appeal to historical hypocrisy, a form of the tu quoque fallacy. That someone criticizes X, and you can compare X to something else (Y) from another time, does not mean the criticism of X is false. There is survivorship bias at work as well: we now have cars, but the same dismissal could have been made of some other technology that failed, and we don't make that comparison, because, well, it failed and thus we don't remember it anymore.
The core flaw in this reasoning is that just because people were wrong about one technology in the past doesn't mean current critics are wrong about a different technology now. Each technology needs to be evaluated on its own merits and risks. It's actually a form of dismissing criticism without engaging with its substance. Valid concerns about X should be evaluated based on current evidence and reasoning, not on how people historically reacted to Y or any other technology.
In this case, there isn't much substance to engage with. The original argument, made in passing in an interview covering a range of subjects, is essentially: [answering your question, which presupposes that AI takes over all jobs] I think it'll be bottom up because [in my opinion] being a visionary CEO is the hardest thing to automate.
The fact that similar, often more detailed assertions of the imminent disappearance of work have been a consistent trope since the beginning of the Industrial Revolution (as acknowledged in literally the next question in the interview, complete with an interestingly wrong example), and that we've actually ended up with more jobs, seems far more like a relevant counterargument than ad hominem tu quoque...
Again, my comment is not about AI, it is about the faulty construction of the argument in the sentence I quoted. X and Y could be anything, that is not my point.
My point is also not really about AI. My point is that pointing out that the same "X implies Y" argument could be (and has been) applied to virtually every V, W, and Z (where X and V/W/Z are both in the same category, in this case the category of "disruptive inventions"), and yet Y didn't happen as predicted, isn't an ad hominem tu quoque fallacy or anything to do with hypocrisy. It's an observation that arguments about this category resulting in Y have been consistently wrong, so we should probably treat claims that Y will happen because of something else in the category with scepticism...
> where X and V/W/Z are both in the same category, in this case the category of "disruptive inventions"
This is my point: it is not known whether those are all in the same category. The survivorship-bias part is that we only know whether something was disruptive after the fact, because, well, it disrupted. Therefore you cannot even compare them like that, never mind the fact that disruptive technologies are not all the same, so you shouldn't be comparing between them anyway.
Car and motor vehicles in general get you to work and help you do your work. They don't do the work. I guess that's the difference in thinking.
I'm not sure that it's actually correct: I don't think we'll actually see "AI" replace work in general as a concept. Unless it can quite literally do everything and anything, there will always be something people can do to auction their time and/or health to acquire some token of social value. It might take generations to settle out who is the farrier whose industry was annihilated and who is the programmer whose industry was created. But as long as there's scarcity and ambition in the world, there'll be something there, whether it's "good work" or demeaning toil under the bootheel of a fabulously wealthy cadre of AI mill owners. And there will be scarcity as long as there's a speed of light.
Even if I'm wrong and there isn't, that's why it's called the singularity. There's no way to "see" across such an event in order to make predictions. We could equally all be in permanent infinite bliss, be tortured playthings of a mad God, extinct, or transmuted into virtually immortal energy beings or anything in between.
You might as well ask the dinosaurs whether they thought the ultimate result of the meteor would be pumpkin spice latte or an ASML machine for all the sense it makes.
Anyone claiming to be worrying over what happens after a hypothetical singularity is either engaging in intellectual self-gratification, posing or selling something somehow.
They don't do the work, they help you do the work. The work isn't compiling or ploughing, it's writing software and farming, respectively. Both of which are actually just means to the ends of providing some service that someone will pay "human life tokens" for.
AI maximalists are talking about breaking that pattern and having AI literally do the job and provide the service, cutting out the need for workers entirely. Services being provided entirely autonomously and calories being generated without human input in the two analogies.
I'm not convinced by that at all: if services can be fully automated, who are you going to sell ERP or accounting software to, say? What are people going to use as barter for those calories if their time and bodies are worthless?
But I can see why that is a saleable concept to those who consider the idea of needing to pay workers to be a cruel injustice. Though even if it works at all, which, as I said, I don't believe, the actual follow-on consequences of such a shift are impossible to make any sensible inferences about.
Didn’t say that.
If you posit that the future of the corporation is a visionary CEO with a few minion middle managers and a swath of AI employees, then tell me: what do you do about the thousands of salaried jobs that are lost and no longer exist?
Or are you saying that the future is a multitude of corporations of one?
We can play with these travesties of intellectual discourse as long as you like, but we’re really one step removed from some stoners’ basement banter.
There is no data, just hyperbole from those same "visionaries" who keep claiming their stochastic parrots will replace everyone's jobs and that we therefore need UBI.
As a web typography fan and practitioner of good typographic web standards, the answer is no. You’re right.
This stuff is cruft. Displays are fundamentally different from paper, and it is OK that we don’t transfer every typographic standard 1-to-1.
> For context, almost all of our developers are learning web development (TypeScript, React, etc) from scratch, and have little prior experience with programming.
To be fair, having non-programmers learn web development like that is even more problematic than using LLMs. What about teaching actual web development, i.e. HTML + CSS + JS, so they have the fundamentals to control LLMs in the future?
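To make that concrete, here is a minimal sketch of the kind of fundamentals I mean (the class and id names are just illustrative): a single page wired together with nothing but the three core technologies, no framework and no build step:

    <!DOCTYPE html>
    <html>
    <head>
      <style>
        /* CSS: presentation, kept separate from structure */
        .greeting { color: darkblue; font-weight: bold; }
      </style>
    </head>
    <body>
      <!-- HTML: document structure -->
      <p id="output" class="greeting"></p>
      <script>
        // JS: behavior, manipulating the DOM directly
        document.getElementById("output").textContent = "Hello, web!";
      </script>
    </body>
    </html>

A learner who understands why each of those three layers exists can judge whether what an LLM generates is sensible; one who starts from a React + TypeScript toolchain mostly can't.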
This feels like a manual for a completely humorless person trying to understand why people laugh. I appreciate the effort, but it's quite naive, and honestly most of the example jokes are just bad puns.